Wednesday, 13 July 2016

It isn’t just other people’s claims on social media

Next week, the Republican party is expected to formally pick Trump as its candidate for president. In the race to the White House, fact checkers like Binkowski are our last defence against online misinformation. What are we up against?
Trump himself has given fact checkers plenty to do over the past eight months, making “an inordinate number” of false claims, according to Eugene Kiely at FactCheck.org. Another website, PolitiFact.com, looked into 158 claims made by Trump since the start of his campaign and found that four out of five were at best “mostly false”.
Fact checkers correct stories doing the rounds both on social media and in the mainstream press. In the hunt for the truth, they spend their days poring over interview transcripts, scouring video footage and checking references. It can be a thankless job. Rarely are Binkowski’s attempts to present the facts received sympathetically.
For example, she recently posted an article debunking claims that a crowd started shouting “English only!” at an event addressed by the Hispanic civil rights veteran Dolores Huerta. “We got hundreds of emails calling us unprofessional, saying we were biased, saying we were anti-Latina,” says Binkowski.
With roughly six in 10 US adults getting news from social media, according to a recent Pew Research survey, the issue of accuracy is becoming ever more important. “One of the things that give social media potency to impact political views is the immediacy of it,” says Julia Shaw, a psychologist at London South Bank University. Then there is the issue of blending fact with opinion. “You might even get an opinion before the information,” says Shaw, which can colour people’s judgement.
Walter Quattrociocchi, a computational social scientist at the IMT School for Advanced Studies in Lucca, Italy, is one of a growing number of researchers concerned about this trend. He and his team have spent the last few years trawling through data from sites like Facebook. In particular, Quattrociocchi has explored how social networks can give rise to echo chambers – spaces in which groups of individuals share stories that reinforce their world view, and rarely get to see anything that challenges their beliefs.

Algorithms not guilty

This phenomenon is often blamed on the algorithms used by social media sites to filter our newsfeeds. But Quattrociocchi’s most recent study suggests it’s partly down to our own behaviour. His team compared identical videos posted on Facebook and YouTube, and found that echo chambers formed on both sites even though the two platforms promote content using different algorithms – suggesting the algorithm alone is not to blame.
“It’s the narrative that is attracting the users, not the content,” says Quattrociocchi. “The narrative is identified by the group and the group frames the narrative.”
In his book Lies Incorporated: The world of post-truth politics, US radio host Ari Rabin-Havt talks of an industry of misinformation, although he agrees that we bring much of it on ourselves. “When people are given a choice, they’re going to choose what’s comforting and easy for them,” he says. “They’re going to avoid information that challenges them and therefore get stuck in echo chambers.”
And since people only hear what they want to hear, it isn’t straightforward to counter falsehoods spreading online. Shaw says that politicians exploit our willingness to remember something that appeals to us, regardless of whether it will eventually prove unfounded.
“Trump, for example, consistently says things that are demonstrably untrue and then takes them back,” she says. “He is getting people to believe things and relying on them to forget that he, or someone else, may correct it later on.”
What’s more, any sharing of the results of fact-checking typically lags the misinformation by 10 to 20 hours, according to a recent study by Chengcheng Shao at the National University of Defense Technology in Changsha, China, and colleagues.
Still, there are those who challenge the idea that online media has dramatically sidelined truth from politics. After all, the popularity of conspiracy theories is nothing new.
“Many of the same things were happening before Facebook,” says David Lazer, a computer and political scientist at Northeastern University in Boston. “I have not seen a compelling answer to whether this has really changed.”
Binkowski thinks otherwise. “There’s something about this perfect storm of identity politics plus the internet,” she says.
“What the post-truth era allows is for politicians to get away with it with no consequence,” says Rabin-Havt. It’s all just part of politics – but the web speeds everything up.
Even if the truth is more of a hard sell than ever, Binkowski says it’s worth it if snopes.com’s efforts to set the record straight reach just 1 per cent of people. Kiely at FactCheck hasn’t lost hope either. “We’re seeing huge spikes in our traffic,” he says.

Biased bots

It isn’t just other people’s claims on social media that we should be wary of. Misinformation is increasingly circulating via social media accounts run by bots. Political bots were particularly active prior to the UK’s European Union referendum, for example.
A recent analysis by staff at the investigative website sadbottrue.com found that Trump has retweeted bots 150 times. They also claim that a recent Hillary Clinton tweet, in which she invited Trump to delete his Twitter account, was quickly retweeted by many bots.
Emilio Ferrara, a computer scientist at the University of Southern California in Los Angeles, thinks that political bots could influence the outcome of elections – and that this has been going on for several years. “We suspect bots were involved in spreading some form of misinformation or in some cases very explicit smear campaigns during the 2012 [presidential] election – on both sides,” he says.

Saturday, 21 May 2016

Self-driving cars for this generation

Yes, they’ve been around for ages, but now that we have on-the-road testing and the beginnings of a legislative framework for the cars, they could soon be an everyday reality. Google is reported to be teaming up with Ford to build self-driving vehicles, hinting at large-scale commercial production in the near future.

While self-driving cars are grabbing the headlines, ordinary cars are also stepping up their game. Tesla’s latest in-car software offers a hands-free autopilot mode, while Audi’s Q7 SUV will also brake on behalf of the driver and nudge you back into the correct lane. This type of gradual automation may make fully self-driving cars an easier sell in the long run.

Li-Fi technology for a new world

Li-Fi is an optical wireless communication (OWC) technology that uses light from LEDs as a medium to deliver networked, mobile, high-speed communication in a similar manner to Wi-Fi. The Li-Fi market is projected to grow at a compound annual rate of 82% from 2013 to 2018, and to be worth over $6 billion per year by 2018. Visible light communication (VLC) works by switching the current to the LEDs off and on at a very high rate, too quick to be noticed by the human eye. Although Li-Fi LEDs would have to be kept on to transmit data, they could be dimmed to below human visibility while still emitting enough light to carry data. The light waves cannot penetrate walls, which gives Li-Fi a much shorter range than Wi-Fi, though it also makes it more secure from hacking. Direct line of sight is not necessary for Li-Fi to transmit a signal; light reflected off the walls can achieve 70 Mbit/s.
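To make the switching idea concrete, here is a minimal Python sketch (not from the article) of on-off keying, the simplest VLC modulation: each bit becomes an LED state, 1 = on, 0 = off. Real Li-Fi hardware adds clock recovery, error correction and far more sophisticated modulation; this only illustrates the principle.

```python
def ook_modulate(data: bytes) -> list[int]:
    """Encode bytes as a flat list of LED states (1 = on, 0 = off)."""
    states = []
    for byte in data:
        for i in range(7, -1, -1):      # most significant bit first
            states.append((byte >> i) & 1)
    return states


def ook_demodulate(states: list[int]) -> bytes:
    """Recover bytes from a list of sampled LED states."""
    out = bytearray()
    for i in range(0, len(states) - 7, 8):
        byte = 0
        for bit in states[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)


signal = ook_modulate(b"Li-Fi")
assert ook_demodulate(signal) == b"Li-Fi"
```

At a real transmitter, each element of `signal` would drive the LED for one symbol period – fast enough that the flicker is invisible to the eye.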
Li-Fi has the advantage of being usable in electromagnetically sensitive areas such as aircraft cabins, hospitals and nuclear power plants without causing electromagnetic interference. Both Wi-Fi and Li-Fi transmit data over the electromagnetic spectrum, but whereas Wi-Fi utilizes radio waves, Li-Fi uses visible light. While the US Federal Communications Commission has warned of a potential spectrum crisis because Wi-Fi is close to full capacity, Li-Fi has almost no limitations on capacity.

The visible light spectrum is 10,000 times larger than the entire radio frequency spectrum. Researchers have reached data rates of over 10 Gbit/s, much faster than typical fast broadband in 2013. Li-Fi is expected to be ten times cheaper than Wi-Fi. Short range, low reliability and high installation costs are the potential downsides.

PureLiFi demonstrated the first commercially available Li-Fi system, the Li-1st, at the 2014 Mobile World Congress in Barcelona. Bg-Fi is a Li-Fi system consisting of an application for a mobile device and a simple consumer product fitted with a colour sensor, a microcontroller and embedded software. Light from the mobile device’s display communicates with the colour sensor on the consumer product, which converts the light into digital information. Light-emitting diodes enable the consumer product to communicate synchronously with the mobile device.
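The screen-to-sensor link described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual Bg-Fi protocol: the phone flashes one of four reference colours per symbol (two bits each), and the receiver matches each noisy RGB reading from the colour sensor to the nearest reference colour.

```python
# Reference colours and the 2-bit symbol each one carries.
# The palette and bit assignments are illustrative, not Bg-Fi's real scheme.
REFERENCE = {
    (255, 0, 0): "00",        # red
    (0, 255, 0): "01",        # green
    (0, 0, 255): "10",        # blue
    (255, 255, 255): "11",    # white
}


def nearest_symbol(reading):
    """Match a noisy (r, g, b) sensor reading to the closest reference colour."""
    def dist(ref):
        return sum((a - b) ** 2 for a, b in zip(ref, reading))
    return REFERENCE[min(REFERENCE, key=dist)]


def decode(readings):
    """Turn a sequence of sensor readings into a bit string."""
    return "".join(nearest_symbol(r) for r in readings)


# Three noisy screen flashes: roughly red, blue, then green.
bits = decode([(240, 20, 10), (5, 0, 250), (30, 200, 40)])
assert bits == "001001"
```

Nearest-colour matching is what makes the link tolerant of ambient light and imperfect screens; a production system would also need framing and error detection on top.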