What is Alphabet doing to combat fake news, propaganda and extremism?


‘It’s mostly good people making bad decisions who join violent extremist groups.’ — Yasmin Green

I think this is so important that I am reposting this Wired Business Conference summary by Emily Dreyfuss, originally titled “Hacking Online Hate Means Talking to the Humans Behind It”. Alphabet appears to be investing significant resources in fixing the dark side of the Internet.

Yasmin Green leads a team at Google’s parent company with an audacious goal: solving the thorniest geopolitical problems that emerge online. Jigsaw, where she is the head of research and development, is a think tank within Alphabet tasked with fighting the unintended unsavory consequences of technological progress. Green’s radical strategy for tackling the dark side of the web? Talk directly to the humans behind it.

That means listening to fake news creators, jihadis, and cyber bullies so that she and her team can understand their motivations, processes, and goals. “We look at censorship, cybersecurity, cyberattacks, ISIS—everything the creators of the internet did not imagine the internet would be used for,” Green said today at WIRED’s 2017 Business Conference in New York.

Last week, Green traveled to Macedonia to meet with peddlers of fake news, those click-hungry opportunists who had such sway over the 2016 presidential election in the US. Her goal was to understand the business model of fake news dissemination so that she and her team can create algorithms to identify the process and disrupt it. She learned that these content farms utilize social media and online advertising—the same tools used by legit online publishers. “[The problem of fake news] starts off in a way that algorithms should be able to detect,” she said. Her team is now working on a tool that could be shared across Google as well as competing platforms like Facebook and Twitter to thwart that system.

Along with fake news, Jigsaw is intensely focused on combating online pro-terror propaganda. Last year, Green and her team traveled to Iraq to speak directly to ex-ISIS recruits. The conversations led to a tool called the Redirect Method, which uses machine learning to detect extremist sympathies based on search patterns. Once detected, the Redirect Method serves these users videos that show the ugly side of ISIS—a counternarrative to the allure of the ideology. By the time someone is buying a ticket to join the caliphate, she said, it is too late.
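To make the mechanism concrete, here is a toy Python sketch of the general idea: flag a search query that looks like extremist sympathy and surface a counter-narrative playlist. This is emphatically not Jigsaw's implementation (their detection is machine-learned and far more sophisticated); the query terms and video URLs below are made-up placeholders.

```python
# Toy illustration of query-triggered redirection. NOT Jigsaw's Redirect
# Method, which reportedly uses machine learning over search patterns;
# the terms and URLs here are hypothetical placeholders.

RISKY_QUERY_TERMS = {"join isis", "travel to the caliphate"}  # hypothetical

COUNTER_NARRATIVE_PLAYLIST = [
    "https://example.com/videos/defector-testimony",  # placeholder URLs
    "https://example.com/videos/life-under-isis",
]

def maybe_redirect(search_query: str):
    """Return counter-narrative videos if the query suggests extremist sympathy."""
    q = search_query.lower()
    if any(term in q for term in RISKY_QUERY_TERMS):
        return COUNTER_NARRATIVE_PLAYLIST  # surface the counternarrative
    return None  # otherwise leave the results alone

if __name__ == "__main__":
    print(maybe_redirect("how to join ISIS"))
```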

“It’s mostly good people making bad decisions who join violent extremist groups,” Green says. “So the job was: let’s respect that these people are not evil and they are buying into something, and let’s use the power of targeted advertising to reach them, the people who are sympathetic but not sold.”

Since its launch last year, 300,000 people have watched videos served up by the Redirect Method—a total of more than half a million minutes, Green said.

Beyond fake news and extremism, Green’s team has also created a tool to target toxic speech in comment sections on news organizations’ sites. They created Perspective, a machine-learning algorithm that uses context and sentiment training to detect potential online harassment and alert moderators to the problem. The beta version is being used by the likes of the New York Times. But as Green explained, it’s a constantly evolving tool. One potential worry is that it could itself be biased against certain words, ideas, even tones of speech. Jigsaw has also decided not to open up the API to let others set the parameters themselves, fearing that an authoritarian regime might use the tool for full-on censorship.
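If you want to experiment with Perspective yourself, it is exposed as a simple REST API. Here is a minimal Python sketch; the endpoint and request shape follow the public Perspective API documentation as I understand it, and the API key and the 0.8 moderation threshold are placeholders, so check the current docs before relying on this.

```python
# Minimal sketch: ask the Perspective API for a TOXICITY score (0..1).
import requests

API_KEY = "YOUR_API_KEY"  # placeholder: obtain a key from the Perspective API project
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity_score(text: str) -> float:
    """Return Perspective's summary TOXICITY score for a piece of text."""
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    return data["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

# Example: flag comments above a chosen threshold for human review.
if toxicity_score("You are a total idiot.") > 0.8:
    print("Flag for moderator review")
```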

“We have to take measures to keep these tools from being misused,” she said. Just like the internet itself, which has been used in destructive ways its creators could never have imagined, Green is aware that the solutions her team creates could also be abused. That risk is always on her mind, she says. But it’s not a reason to stop trying.

My personal view is that Facebook is by far the bigger problem. I don’t expect much action by Facebook because their #1 incentive is to keep people on the site and feed the ads. Fake news and anger-inducing content do just that. I sincerely hope I’m wrong about that.

Amazon.com’s Jeff Bezos can save America’s largest source of clean power. Here’s how.


“These big companies have huge, sometimes much bigger, influence on supply chain decisions than governments do.” — Michael Shellenberger

Does the above headline look hyperbolic to you? I think it’s true — and we can help make it true by joining a long list of prominent scientists, like James Hansen and Kerry Emanuel, in signing the petition urging Amazon to save Ohio’s clean power. Consider the consequences if Amazon amends its 100 percent renewable electricity goal to include nuclear. Amazon’s goal would then function as a Clean Energy Standard (CES), not just a renewable portfolio standard (RPS). As it stands, nuclear is excluded from Ohio’s 12.5% RPS, as well as from federal renewable subsidies. If Amazon redefines its corporate goal to include nuclear, that will open a badly needed discussion: “Does humanity need to decarbonize, or to build more renewables?”

Jeff Bezos is one of America’s most respected company leaders. When he explains why Amazon decided to focus on serious decarbonization, people everywhere will listen. Certainly Ohio politicians will listen to a fast-growing company that’s bringing jobs to Ohio. And I think Apple, Facebook and Google will pay close attention when Jeff Bezos puts the Amazon brand on CES policies that support nuclear + solar + wind. More people should know that Jeff Bezos is one of the backers of Breakthrough Energy Ventures, which intends to invest in promising advanced nuclear fission. Bezos has also invested in the nuclear fusion startup General Fusion.

I think that respected companies can change public policy much faster than we can by working only through Washington DC. By adopting CES policies they exert leverage on public opinion while giving state leaders political cover to resist the powerful renewable lobbies.

You probably know people who work at Google, Apple, Amazon, Facebook. Ask them whether they think our goal should be to decarbonize, or just to build more and more renewables. They can promote discussions inside these companies. They can raise questions at TGIF or similar forums (perhaps that’s too much to ask, but it would be very interesting to hear Larry explain why Alphabet doesn’t support nuclear in its clean data center portfolio).

Please donate at Environmental Progress and please sign the petition! The more people who sign, the stronger the signal to Jeff Bezos that a lot of people care deeply about serious decarbonization. Associating the Amazon brand with clean, reliable nuclear will be positive for Amazon.

Mat Honan had his digital life dissolved by hackers


Don’t let this happen to you. Here’s Mat Honan writing for Wired on How Apple and Amazon Security Flaws Led to My Epic Hacking.

In the space of one hour, my entire digital life was destroyed. First my Google account was taken over, then deleted. Next my Twitter account was compromised, and used as a platform to broadcast racist and homophobic messages. And worst of all, my AppleID account was broken into, and my hackers used it to remotely erase all of the data on my iPhone, iPad, and MacBook.

Mat Honan is a tech journalist. You would think Mat would have his cyber defenses well secured. He did not. Study what happened to Mat so you can do whatever you must to protect yourself from a similar fate.

 

For a concise summary of how Honan was hacked, read Apple Responds To Journalist Victim of “Epic” Apple ID Hack:

Apple responded today to Honan via a spokesperson, Natalie Kerris. In a statement to Wired, where Honan posted an account of his experiences, Apple promised to look into how users can protect their data and security better when they need to reset their account passwords.

“Apple takes customer privacy seriously and requires multiple forms of verification before resetting an Apple ID password,” said Apple, via Kerris. “In this particular case, the customer’s data was compromised by a person who had acquired personal information about the customer. In addition, we found that our own internal policies were not followed completely. We are reviewing all of our processes for resetting account passwords to ensure our customers’ data is protected.”

This all happened because the hackers were able to get a hold of Honan’s email address, his billing address and the last four digits of a credit card he has on file. Once the hacker had this info, he or she called Apple, asked for a reset to the iCloud account in Honan’s name, and was given a temporary password.

“In many ways, this was all my fault,” Honan wrote. “My accounts were daisy-chained together. Getting into Amazon let my hackers get into my Apple ID account, which helped them get into Gmail, which gave them access to Twitter. Had I used two-factor authentication for my Google account, it’s possible that none of this would have happened, because their ultimate goal was always to take over my Twitter account and wreak havoc. Lulz.”

The real problem here, he noted, is that the companies he relied on to keep his data safe have competing security practices. “In short, the very four digits that Amazon considers unimportant enough to display in the clear on the web are precisely the same ones that Apple considers secure enough to perform identity verification,” he wrote. “The disconnect exposes flaws in data management policies endemic to the entire technology industry, and points to a looming nightmare as we enter the era of cloud computing and connected devices.”

If you have protected all your accounts and devices with 1Password, in particular with unique strong passwords, then you are well on your way to securing your digital life. If you are not using 1Password or similar state-of-the-art password management, then you need to fix that right now.
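If you want to see what a password manager is doing for you under the hood, here is a minimal Python sketch that generates a long, random, unique password per site using the standard library’s secrets module. A real password manager adds encrypted storage and autofill on top of this.

```python
# Minimal sketch of the core job of a password manager: one long,
# cryptographically random password per account, never reused.
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*-_"

def strong_password(length: int = 24) -> str:
    """Return a cryptographically random password of the given length."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# Example: a distinct password for each sensitive account.
for site in ("amazon", "google", "apple", "twitter"):
    print(site, strong_password())
```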

Pro tip: do NOT use the same username for sensitive logins. If you use, say, ‘janedoe@gmail.com’ for both Amazon and Google, you have made life much easier for the Russkie mafia. When they sweet-talk customer support into helping them get your Amazon password, they are almost home. If your Google account uses a different email/username and a different strong password, then the mafia hackers have to start over to break into your Google account. Gmail is happy to give you a unique email address for every one of your sensitive accounts. Use them.
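One low-effort way to get a distinct address per service is Gmail plus-addressing: mail sent to janedoe+amazon@gmail.com still lands in janedoe@gmail.com. Here is a tiny Python sketch (the addresses are hypothetical examples). Caveat: an attacker can guess your base address by stripping the “+tag”, so fully separate usernames are stronger still.

```python
# Mint a distinct Gmail login address per service using plus-addressing.
# Note: stripping the "+tag" reveals the base address, so this is a
# convenience step, not a substitute for truly separate usernames.

def gmail_alias(base: str, service: str) -> str:
    """Return a plus-addressed variant of a Gmail address for one service."""
    user, domain = base.split("@")
    return f"{user}+{service}@{domain}"

for service in ("amazon", "apple", "twitter"):
    print(gmail_alias("janedoe@gmail.com", service))
# janedoe+amazon@gmail.com, janedoe+apple@gmail.com, janedoe+twitter@gmail.com
```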

Udacity spins out self-driving taxi startup Voyage


UPDATE: Sebastian Thrun answers student questions for 24 minutes in this video Self-Driving Car Nanodegree: Q&A with Sebastian Thrun. This video is probably the most informative insider perspective on the fast-moving autonomous vehicle space.

One example of how fast the field of AI is moving: Udacity’s “school for robo-cars has been so successful that it’s now spinning out of Udacity into its own company, Voyage.” Here’s a snippet from Business Insider:

(…snip…) The new spin-out will be led by Oliver Cameron, a Udacity VP who was spearheading a lot of its self-driving car curriculum. The company broke the news to its employees Wednesday morning.

Udacity will have a stake in the newly formed company as part of the deal, said Udacity’s CMO Shernaz Daver. Voyage also recently closed a seed round of funding that included Khosla Ventures, Initialized Capital, and Charles River Ventures.

Voyage has been hot in Silicon Valley investor circles because of one big name linked to Udacity: Sebastian Thrun. Thrun, who founded the education startup, is nicknamed the “Godfather of self-driving cars” for the work he did at Google, and he helped launch the self-driving car nanodegree program at Udacity.

Thrun, though, says he’ll have no connection with Voyage even though it’s spinning out of his company. “Because of personal conflicts, I have excused myself from any involvement in Voyage. I wish Oliver and his team all the best,” Thrun said in a statement to Business Insider.

The autonomous taxi startup is working toward the end goal of autonomous cars that can carry people anywhere for a very low cost, Cameron said. It already has permission to deploy its self-driving cars to ferry passengers in a few places over the next few months, but Cameron declined to specify where.

“We want to deploy these not within five years, but very soon. We think in terms of weeks, not in terms of years or months,” he told Business Insider in an interview.

Pure guess: one reason Oliver Cameron decided to take this risk is that Udacity is open-sourcing its own self-driving car project. All the code is there for anyone to use and improve, including the new startup Voyage. And Oliver Cameron has a pretty good idea of how successful the Udacity project is going to be.

Update: this week BMW announced plans to ship self-driving cars in four years, by 2021. That’s similar to plans already announced by GM, Ford, Chrysler, Mercedes, Volvo and Chinese ride-sharing giant Didi Chuxing.

2017 Udacity Intersect: the future is closer than you think

We meat-bodies generally overestimate short-term progress (one to three years) and underestimate medium-term technology progress (ten to twenty years). My particular interest is AI. That is partly because five decades ago Artificial Intelligence was my academic field at Carnegie Mellon. The logic-based AI that I was investigating with Herb Simon and Allen Newell is now known as GOFAI (good old-fashioned AI, the AI that didn’t work very well). What motivates my current interest is that Machine Learning (ML) is starting to be really useful, and the rate of progress in narrow AI applications of ML is accelerating. You can see the rate of progress in ML for yourself in the explosion of speech recognition gadgets like Amazon Alexa, the voice service that powers the Echo home device. Now any of us can access voice and image recognition. In perhaps five years we will begin benefiting from self-driving cars (SDCs), if we live in the right places.

I think that open-source, low-cost initiatives like Google’s release of TensorFlow mark a major inflection point in the rate of ML progress. A teenager can now prototype her ML-based idea to see if it really works. She doesn’t need to go to Sand Hill Road to raise venture capital. In fact, the VC community isn’t going to pay any attention until she has already built the project to a level where it can be tested.
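To illustrate how low the barrier has become, here is a minimal sketch of a working image classifier in TensorFlow/Keras, trained on the MNIST digit dataset that ships with the library. A dozen or so lines of code typically yields around 97% test accuracy on an ordinary laptop.

```python
# Minimal TensorFlow/Keras sketch: train a small classifier on MNIST digits.
import tensorflow as tf

# Load the bundled MNIST dataset and scale pixel values to 0..1.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

# One hidden layer is enough for a useful prototype.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)
print("test accuracy:", model.evaluate(x_test, y_test)[1])
```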

Another powerful indicator of the inflection-point thesis is Udacity’s Nanodegree offerings. Outstanding example: the Self-Driving Car Engineer Nanodegree. For $800/term you can graduate with a credential that dozens of leading companies are eager to hire for. Here are some of the hiring partners of the SDC Engineer Nanodegree [they helped build the course]:

[Image: hiring partners of the Self-Driving Car Engineer Nanodegree]

How can you learn more? Well, I suggest having a look at videos from the March 8, 2017 Udacity Intersect conference. This was a remarkable event, opening the window so all of us can see what is happening. The Computer History Museum was vibrating with the energy of companies recruiting Udacity students and Nanodegree graduates. Pretty much every panel and keynote on the agenda was packed with tech industry insiders exchanging views about their projects, priorities and especially the people they want to hire.

I’m highlighting Udacity Intersect 2017 because the conference offers a concise and fun way to get a look into the future and behind the curtain. What are the leading tech companies thinking and doing? Where are we likely to be in 10 to 20 years?

You can find links to videos of every segment of the conference at the main Intersect page. If you’re not sure this is for you, please check out the final session, Fireside Chat: Astro Teller and Sebastian Thrun. These 33 minutes give you access to two of the leading innovators who have convinced me that “the future is closer than you think”.


Update: Udacity spins out self-driving taxi startup Voyage. Also, this week BMW announced plans to ship self-driving cars in four years, by 2021. That’s similar to plans already announced by GM, Ford, Chrysler, Mercedes, Volvo and Didi.

Gene drive technology is too powerful to obtain the social license to deploy – is the Daisy Drive a solution?


Gene drive technology has the potential to eliminate scourges like malaria – if we can develop the technology without provoking social license problems. It’s the lack of social license that has limited bioengineering to a small fraction of what it could have accomplished in the last couple of decades. Similarly, it’s the lack of social license that has hobbled global deployment of nuclear power.

Making gene drives practical is not yet a solved problem, though Bill Gates has said publicly that he thinks his foundation will be ready for trial releases in “a couple of years”. But how do we test the design of a CRISPR gene drive without a whole series of test releases? If people fear that testing means global impacts, we will never be able to complete even the initial tests.

That’s why I think the general category of gene drive inhibitor techniques is so important. Gene drive innovator Kevin Esvelt has a clear view of the social problem, so he is investing effort in developing community support for the very first tests. Kevin is working with the community of Nantucket to suppress Lyme disease by releasing genetically engineered white-footed mice (the principal reservoir of Lyme disease). Bringing the community to a “let’s proceed” consensus is a slow process, but I’m sure Kevin is right. If we don’t do this right, we risk losing access to a valuable technology.

Michael Specter wrote a terrific New Yorker article on the topic, “Rewriting the Code of Life”. And don’t miss Kevin Esvelt’s nuanced interview with Joi Ito on the realities of developing practical gene drives and social license.

Follow Kevin Esvelt on Twitter @kesvelt.

Update: Here are my sources documenting the public position of Bill Gates on gene drive research and deployment:

Bill Gates: Some People Think Eradicating Mosquitoes With Genetics Is Scary, But I Don’t Think It Will Be

Gates noted that the regulatory path for the technology is “unclear,” and that it’s not certain what will need to be done from a legal perspective before exterminating some species of mosquitoes in this way. However, he said, “I would deploy it two years from now.”

(…snip…) “I have to always show respect for people who think it is a scary thing to do,” Gates said. “I don’t think it will be. I think the way we’re doing the construct will make it a very key tool for malaria eradication.”

Bill Gates Doubles His Bet on Wiping Out Mosquitoes with Gene Editing

The new money will help Target Malaria “explore the potential development of other constructs, as well as to start mapping out next steps for biosafety, bioethics, community engagement, and regulatory guidance,” says Callahan. “It’s basically a lot of groundwork.” The Gates Foundation views the technology as a “long shot” that won’t necessarily work but, if it does, could effectively end malaria.

The foundation previously said it plans to have a gene-drive approved for field use by 2029 somewhere in Africa. But Gates, the founder of Microsoft, offered more enthusiastic prognostications in comments made this summer, saying the technology might be ready in just two years.

This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook

[Chart: BuzzFeed’s comparison of Facebook engagement for fake election news versus content from major news outlets]

Up until those last three months of the campaign, the top election content from major outlets had easily outpaced that of fake election news on Facebook. Then, as the election drew closer, engagement for fake content on Facebook skyrocketed and surpassed that of the content from major news outlets.

…All the false news stories identified in BuzzFeed News’ analysis came from either fake news websites that only publish hoaxes or from hyperpartisan websites that present themselves as publishing real news. The research turned up only one viral false election story from a hyperpartisan left-wing site.

Transparency kudos to BuzzFeed for releasing the data behind the above graphic-bait. There is a good bit of detail in the BuzzFeed analysis.

Carole Cadwalladr: how big data technology influences what we see and how we vote

[Image: Jonathan Albright’s network graph of fake-news sites in relation to real-news websites]

If you are looking for a carefully researched but readable accounting of this complex topic, I recommend Carole Cadwalladr’s Guardian index. Researching the fake news issue, I found her December article “Google, democracy and the truth about internet search”. That reporting led me to some of the researchers in the field, like Martin Moore at the Policy Institute at King’s College London and Jonathan Albright at Elon University. In the December article Carole asks my initial questions:

Did such micro-targeted propaganda – currently legal – swing the Brexit vote? We have no way of knowing. Did the same methods used by Cambridge Analytica help Trump to victory? Again, we have no way of knowing. This is all happening in complete darkness.

I am also asking: “Is there a positive-feedback servo loop where Cambridge Analytica [CA] exploits the fake news ecosystem to reinforce the dark posts that CA sends to micro-targeted Facebook accounts?” This excerpt from Google, democracy and the truth about internet search got my attention, beginning with Jonathan Albright’s comments:

And the constellation of websites that Albright found – a sort of shadow internet – has another function. More than just spreading rightwing ideology, they are being used to track and monitor and influence anyone who comes across their content. “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go.”

Cambridge Analytica, an American-owned company based in London, was employed by both the Vote Leave campaign and the Trump campaign. Dominic Cummings, the campaign director of Vote Leave, has made few public announcements since the Brexit referendum but he did say this: “If you want to make big improvements in communication, my advice is – hire physicists.”

Steve Bannon, founder of Breitbart News and the newly appointed chief strategist to Trump, is on Cambridge Analytica’s board and it has emerged that the company is in talks to undertake political messaging work for the Trump administration. It claims to have built psychological profiles using 5,000 separate pieces of data on 220 million American voters. It knows their quirks and nuances and daily habits and can target them individually.

“They were using 40-50,000 different variants of ad every day that were continuously measuring responses and then adapting and evolving based on that response,” says Martin Moore of King’s College. Because they have so much data on individuals and they use such phenomenally powerful distribution networks, they allow campaigns to bypass a lot of existing laws.

“It’s all done completely opaquely and they can spend as much money as they like on particular locations because you can focus on a five-mile radius or even a single demographic. Fake news is important but it’s only one part of it. These companies have found a way of transgressing 150 years of legislation that we’ve developed to make elections fair and open.”

Did such micro-targeted propaganda – currently legal – swing the Brexit vote? We have no way of knowing. Did the same methods used by Cambridge Analytica help Trump to victory? Again, we have no way of knowing. This is all happening in complete darkness. We have no way of knowing how our personal data is being mined and used to influence us. We don’t realise that the Facebook page we are looking at, the Google page, the ads that we are seeing, the search results we are using, are all being personalised to us. We don’t see it because we have nothing to compare it to. And it is not being monitored or recorded. It is not being regulated. We are inside a machine and we simply have no way of seeing the controls. Most of the time, we don’t even realise that there are controls.

There is no question that micro-targeted ads were deployed in the Brexit and Trump campaigns. We know there are controls on this machine. Who is operating those controls? Who built and operates the fake news ecosystem? If you have sources or insights, please comment. I plan to post anything definitive that I’m able to find. As I write Carole has published eight articles on this general topic. And Jonathan Albright has published a number of articles that delve into the machinery of behavioral micro-targeting and fake news propaganda. These are some of Jonathan Albright’s Medium articles I’m studying:

What’s Missing From The Trump Election Equation? Let’s Start With Military-Grade PsyOps

The #Election2016 Micro-Propaganda Machine

#Election2016: Propaganda-lytics & Weaponized Shadow Tracking

Data is the Real Post-Truth, So Here’s the Truth About Post-#Election2016 Propaganda

Left + Right: The Combined Post-#Election2016 News “Ecosystem”

FakeTube: AI-Generated News on YouTube

“Fake News” Sites: Certified Organic?

How to be an Errorist: if anti-nuclear content were factually true it wouldn’t be anti-nuclear


I see far too many anti-nuclear press reports. It truly looks like all the big media journos have their favorite UCS and Greenpeace contacts in their Rolodex. And it is a fact that “Fear Sells”, whether in clicks or in newsprint. So I had a chuckle today when I read this little essay, How to be an Errorist, from the Northwest Energy folks. They were motivated to write it on June 17, 2015 by the satirical New Yorker piece “Scientists: Earth Endangered by New Strain of Fact-Resistant Humans.”

While the story is made up, many of these fact-resistant folks seem to be radically opposed to nuclear energy. This normally wouldn’t be of great concern; anyone can believe what they want. But when that ignorance (deception?) is given legitimacy through public policy discussions, it can create a problem for society as a whole (impeding the development of new nuclear energy resources to combat climate change comes to mind).

So, I have a challenge for you, Dear Reader: please email or tweet me if you have encountered an anti-nuclear article that is factually correct. I’ve been scratching my head trying to remember such an instance, but I can’t think of a single case. If the content were factually true, it wouldn’t be anti-nuclear.