While artificial intelligence can help journalists build a consistent fake news detector, it can also empower others to disseminate and even create new forms of misinformation, writes Francesco Marconi (@fpmarconi), manager of strategy and development at Associated Press.
Fake news is nothing new. The Roman Emperor Augustus led a campaign of misinformation against Mark Antony, a rival politician and general. The KGB used disinformation throughout the Cold War to enhance its political standing. Today fake news continues to serve as a political tool around the world, and new technologies are enabling individuals to propagate that fake news at unprecedented rates.
One of those new developments, artificial intelligence (AI), can help journalists build a consistent fake news detector, but AI can also empower others to disseminate and even create new forms of misinformation. To understand how, we need to take a quick detour and explain machine learning, one of the most important sub-domains of artificial intelligence.
Machine learning is, in the most basic sense, a system that learns from data and adjusts its decisions accordingly. Deep learning, one of its sub-fields, tackles a single complex problem by breaking it down into layers of smaller, more approachable tasks.
Thus, conceptually, machine learning can help detect fake news: an intelligent system that takes news stories as input and produces a big ol' 'Fake' or 'Not fake' sticker as output.
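Such a detector is, at its core, just a function from story text to a label. Here is a minimal, purely illustrative sketch: the hard-coded trigger words are an invented placeholder, not a real detection method, since an actual system would learn its signals from data.

```python
def classify_story(text: str) -> str:
    """Toy stand-in for a trained model: flag stories containing
    sensationalist trigger words. A real detector would learn these
    signals from thousands of examples rather than hard-coding them."""
    trigger_words = {"shocking", "miracle", "you won't believe"}
    lowered = text.lower()
    if any(word in lowered for word in trigger_words):
        return "Fake"
    return "Not fake"

print(classify_story("Shocking miracle cure discovered!"))  # Fake
print(classify_story("City council approves new budget."))  # Not fake
```

The point of the sketch is only the input/output shape: story in, sticker out. Everything that follows is about how to make the function behind that interface trustworthy.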
Machine learning (and deep learning) relies mostly on algorithms: sets of rules that, when followed, lead to a desired output. But constructing algorithms is exceptionally difficult and the results of getting them wrong can be catastrophic, especially when we rely on them to determine which news stories should be broadcast to our readers.
The two most common errors in this sort of machine learning are terms that we borrow from statisticians: Type I (false positive) and Type II (false negative) errors.
A false negative would mean that your machine labels a fake news item as not fake. We don’t want that.
A false positive means your machine labels a real news story as fake. We don’t want that, either.
What we want is a system that can, with a high level of accuracy, label fake news as being fake, and real stories as not being fake. Again, we ask: How?
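Both error types can be made concrete by tallying a detector's predictions against known labels. A small sketch, with the truth labels and predictions invented purely for illustration:

```python
def count_errors(truths, predictions):
    """Count false negatives (a fake story labelled not fake) and
    false positives (a real story labelled fake)."""
    false_negatives = sum(
        1 for t, p in zip(truths, predictions)
        if t == "Fake" and p == "Not fake"
    )
    false_positives = sum(
        1 for t, p in zip(truths, predictions)
        if t == "Not fake" and p == "Fake"
    )
    return false_negatives, false_positives

truths      = ["Fake", "Fake", "Not fake", "Not fake"]
predictions = ["Fake", "Not fake", "Fake", "Not fake"]
print(count_errors(truths, predictions))  # (1, 1)
```

Accuracy alone hides which of the two mistakes a system tends to make, which is why both counts matter to a newsroom.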
An AI system can best decide what is and is not fake news once you define fake and real news for it, and you do so by showing the machine tens of thousands of examples of each.
For that, you will need a data set of high-quality journalism, as well as another collection of fake news, which could be sampled from a predetermined list of known fake news sources.
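In code, 'showing the machine examples' means fitting a model on the two labelled collections. A deliberately tiny sketch using simple word counts (the four example headlines are invented, and a real system would need tens of thousands of examples, not two per class):

```python
from collections import Counter

def train(fake_stories, real_stories):
    """Count how often each word appears in each labelled collection."""
    fake_counts = Counter(w for s in fake_stories for w in s.lower().split())
    real_counts = Counter(w for s in real_stories for w in s.lower().split())
    return fake_counts, real_counts

def classify(story, fake_counts, real_counts):
    """Label a story by which collection its words resemble more."""
    words = story.lower().split()
    fake_score = sum(fake_counts[w] for w in words)
    real_score = sum(real_counts[w] for w in words)
    return "Fake" if fake_score > real_score else "Not fake"

fake_counts, real_counts = train(
    ["miracle cure shocks doctors", "secret they hide from you"],
    ["council passes budget", "court rules on appeal"],
)
print(classify("doctors announce miracle cure", fake_counts, real_counts))  # Fake
```

Production systems replace the raw word counts with far richer features and models, but the workflow is the same: two labelled collections in, a decision rule out.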
Remember, algorithms are written by humans, and humans make errors. Therefore, our AI machine may well make an error, especially in its early stages.
There’s also an editorial decision to be made here. No system is going to be 100 per cent accurate, so which would you rather tend towards, false positives or false negatives? Would you rather a fake story be labelled as real or would you rather have a real story labelled fake?
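In practice, that editorial preference often surfaces as a score threshold: the model outputs a score for how likely a story is to be fake, and the newsroom decides where to draw the line. A sketch with invented scores:

```python
def label(score, threshold):
    """Flag a story as fake when its model score reaches the threshold.
    Lowering the threshold catches more fakes (fewer false negatives)
    at the cost of flagging more real stories (more false positives)."""
    return "Fake" if score >= threshold else "Not fake"

scores = [0.2, 0.45, 0.6, 0.9]  # hypothetical model outputs

cautious   = [label(s, 0.8) for s in scores]  # tolerates false negatives
aggressive = [label(s, 0.4) for s in scores]  # tolerates false positives

print(cautious)    # ['Not fake', 'Not fake', 'Not fake', 'Fake']
print(aggressive)  # ['Not fake', 'Fake', 'Fake', 'Fake']
```

The same model, two different editorial policies: the threshold is where the newsroom's judgement enters the machine.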
Algorithmic errors aside, AI can help detect fake news. But as we mentioned earlier, AI can also help disseminate it (all the more reason to understand AI, then!).
If you’ve worked in the journalism industry long enough, you’ve probably been fooled by a doctored video, photo or sound bite. And every day the technology used to produce those fake news items is becoming easier to use and more publicly accessible. For instance, Adobe recently announced an AI project that can replicate a person’s tone of voice simply by analysing a sample of their speech, while a project developed by Stanford University researchers enables the manipulation of someone’s face in a video in real time.
In other words, the same sorts of machine learning and sub-domains of AI that can be used to fight fake news can also be used by others to propagate new types of misinformation.
Francesco first published this article on LinkedIn. It is reproduced with his permission here.
Francesco Marconi is responsible for strategy and corporate development at the Associated Press, where he is part of the strategic planning team, identifying partnership opportunities and guiding media strategy. Francesco complements his professional activity with academic research at Columbia University's Tow Center for Digital Journalism, where he is an Innovation Fellow.
Francesco studied business and journalism at the University of Missouri and completed his postgraduate work as a Chazen Scholar at Columbia Business School’s Media Program. In 2014 he joined Harvard University’s Berkman Center for Internet and Society as an affiliate researcher studying the impact of data in journalism. Francesco started his career at the United Nations researching science and technology solutions for developing countries, resulting in the publication of his first book and a TED talk on Reverse Innovation.