When we in the media business talk about “fake news,” we’re really talking about the output of many distinct motivations and techniques, which combine to produce many distinct problems, each requiring its own nuanced solution.
That was my primary takeaway from this past weekend’s MisinfoCon, a first-time event which Alley is proud to have supported. The MIT Media Lab hosted journalists, technologists, librarians, academics, and even a former fake-news publisher to explore how tools ranging from cognitive science to ad tech can combat the influence of misinformation (unwittingly distributed false info) and disinformation (deliberately distributed false info).
Attendees split into 21 groups and spent most of the weekend working on ideas to present on Sunday afternoon. My group included MisinfoCon organizer Jenny 8. Lee, Bloomberg’s Robbie Brown, and Aviv Ovadya from MediaWindow. Together we explored how a list of disinformation sites could be maintained with machine learning and used by advertisers to keep their programmatic ad spending from flowing to those sites. Our theory was that this would significantly reduce the financial incentive for profit-motivated publishers to publish false information.
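The idea can be sketched in a few lines of code: classify sites from sampled content, maintain a blocklist, and apply it as a pre-bid filter so programmatic dollars never reach flagged domains. Everything below is a hypothetical stand-in, not what our group built; the training snippets, domain names, and crude word-scoring heuristic are placeholders for real fact-checker labels and a real classifier.

```python
from collections import Counter

# Hypothetical labeled examples: (content snippet, is_disinformation).
# In practice these labels would come from human fact-checker review.
TRAINING = [
    ("shocking miracle cure doctors hate", True),
    ("you won't believe this secret hoax exposed", True),
    ("city council approves new transit budget", False),
    ("researchers publish peer reviewed climate study", False),
]

def train_word_scores(examples):
    """Count how often each word appears in disinformation vs. legitimate text."""
    disinfo, legit = Counter(), Counter()
    for text, is_disinfo in examples:
        (disinfo if is_disinfo else legit).update(text.lower().split())
    return disinfo, legit

def disinfo_score(text, disinfo, legit):
    """Crude per-word score; positive values lean toward disinformation."""
    return sum(disinfo[w] - legit[w] for w in text.lower().split())

def build_blocklist(site_snippets, disinfo, legit, threshold=1):
    """Flag domains whose sampled content scores at or above the threshold."""
    return {domain for domain, text in site_snippets.items()
            if disinfo_score(text, disinfo, legit) >= threshold}

def filter_bids(bid_requests, blocklist):
    """Pre-bid filter: drop ad opportunities on blocklisted domains."""
    return [bid for bid in bid_requests if bid["domain"] not in blocklist]

disinfo, legit = train_word_scores(TRAINING)
sites = {  # hypothetical domains with sampled page text
    "hoax-news.example": "shocking secret hoax doctors hate",
    "local-paper.example": "city council transit budget study",
}
blocklist = build_blocklist(sites, disinfo, legit)
bids = [{"domain": "hoax-news.example", "cpm": 2.1},
        {"domain": "local-paper.example", "cpm": 1.4}]
print(filter_bids(bids, blocklist))  # only the legitimate site's bid survives
```

The interesting design question is where the filter runs: a shared, ML-maintained list only shifts incentives at scale if demand-side platforms apply it before bidding, which is why the sketch filters bid requests rather than finished ads.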
Admittedly, that’s not the motivation of all disinformation publishers, nor is it possible to categorize every bad actor as “fake news” – the nuances along this continuum can be very difficult to detect.
[Embedded tweet by Shan Wang (@shansquared), February 25, 2017]
In my view, the fundamental problem addressed at MisinfoCon is not that people are creating and distributing disinformation online. It’s that social media makes it so easy to believe disinformation. That’s why I was most intrigued by ideas like 22 Million (the second presentation here), which takes a long view of developing a media literacy curriculum for teens.
Photo credit: Philip Smith