When we in the media business talk about “fake news,” we’re really talking about the results of a host of motivations and techniques that combine to generate many, many problems – each of which requires a nuanced solution.

That was my primary takeaway from this past weekend’s MisinfoCon, a first-time event which Alley is proud to have supported. The MIT Media Lab hosted journalists, technologists, librarians, academics, and even a former fake-news publisher to explore how tools ranging from cognitive science to ad tech can combat the influence of misinformation (unwittingly distributed false info) and disinformation (deliberately distributed false info).

Attendees split into 21 groups and spent most of the weekend working on ideas to present on Sunday afternoon. My group included MisinfoCon organizer Jenny 8. Lee, Bloomberg’s Robbie Brown, and Aviv Ovadya from MediaWindow. Together we explored how a list of disinformation sites could be maintained with machine learning and used by advertisers to keep their programmatic ad spending from flowing to those sites. Our theory was that this would sharply reduce the incentive for profit-motivated publishers to publish false information.
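To make the idea concrete, here is a minimal sketch of how an advertiser-side pre-bid check against such a blocklist might look. Everything here is hypothetical: the domain names, the blocklist contents, and the `should_bid` function are illustrative stand-ins, and in practice the list would be produced by a machine-learning classifier plus human review rather than hard-coded.

```python
from urllib.parse import urlparse

# Placeholder blocklist; in the idea discussed above, this would be
# maintained by a machine-learning pipeline with human review.
DISINFO_BLOCKLIST = {"example-disinfo.com", "fake-news-site.net"}

def should_bid(page_url: str, blocklist: set = DISINFO_BLOCKLIST) -> bool:
    """Return False if the page's domain (or any parent domain) is blocklisted."""
    domain = urlparse(page_url).netloc.lower().split(":")[0]
    # Build every suffix of the domain so subdomains of a blocked site
    # (e.g. news.fake-news-site.net) are also caught.
    parts = domain.split(".")
    candidates = {".".join(parts[i:]) for i in range(len(parts))}
    return not (candidates & blocklist)

print(should_bid("https://example-disinfo.com/story"))   # False: spend blocked
print(should_bid("https://reputable-news.org/article"))  # True: bid allowed
```

A filter like this would sit in the advertiser's or DSP's bidding logic, dropping bid requests for blocklisted inventory before any money changes hands.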

Admittedly, that’s not the motivation of all disinformation publishers, nor is it possible to categorize every bad actor as “fake news” – the nuances along this continuum can be very difficult to detect.

In my view, the fundamental problem addressed at MisinfoCon is not that people are creating and distributing disinformation online. It’s that social media makes it so easy to believe disinformation. That’s why I was most intrigued by ideas like 22 Million (the second presentation here), which takes a long view of developing a media literacy curriculum for teens.

If you’re interested in learning more about the goings-on at MisinfoCon, NiemanLab and Alexandra Samuel have good writeups. FirstDraft’s breakdown of the term “fake news” is also worth a read.

Photo credit: Philip Smith
