UOSM2008: Topic 2 reflection

This post is part of a series published as part of the University of Southampton’s Living and Working on the Web module. To find out more, including links to all of this year’s student blogs, check out the UOSM2008 website.

Understanding news and contemporary media is a particular interest of mine – my ongoing dissertation work examines how journalists and publishers use digital tools, and how they perceive and react to the “post-truth” epidemic – so it’s fair to say this has been my favourite topic so far. With the issue of authenticity so prevalent, my post set out to summarise the assorted facets of fake news, the extent to which social media has played a role in spreading it, and, using a MOOC exercise, how to critically assess what we see online to determine its trustworthiness.

In response, Nikhita Sharma raised a challenging question: why now? After some deliberation, the conclusion that made most sense to me was to look primarily at how the wider cultural context is being reflected online, rather than at any explicit technological factors. On his blog, Tom Pethick noted the associated concept of the Overton window, as explained by Vox’s Carlos Maza.

In my comment on Tom’s blog, I also cited Tom Rowledge’s alarming statistic (from Gabielkov et al., 2016) that 59% of links shared online are never even opened, which he had cleverly buried beneath a bogus headline bold enough to entice me to read further. The interactive activity he embedded also proved a fun, accessible insight into how easily online influence can be built when integrity is set aside.

Throughout the module I have been enjoying Jeremy Luzinda’s witty takes on each topic, and his infographic for this topic is too memorable not to share here.

Five steps to assessing online content. Source: Jeremy Luzinda, 2018

My comment ventured beyond increasing users’ media literacy into how the service providers themselves might be compelled to act. It was unfortunate that we couldn’t discuss this further – I find Facebook’s survey example straightforward enough, but is handing users the power to shape authenticity an irresponsible and flawed approach? Only time will tell…


Word count: 299

UOSM2008: The “fake news” bubble and how to (potentially) handle it

This post is part of a series published as part of the University of Southampton’s Living and Working on the Web module. To find out more, including links to all of this year’s student blogs, check out the UOSM2008 website.

Task: Evaluate how to assess the reliability and authenticity of online information

The “fake news” bubble

“Fake news” is an inescapable term of the zeitgeist, in part thanks to politicians using it to discredit journalists (Juliane Lischka, 2017), Macedonian teenagers creating hoaxes to share widely across Facebook for easy ad revenue (Samanth Subramanian, 2017; Craig Silverman, 2016), discussions around journalistic standards (James Ball, 2017; Mark Di Stefano, 2018), and social networks endlessly vacillating on how best to handle it all (Mark Zuckerberg, 2017; Adam Mosseri, 2018; Alex Kantrowitz, 2018). Google Trends data shows an explosion in related search activity around 2016’s US elections and close associations with Donald Trump, broadcasters like CNN and Fox, and verification services like Snopes.

Web search interest in the term “fake news” between January 2004 and March 2018. Source: Google Trends.
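
For anyone curious how a chart like this is produced, the short sketch below pulls the same interest-over-time series programmatically. It assumes the unofficial pytrends library, which is not mentioned in the post and whose scraping-based API can change without notice, so treat it as an illustration rather than a recipe.

```python
# Sketch: fetch Google Trends interest for "fake news", Jan 2004 - Mar 2018.
# Uses the unofficial pytrends wrapper (pip install pytrends); this is an
# illustration of the idea, not the method used to produce the chart above.
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-GB", tz=0)
pytrends.build_payload(kw_list=["fake news"],
                       timeframe="2004-01-01 2018-03-31")

interest = pytrends.interest_over_time()      # pandas DataFrame, 0-100 scale
peak_month = interest["fake news"].idxmax()   # expect late 2016 / early 2017
print(interest.tail())
print("Peak search interest around:", peak_month.date())
```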

However, the phenomenon’s history runs much deeper. In this video, recorded with Adam Rann and Ryan Dodd for the UOSM2012 module last year, we investigate how it came to be.

How to (potentially) handle it

In a New Statesman extract from his book on the subject, James Ball (2017) points to five actions readers can take to dispel these post-truth trends.

  • Proactively seek content from contrasting sources to prevent filter bubbles, where algorithmic personalisation and our own curation limit the viewpoints we’re exposed to online (Eli Pariser, 2011); a toy simulation of this feedback loop follows the list
  • React with careful consideration, verifying sources and assessing credibility before sharing
  • Improve statistical literacy to better understand poor, misleading, or inaccurate data presentation (John Burn-Murdoch, 2013; Agata Kwapien, 2015)
  • Approach everything – not just what we’re inclined to disbelieve – with scepticism
  • Resist baseless conspiracy theories, lest we help fuel anti-expertise sentiment (Henry Mance, 2016)
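
On the first point, algorithmic personalisation can become self-reinforcing: the more a feed favours what you already engage with, the less of everything else you see. The toy simulation below is a deliberately crude sketch of that feedback loop; the sources, the always-clicking user, and the most-clicked-wins ranker are all invented for illustration and bear no resemblance to any real recommendation system.

```python
import random
from collections import Counter

SOURCES = ["left_leaning", "centrist", "right_leaning"]

def simulate(personalised, rounds=500, seed=1):
    """Toy model: the user clicks whatever is shown. A 'personalised' feed
    always shows the source clicked most often so far; a neutral feed picks
    a source uniformly at random. Returns counts of what the user saw."""
    rng = random.Random(seed)
    history = Counter({rng.choice(SOURCES): 1})   # one seed click
    for _ in range(rounds):
        if personalised:
            shown = history.most_common(1)[0][0]  # the already-familiar source
        else:
            shown = rng.choice(SOURCES)
        history[shown] += 1
    return dict(history)

print("personalised feed:", simulate(True))   # one source dominates entirely
print("neutral feed:     ", simulate(False))  # exposure stays roughly even
```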

How can we apply this framework to an example? Here’s one from the “Learning in the Network Age” MOOC (FutureLearn, 2017):

Fake news example article from the MOOC exercise. Source: FutureLearn (University of Southampton)

The headline may be eye-catching, the URL plausible (KTLA is a genuine broadcaster), and the “sources” reputable (NASA and Caltech researchers). However, there are telltale signs that this is fake, such as the author’s name (Jonah Oaxer = Jon, A Hoaxer), the lack of corroborating external sources, and the extreme language attributed to NASA (e.g. “our days are numbered”). Additionally, other content on the site is outdated (e.g. a privacy policy last updated in 2016) and of a similarly clickbait nature, designed for viral sharing rather than credible journalism.
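
To make that checklist concrete, here is a minimal sketch of how a few of these red flags could be scripted. The field names, phrase list, and cut-off year are hypothetical choices for illustration, and the most telling clue (the “Jon, A Hoaxer” anagram) is precisely the kind of cue that only a human reader will spot.

```python
# Toy red-flag checklist loosely mirroring the checks above. The fields,
# phrases, and thresholds are illustrative assumptions, not a real detector.
SENSATIONAL_PHRASES = ["our days are numbered", "you won't believe", "doomsday"]

def red_flags(article):
    """Return human-readable warnings for a dict with 'body',
    'external_sources', and 'last_updated' keys."""
    flags = []
    if not article.get("external_sources"):
        flags.append("no corroborating external sources")
    if any(p in article.get("body", "").lower() for p in SENSATIONAL_PHRASES):
        flags.append("extreme or sensational language")
    if article.get("last_updated", 9999) < 2017:
        flags.append("surrounding site content looks stale")
    # Cues like the author's name being an anagram of "Jon, A Hoaxer" are
    # left to the human reader; simple scripts will not catch them.
    return flags

example = {
    "body": "NASA scientists warn that our days are numbered...",
    "external_sources": [],
    "last_updated": 2016,
}
print(red_flags(example))
# ['no corroborating external sources', 'extreme or sensational language',
#  'surrounding site content looks stale']
```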

Bibliography

Word count: 300