I wrote previously that the overabundance of information (media or otherwise) had created a crisis. Information overload is now an acute problem, and trust in discerning media providers is increasingly precious. Now a race to quality is emerging, even as the rest of the Web remains rife with clickbait. To understand why, it is useful to hark back to early studies on information overload from the 1960s.
A study from 1965 reveals the crisis of our present
The idea of ‘information overload’ gained currency among researchers in the 1960s. In 1965 three researchers (Streufert, Suedfeld, and Driver) published a study that measured how a sample group of students fared when they were fed more information than they could process[1]. The results were remarkable.
The researchers invited 185 graduate students to participate in a simulated attack on a fictitious island. They divided the students into small teams and asked each team how much information it wanted to receive from its intelligence officers. The students were told that this intelligence would inform their battlefield decision-making. What the students did not know, however, was that the researchers overseeing the study ignored their answers and fed them an ever-increasing number of intelligence reports.
The objective was to feed all teams such a volume of information that they would eventually be overloaded. To assess when this was achieved, the study measured the number of decisions made by each team arising from the information they had received. What mattered here was not whether the students issued the right orders to successfully capture the island, but that they managed to issue any orders at all.
The chart below shows the performance of the groups. Based on previous tests, the students were categorized as very intelligent or as less so. The dotted line shows the performance of the brighter students. The number of decisions they produced clearly rises as the amount of data fed to them increases – but even these bright students hit a peak. Once they received 10 messages per half hour their performance declined steeply. The less bright students (represented by the unbroken line) produced few decisions to begin with, and their output declined precipitously once they received 15 or more messages every half hour.
As the experiment continued, the researchers repeatedly polled each team to ask whether it wanted more or less information. What they discovered was remarkable: the students persisted in requesting more information even after they had been overloaded. Even once the supply of information reached 25 messages per half hour – twice the level that had caused overload – the students requested “a little more” information. The chart below shows these requests (note that the lowest item on the left axis is “some information” rather than zero or none). Looking at this chart it is easy to think of this as information gluttony.
There is a parallel between this information gluttony and the obesity crisis. The consumption of a “bit” (the name Shannon gave to a unit of information) appears to give the consumer a dopamine hit every bit as pleasurable and destructive as a calorie. In the last half-century there has been an explosion of obesity as dietary calories became cheap and plentiful in the developed world. Some consumers adjusted to a world of abundant calories by tempering their appetites, and by trusting in brands such as Whole Foods Market in the US that promise superior nutrition. However, many fell prey to the calorific bounty of industrialized food production. 34.9% of American adults are now obese[2].
Junk information
In our information diet we now face a choice between gluttony and discernment. There are many junk food providers in the new information market. 12 overworked people control what news 1.09 billion people see on Facebook[3]. This is an extreme form of “churnalism”, the rapid production of large amounts of information that is not fact-checked or properly researched. Reporters at many outlets have become “hamster wheel” journalists, racing to release superficial coverage and parsed press releases ahead of the competition.
The Internet relies on a simple technological approach in which messages are cut up into smaller chunks, called “packets”, before being transmitted from one machine to another. The packets travel independently, each finding its own way across the network to its intended destination. Very often only some parts of the message make it all the way. But the Internet uses an “error correction” mechanism to rectify this. Each packet has a small “header” that tells the recipient where it fits within the entire message. Missing packets can be identified and requested again.
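To make the mechanism concrete, here is a minimal sketch in Python – a toy simulation rather than the real TCP/IP machinery, with invented function names – showing how a numbered header on each packet lets the recipient reassemble the message and re-request whatever went missing:

import random

def split_into_packets(message, size=8):
    # Each packet carries a "header" (its sequence number) plus a chunk of the message.
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [(seq, chunk) for seq, chunk in enumerate(chunks)]

def unreliable_send(packets, loss_rate=0.3):
    # Simulate a lossy network: some packets never arrive, and order is not preserved.
    delivered = [p for p in packets if random.random() > loss_rate]
    random.shuffle(delivered)
    return delivered

def reassemble(received, total):
    # The recipient slots each packet into place by its header, then lists the gaps.
    slots = {seq: chunk for seq, chunk in received}
    missing = [seq for seq in range(total) if seq not in slots]
    return slots, missing

message = "Trusted news organisations act as the error correction of the Internet."
packets = split_into_packets(message)
slots, missing = reassemble(unreliable_send(packets), len(packets))

while missing:
    # "Error correction": re-request only the missing packets until the message is whole.
    resent = unreliable_send([packets[seq] for seq in missing])
    slots.update(dict(resent))
    missing = [seq for seq in range(len(packets)) if seq not in slots]

print("".join(slots[seq] for seq in sorted(slots)))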
The news and entertainment carried on the Internet hit us in much the same way that a blizzard of data packets hits a network node: tiny parts of a story flash across our view from different directions. Unless the packets come from a trusted source they have no headers, and no error correction. The data are incoherent; their message is incomplete. Luckily we have a form of error correction that attempts to marshal these incoherent data and fill in the blanks: news organisations. The newsrooms and editorial suites at the heart of trusted media organizations are the error correction of the Internet. This error correction mechanism, however, has been under threat.
Even so, there are reasons to anticipate a recovery.
The race to quality: Netflix, BuzzFeed, et al.
In 2013, when I was working at The Irish Times, I conducted a survey to learn whether readers were suffering from information overload. If they were, then I believed that breaking news was a bad business to be in, because it added to the noise that confounded them. The results showed that readers were indeed suffering from information overload, and that they were more interested in understanding the deeper story – why things had occurred and what the implications were – than in headline news without analysis.
The kind of trustworthy insight that these readers are interested in remains sufficiently scarce that some publications can charge for it. For example, The Financial Times, The Economist, and Vogue remain essential reading for many despite the availability of zero-cost alternatives.
Much as a health-conscious eater rejects artificial preservatives, colors, flavors, sweeteners, and hydrogenated fats in favor of a better nutritional mix, the discerning information consumer should prefer a carefully balanced, (hopefully) trustworthy information diet presented in a reputable publication or site.
And this, perhaps, explains why there is a race to quality underway. Though high-volume, low-quality “clickbait” appears to rule the internet, prominent digital media leaders are investing in high-quality media. The Huffington Post established “Highline”, which it describes as “a new digital home for an old journalistic tradition”. Highline invests time in “big, ambitious pieces intended to change the way you see the world or influence the course of policy”. In much the same vein, BuzzFeed – once a popular clickbait producer – established an investigative reporting team in 2013 and hired Pulitzer Prize winner Mark Schoofs to lead it. One result was a BuzzFeed and BBC collaboration that exposed match-fixing in professional international tennis in January 2016.
The race to quality extends to non-news media. Netflix, which is battling with Amazon Prime and others to become the default streaming service, justifies its investment in expensive content to its investors as the cost of being a “high-quality, curated offering” that has “increasingly licensed content on an exclusive basis”.
In other words, trust and quality are bankable in a world of too much information. This is why media brands of repute and authority that live up to the trust of their users will prevail: they must be worthy of survival if they are to survive.
NOTES
1. Streufert, Suedfeld, and Driver, “Conceptual structure, information search, and information utilization”, Journal of Personality and Social Psychology, vol. 2 (5), Nov. 1965.
2. http://stateofobesity.org/rates/, last accessed 29 June 2016.
3. Daily active users on average for March 2016 (http://newsroom.fb.com/company-info/, last accessed 29 June 2016).
Dr. Johnny Ryan is Head of Ecosystem at PageFair. Previously, he was Executive Director of The Innovation Academy, and Chief Innovation Officer of The Irish Times, where he established a multi-million-euro Big Data programme and introduced startups inside the 154-year-old newspaper. He has a PhD from the University of Cambridge, and was an associate on the emerging digital environment at the Judge Business School of the University of Cambridge from 2011 to 2014. His second book, “A History of the Internet and the Digital Future”, is on the reading list at Harvard and Stanford.