Algorithms that curate content are a popular villain. We have seen repeatedly how Facebook has used them, in both politics and personal life. They are notorious for amplifying human biases – because algorithms run on data, and it is humans who pick the metrics and decide how they are applied.
But what if we could harness human bias to run an algorithm that actually made humans’ lives better? What if we could find places where it’s a good thing to have human views and to ensure that they actually matter more than what the algorithm considers important – while the algorithm takes on tasks that rely less on the power of human discernment?
This is what we set out to do at The Globe and Mail when we built our own homegrown site-automation system, which we called Sophi. It is an AI system that combines the best of human judgment with machine learning, with safeguards built in to prevent the machine from descending into clickbait hell.
Let me explain. Sophi places content on our pages according to what is most valuable to us (where “value” incorporates subscriber acquisition and retention as well as reach on search, social and other platforms). But most importantly, Sophi will only place content on a page within constraints specified by editors.
In other words, the humans entrusted with safeguarding the brand and mission of Canada’s national newspaper are the ones who decide the parameters for what is eligible for placement on a digital page.
This makes logical sense. Editors are the stewards of the brand. They work for us because they care about what we do — and about what they do. They are detail-oriented and mission-oriented professionals. That is why we hire them.
If we want our website to look as though it is being run by thoughtful humans who understand our brand, it is important that we take their thought process into account when setting up an algorithm to do the work of content curation.
So, editors decide what kind of content can go where – the mix of topics, how old an article can be, and so on. We encode this in the algorithm. Then, we let Sophi do its work. We let it optimize the placement of content so that the most valuable articles are getting exactly the right amount of promotion on the right spots of the page and being served to the right segments of the audience. We are now experimenting with true one-to-one personalization as well.
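To make the division of labour concrete, here is a minimal sketch of what "editors set the constraints, the algorithm optimizes within them" could look like. Everything here is hypothetical – the `Article` and `SlotConstraints` names, the blended `value` score, and the greedy placement strategy are my illustration, not Sophi's published internals:

```python
from dataclasses import dataclass

@dataclass
class Article:
    slug: str
    topic: str
    age_hours: float
    value: float  # hypothetical blended score: acquisition, retention, reach

@dataclass
class SlotConstraints:
    """Editor-defined rules for one spot on the page."""
    allowed_topics: set
    max_age_hours: float

def eligible(article: Article, rules: SlotConstraints) -> bool:
    """An article may fill a slot only if it satisfies the editors' rules."""
    return (article.topic in rules.allowed_topics
            and article.age_hours <= rules.max_age_hours)

def place(articles: list, slots: dict) -> dict:
    """Greedy placement: each slot gets the highest-value eligible article.

    The algorithm never sees ineligible articles, so editorial
    constraints always win over raw value scores.
    """
    placed, used = {}, set()
    for slot_name, rules in slots.items():
        candidates = [a for a in articles
                      if a.slug not in used and eligible(a, rules)]
        if candidates:
            best = max(candidates, key=lambda a: a.value)
            placed[slot_name] = best.slug
            used.add(best.slug)
    return placed
```

The key design point the article describes survives even in this toy version: the value model can only rank what the eligibility filter lets through, so a high-scoring piece that violates an editor's rules simply never appears in a slot's candidate pool.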
Does it work? As a journalist, I was surprised to see how well it works. We have been using Sophi to place 99 per cent of the content on The Globe and Mail’s digital pages for more than two years now. Our click-through rates and subscriber acquisition rates are dramatically higher.
But the most remarkable result is that not a single reader has noticed that it is an algorithm placing content on the page. We have received not one complaint or question from our readers (and I can tell you that they are generally not shy about complaining or questioning our news judgment!).
By running our digital pages for us, Sophi has given our editors time to work as journalists – to find news and use their superior storytelling abilities and news judgment to shape coverage. They are back to doing what humans do far better than any machine can: bringing about change by holding the rich and powerful to account and setting the agenda for the national conversation.
But does the algorithm ever make questionable decisions? Yes. Our editors don’t love every single decision it makes. In those rare cases, editors can override Sophi’s choice with a simple Slack command that takes a few seconds to execute. The override is used about one to four times a day – impressively few, considering that we produce more than 100 unique articles a day.
As an aside, what has been really interesting is what the algorithm has taught us. We had assumed that our news content had a short shelf life. We had assumed that readers did not value our service journalism as highly as our news content. We had assumed that wire content would never be as valuable as our proprietary journalism. And we were wrong on all three counts. There was far more value in our content than we had imagined, and it took an algorithm that channeled human intuition to unlock it.