The secret lives of algorithms


Is ignorance bliss?

 

Algorithms are now embedded in our daily lives, and they structure our social worlds in ways that are often barely noticeable. One example is recommender systems, which play a role in who we meet, what music we listen to, what we eat... They’re also responsible for some of our most extravagant-but-completely-unnecessary purchases, recommending the right product to us at exactly the right time. They might even influence how we’re seen in our circle of friends by giving us the right article to share.

As these digital products get more advanced, the technology actually driving them becomes less visible. This can make for a more intuitive user experience, but it also means that the algorithms end up living secret lives behind the interfaces we use. Each one is a black box with a complex set of inputs and outputs, capable of finding results for almost anything in a few milliseconds when it can take us an hour to find something to wear in the morning.

Of course there’s the occasional algorithmic mishap... A family of New Yorkers gets a visit from the police after their web search history demonstrates an interest in pressure cookers and backpacks. A dad finds out about his daughter’s pregnancy after Target’s predictive algorithms send her baby product coupons. YouTube’s autoplay system shares hundreds of disturbing videos with young children who watch their beloved Peppa Pig get tortured by her dentist. And as we give our explanations to the police or drag our screaming children to the dentist, we realize two things: 1) algorithms have very real repercussions in our lives beyond Spotify’s ‘Discover Weekly’ playlist, and 2) considering how ubiquitous algorithms have become, most of us know very little about how they work and what they’re capable of.

We can’t lift the lid and have a look at what’s inside the black box though. Frank Pasquale calls this the ‘One-Way Mirror’ of Big Data. Companies have a wealth of data about how we interact with media, using it to improve their products or to sell to advertising agencies for targeting, but we know little to nothing about how they use this knowledge to influence the important decisions that we — and they — make. The result is that our trust in the black boxes is gradually diminishing. It’s simple: if we keep giving away information about ourselves and only get exploited in return, our faith in the algorithms and the companies will disappear.

Yes Clark, tech firms know who you are

Metrics Matter

Look closer at the world of algorithms and one thing becomes clear: the fuzzy stuff our social world is made of has been translated into hard data and quantifiable metrics. Our complex cinematic preferences are boiled down to categories like “Cult Evil Kid Horror Movies” on Netflix. In a society that increasingly tries to break our social lives down into data that can be computed, the things that cannot be counted are becoming invisible.

Engagement is measured in claps, likes, shares and clicks. This data then forms the algorithmic feedback loop that determines what we see and what remains hidden, what makes the first page and what gets lost in the ocean of SEO. All this then shapes the way we see the world around us. For every search result that appears on top of your list, there are billions that didn’t make the cut.
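To make that feedback loop concrete, here is a minimal sketch in Python of an engagement-driven ranker. The signal names, weights and numbers are invented for illustration – this is not any real product’s formula – but the mechanism is the one described above: items are scored by their engagement counts, the top scorers get shown, and being shown earns them the clicks that boost their next score.

# A minimal, hypothetical sketch of an engagement-driven feed ranker.
# Signal names, weights and item data are invented for illustration.
ENGAGEMENT_WEIGHTS = {"clicks": 1.0, "likes": 2.0, "shares": 3.0, "claps": 0.5}

def engagement_score(item: dict) -> float:
    # Collapse an item's engagement counts into a single number.
    return sum(weight * item.get(signal, 0)
               for signal, weight in ENGAGEMENT_WEIGHTS.items())

def rank_feed(items: list[dict], top_n: int = 10) -> list[dict]:
    # The most-engaged items get shown; everything below the cut stays hidden.
    return sorted(items, key=engagement_score, reverse=True)[:top_n]

def simulate_feedback_loop(items: list[dict], rounds: int = 3) -> None:
    # Whatever gets shown gathers more clicks, which boosts its next score.
    for _ in range(rounds):
        for shown in rank_feed(items, top_n=2):
            shown["clicks"] = shown.get("clicks", 0) + 100  # exposure begets engagement

items = [{"title": "A", "clicks": 10}, {"title": "B", "clicks": 8}, {"title": "C", "clicks": 9}]
simulate_feedback_loop(items)
print(rank_feed(items))  # the early leaders have pulled further ahead

Run it for a few rounds and popularity compounds: the items that happened to lead early keep winning exposure, and everything else stays invisible.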

The Black Box Society by Frank Pasquale


How we measure what gets shown matters. For example, do we want popularity to be the only path to algorithmic success? No matter how popular articles about the royal engagement are, they’re no good as recommendations to people who don’t like the monarchy. To paraphrase George Orwell, all metrics are equal, but some are more equal than others. Popularity as a metric is also easy to manipulate and doesn’t guarantee quality or relevance. Last year, marketing manager Brent Underwood uploaded a photo of his foot as the cover of a self-published, one-page book and asked a few friends to buy it on Amazon. Three sales in the obscure ‘Freemasonry’ category were enough to make it a ‘bestseller’ – that’s how easy it is to game the system. Brent can now tell everyone he has written an Amazon bestseller.
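A toy illustration of how little that takes, with entirely made-up titles and sales figures: if the ‘bestseller’ badge simply goes to whatever sold most inside a category, three sales are enough to top a chart where almost nothing else sells.

# Toy illustration of a per-category 'bestseller' chart; all data is invented.
sales = [
    {"title": "One-Page Foot Book", "category": "Freemasonry", "weekly_sales": 3},
    {"title": "Lodge Histories",    "category": "Freemasonry", "weekly_sales": 2},
    {"title": "A Popular Thriller", "category": "Thrillers",   "weekly_sales": 40_000},
]

def category_bestseller(books: list[dict], category: str) -> dict:
    # Number one is simply whoever sold most within the category, however few that is.
    in_category = [b for b in books if b["category"] == category]
    return max(in_category, key=lambda b: b["weekly_sales"])

print(category_bestseller(sales, "Freemasonry")["title"])  # -> One-Page Foot Book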

In the rush to try and make human satisfaction quantifiable and measurable, many companies have settled for metrics that are easy to record, like clicks. We need to find ones that actually tell the whole story about what users want and need.

Next stop… where?

There has been a lot of recent criticism over Silicon Valley losing touch with the real world, but that isn’t going to stop tech companies playing a central role in deciding what tomorrow will look like. The power of algorithms not just to guess preferences, but to actually shape them, has created the prospect of these tools being used to harvest more and more data and alter users’ behavior with greater and greater success. This is worrying in a situation where fewer and fewer of us trust them to act in our best interests.


“Big data driven decisions may lead to unprecedented profits. But once we use computation not merely to exercise power over things, but also over people, we need to develop a much more robust ethical framework than [the industry] is now willing to entertain.” – Frank Pasquale

A better place


Everybody working to build these powerful algorithms needs to think carefully about the goals and metrics they optimize for, because those choices will shape the world around us, whether in the way we intend or not. We need an approach to this technology that genuinely works for users: one that is transparent and that works towards goals users actually want.


Transparency and accountability are key values companies need to keep in mind when building algorithms and translating fuzzy social concepts into quantifiable data. It’s not straightforward, and unconscious bias can find its way into machine learning data sets very easily – a photo data set in which most of the people shown cooking are women, for example, teaches a model that cooking is something women do, an association it can then reproduce or even amplify.
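One basic, unglamorous safeguard is simply to count how labels co-occur before training on them. The sketch below uses invented annotations and an arbitrary threshold, but it shows the kind of check that would flag an activity whose photos are heavily skewed towards one gender.

from collections import Counter

# Hypothetical photo annotations: (activity, gender) per image. Invented data.
annotations = [
    ("cooking", "woman"), ("cooking", "woman"), ("cooking", "woman"), ("cooking", "man"),
    ("driving", "man"), ("driving", "man"), ("driving", "woman"),
]

def skewed_activities(pairs, threshold=0.7):
    # Flag activities where one gender accounts for more than `threshold` of the images.
    totals, by_gender = Counter(), Counter()
    for activity, gender in pairs:
        totals[activity] += 1
        by_gender[(activity, gender)] += 1
    flagged = {}
    for (activity, gender), count in by_gender.items():
        share = count / totals[activity]
        if share > threshold:
            flagged[activity] = (gender, round(share, 2))
    return flagged

print(skewed_activities(annotations))  # {'cooking': ('woman', 0.75)}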


FAT/ML is one example of an organization working to tackle issues like fairness and accountability in machine learning. Researchers and industry practitioners have joined forces to analyse failures in machine learning models and to make sure that ‘the algorithm made me do it’ doesn’t become the default excuse when systems built on those models make bad decisions. Transparency with algorithmic technology is essential to creating systems that work for humans rather than exploiting them. As Sinha and Swearingen report in their study of transparency in recommender systems, users are more willing to trust a system with a transparent interface that gives them some way to judge whether its recommendations are appropriate.

What do we really want?

When developing an algorithm, it’s also necessary to consider what the metrics being optimized for really say about user behavior. With an engagement model based on views – and with ‘similarity of keywords’ enough to qualify as relevance – incidents like YouTube’s Peppa Pig problem become almost inevitable. Focusing on just one dimension of what makes ‘good’ content will often come at the cost of the bigger picture. ‘Engagement’ has become something of a buzzword in any case, since it can mean so many things – from clicks, to time spent, to most-shared – without telling us much about the user experience.
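To see why keyword overlap is such a thin notion of relevance, here is a crude sketch using a simple Jaccard measure over title words (the titles are invented): a disturbing parody scores nearly as ‘relevant’ to a Peppa Pig viewer as an official episode does.

def keyword_similarity(a: str, b: str) -> float:
    # Jaccard overlap between two titles: shared words / all words.
    words_a, words_b = set(a.lower().split()), set(b.lower().split())
    return len(words_a & words_b) / len(words_a | words_b)

# Invented titles, purely for illustration.
watched  = "peppa pig visits the dentist"
official = "peppa pig goes to the dentist"
parody   = "peppa pig dentist torture parody"

print(keyword_similarity(watched, official))  # ~0.57
print(keyword_similarity(watched, parody))    # ~0.43 -- still 'relevant' by keywords alone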

What if we tried to optimize for real user satisfaction instead?

Parse.ly is looking at a creative re-evaluation of what ‘engagement’ actually means. It is proposing a new metric of ‘engaged time’, which measures the reader’s activity over three ‘heartbeats’ rather than stopping at the click. Parse.ly is trying to differentiate bounce clicks from passers-by, short-stays and long-stays – a first step towards a more comprehensive picture of behavior online, and of what users really value.
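As a rough sketch of the idea – not Parse.ly’s actual implementation, and with made-up intervals and thresholds – imagine the page sends a ‘heartbeat’ ping every few seconds while the reader is actively scrolling or reading. Summing the heartbeats gives engaged time, which can then separate bounces from passers-by, short stays and long stays.

HEARTBEAT_SECONDS = 5  # assumed ping interval; purely illustrative

def engaged_time(heartbeats: int) -> int:
    # Seconds of active reading, inferred from heartbeat pings received.
    return heartbeats * HEARTBEAT_SECONDS

def classify_visit(heartbeats: int) -> str:
    # Crude buckets with made-up thresholds.
    seconds = engaged_time(heartbeats)
    if heartbeats == 0:
        return "bounce"      # clicked through, never actually engaged
    if seconds < 30:
        return "passer-by"
    if seconds < 180:
        return "short-stay"
    return "long-stay"

print(classify_visit(0))   # bounce
print(classify_visit(4))   # passer-by (20 seconds)
print(classify_visit(60))  # long-stay (300 seconds)

The interesting shift here is not the arithmetic but the unit of measurement: the click stops being the end of the story and becomes the beginning of it.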

Content algorithms have had some bad press recently. The apparent success of click-based engagement models, together with the honeymoon period content algorithms enjoyed before the 2016 US presidential election and the ‘fake news’ scandal, led to complacency. Not enough attention was paid to whether the product was actually helping users or just tempting them into clicking. It’s now clear that something needs to change – we need to develop systems that work for people.

We don’t have all the answers yet when it comes to building software for a better future, but we’re going to keep trying to ask the right questions.

 

Read more:

Beer, David. Metric Power

Pasquale, Frank. The Black Box Society

Sinha, Rashmi and Swearingen, Kirsten. 'The Role of Transparency in Recommender Systems'

van Doorn, Niels. 'The Neoliberal Subject of Value'
