I have been pondering data, privacy and all the scandals surrounding Facebook, Google, Amazon and the other tech giants. We all know they're collecting vast amounts of information about us and using their monopolistic power to pervert markets and lay waste to whole industries at their whim.
However, this piece is not about them. It is about us. What’s wrong with us? We’re contained inside surveillance capitalism, yet we seem perfectly content to carry on. I read the other day that Facebook posted record profits despite ALL the stuff that’s been going on with them in 2018. Don’t we care at all? Maybe.
Maybe we just don’t know how to. Maybe Adam Curtis is right. Maybe the world feels too complex and chaotic. He uses the word “HyperNormalisation” - a term coined by a Russian historian writing about the last years of the Soviet Union. In the 80s everyone from the top to the bottom of Soviet society knew that it wasn’t working. They knew it was corrupt, knew that the bosses were looting the system, knew that the politicians had no alternative vision. Everyone knew it was fake, but because no one had any alternative vision for a different kind of society, they just accepted this sense of total fakery as normal.
Sometimes this is exactly what the modern tech and media landscape feels like to me. We know it’s not working for us, we know we’re being treated as commodities - yet, in a state of perfect paralysis, we keep checking Instagram and bingeing Netflix or YouTube.
Escaping the surveillance economy doesn’t feel like an actual option to most people.
So are we simply lost in a giant digital coliseum that houses billions of people who can get any kind of entertainment they like, 24/7? What can we do to break out?
A friend once mentioned that every superhero has a kryptonite - their superpower always comes with a fatal flaw. So what is the flaw in Google’s and Facebook’s near-perfect algorithms and data-collection capabilities?
Data itself, of course.
Their superpower is also their greatest weakness. It relies on automated data gathering and processing at an almost incomprehensible scale. To quote Adam Curtis himself:
Using feedback loops, pattern matching and pattern recognition, those systems can understand us quite simply. That we are far more similar to each other than we might think, that my desire for an iPhone as a way of expressing my identity is mirrored by millions of other people who feel exactly the same. We’re not actually that individualistic. We’re very similar to each other and computers know that dirty secret. But because we feel like we’re in control when we hold the magic screen, it allows us to feel like we’re still individuals. And that’s a wonderful way of managing the world.
However, all of this power rests on the assumption that the data can be trusted. What if we all, passively and at large scale, began to generate data that looked and felt real? Data generated by us - but created sarcastically. Fake, but real. What better way to show your human objection to constant data gathering? Let’s call it digital activism.
I think we are seeing signs of it already. For example, while gathering thoughts for this piece I came across the Parasite.
Bjørn Karmann and Tore Knudsen recently created Project Alias, a teachable “parasite” that is designed to give users more control over their smart assistants, both when it comes to customisation and privacy.
It sits on your Amazon Echo or Google Home and lets you block them and control them in various ways.
I then found Noiszy. Their tagline is brilliant.
Noiszy is a browser plugin that creates meaningless web data - digital noise. It visits and navigates around websites, from within your browser, leaving misleading digital footprints around the internet. It obscures you as an individual, but if enough people did it, could it actually turn Google’s or Facebook’s greatest asset into a giant liability?
What if more than 51% of the data they received was nonsense generated by humans in their own name? Would it break the code and implode the universe?
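Noiszy’s real implementation lives inside your browser, and I haven’t looked at its source - but the core idea is simple enough to caricature. Here is a toy sketch, with a made-up seed list, of what generating a plausible-looking browsing itinerary might involve: random page order and human-scale dwell times, so the noise doesn’t look like a machine on a timer.

```python
import random

# Hypothetical seed list -- in practice any set of innocuous sites would do.
SEED_URLS = [
    "https://example.com/news",
    "https://example.com/sports",
    "https://example.com/recipes",
    "https://example.com/travel",
]

def make_noise_plan(seed_urls, n_visits, rng=random):
    """Build a randomized browsing itinerary: which pages to visit,
    in what order, and how long to linger on each (in seconds).
    Varying order and dwell time makes the noise look more human."""
    plan = []
    for _ in range(n_visits):
        url = rng.choice(seed_urls)
        dwell = round(rng.uniform(5, 90), 1)  # humans rarely leave in under a second
        plan.append((url, dwell))
    return plan

# Executing the plan would just be fetching each URL and waiting:
#   for url, dwell in make_noise_plan(SEED_URLS, 50):
#       requests.get(url); time.sleep(dwell)
```

The point is not the code - it’s that this is trivially cheap to run, which is exactly why it scales to “enough people”.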
Activism like this is good (and fun).
It is also serious business. Maybe, if enough of us did it, it would actually create change. Facebook, Google and Amazon all rely on this data. They sell the very same data that we’re talking about obscuring. Maybe we could expose the fakeness with the same clicks, likes and shares that they crave.
We could also use these techniques for our own personal good. A friend of mine keeps two YouTube accounts. One of them is only used for Ted talks, stuff for work and other things that represent his best self. On the other one he watches football highlights, music, gaming and other entertainment. But he won’t let that “pollute” his other account, because then it instantly takes over. Being in the world of recommender systems, that made me think.
If we can build systems to obscure data, we can also build bots that train these algorithms to show us more of what we’d like to watch, not more of what we actually do. Until my YouTube account has an “exploration” setting I can switch on, I might employ a bot to be more aspirational on my behalf - so to speak. Left to themselves, these algorithms maximize engagement and time spent, but is that really what we want?
Until that utopian future arrives, maybe we have to employ a bot to keep our YouTube and Instagram feeds balanced, even if that content is not always our first impulsive choice. Maybe if we hid our worst tendencies from ourselves, actually putting the device down would be a wee bit easier too?
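To make the idea concrete: here is a deliberately crude toy, not how YouTube actually works, in which a recommender just suggests whatever category you watched most, and a hypothetical “aspiration bot” logs synthetic views of the content you wish you watched until it outvotes your impulses.

```python
from collections import Counter

class ToyRecommender:
    """A caricature of an engagement-maximizing recommender:
    it simply suggests more of whatever you watched most."""
    def __init__(self):
        self.history = Counter()

    def record_watch(self, category):
        self.history[category] += 1

    def recommend(self):
        # Recommend the single most-watched category so far.
        return self.history.most_common(1)[0][0]

def aspirational_bot(rec, target_category, strength):
    """Hypothetical 'aspiration bot': quietly logs extra views of the
    content you *wish* you watched, so the feed tilts that way."""
    for _ in range(strength):
        rec.record_watch(target_category)

rec = ToyRecommender()
for _ in range(20):
    rec.record_watch("football highlights")  # our actual habits
rec.record_watch("ted talks")                # our occasional best self

aspirational_bot(rec, "ted talks", 25)       # the bot outvotes our impulses
print(rec.recommend())                       # -> ted talks
```

Real recommenders weigh far more signal than raw view counts, of course - but the asymmetry is the same: whoever generates the most data steers the feed.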
I’d pay 3.99 a month for that.