Big Tech: The Precarious Balance Between Algorithmic Governance and Democratic Accountability

by Rachel Carr

Over the past months Amazon and Alphabet have reported phenomenal earnings for the second quarter of 2021. These figures were largely driven by Google’s skyrocketing advertising revenues, which grew by 69%, along with Amazon’s advertising income, which increased by 87% from the year-ago quarter. These results reflect the central role that social media and technology have played in society over the last year, not only in offering a much-needed escape from the boredom of COVID-19 lockdowns but also in their newfound role as public forums. Last April, when the Italian Prime Minister decided to address the nation on the latest lockdown measures, he elected Facebook as his chosen medium of communication. Similarly, the British government requested Amazon’s assistance in distributing emergency medical supplies, and Google leaped at the chance to assume its role as a mouthpiece for public service announcements across the globe.

However, as the “Gordian Knot” that entangles Big Tech with its societal consequences tightens further, we should consider the motivations behind the growing presence of these tech heavyweights in our lives. What exactly are these tech giants selling to their customers, and what are the potential consequences?

To understand what triggered the phenomenal rise of Big Tech superpowers we must first cast an eye back to April 2000, when eager dot-com investors watched in horror as the stock market imploded and the value of their portfolios plummeted. As the mirage of many of Silicon Valley’s superstar valuations began to evaporate, it became clear that, in a textbook case of irrational exuberance, venture capitalists had been so blinded by the lure of the internet’s potential that they had wildly overestimated the intrinsic value of their investments.

Surviving tech firms, struggling to justify their value to furious investors, began to search desperately for a port in the storm as the turmoil raged. Amongst them was Google, today’s search-engine giant, which had been incorporated a mere two years prior. According to Shoshana Zuboff, the Harvard Business School professor and author of ‘The Age of Surveillance Capitalism,’ the dot-com bubble triggered Google’s understanding that its true value lay not in the licensing deals it had been selling, but rather in its vast stores of behaviourally rich data. Despite the company’s founders previously condemning search engine advertising as “inherently biased towards the advertisers and away from the needs of consumers”, the firm went on to capitalize on just that, with Facebook, Amazon and Twitter soon following suit.

Zuboff has coined the term ‘surveillance capitalism’ for this commoditization of individuals’ data into behavioural products that can be sold in ‘predictive futures markets’. In recent years the cautionary tale of “if you aren’t paying for a product, you likely are the product” has been widely circulated. Of this the general public seems to be reasonably cognizant: an hour spent on Skyscanner will likely flood your feed with holiday advertisements, and a trip to the ASOS homepage will litter your desktop’s ad space with outfit ideas. However, it was what the “FAANGs” discovered next, and the lucrative source of the last decade’s soaring tech valuations, that is likely to induce the most surprise. Google and its peers deduced that while they could use their data to predict the future behaviour of users with reasonable accuracy, the easiest way to guarantee the precision, and thus the value, of those predictions was to influence the behaviour of users to match the algorithm’s forecasts.

An example of the application of this insight was the addition of a number of emotional reactions to Facebook’s ‘like’ button. While this modification poses as a harmless quirk designed to allow users to further engage with the platform’s content, it also assists Facebook’s algorithms in accurately identifying and collating data on human emotions. The opportunities resulting from the utilization of this data are massive. Users can be shown posts designed to induce feelings of discomfort or sadness, followed by sponsored content intended to take advantage of this vulnerability. In a similar vein, Google has been known to display ads for a specific restaurant and then reroute a user’s map journey to take them past the suggested establishment: a perfect example of the use of surveillance and behavioural modification to maximise profits at the expense of individual autonomy.

The implications of these privacy infringements extend beyond the encouragement of the occasional impulse purchase. In 2017 the maker of the autonomous vacuum ‘Roomba’ came under fire when the company announced its proposal to sell floor plans of customers’ homes, scraped from the device’s mapping capabilities. Later that same year the curtain fell on the infamous Cambridge Analytica scandal, revealing the role the data analytics firm had played in exploiting the data of 87 million Facebook users to manipulate the outcomes of both Trump’s 2016 presidential campaign and the Brexit vote. This proof of intentional cyber manipulation, designed to promote the so-called ‘splinternet’, revealed the power of Big Tech behavioural nudging to distort democratic processes. In fact, in 2019 Mark Zuckerberg’s former advisor Roger McNamee publicly criticized Facebook for its relentless pursuit of customer data through increasingly illicit means, claiming that the company’s algorithms were “honed to manipulate user engagement with practices that were eventually commandeered by bad actors to infiltrate the national (US) consciousness and disfigure political discourse.” Earlier this year Zuboff subtitled her New York Times article with the ominous statement, “We can have democracy, or we can have a surveillance society, but we can’t have both.”

Whilst the extent of the influence of Big Tech on the democratic process is yet to be determined, it is undeniable that tech companies have amassed vast stores of behavioural data which can spell danger in the wrong hands. As a result, there is an argument for placing certain social obligations on companies with such data privileges; in other words, “With great power comes great responsibility”. Covid-19 revealed Big Tech for what it truly is: a 21st-century public forum. Due to their wide-reaching social impacts, large technology companies should be answerable to the governance of regulatory bodies. If banks and electricity, water and other utility companies are regulated because of the impact of their services on a nation’s citizens, then there is reason for Big Tech to no longer be able to evade such scrutiny.
