Regulating tech giants: much more than a legal issue
The position of tech giants such as Twitter and Facebook is under increasing scrutiny, and for good reason. Both their political power and their market power have in many ways outgrown our conventional governmental structures. Albeit formally neutral, several social media platforms have become a decisive element in the core political processes of Western and non-Western countries alike. Their influence on our daily lives is unprecedented, and in recent years we have seen that they can make or break public leaders, sway elections and even fuel revolutions.
I remember quite well the many (Western) commentators who cheered and praised Twitter for its key instrumental role in the rise of the Arab Spring, the series of anti-government protests and uprisings that started in Tunisia in late 2010 but quickly spread across much of the Arab world. While the fire spread across Northern Africa, the phenomenon soon became known as the “Twitter revolution”. This social platform undoubtedly gave the oppressed in many countries the possibility to connect, to organize themselves and to inspire each other to rise against their rulers. In the West, many regarded these developments as proof of their prophecy that social media would eventually only contribute to the integration and democratization of the world at large. This was in 2010-2011, but much has changed since. After a few hopeful months the Arab Spring turned sour (with the partial exception of Tunisia), and contrary to its ideals the revolution ultimately produced not democratic but even more repressive regimes, and worse living conditions for tens of millions of Arabs.
A few years later, the Western world had its own (revolutionary) experiences to cope with, in which social media played a central role. Today, hardly anyone questions that the Brexit referendum was won by the “Leave” campaign thanks in part to its use of Facebook user data. And while Barack Obama owed both his presidential wins (in 2008 and 2012) in large part to his online campaigns, a shockwave hit the US (and the world) when the political outsider and populist Donald Trump got himself elected in November 2016, thanks in no small part to the massive use of his Twitter account.
While a decade ago the West beheld with excitement the toppling of dictatorial regimes by people whose most important “weapons” were cellphones with social media apps, it now finds itself in the middle of a storm of fake news, competing information bubbles and a disintegrating “United” States of America.
These developments pose difficult dilemmas for governments: should they curb tech giants and make them responsible for third-party content (and behavior), or should they be left to self-police their users? Thus far the US has taken the latter approach, leaving things “to the market” and to the user policies of the individual companies. This liberal approach came with a price: a free flow of disinformation and hatred that reached millions of Americans. Positioning themselves as “neutral”, the platforms let this situation go on for years. Only after the Capitol Hill riots of January 6th did most major social media platforms ban Donald Trump’s accounts, because of his role in inflaming the violence that posed a serious threat to the heart of the world’s most powerful democracy.
After the unprecedented move by Twitter, which was soon followed by other platforms, CEO Jack Dorsey stated in a series of tweets, “I do not celebrate or feel pride in our having to ban @realDonaldTrump from Twitter, or how we got here. After a clear warning we’d take this action, we made a decision with the best information we had based on threats to physical safety both on and off Twitter. Was this correct? […]. Having to take these actions fragment the public conversation. They divide us. They limit the potential for clarification, redemption, and learning. And sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”
In these few tweets Dorsey asks himself, and all of us, difficult but essential questions about the future of free speech and with that the functioning of our modern, 21st century democracy and society. Was the ban correct? Should an individual company have the power to curb free speech at its own discretion, or should governments start to regulate more?
In an even more surprising statement, German Chancellor Angela Merkel fiercely opposed the ban. Her spokesman, Steffen Seibert, stated that “the right to freedom of opinion is of fundamental importance. Given that, the Chancellor considers it problematic that the President’s accounts have been permanently suspended.” This, of course, does not mean that Merkel agrees with Trump’s opinions or actions; rather, she pleads for a more public debate on, and approach to, what can and cannot be said. In other words: it is up to lawmakers to set the boundaries of online speech, based upon the public values and parliamentary debate of a given society, and not up to commercial companies pursuing their own interests.
Regarding the matter of public regulation, the Americans and Europeans seem to steer in different directions. Up until the time of writing, the US Government has not made any serious attempts to regulate social media on this matter. This is mainly because Americans have always found it difficult to reconcile any regulation of this kind with their First Amendment (which protects Americans in their free speech). However, the First Amendment only protects individuals’ speech from suppression by the U.S. government, not from restrictions imposed by commercial entities, as Dipayan Ghosh also sharply noted in his recent Harvard Business Review article, Are We Entering a New Era of Social Media Regulation?
Meanwhile, most Europeans agree with Merkel – that the boundaries of online speech should be set by politics, not by the market – and in December 2020 the European Commission proposed two legislative initiatives: the Digital Services Act (DSA) and the Digital Markets Act (DMA). This broad package of proposed legislation aims to deal with many more tech-giant-related issues than just responsible public speech and the role that social media platforms play in the online public debate. According to the Commission’s own website, the two main goals are “to create a safer digital space in which the fundamental rights of all users of digital services are protected” and “to establish a level playing field to foster innovation, growth, and competitiveness, both in the European Single Market and globally.”
The Digital Services Act deserves special attention here. It is designed to force tech companies to take more responsibility for illegal behavior on their platforms. Under this legislative proposal, companies that do not police themselves (and their users) will face fines of up to 6% of their global revenues. The good news is that this approach at least makes companies, in a way, co-responsible for the content they publish and circulate, instead of giving them a free (and profitable) ride while society ultimately pays the price.
But this is just the start of the debate on online free speech. Because in the end, it is we, the citizens of our societies, who create the content. Barring baseless, provocative and hateful content from public platforms is a good start, at least aligning digital conversations with the wider legal framework that is already in place in the physical world – what is illegal elsewhere should also be illegal online.
But even more important is making people more aware of the power of such content, and of the consequences of their (sometimes impulsive) posts. This is an essentially cultural task, one where legislative intervention and legal enforcement can only go so far. May the Capitol Hill riots be a lasting reminder for us all.