Power, Parler, and the Problem of Big Tech
Perhaps the best of a set of bad options is to reconsider the role of antitrust laws.
By Dr. Brian Dellinger
Over the course of 2020, the previously minor social media application Parler rose to national prominence. The site served as a smaller, right-leaning mirror to Twitter, attracting an audience that included (among others) both U.S. senators and QAnon conspiracy theorists. Where Twitter forbade referring to a transgender person by biological sex, Parler reportedly banned users for mocking Republican congressman Devin Nunes. By the end of the year, the app had hit nearly three million daily users.
That changed after the January 6 attacks on the Capitol, amid allegations that the app provided a haven for insurrectionist sentiments. Responses were swift and comprehensive. On January 8, Google announced that it was removing Parler from the Google Play Store. Similar notices quickly arrived from Amazon, Apple, and other technology companies. Parler could no longer be downloaded from the major mobile stores, could not authenticate its existing users, and could not even host its content. Existing posts were lost. Over the course of 48 hours, the app functionally ceased to exist.
It is sometimes difficult to assess conservative claims of “big tech censorship.” On one hand, Parler’s erasure came only a day after Facebook suspended the account of President Donald Trump, and the same day that Twitter joined in that ban. On the other, the bans followed Trump’s defense of the Capitol attacks as “the things … that happen when a sacred landslide election victory is … stripped away.” To ban a sitting president is a drastic step, but it is hardly less extreme for that president to praise thugs who stormed the congressional building. Likewise, to the extent that claims of Parler’s complicity in the attacks are merited, its erasure, too, is justified.
Yet these bans come even as tech giants tolerate or cut deals with gratuitous moral evils. Apple, for instance, benefits from Uyghur labor camps, while Twitter continues to host the Ayatollah Khamenei. Meanwhile, other cases of removed content (such as Amazon's refusal to sell the transgenderism-cautious book When Harry Became Sally while carrying, say, Mein Kampf) seem far less defensible.
Perhaps the underlying question is not whether a particular case is justifiable, but whether a handful of technology companies fundamentally control too much of the flow of information. Even where the bans described above are reasonable, the multi-company coordination that enabled them could potentially target any new service, with far less justification and no clear legal recourse. Indeed, the existing media giants arguably have good incentive to throttle upstarts in this way: by doing so, they limit the competition.
Such possibilities limit the strength of the usual free market response to corporate politicking: “If you don’t like it, build an alternative.” Parler’s troubles suggest that this solution is less viable than might be hoped. Developing a successful social media site is already a substantial challenge; to do so while also creating a new mobile storefront, authenticator, cloud service, and so on seems simply untenable.
A solution to this problem, unfortunately, remains elusive. Some conservatives suggest that the above cases justify the repeal of Section 230 of the Communications Decency Act. Under Section 230, websites do not risk liability for deleting content, even where that content is protected by the First Amendment. Critics claim that the act was intended to target only broadly objectionable material, with the expectation that tech firms would be politically neutral in what they removed; this expectation, they say, clearly has not been met.
As I have argued elsewhere, such arguments are misleading and wrong-headed. Section 230 explicitly protects removal of content for any reason, not merely a “reasonable person” standard of undesirability. Its repeal would not prevent Twitter from putting “content disputed” warnings on Tweets (since such labels are Twitter’s own speech, and so protected by the First Amendment); nor Amazon from simply refusing, like any retailer, to carry certain products; nor Google from doing business with whom it pleases. On the other hand, repeal could place heavy burdens on new social media competitors, which may explain Facebook CEO Mark Zuckerberg’s support for replacing Section 230 protections. Above all, it seems bizarrely short-sighted for Republicans to urge greater government interference in social media in this moment. One questions whether they expect the Democrat-controlled Congress and presidency to be more friendly to conservative speech than the status quo.
Perhaps the best of a set of bad options is to reconsider the role of antitrust laws. At issue in all the cases above is the ubiquity of the big tech firms: that decisions by a small group of companies can render information meaningfully unreachable or invisible. Amazon is at once a premier bookseller (accounting for over half of all American book sales), a nearly $400 billion-a-year general retailer, and the provider of a full third of all cloud infrastructure services. For its part, Google's parent company Alphabet might simultaneously own the servers hosting a site, the ads running on it, the browser loading it, the physical cables transmitting it, and, of course, the search engine that located it. It controls a plurality of the market in several of these fields. Indeed, Google is already facing multiple lawsuits alleging anticompetitive behavior.
Careful revision of antitrust law might reopen these markets, with competition encouraging a range of stances, political and otherwise. At minimum, such changes seem less fraught than inviting the government into the business of judging neutrality.
Dr. Brian Dellinger is an associate professor of computer science at Grove City College. His research interests are artificial intelligence and models of consciousness.