Rethinking Online Interactions: Healthy Digital Ecosystems Should Matter Like Clean Air and Water

Image: Mark Zuckerberg at a congressional hearing, surrounded by media (AP Photo/Andrew Harnik)

By Lucy Norton ’21

Last week, computer science professors Eni Mustafaraj and P. Takis Metaxas reflected on the roles of disinformation and tech companies in the January 6 attack on the United States Capitol. In the second part of their discussion, they address tech regulation and the future of internet speech. Mustafaraj’s most recent research touches on this topic. Featuring contributions from three Wellesley students, her study examines the skewed amplification of far-right news sources in Google’s Top Stories feature.

Should tech platforms be regulated by the government? Would that help stop the spread of disinformation?

P. Takis Metaxas: I would say yes, but we need to be careful with regulation because we have not always done it well. For example, the 1996 regulation that treated software platforms as simple bulletin boards, where anyone could post a message and anyone could read it if they so wished, is dangerously outdated. The algorithms behind today’s social media practice selective and targeted amplification of content. That is part of their business model and the secret of their success. They select what we see; they are editors of what we see, not just bulletin boards. They are targeted, serving different content to each of us and thus enabling propaganda. And they amplify the messages of some sources over others, creating and promoting celebrities and populists. If left unregulated, selective and targeted amplification is extremely dangerous to our society and to our democracy.
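
To make that distinction concrete, here is a toy sketch (purely illustrative, not any platform’s actual code) contrasting a bulletin-board feed, which shows everyone the same posts in chronological order, with an engagement-optimized feed that scores each post differently for each user and pushes already-viral content further up. All names and the scoring rule are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    timestamp: int      # posting time (seconds since some epoch)
    engagement: float   # likes/shares so far -- a toy popularity signal

@dataclass
class User:
    name: str
    affinities: dict = field(default_factory=dict)  # inferred topic -> interest score

def bulletin_board_feed(posts):
    """The 1996 mental model: everyone sees the same posts, newest first."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def targeted_feed(posts, user, k=10):
    """The modern model: each user sees a different, algorithmically
    chosen slice, and already-popular posts are amplified further."""
    def score(post):
        # crude "predicted interest": best affinity for any topic word in the text
        interest = max(
            (aff for topic, aff in user.affinities.items() if topic in post.text),
            default=0.1,
        )
        return interest * post.engagement  # past engagement feeds back into reach
    return sorted(posts, key=score, reverse=True)[:k]
```

The structural point of the sketch: in the second function, it is the platform’s scoring rule, not the reader or the clock, that decides which messages are seen and multiplied, and each user receives a different ranking.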

However, regulation is hard when we expect the solution to an extremely complex problem to be a few simple rules that hold up over a long period of time. Technology changes our lives every day, all over the world. We have to stop thinking of regulation as the static action of a single state.

Eni Mustafaraj: The CEOs of the biggest tech companies are on record saying they want regulation. For example, YouTube’s CEO Susan Wojcicki told 60 Minutes that the government is free to tell them “this is how you should handle online harassment; this is how you handle online hate speech.” “We would follow those laws,” she said. “But we don’t see those laws.” Facebook’s Mark Zuckerberg has said the same. Thus, the question is not whether we should have regulation, but what kind of regulation, and to whose benefit. The last time Congress chipped away at the protections of Section 230, it did so in the name of curbing online sex trafficking. However, the bill, known as FOSTA-SESTA, ended up immediately harming sex workers, as our Wellesley colleague Jennifer Musto has pointed out.

In this politically polarized environment, one can easily imagine regulation that seeks to classify speech like that of the #MeToo movement or Black Lives Matter as hateful speech that incites violence. We have already seen such efforts from Republican lawmakers. Thus, while nearly everyone agrees that regulation is needed, I don’t think there will be a consensus anytime soon about its nature and reach.

What are the effects of banning former President Trump from social media platforms? What precedent does that set? Is it a reasonable way of stopping disinformation?

Metaxas: I do realize that silencing any particular person online, especially the president of a country, is neither an easy nor a light decision for a private company to make. In general, it should not be allowed because it gives tremendous power to unelected individuals to shape our view of the world. However, the particular decision to silence Trump in the middle of the insurrection he had provoked was absolutely correct: It made it harder to turn the insurrection into a coup. The stakes were way too high to do nothing. If they had not banned Trump at that moment, they would have enabled the demise of our democracy.

Mustafaraj: This is not the first instance of so-called deplatforming. It happened before to internet agitators such as Alex Jones and Milo Yiannopoulos. They didn’t have Trump’s political power, so they were easy targets. Had Twitter consistently been sensitive to the harm that online disinformation causes to democracy, it would have banned Trump when he was emphatically amplifying the racist “birther” conspiracy theory in 2011–2012. The terms of service make it simple to ban anyone who breaks them; there is no complication in that. But a company’s market capitalization depends on how engaged its users are, and misinformation-driven engagement has, unfortunately, proved very lucrative.

How have discussions of ethics and misinformation online changed in recent years, and how might they evolve?

Mustafaraj: For a long time, most people were dismissive of internet speech. They believed that what was written on the internet was not real life, that it could be switched off at any time, and that one could walk away unharmed. This is why the harassment of women and people of color on the internet was not initially taken seriously, and why many seemingly innocuous conspiracy theories, from the faked moon landing to chemtrails, kept inspiring more outrageous ones.

But the internet is real life, and it is capable of changing us. We should start considering our internet interactions as part of our existential environment, like the air we breathe or the water we drink. An apt metaphor for the misinformation circulating on platforms like Facebook is unfiltered lead in drinking water: it slowly poisons minds because it is unmarked and our neighbors are sharing it.

Wellesley just established the Frost Center for the Environment to work toward a “just environmental future.” I hope to see similar interdisciplinary centers envisioning a “just and fair online environment for our future” as well.

Metaxas: We have been learning a lot about the way our brains work, the accelerated effects of technology on our own human nature, our mental limits, and our sense of reality. The future is not predetermined; it is the result of our individual and collective actions. There are many possible futures, and what really happens depends on us all. All I can say is that with the development of machine learning, the current popular form of AI, we are living through the most important development since the invention of writing, the technology that allowed the past to influence the societies of the future. If people lived in a single democracy on the planet today, I would be very excited about the future. With our current divisions into states, religions, races, and classes, I am not very optimistic. But we have to try to work toward a common, just, and equitable good. Let’s start by reducing polarization across all divisions.