
Can a new law prevent the spread of Fake News?

Last week, the “US Supreme Court [shot] down cases on social media liability” (Al Jazeera). It unanimously ruled that social media platforms (in these cases, Twitter and Google) cannot be held responsible for content posted on their websites. Meanwhile, Brazil is witnessing a heated debate over a bill that seeks to combat fake news by holding tech companies accountable for misinformation on their platforms.

Section 230 of the Communications Decency Act was used to argue that the companies are not publishers of content and are therefore not responsible for the speech posted on their platforms. At the same time, that section gives platforms the right to remove any content they consider objectionable, for any reason.

Can this kind of freedom in content moderation, which Section 230 guarantees to social media companies, be reined in by an anti-Fake News Bill such as the one proposed in Brazil? How would the moderation process ensure the removal of harmful content but not of content about harmful practices? For example, how will we remove terrorist content but not content about terrorism? How will we remove racist content but not content about racism? At the scale at which these companies operate, answering these questions is far from simple.

There is still no legal consensus on how to address this issue in the global digital field; therefore, it is worth analyzing why so many people gravitate towards online content that is radically harmful to society. What are the values that we practice and share in our communities, and how do we fight for them in our day-to-day lives?

The Brazilian anti-Fake News Bill is a polarizing subject. Misinformation permeates the digital universe, and opinions tend to run in favor of the law on the left and against it on the right. To go beyond binary politics without ending up in the 'center', we can acknowledge that fake news is a serious problem while still questioning the extent to which this law is effective, practicable, and durable.

Does the law really solve the problem of disinformation online?

It is worth remembering that the law is often deliberately vague so that its interpretation can be flexible. This puts a lot of power in the hands of lawyers and their argumentative skills, so people who do not have access to legal professionals with extensive experience, knowledge, and time are at a disadvantage. Because of this, questioning an anti-Fake News Law such as this one is not about protecting tech companies or whoever uses these tech tools to commit atrocities. It’s about finding solutions that don’t depend on a judicial system which has proven, time and time again, that it cannot be relied upon.

Issues the anti-Fake News Bill addresses vaguely

— How to identify inauthentic accounts “without prejudice to the guarantee of privacy” and without collecting even more user data? What criteria are used to identify whether an account was “created or used for the purpose of spreading misinformation”?

Defining purpose can be extremely arbitrary, and it requires detailed investigation, along with a motive to trigger that investigation. An activist who uses a pseudonym may be impossible to distinguish from a Bolsonarist troll except by judging the nature of the opinions each one shares on the networks. Distinguishing opinion from misinformation requires critical analysis from everyone, not just legal professionals or employees of tech companies.

— Which tools will be used to guarantee that there will be no “restriction to the free development of the individual personality, artistic, intellectual, satirical, religious, fictional, literary or any other form of cultural expression”?

If there were a checklist distinguishing an 'inauthentic' account from a satirical one, or the blatant dissemination of disinformation from 'intellectual development', the inauthentic and the blatant would have a handbook on how to operate legally, while 'satirists' and 'intellectuals' would migrate to other platforms for disseminating information. Maybe that is why a law rarely manages to be both specific enough to be effective and vague enough to be interpreted across different contexts.

— What are the methods of “checks from independent fact-checkers with an emphasis on facts”? How is “critical fact-checking” carried out, and how will the legal entities tasked with fact-checking be selected?

Excessive use of the word fact does not bring you closer to one; if anything, it pushes you further away. In science, it is understood that a fact exists in a context, and that it can and should be questioned at any time. A fact probably boils down to evidence that has reached a certain level of consensus, a consensus that can be revoked at any time, because how we contextualize and interpret evidence is subject to human error. No group of legal entities can exercise the function of defining facts across the internet. What we can do is cultivate critical analysis, so we can identify manipulation techniques, missing sources, speculation, conflicts of interest, and so on.

— What constitutes use of a platform that is incompatible with human use? Is any post-scheduling tool considered an 'artificial disseminator'?

Artificial disseminators can facilitate the work of communication and media professionals. One of the tricks of entrepreneurship is to “find what works and automate it”. If, for example, you enter a virtual store, put something in the cart, and leave without buying it, an automatic message from the store may appear in your inbox reminding you of the product you left there. Automated emails and posts are the norm in the online industry, and it is essential to ask what number draws the line between human and inhuman automation.

The cost for “application providers”

According to this bill, Brazilians will not be able to participate in WhatsApp or Telegram groups with more than 256 people, or to forward a message to more than 5 people. During elections, forwarding is limited to one person or group. That is because the bill applies to platforms with more than 2 million users in Brazil, a threshold both WhatsApp and Telegram exceed.

In response, Telegram sent a bilingual message to its users last week accusing the Bill of censorship, among other things. The next day, it announced that it had received “an order from the [Brazilian] Supreme Court that obliges Telegram to remove [the] previous message about PL 2630/2020 and to send a new message to users” stating that the earlier message “characterized FLAGRANT AND ILLICIT DISINFORMATION”.

Analyzing Telegram's first statement, I see nothing more than a company trying to protect itself financially, even though it never says so directly. The court also does not mention the financial scope of this debate, even though it is clear that the main motivation of these companies is profit: the political-electoral debate only becomes a priority when it threatens this primordial economic motivation. There is no way this law won't cost these “application providers” a lot of money, in programming, in the human labor of monitoring, and in the potential loss of users.

The reality is that whoever leaves these social media platforms because they cannot widely disseminate questionable content will find another vehicle – any other vehicle, as we have seen happen throughout the history of mainstream media.

The Telegram statement is not disinformation; it is an interpretation of the law from the perspective of an agent with an obvious conflict of interest. It is a serious problem when we fail to distinguish between differing opinions, misinformation, and fake news. Not all misinformation is fake news, and not every opinion from people and institutions that spread certain narratives out of self-interest amounts to misinformation. If the government starts using the terms ‘disinformation’ or fake news to describe everything that opposes it, we are headed towards something akin to totalitarianism.

What we need is not a government or a set of legal professionals with the power to decide what is true and what is fact. What we need is a population with access to the health and education resources necessary to develop critical thinking skills. Does this law really stimulate the population's capacity for critical analysis, or does it just seek to claim part of the power over the population that tech companies have conquered? Or worse, is it nothing more than the posturing of a government that wants to demonstrate great effort while having no intention of enacting structural change?

Whenever we come across online content, we have the opportunity to analyze it, ask questions, and reflect. This process requires stimulation, training, and access to diverse knowledge that goes beyond any particular post, fake news included. Knowing how information sources are accessed, how communication strategies are developed, and even how websites work can make all the difference in developing a critical sense about what we see online. A law cannot fill the chasm created by the age-old inequality between the minority that controls the narrative and the majority that consumes it. The democratization of narrative control will be achieved through a complete restructuring of the distribution of resources in society, not through a dispute between agents that already wield monumental power.


Mirna Wabi-Sabi

Mirna is a Brazilian writer, site editor at Gods and Radicals and founder of Plataforma9. She is the author of the book Anarcho-transcreation and producer of several other titles under the P9 press.