Paper: European approach to online disinformation: geopolitical and regulatory dissonance

Have you come across this paper, "European approach to online disinformation: geopolitical and regulatory dissonance"?
https://www.nature.com/articles/s41599-023-02179-8

It points out that the EU’s approach to fighting disinformation is based on two different and sometimes conflicting ideas:
1. The first sees disinformation as a serious threat to democracy, justifying strong, decisive action through strict or even forceful measures.
2. The second relies on digital platforms to self-regulate and act voluntarily, with minimal intervention from authorities.

Even with the recent adoption of the Digital Services Act (https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/digital-services-act_en), which increases regulation of online platforms, these two approaches continue to coexist. The law creates a "co-regulatory framework," meaning it combines binding regulation with voluntary actions by platforms. However, the tension between these two approaches can create internal conflicts and confusion, which may undermine the future success of the EU's efforts to combat disinformation.

Here are some examples of potential conflicts and confusion arising from the EU's mixed approach to disinformation:

– Inconsistent Enforcement: Platforms might follow voluntary guidelines to fight disinformation but still face penalties if they don’t meet stricter regulations. For example, a social media company could implement its own measures to flag false information, yet still be fined for not aligning with EU laws, creating confusion about compliance.

– Different Standards Across Platforms: Some platforms may take voluntary measures seriously, while others do the minimum required by law. This could lead to a situation where one platform effectively reduces disinformation, while another allows it to spread, confusing users about what to expect.

– Public Trust Issues: If the EU mixes strict laws with voluntary self-regulation, the public might distrust the efforts. For instance, if a platform claims it’s following EU guidelines but fails to stop disinformation, people might lose confidence in both the platform and the EU’s effectiveness.

Do you agree that these "potential" conflicts are real? How would you propose tackling them?
