In 2018, the European Commission (EC) created the EU Code of Practice on Disinformation (Code), the first self-regulatory instrument of its kind, intended to motivate companies to collaborate on solutions to the problem of disinformation. Twenty-one companies agreed to commit to the Code, and its rules produced tangible results: Facebook, Google, and Twitter developed Ad Libraries to catalog political advertisers; Twitter published its takedown data; and multiple companies expanded their fact-checking tools and misinformation labels and created journalism and media literacy educational programs. However, the initial Code faced criticism, mainly because companies were not required to report quantitative outcomes, making it difficult to measure the impact of the new governance.
In response to these criticisms, a group of thirty-four stakeholders, including companies, trade associations, industry associations, and international organizations, revised the original EC Code. On June 16, 2022, the Commission released the amended and strengthened Code, which all companies operating in the EU were given the opportunity to sign. The updated Code contains 44 commitments and 128 specific measures, compared to the original Code’s 21 commitments. Signatories will have six months to fulfill the commitments and measures, and seven months to provide baseline reports to the EC, so implementation will begin in early 2023. At this time, it is also not certain how many companies, and which ones, will comply with the Code’s changes, which are unpacked in the next section.
Demonetization of Disinformation
The 2018 Code made a commitment to “deploy policies and processes to disrupt advertising and monetization incentives for relevant behaviors, such as misrepresenting material information about oneself or the purpose of one’s properties.” However, this commitment framed the targeted behavior broadly and did not require signatories to adopt any specific measures to meet the goal, instead suggesting that companies “may, as appropriate” implement suggested policies such as brand verification tools and/or engagement with third-party verification companies.
In the revised version, these policies are reframed as specific commitments that companies must either sign onto or explain why they have declined. Instead of simply disrupting false information, the 2022 Code aims to “defund the dissemination of Disinformation”, which it defines as “misinformation, disinformation, information influence operations and foreign interference in the information space.” Using clearer, standardized terms like disinformation helps better communicate the goals of the Code. Additionally, whereas the previous Code tasked companies with giving clients mechanisms to monitor their ad placement, the updated Code also makes signatories responsible for verifying where ads are placed and for avoiding placement next to disinformation content (Measure 1.1).
Some of the other new specific measures include:
- Creating stricter eligibility requirements and content review mechanisms for content monetization and ad revenue share programs (Measure 1.2);
- Placing advertising through ad sellers that have taken proven measures to place advertising away from Disinformation content (Measure 1.4);
- Giving independent auditors fair access to their services and data (Measure 1.5); and
- Identifying content and sources that distribute harmful disinformation (Measure 2.2).
Increased Transparency of Political Advertising
This section is greatly expanded from the original Code’s three commitments. One of the three remains unresolved: reaching a common definition of political and “issue-based advertising.” The Code stipulates, however, that if a common definition has not been adopted within a year of the Code’s operation, the task force will have to meet and agree on one. Primarily, these commitments expand transparency expectations for political and issue ads. While in 2018, companies committed to public disclosure of political advertising, in 2022, companies must develop clear labels on political and issue-based ads that persist when users share the ads (Commitment 6; Measures 6.1-6.5). The Code also tasks companies with identifying sponsors before ad placement (Measure 7.1) and removing sponsors that do not comply with requirements for political and issue advertising (Measure 7.3).
Notably, the proposed update could help researchers studying political advertising, as well as users who want to know more about why algorithms select certain ads to serve them. Companies must create an ad repository with close to real-time records of all political and issue ads served, archive ads for at least five years, and create searchable APIs for this information (Commitments 8, 10, 11). Companies must also research the uses of disinformation in political and issue ads, including the impact of short election “blackout periods” (Commitment 13; Measure 13.2).
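The Code frames these repositories as policy requirements and does not prescribe a concrete interface. As a rough, hypothetical sketch of what a searchable ad-repository API could look like in practice, the Python snippet below queries an imaginary endpoint; the URL, parameter names, and response fields are all assumptions loosely modeled on existing ad-library APIs, not anything the Code specifies.

```python
import requests

# Hypothetical endpoint: the Code does not define an API, so everything
# below (URL, parameter names, response fields) is illustrative only.
BASE_URL = "https://adrepository.example.com/v1/ads"

def search_political_ads(country: str, terms: str, since: str) -> list[dict]:
    """Search a signatory's ad repository for political/issue ads in one Member State."""
    params = {
        "ad_type": "POLITICAL_AND_ISSUE",  # assumed label taxonomy
        "country": country,                # ISO code, e.g. "DE"
        "search_terms": terms,
        "delivery_date_min": since,        # archives must span at least five years
    }
    response = requests.get(BASE_URL, params=params, timeout=30)
    response.raise_for_status()
    return response.json().get("data", [])

# Example: ads mentioning "climate" served in Germany since 2020.
for ad in search_political_ads("DE", "climate", "2020-01-01"):
    print(ad.get("sponsor"), ad.get("spend_range"), ad.get("first_served"))
```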
Collaboration in Integrity of Services
While companies committed in 2018 to implementing clear policies against bots and other automated systems, the new Code increases collaboration in taking down manipulative actors and behaviors. The task force must put together a list of tactics, techniques, and procedures (TTPs) utilized by malicious actors and update this list annually (Measure 14.3). The Code recommends examples from the AMITT Disinformation Tactics, Techniques, and Procedures Framework, including hack-and-leak operations, account takeovers, and impersonation (Measure 14.1).
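Neither the Code nor AMITT mandates a machine-readable format for this list. Purely as an illustrative sketch, the snippet below shows one way such a TTP catalog could be represented and queried; the identifiers are invented placeholders, not real AMITT/DISARM technique IDs, and only the three example behaviors come from the Code itself.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TTP:
    ttp_id: str       # placeholder identifier, not an actual AMITT/DISARM ID
    name: str
    description: str

# The kind of list the task force would maintain and update annually
# (Measure 14.3); entries mirror the examples cited in Measure 14.1.
TTP_LIST = [
    TTP("TTP-001", "Hack-and-leak operation",
        "Illicitly obtained material is released to shape public debate."),
    TTP("TTP-002", "Account takeover",
        "A legitimate account is compromised and repurposed by a malicious actor."),
    TTP("TTP-003", "Impersonation",
        "An actor poses as a real person or organization to gain false credibility."),
]

def find_ttp(ttp_id: str) -> TTP | None:
    """Look up a TTP entry by identifier, returning None if it is not listed."""
    return next((t for t in TTP_LIST if t.ttp_id == ttp_id), None)
```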
The Code also requires companies to commit to work against the list of manipulative practices prohibited under the proposal for the Artificial Intelligence Act (Commitment 15). Additionally, signatories must coordinate to share information about cross-platform influence operations and foreign interference (Commitment 16).
Empowering All Users
This section is renamed in the new draft, from Empowering Consumers to Empowering Users. While the section keeps previous commitments to prioritize accurate information in recommender systems (Commitment 19) and to mitigate disinformation and misinformation (Commitment 18), it is also reworked to expand initiatives on media literacy (Commitment 17; Measure 25.1), let users flag false information and appeal enforcement decisions (Commitments 23 and 24), and give users access to content’s edit history and origin through global standards bodies such as C2PA (Commitment 20; Measure 20.2).
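C2PA specifies signed “manifests” that travel with a piece of content and record its origin and edit history. The simplified sketch below conveys the concept only: real manifests are cryptographically signed, binary-serialized structures, and apart from the c2pa.actions assertion label and action names, the field layout here is an assumption.

```python
# Simplified, illustrative stand-in for a C2PA-style provenance manifest.
# Real manifests are signed binary structures; this dict only sketches the idea.
provenance_manifest = {
    "claim_generator": "ExampleEditor/1.0",  # software that produced the claim
    "assertions": [
        {
            "label": "c2pa.actions",  # actual C2PA assertion label
            "actions": [
                {"action": "c2pa.created", "when": "2022-06-01T09:00:00Z"},
                {"action": "c2pa.cropped", "when": "2022-06-01T09:05:00Z"},
            ],
        },
    ],
    "signature": "<signature over the claim>",  # placeholder, not a real signature
}

def edit_history(manifest: dict) -> list[str]:
    """Extract the recorded edit actions, the 'edit history' users could inspect."""
    actions = []
    for assertion in manifest["assertions"]:
        if assertion["label"] == "c2pa.actions":
            actions += [a["action"] for a in assertion["actions"]]
    return actions

print(edit_history(provenance_manifest))  # ['c2pa.created', 'c2pa.cropped']
```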
The Code keeps the 2018 recommendation to develop indicators of trustworthiness through an independent and transparent process, but it also adds recommendations for messaging services, such as including marks indicating content’s trustworthiness or limiting forwarding options on messages (Commitments 22 and 25; Measure 25.2). In addition to trustworthiness markers for information, the 2022 Code calls for fact-checking in all Member States’ languages (Commitment 21).
Empowering Researchers and Fact-Checkers
While the main highlights of the Empowering Researchers section are increased transparency and more granular data, the 2022 Code also adds a new section for empowering fact-checkers. Researchers will be able to apply for funding and access to data through a new third-party independent body, funded by the signatories, that will vet researchers and research proposals with the assistance of the European Digital Media Observatory (EDMO) (Measures 27.1-27.4, 28.4). In 2018, companies committed to having an annual event to discuss research findings; this commitment is not included in the 2022 version.
Platforms agree to use independent fact-checkers’ work on their platforms, compensate fact-checkers fairly, and create a repository of fact-checking content (Measures 31.1, 30.2, and 31.3). In coordination with EDMO, companies must also provide fact-checking organizations greater access to quantitative data and information on the impact of their platforms’ content (Measures 31.1 and 31.2).
Stronger Enforcement and More Data Available
The signatories will create a new website, called the “Transparency Centre”, where they will publish the terms of service and policies they apply to implement each commitment and measure they subscribe to (Commitment 34; Measure 35.1). Users will be able to track the Code’s implementation, policy changes, and task force decisions (Measures 35.3, 35.5, and 36.3).
In the 2022 Code, each commitment and measure has Service Level Indicators (SLIs) and Qualitative Reporting Elements (QREs): signatories must track the impact of each of their actions toward fulfilling commitments and measures at the Member State level, rather than globally or as aggregated European data. In the Transparency Centre, users will be able to understand each SLI and QRE and see updated data for these benchmarks (Measures 35.6 and 36.2). The task force will also have to assemble a working group to develop structural indicators to measure the success of the Code (Commitment 41).
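The Code defines SLIs and QREs in prose rather than as a data schema. As a hypothetical sketch of what Member State-level reporting (rather than a single EU-wide aggregate) could look like, the structure below separates quantitative SLIs from qualitative QREs; every field name and identifier format is an assumption.

```python
from dataclasses import dataclass, field

@dataclass
class SLIEntry:
    sli_id: str           # assumed identifier format, e.g. "SLI 21.1.1"
    member_state: str     # ISO code, e.g. "FR"; one entry per Member State
    period: str           # reporting window, e.g. "2023-H1"
    value: float          # the quantitative indicator being reported

@dataclass
class QREEntry:
    qre_id: str           # assumed identifier format, e.g. "QRE 21.1.1"
    member_state: str
    narrative: str        # qualitative description of actions taken

@dataclass
class TransparencyReport:
    signatory: str
    slis: list[SLIEntry] = field(default_factory=list)
    qres: list[QREEntry] = field(default_factory=list)

    def for_member_state(self, state: str) -> list[SLIEntry]:
        """Return SLIs for one Member State, the granularity the Code requires."""
        return [s for s in self.slis if s.member_state == state]

# Example: a report carrying one indicator for France and one for Germany.
report = TransparencyReport(
    signatory="ExamplePlatform",
    slis=[
        SLIEntry("SLI 21.1.1", "FR", "2023-H1", 12345.0),
        SLIEntry("SLI 21.1.1", "DE", "2023-H1", 6789.0),
    ],
)
print(report.for_member_state("FR"))
```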
The largest change is that the European Code of Practice on Disinformation will tie into the EU’s platform regulation under the nearly finalized Digital Services Act (DSA) and carry large financial penalties for companies that do not comply. The Code requires Very Large Online Platforms, as defined by the DSA, to report on SLIs and QREs at a Member State level every six months, update the Transparency Centre, and commit to independent audits of their compliance with the Code (Commitments 40 and 44; Measures 40.1 and 40.3).
“The Code will play an important role in the assessment of whether the very large platforms have complied with their legal obligation of mitigating the risks stemming from disinformation spreading on their systems,” say European Commission Vice-President Vera Jourová and European Commissioner Thierry Breton. The DSA requires companies to undergo mandatory risk assessments and outside audits for risks such as disinformation; companies that do not comply may face fines of up to 6% of their global revenue (for a platform with, say, €100 billion in annual revenue, a potential fine of up to €6 billion). If the Code of Practice on Disinformation becomes a Code of Conduct under Article 35 of the DSA, Very Large Online Platforms may have to sign up for the Code’s commitments and measures.
Current Signatories of the Latest Iteration
After the initial amendment process, the EU invited companies to sign the new code. Thirty-three companies agreed, including large tech companies such as Google, Meta, and Microsoft. Still, some companies remain absent from the signatories of the co-regulatory code; notably, Amazon, Apple, and Telegram have not signed. Additionally, some companies chose which measures they would comply with: for example, Google, Twitter, TikTok, and Meta did not agree to implement “trustworthiness indicators” to inform users about possible disinformation on their sites.
As platform signatories continue to grow their products and services, and disinformation continues to evolve, the larger question is whether cooperation between the private sector and the European Commission will make a meaningful difference against the spread of disinformation online. Ultimately, the EC will have to assess whether companies are significantly improving under self-regulatory codes, or if stricter legislative frameworks like the DSA are needed in the future.
Alphabet, Apple, Amazon, Meta, and Microsoft are general, unrestricted donors to the Brookings Institution. The findings, interpretations, and conclusions in this piece are solely those of the author and were not influenced by any donation.