Brussels, 16 June 2022
Today, the Commission welcomes the publication of the strengthened Code of Practice on Disinformation. The 34 signatories, including platforms, tech companies and civil society organisations, followed the 2021 Commission Guidance and took into account the lessons learnt from the COVID-19 crisis and Russia’s war of aggression in Ukraine. The reinforced Code builds on the first Code of Practice of 2018, which has been widely acknowledged as a pioneering framework globally. The new Code sets out extensive and precise commitments by platforms and industry to fight disinformation and marks another important step towards a more transparent, safe and trustworthy online environment.
Věra Jourová, Vice-President for Values and Transparency, said:
This new anti-disinformation Code comes at a time when Russia is weaponising disinformation as part of its military aggression against Ukraine, but also when we see attacks on democracy more broadly. We now have very significant commitments to reduce the impact of disinformation online and much more robust tools to measure how these are implemented across the EU, in all countries and in all languages. Users will also have better tools to flag disinformation and understand what they are seeing. The new Code will also reduce financial incentives for disseminating disinformation and allow researchers to access platforms’ data more easily.
Thierry Breton, Commissioner for Internal Market, said:
Disinformation is a form of invasion of our digital space, with tangible impact on our daily lives. Online platforms need to act much more strongly, especially on the issue of funding. Spreading disinformation should not bring a single euro to anyone. To be credible, the new Code of Practice will be backed up by the DSA, including heavy dissuasive sanctions. Very large platforms that repeatedly break the Code and do not carry out risk mitigation measures properly risk fines of up to 6% of their global turnover.
Together with the recently agreed Digital Services Act and the upcoming legislation on transparency and targeting of political advertising, the strengthened Code of Practice is an essential part of the Commission’s toolbox for fighting the spread of disinformation in the EU.
The 34 signatories include major online platforms, notably Meta, Google, Twitter, TikTok and Microsoft, as well as a variety of other players, such as smaller or specialised platforms, the online ad industry, ad-tech companies, fact-checkers, civil society organisations and players that offer specific expertise and solutions to fight disinformation.
The strengthened Code aims to address the shortcomings of the previous Code, with stronger and more granular commitments and measures, which build on the operational lessons learnt in the past years.
Concretely, the new Code contains commitments to:
- Broaden participation: the Code is not just for big platforms, but also involves a variety of diverse players with a role in mitigating the spread of disinformation, and more signatories are welcome to join;
- Cut financial incentives for spreading disinformation by ensuring that purveyors of disinformation do not benefit from advertising revenues;
- Cover new manipulative behaviours such as fake accounts, bots or malicious deep fakes spreading disinformation;
- Empower users with better tools to recognise, understand and flag disinformation;
- Expand fact-checking in all EU countries and in all EU languages, while making sure fact-checkers are fairly rewarded for their work;
- Ensure transparent political advertising by allowing users to easily recognise political ads thanks to better labelling and information on sponsors, spend and display period;
- Better support researchers by giving them better access to platforms’ data;
- Evaluate its own impact through a strong monitoring framework and regular reporting from platforms on how they’re implementing their commitments;
- Set up a Transparency Centre and Task Force for an easy and transparent overview of the implementation of the Code, keeping it future-proof and fit for purpose.
Finally, the Code aims to become recognised as a Code of Conduct under the Digital Services Act to mitigate the risks stemming from disinformation for Very Large Online Platforms.
Background
The 2018 Code of Practice on Disinformation brought together industry players to make voluntary commitments to counter disinformation. At the core of the EU strategy against disinformation, the Code has proven to be an effective tool to limit the spread of online disinformation, including during electoral periods, and to respond quickly to crises, such as the coronavirus pandemic and the war in Ukraine.
Following its assessment of the Code’s first period of implementation, the Commission published detailed Guidance in May 2021 on how the Code should be strengthened, calling on signatories to address the shortcomings of the 2018 Code and proposing solutions to make it more effective.
The signatories of the 2018 Code and a broad range of prospective signatories engaged in redrafting the commitments and measures, working together to ensure that the reinforced version of the Code is fit to address the important new challenges that disinformation poses to our societies.
The drafting of the revision was facilitated, under contract with the Signatories, by Valdani, Vicari and Associates (VVA), an independent consultancy, with Oreste Pollicino, a professor of Constitutional Law at Bocconi University, acting as honest broker.
Next Steps
Signatories will have 6 months to implement the commitments and measures to which they have signed up. At the beginning of 2023, they will provide the Commission with their first implementation reports.
Taking into account expert advice and support from the European Regulators Group for Audiovisual Media Services (ERGA) and the European Digital Media Observatory (EDMO), the Commission will regularly assess the progress made in the implementation of the Code, based on the granular qualitative and quantitative reporting expected from signatories.
The established Task Force will monitor, review and adapt the commitments in view of technological, societal, market and legislative developments. The Task Force held its kick-off meeting today. It will meet as necessary, and at least every 6 months.
More Information:
Central hub on the Code of Practice on disinformation
Questions and answers on the Strengthened Code of Practice
2022 Strengthened Code of Practice
Evaluation of the Code of Practice
Commission’s guidance on the revision of the Code of Practice
Press release Digital Services Act
Policy page on the Code of Practice
List of signatories and commitments (updated on a rolling basis)
Source – EU Commission
Fighting propaganda war with democratic methods – new anti-Disinformation Code
Article by V. Jourova and T. Breton
Russia’s information war, or rather disinformation war, clearly accompanies its military offensive in Ukraine. It is just the latest reminder of how dangerous disinformation and information manipulation can be for democracies.
Constant and almost unlimited access to online information is one of the greatest successes of digitisation and technological advancement, but the COVID-19 pandemic and the Russian invasion of Ukraine have shown us that it can be gamed by malicious actors, often in very sophisticated ways, to spread dangerous disinformation campaigns.
Europe has learnt its lessons. We are not naïve any longer. We are addressing this threat in a European way, with a mix of legislation, such as the Digital Services Act, and unique tools, such as the newly unveiled anti-disinformation Code.
Let’s be clear: this is not about critical views. It is about algorithmic amplification of dangerous content, coordinated manipulative behaviour such as fake accounts or bots, and content that could cause harm to our societies.
The lessons we have learnt working with online platforms and civil society clearly show that there are limits to how effective the fight against disinformation has been so far.
There are four main take-aways:
- Disinformation pays off: players that disseminated COVID-19-related disinformation have gradually moved to pro-Kremlin disinformation, many of them motivated by financial gain. The monetisation of disinformation must be stopped;
- Ramping up efforts is a must: online platforms do not seem to dedicate resources to fighting disinformation equally across all countries and languages. Ad-hoc measures in a crisis cannot replace structural EU-wide cooperation with fact-checkers and content moderation teams;
- Data is key: researchers need to have access to online platforms’ data to better understand the many facets of disinformation. Adequate transparency and access to data for researchers are not yet there;
- A joint approach is needed: disinformation is often disseminated and amplified through the coordinated use of accounts. Strong action and cooperation between online platforms are essential to ending this spread.
In the EU, we have come up with a comprehensive approach to defend ourselves against the complex threat of disinformation.
The new anti-disinformation Code is the latest important piece of the puzzle in the EU disinformation toolbox. It will guide the largest digital players to take real and strong action to curb disinformation. The aim is to become more effective in key areas, from understanding algorithms, to helping users critically assess the information they see, to removing financial gains from disinformation, and ensuring that disinformation in languages spoken by fewer people is not neglected. Under the new Code, not only very large platforms but many other important players are making very significant commitments to achieve these objectives and to better protect our societies.
Indeed, on top of major online platforms like Google, Meta, Twitter, TikTok or Microsoft, smaller or specialised online services, participants from the online advertising sector, and civil society organisations offering tools or services to fight disinformation will also sign the new Code. This need not be the end of the story: all market participants are invited to join forces in the future, leaving no space for loopholes and providing a comprehensive response to a complex threat to our democracy.
For the very large platforms, the Code will be underpinned by the EU’s pioneering law, the Digital Services Act.
The DSA will make large platforms put our society’s well-being first and their business interests second. They will have to assess how their services are gamed, and even redesign them or make them more robust against disinformation. The Code will play an important role in assessing whether the very large platforms have complied with their legal obligation to mitigate the risks stemming from disinformation spreading on their systems. And for crises such as the war in Ukraine, or the beginning of the pandemic, the DSA will also have an ‘alarm signal’ to trigger a fast crisis response from such large platforms.
With all the pieces we have put together in recent years, the EU sets a new global standard on how to address disinformation, misinformation and information manipulation. The stakes are high, and our ambitions are up to the challenge.
Source – EU Commission
Questions and Answers: Strengthened Code of Practice on disinformation
Brussels, 16 June 2022
Why do we need a revised Code of Practice on Disinformation?
Since the first Code was published in 2018, the often complex and sophisticated phenomenon of disinformation has continued to evolve rapidly, particularly in the context of the COVID-19 pandemic and the war in Ukraine. Even though the first Code was a good first step, the Commission’s 2020 assessment of it also revealed some shortcomings, such as:
- Inconsistent and incomplete application of the Code across platforms and Member States;
- Gaps in the coverage of the Code’s commitments;
- A lack of an appropriate monitoring mechanism, including key performance indicators;
- A lack of commitments on access to platforms’ data for research on disinformation;
- Limited participation from stakeholders, in particular from the advertising sector.
The revised Code aims to address these gaps and shortcomings, also taking into account the lessons learnt during the aforementioned disinformation crises, to make sure the EU is equipped with the right tools to combat the spread of disinformation.
What was the process towards a new Code of Practice?
The 2018 Code of Practice on Disinformation was a milestone. It was the first self-regulatory instrument worldwide to bring together industry players committing to voluntary measures to counter disinformation.
Following an assessment, the Commission published in May 2021 a detailed Guidance on how the Code should be strengthened by the signatories to create a more transparent, safe and trustworthy online environment.
The original signatories, major online platforms active in the EU as well as major trade associations from the European advertising sector, were joined by a significant number of prospective signatories in the process of revising the Code. The drafting process was facilitated, under contract with the Signatories, by Valdani, Vicari and Associates (VVA), an independent consultancy, with Oreste Pollicino, a professor of Constitutional Law at Bocconi University, acting as honest broker. This process is now finalised, and 34 Signatories have already joined the Code by submitting their sign-up forms. The Code remains open to new signatories.
Who are the signatories of the new Code?
Signatories include major online platforms active in the EU, as well as trade associations and relevant players in the online and advertising ecosystem that already participated in the previous Code. They are: Google, Meta, Twitter, Microsoft, TikTok, DOT Europe, the World Federation of Advertisers (WFA), the European Association of Communications Agencies (EACA), the Interactive Advertising Bureau (IAB Europe) and Kreativitet & Kommunikation.
New Signatories, who also took part in the revision process, include: the online video streaming platforms Twitch and Vimeo; the social networks Clubhouse and The Bright App; the Czech web portal Seznam and the search engine Neeva; the fact-checkers Maldita, PagellaPolitica, Demagog and Faktograf; the advertising technology providers MediaMath and DoubleVerify, as well as the advertising industry initiative GARM; civil society and research organisations Avaaz, Globsec, Reporters Without Borders and VOST Europe; and organisations that provide specific expertise and technical solutions to fight disinformation, Adobe, Crisp Thinking, Kinzen, Logically, Newsback, NewsGuard and WhoTargetsMe.
Can additional signatories join the Code?
Yes, the strengthened Code remains open for additional Signatories to join. One of the Commission’s expectations for the new Code was to include tailored commitments that correspond to the diversity of services provided by Signatories, their size, and the particular roles they play. With various new Signatories having joined the drafting process, the Code now covers a wide range of areas and commitments suitable for a diverse set of stakeholders interested in taking actions to limit disinformation. In addition, the Code sets up a new Task Force with a strong monitoring framework based on regular reporting and multiple Service Level Indicators to ensure compliance. It will establish a forum to review and adapt the commitments in view of technological, societal, market and legislative developments, thereby making sure that the Code remains fit for purpose in the longer run.
Stakeholders interested in becoming a Signatory to the 2022 Code can contact the Task-force with a view to signing up to the strengthened Code.
How does the new Code differ from the one published in 2018?
Concretely, the new Code contains commitments to:
- Broaden participation: the Code is not just for big platforms, but also involves a variety of diverse players with a role in mitigating the spread of disinformation, and more signatories are welcome to join;
- Cut financial incentives for spreading disinformation by ensuring that purveyors of disinformation do not benefit from advertising revenues;
- Cover new manipulative behaviours such as fake accounts, bots or malicious deep fakes spreading disinformation;
- Empower users with better tools to recognise, understand and flag disinformation;
- Expand fact-checking in all EU countries and in all EU languages, while making sure fact-checkers are fairly rewarded for their work;
- Ensure transparent political advertising by allowing users to easily recognise political ads thanks to better labelling and information on sponsors, spend and display period;
- Better support researchers by giving them better access to platforms’ data;
- Evaluate its own impact through a strong monitoring framework and regular reporting from platforms on how they’re implementing their commitments;
- Set up a Transparency Centre and Task Force for an easy and transparent overview of the implementation of the Code, keeping it future-proof and fit for purpose.
Wider participation in the drafting process helped shape more robust and tailored commitments covering all improvement areas identified in the Commission Guidance to better address the spread of online disinformation.
What is being done to demonetise disinformation?
The Code expands commitments aimed at defunding the dissemination of disinformation across the advertising supply chain. Advertisers’ organisations, communication agencies, ad-tech companies and platforms commit to take action to scrutinise, control and limit the placement of advertising on accounts and websites disseminating disinformation, or next to disinformation content, as well as to limit the dissemination of advertising containing disinformation. Very strict eligibility requirements and content reviewing will limit the ways in which disinformation can be monetised through ad placements. Better cooperation among relevant players will ensure that ad placements are scrutinised more effectively. Brand safety tools and collaboration with third-party organisations will provide more transparency and control to those placing ads online.
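By way of illustration only, the sketch below shows one way an ad-tech provider might screen ad placements against a list of flagged sites, along the lines described above. The function, the domain list and the matching logic are hypothetical examples; the Code does not prescribe any particular implementation.

```python
# Illustrative sketch: screening ad placements against a hypothetical list of
# domains flagged for disseminating disinformation, so that ads (and their
# revenue) are not placed there. Names and domains are invented examples.

from urllib.parse import urlparse

# Hypothetical list of domains flagged by brand-safety or fact-checking partners.
FLAGGED_DOMAINS = {"disinfo-example.test", "fakenews-example.test"}


def is_placement_allowed(page_url: str) -> bool:
    """Return False if the page belongs to a flagged domain or one of its
    subdomains, i.e. no ad should be served there."""
    host = urlparse(page_url).hostname or ""
    return not any(host == d or host.endswith("." + d) for d in FLAGGED_DOMAINS)


print(is_placement_allowed("https://news.disinfo-example.test/article"))  # False
print(is_placement_allowed("https://reputable-example.test/story"))       # True
```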
How does the Code ensure the transparency of political advertising?
The Code’s signatories commit to take further steps to ensure the transparency of political advertising, hand in hand with the proposed Regulation on transparency and targeting of political advertising. To ensure users can distinguish political ads from other ads and content, clear labels will be applied to the content displayed. Signatories will also integrate the results of research and best practices into their labelling systems to improve user comprehension.
The Code commits signatories to maintain and improve repositories for political and issue ads, which will contain the ads themselves as well as related transparency information, including the display period, ad spend, impressions, audience selection criteria, demographics and the number of ad recipients. In addition, signatories will put in place appropriate identity verification systems for the sponsors of political and issue ads and ensure that labelling and transparency requirements are met before the ads may be placed. The Code also contains commitments by civil society signatories to research, monitor and report on the use of political and issue ads and, as appropriate, to assist in improving policies and practices in this area.
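As a purely illustrative sketch, a repository entry carrying the transparency fields mentioned above could be modelled as follows; the schema and field names are hypothetical, and the Code does not prescribe any particular format.

```python
# Illustrative sketch: a possible record shape for a political-ad repository
# entry. All field names are hypothetical examples.

from dataclasses import dataclass, field
from datetime import date


@dataclass
class PoliticalAdRecord:
    ad_id: str
    sponsor: str                     # verified identity of the ad's sponsor
    creative_url: str                # link to the ad content itself
    display_start: date              # start of the display period
    display_end: date                # end of the display period
    spend_eur: float                 # amount spent on the ad
    impressions: int                 # number of times the ad was shown
    audience_criteria: dict = field(default_factory=dict)      # targeting parameters
    audience_demographics: dict = field(default_factory=dict)  # reached demographics
    recipients: int = 0              # number of distinct ad recipients


example = PoliticalAdRecord(
    ad_id="2022-000001",
    sponsor="Example Party (verified)",
    creative_url="https://ads.example.test/2022-000001",
    display_start=date(2022, 6, 1),
    display_end=date(2022, 6, 30),
    spend_eur=12500.0,
    impressions=480000,
    audience_criteria={"location": "EU", "interest": "politics"},
    audience_demographics={"age": "25-54"},
    recipients=310000,
)
```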
How does the Code address manipulative behaviour?
The Code provides for a more comprehensive coverage of current and emerging forms of manipulative behaviour used to spread disinformation, as well as foreign information manipulation and interference. Signatories agree to develop a common understanding of manipulative behaviours and practices to spread disinformation that are not permitted across their services, such as bots, fake accounts, organised manipulation campaigns, account takeovers, malicious deep fakes, etc. Given the evolving nature of such tactics, techniques and procedures employed by malicious actors, this list and terminology will be periodically reviewed. On that basis, Signatories will adopt, reinforce and implement clear policies, covering the range of behaviours and practices identified. In addition, Signatories will establish operational channels to proactively share information about incidents that emerge on their respective services, in order to prevent dissemination and resurgence on other services.
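To picture just one of the signals involved, the toy heuristic below flags content posted by many distinct accounts within a short window, a simple indicator of coordinated behaviour. It is a hypothetical sketch, not a method specified by the Code; real detection systems are far more sophisticated.

```python
# Toy heuristic: flag texts posted by many distinct accounts within a short
# time window, one naive signal of coordinated inauthentic behaviour.
# Thresholds and names are hypothetical examples.

from collections import defaultdict

WINDOW_SECONDS = 300   # hypothetical 5-minute window
MIN_ACCOUNTS = 10      # hypothetical threshold of distinct accounts


def flag_coordinated_posts(posts):
    """posts: iterable of (account_id, timestamp_seconds, text).
    Returns the set of texts posted by MIN_ACCOUNTS+ accounts in one window."""
    by_text = defaultdict(list)
    for account_id, ts, text in posts:
        by_text[text].append((ts, account_id))

    flagged = set()
    for text, events in by_text.items():
        events.sort()  # order by timestamp
        for start_ts, _ in events:
            accounts = {a for t, a in events if start_ts <= t <= start_ts + WINDOW_SECONDS}
            if len(accounts) >= MIN_ACCOUNTS:
                flagged.add(text)
                break
    return flagged


posts = [(f"acct{i}", 100 + i, "same viral text") for i in range(12)]
print(flag_coordinated_posts(posts))  # {'same viral text'}
```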
What tools will be available for users?
Users will be better equipped to identify and react to disinformation:
- Labelling will be available more widely on platforms’ services across the EU;
- Services will provide users with a functionality to flag disinformation;
- Reliable information will be better promoted;
- The adoption of safe design practices will make platforms’ services more resilient to the viral propagation of disinformation. For instance, a recommender system will display unreliable sources less prominently and boost the visibility of authoritative sources (see the sketch after this list);
- Providers of messaging apps will implement specific features – compatible with the nature of these services – aiming to limit the spread of disinformation;
- A transparent appeal mechanism will be available for users affected by decisions made regarding their content.
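The recommender-system point in the list above can be pictured with a minimal re-ranking sketch: each item’s engagement-based relevance score is weighted by a source-reliability score, so unreliable sources sink in the feed. The scores and the weighting scheme are hypothetical; the Code does not mandate any specific algorithm.

```python
# Minimal re-ranking sketch: weight each item's relevance by the reliability
# of its source. Scores and weighting are hypothetical examples.

def rerank(items, reliability):
    """items: list of (item_id, source, relevance); reliability maps a source
    to a score in [0, 1]. Unknown sources get a neutral weight of 0.5."""
    return sorted(items, key=lambda it: it[2] * reliability.get(it[1], 0.5), reverse=True)


feed = [
    ("a1", "authoritative-news.test", 0.70),
    ("b2", "unreliable-blog.test", 0.90),   # high engagement, low reliability
]
scores = {"authoritative-news.test": 0.95, "unreliable-blog.test": 0.20}
print(rerank(feed, scores))  # the authoritative item now ranks first
```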
In the area of media literacy, the Code contains commitments on tools to improve media literacy and critical thinking, awareness raising campaigns and partnerships. The Code places a special emphasis on involving vulnerable groups in media literacy campaigns and cooperation with entities with relevant expertise, such as the European Digital Media Observatory, ERGA’s Media Literacy Action Group and the Media Literacy Expert Group.
How will you improve data access for researchers?
The Code establishes a robust framework for access to platforms’ data for research purposes: non-personal, anonymised, aggregated or manifestly made public data will be made available through automated access. A system to vet researchers and research proposals will be created to simplify access to data requiring additional scrutiny.
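A toy sketch of how such a two-tier access decision might be routed is shown below; the category labels and function are hypothetical illustrations of the paragraph above, not an interface defined by the Code.

```python
# Toy sketch of the two-tier data-access model described above: some data
# categories qualify for automated access, everything else goes through
# vetting of the researcher and the research proposal. Labels are hypothetical.

AUTOMATED_CATEGORIES = {"non_personal", "anonymised", "aggregated", "manifestly_public"}


def route_data_request(category: str, researcher_vetted: bool) -> str:
    if category in AUTOMATED_CATEGORIES:
        return "automated_access"    # served via automated interfaces
    if researcher_vetted:
        return "scrutinised_access"  # granted after successful vetting
    return "vetting_required"        # request first goes to the vetting system


print(route_data_request("aggregated", researcher_vetted=False))  # automated_access
print(route_data_request("user_level", researcher_vetted=False))  # vetting_required
```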
Platforms also commit to support research on disinformation, while researchers commit to conduct research based on transparent methodology and ethical standards.
How will the coverage and impact of fact-checking be enhanced?
The Code now contains a comprehensive set of commitments to increase fact-checking coverage and to support fact-checkers’ work. Relevant signatories commit to a consistent use of fact-checkers’ work on their services, with full coverage of all Member States and languages. It also establishes a framework for a structured and financially sustainable cooperation between the platforms and the fact-checking community, in collaboration with EDMO.
To improve the quality and impact of fact-checking, the Code foresees enhanced information exchange between platforms and fact-checkers, as well as the creation of a repository of fact-checks.
The fact-checking community commits to operating on the basis of strict ethical and transparency rules to protect its independence.
What can we expect from the Task-force and the Transparency Centre?
The Task-force will keep the Code future-proof and fit for purpose by, among other tasks, establishing a forum to review and adapt the commitments in view of technological, societal, market and legislative developments. The Task-force is composed of the Code’s Signatories and representatives from the European Digital Media Observatory (EDMO), the European Regulators Group for Audiovisual Media Services (ERGA) and the European External Action Service (EEAS), and is chaired by the European Commission. The Task-force can also invite relevant experts as observers to support its work. The Task-force held its kick-off meeting today. It will meet as necessary, and at least every 6 months.
The Transparency Centre, accessible to all citizens, will allow for an easy overview of the implementation of the Code’s measures, providing transparency and regular updates of relevant information. Set up and maintained by the Signatories of the Code, this common point of reference will be operational and available to the public within 6 months.
What is the link between the Code, the Digital Services Act and the upcoming legislation on transparency and targeting of political advertising?
The Digital Services Act proposes a supervised, risk-based approach, obliging Very Large Online Platforms to mitigate the systemic risks that their systems pose. Such risks include the spread of illegal content as well as the intentional manipulation of their services, for example to spread disinformation. A new European Board for Digital Services and direct enforcement powers delegated to the Commission will ensure appropriate oversight of the new rules, including for non-EU services targeting the EU. The DSA encourages the implementation of voluntary initiatives, such as codes of conduct. The Code aims to become a mitigation measure and a Code of Conduct recognised under the co-regulatory framework of the DSA. Actions under the Code will therefore help Very Large Online Platforms mitigate the risks stemming from disinformation on their services.
The Code also complements the proposed Regulation on transparency and targeting of political advertising, through industry-led measures to achieve progress in ensuring the transparency and public disclosure of paid-for political content. With the new Code, political ads will be easier to recognise thanks to efficient labelling and new transparency obligations.
What are the next steps now that the new Code has been signed?
As of today, Signatories will have 6 months to implement the commitments and measures to which they have signed up. At the beginning of 2023, they will provide the Commission with their first implementation reports. In general, Very Large Online Platforms, as defined in the DSA, will report every six months on the implementation of their commitments under the Code. Other Signatories will report on a yearly basis.
Taking into account expert advice and support from ERGA and EDMO, the Commission will regularly assess the progress made in the implementation of the Code, based on the granular qualitative and quantitative reporting expected from signatories.
The strengthened Code also contains a clear commitment to work towards establishing structural indicators, making it possible to measure the overall impact of the Code on disinformation. A first set should be available within 9 months.
Source – EU Commission