European Union lawmakers have asked tech giants to continue reporting on efforts to combat the spread of vaccine disinformation on their platforms for a further six months.
“The continuation of the monitoring programme is necessary as the vaccination campaigns throughout the EU is proceeding with a steady and increasing pace, and the upcoming months will be decisive to reach a high level of vaccination in Member States. It is key that in this important period vaccine hesitancy is not fuelled by harmful disinformation,” the Commission writes today.
Facebook, Google, Microsoft, TikTok and Twitter are signed up to make monthly reports as a result of being participants in the bloc’s (non-legally binding) Code of Practice on Disinformation — although, going forward, they’ll be switching to bi-monthly reporting.
Publishing the latest batch of platform reports for April, the Commission said the tech giants have shown they're unable to police "dangerous lies" by themselves — while continuing to express dissatisfaction at the quality and granularity of the data that is being (voluntarily) provided by platforms vis-à-vis how they're combating online disinformation generally.
“These reports show how important it is to be able to effectively monitor the measures put in place by the platforms to reduce disinformation,” said Věra Jourová, the EU’s VP for values and transparency, in a statement. “We decided to extend this programme, because the amount of dangerous lies continues to flood our information space and because it will inform the creation of the new generation Code against disinformation. We need a robust monitoring programme, and clearer indicators to measure impact of actions taken by platforms. They simply cannot police themselves alone.”
Last month the Commission announced a plan to beef up the voluntary Code, saying also that it wants more players — especially from the adtech ecosystem — to sign up to help demonetize harmful nonsense.
The Code of Practice initiative pre-dates the pandemic, kicking off in 2018 when concerns about the impact of ‘fake news’ on democratic processes and public debate were riding high in the wake of major political disinformation scandals. But the COVID-19 public health crisis accelerated concern over the issue of dangerous nonsense being amplified online, bringing it into sharper focus for lawmakers.
In the EU, lawmakers are still not planning to put regional regulation of online disinformation on a legal footing. Instead they are sticking with a voluntary approach, which the Commission refers to as 'co-regulatory': it encourages action and engagement from platforms vis-à-vis potentially harmful (but not illegal) content, such as offering tools for users to report problems and appeal takedowns, but carries no threat of direct legal sanctions if they fail to live up to their promises.
It will have a new lever to ratchet up pressure on platforms too, though, in the form of the Digital Services Act (DSA). The regulation — which was proposed at the end of last year — will set rules for how platforms must handle illegal content. But commissioners have suggested that those platforms which engage positively with the EU’s disinformation Code are likely to be looked upon more favorably by the regulators that will be overseeing DSA compliance.
In another statement today, Thierry Breton, the commissioner for the EU’s Internal Market, suggested the combination of the DSA and the beefed up Code will open up “a new chapter in countering disinformation in the EU”.
“At this crucial phase of the vaccination campaign, I expect platforms to step up their efforts and deliver the strengthened Code of Practice as soon as possible, in line with our Guidance,” he added.
Disinformation remains a tricky topic for regulators, given that the value of online content can be highly subjective and any centralized order to remove information — no matter how stupid or ridiculous the content in question might be — risks a charge of censorship.
Removal of COVID-19-related disinformation is certainly less controversial, given the clear risks to public health (such as from anti-vaccination messaging or the sale of defective PPE). But even here the Commission seems keenest to promote pro-speech measures being taken by platforms, such as promoting vaccine-positive messaging and surfacing authoritative sources of information. Its press release notes, for example, that Facebook launched vaccine profile picture frames to encourage people to get vaccinated, and that Twitter introduced prompts on users’ home timelines during World Immunisation Week in 16 countries and held conversations on vaccines that received 5 million impressions.
In the April reports by the two companies there is more detail on actual removals carried out too.
Facebook, for example, says it removed 47,000 pieces of content in the EU for violating COVID-19 and vaccine misinformation policies, which the Commission notes is a slight decrease from the previous month.
Twitter, for its part, reported challenging 2,779 accounts, suspending 260 and removing 5,091 pieces of content globally on the COVID-19 disinformation topic over the month of April.
Google, meanwhile, reported taking action against 10,549 URLs on AdSense, which the Commission notes as a “significant increase” vs March (+1378).
But is that increase good news or bad? Increased removals of dodgy COVID-19 ads might signify better enforcement by Google — or major growth of the COVID-19 disinformation problem on its ad network.
The ongoing problem for the regulators who are trying to tread a fuzzy line on online disinformation is how to quantify any of these tech giants’ actions — and truly understand their efficacy or impact — without having standardized reporting requirements and full access to platform data.
For that, regulation would be needed, not selective self-reporting.