Response to the Science, Innovation and Technology Committee call for evidence on social media, misinformation and harmful algorithms

Summary

This response focuses on how social media, misinformation and harmful algorithms impact elections and democratic processes.

We examine how these technologies contribute to the spread of election-related mis- and disinformation. We consider how platform business models and content promotion systems can amplify harmful content, particularly abuse and intimidation directed at political candidates and misleading electoral information aimed at voters. These algorithms can create echo chambers and increase political polarisation by promoting content that reinforces existing views. We also review how other democracies are addressing election-related mis- and disinformation.

Our findings illustrate how the way social media platforms handle the spread of mis- and disinformation shapes periods of heightened public discourse online, including during election campaigns.

Priorities for reform

To ensure social media can have a positive impact on elections, improvements are needed in three key areas:

  • Firstly, social media platforms should take proactive steps to reduce abuse and intimidation. We think this could be achieved through robust content monitoring and moderation policies, consistent enforcement of those policies, and user education.
  • Secondly, social media should not promote misleading information (such as election-related mis- and disinformation) which can undermine public confidence in the democratic process.
  • Thirdly, social media platforms should be more transparent for their users.

These changes are essential for fostering a healthy democratic debate and creating a social media environment where voters can access accurate information and where political candidates can campaign without fear of abuse or intimidation.

We believe more could be done by social media companies to ensure their platforms are not used to spread election-related mis- and disinformation. We also see an important role for Ofcom in this debate, given its new responsibilities and duties under the Online Safety Act 2023 (OSA). We stand ready to work with Ofcom as it develops its new duties to protect content of democratic importance, given that this content may also include election-related mis- and disinformation.

Social media should take proactive steps to reduce abuse and intimidation

We know from our research that candidates reported increased and unacceptable levels of abuse and intimidation at the 2024 UK parliamentary general election.

Over half (55%) of candidates responding to our 2024 general election post-election survey said they had some kind of problem with harassment, intimidation, or abuse. Of the candidates who said they had experienced harassment, 43% said it came from anonymous sources, and 65% said the problems they experienced took place online. This abuse risks deterring people from standing for election.

We carried out in-depth follow-up interviews with five candidates at the 2024 general election who experienced abuse and intimidation. We heard how some of these candidates felt election-related mis- and disinformation that was spread online led to in-person abuse. One candidate told us:

‘I knew elections would be tough. But what I did not know was that the protectors of the laws and the frameworks would be so inadequate to prevent this kind of disinformation [which] ultimately ended up in people screaming on the streets saying these kinds of things to canvassers and to me.’

Another candidate told us they were:

‘punched by a stranger in a public house due to false information posted on-line, which the newspapers published an article on when they knew it was false’.

Respondents to our survey also noted that social media platforms were not doing enough to respond to online abuse and intimidation. One respondent, when asked why they did not report harassment online, answered:

‘it was only on Twitter and there is no point. There is also no point in reporting it to Twitter because it never meets their thresholds for action anymore’.

Some groups of candidates were more likely to experience abuse and intimidation. Ethnic minority candidates were more likely to have received offensive social media posts about their ethnicity (55%) or religion (41%). Ongoing international tensions in some cases led to an increase in antisemitic and Islamophobic abuse directed at candidates during the 2024 general election campaign.

Female candidates also faced particularly concerning forms of abuse online, including reports of fabricated sexualised content such as deepfake pornography of these candidates.

Targeted harassment, enabled by new technologies, poses a serious threat to participation in democratic processes and could have a chilling effect on candidates standing at future elections. Over half of candidates responding to our general election post-election survey said they avoided some form of campaigning because of fear of abuse. Almost a quarter (23%) of candidates said they avoided using social media at least once due to fear of online abuse and harassment. The activity candidates avoided most was campaigning on their own: 44% of respondents said they avoided doing so at least once, rising to 66% among female respondents. One third (32%) of respondents also said they had avoided talking about or giving their opinions on controversial topics to avoid harassment.

To help address the abuse and intimidation candidates and campaigners face online, social media and online platforms should do more to help develop improved screening tools for candidates’ profiles, to remove abusive content and identify perpetrators. These could be developed and delivered by individual digital/social media companies, or centrally, with civil society. We will work further to understand and develop solutions to strengthen online protections for candidates and campaigners, including working with Ofcom.

We will also work closely with the Speaker’s Conference on abuse and intimidation of candidates and MPs, providing our evidence and observations, and our recommendations for tackling this issue both online and offline. Abuse and intimidation of campaigners is totally unacceptable and should never be seen as part of the job of being in the public eye. The risks that arise from this threat include candidates being prevented from campaigning due to the fear of attack, and people deciding not to stand as a candidate at all.

Social media should not promote misleading content to users

Our research on the 2024 general election found that many people said they saw or heard misinformation during the election. Over half of people responding to our survey said they saw misleading information about political parties’ policies (61%) and candidates (52%) during the general election. Algorithmic promotion of misleading content, whether about candidates, policies, or electoral processes such as when and where to vote or the voter ID requirement, risks undermining democratic participation.

Platform algorithms can also rapidly amplify misleading content to users. During the general election, there were reports that coordinated groups on X were spreading deepfake videos of politicians. These clips were edited to suggest the targeted politicians had said something they had not, and were shared alongside comments designed to create the impression that the videos were real.

To help address these concerns, ahead of the general election we published information and advice for voters on how to engage confidently with campaign material and to think critically about material they saw and heard. This included information on organisations and other regulators that could address concerns about claims made in campaign material, and set out what we were doing to ensure that voters had access to high-quality information. It was viewed over 26,000 times during the election campaign.

Our research shows this information is needed: 18% of respondents to our survey said they did not know if they had seen or heard a deepfake, suggesting the public may struggle to identify artificially generated or modified material. Given this, we suggest social media platforms should require clear labelling of AI-modified content during the regulated election period, when rules around campaign transparency and spending controls are in place to protect the integrity of elections. These rules are important as campaign activity generally intensifies in the run-up to an election. Requirements to label AI-modified content would complement existing digital imprint rules, helping voters understand who is paying for and promoting campaign content as well as whether this content has been altered using AI.

In a recent report, the Centre for Emerging Technology and Security (CETaS) similarly recommended that the Elections Act should be amended to require campaign adverts that have been digitally edited to be embedded with secure provenance records. A secure provenance record would provide a verifiable history of who created the content and how it had been modified, helping users be confident about its authenticity.

Social media should be more transparent for users

Transparency from social media platforms is fundamental to protecting democratic processes and preventing the spread of election-related mis- and disinformation. Without robust access to platform data and to how social media algorithms work, regulators and researchers cannot effectively identify patterns of abuse, intimidation, and misinformation. This evidence base is crucial for policymakers to develop safeguards and measure their effectiveness in protecting voters and candidates.

We have long called for meaningful transparency in online political campaigning and have worked with social media platforms, for example, as they develop political advertising libraries. Meaningful transparency means providing us and voters with information about who is targeting them with political advertising. We set out detailed recommendations to achieve this in our 2018 Digital Campaigning report.

While political advertising is a legitimate tool for voter persuasion, transparency about funding sources is also crucial. The money that is donated and spent by parties and campaigners should be transparent and legitimate. This transparency allows voters to understand whether content is being promoted by recognised political entities or by actors who may seek to amplify division for other purposes. This is particularly important given the ability of social media algorithms to rapidly spread content that could exacerbate political tensions.

Transparency is also crucial to our role as a regulator, to ensure all participants are following electoral rules. Political finance transparency is also critical to identifying potential foreign interference operations, where foreign actors may be using social media platforms to drive polarisation and undermine democratic processes in the UK.

Social media political advertising libraries

Comprehensive social media ad libraries have the potential to deliver transparency to voters. However, these measures are voluntary, and not every company that runs political advertising has created special labelling or advert libraries.

Meta (Instagram and Facebook), Google (including YouTube) and Snapchat have ad libraries, but these vary in terms of their scope and features. TikTok does not allow political ads but allows users to search among the best-performing ads. Bluesky Social, a newer social media platform, does not currently allow advertising (including political ads) and does not have an ad library. X (formerly Twitter) does not offer an ad library in the UK, but it allows political ads.

Since 2018 we have recommended that social media companies that run election advertising in the UK should have ad libraries and should provide more detailed and accurate information such as the targeting, actual reach and amount spent on those adverts to enhance transparency and scrutiny about election campaigning. Further details of our recommendations on political advertising libraries can be found in our 2018 Digital Campaigning report.

Access to platform data for regulators and civil society

Access to platform data is essential for researchers and regulators to identify and understand emerging risks and develop evidence-based responses. This data helps quantify the scale of harmful content, understand the role that algorithms and recommendation settings play in the spread of harmful content, and evaluate the effectiveness of social media trust and safety measures.

However, CETaS’ report highlights the growing trend of social media platforms restricting access to their data. This is exemplified by Meta's closure of CrowdTangle, which had previously enabled researchers, journalists and fact-checkers to track, analyse and report on social media engagement patterns such as likes and reshares.

The European Union's Digital Services Act (DSA) may provide a model for the UK to follow by giving vetted researchers access to data about content prioritisation and recommendation systems. Article 40 of the DSA requires that providers of very large online platforms give access without undue delay to data (including, where possible, real-time data) when requested by researchers looking to conduct research on systemic risks in the European Union. According to the DSA, systemic risks include (but are not limited to) dissemination of illegal content and negative effects on civil discourse and electoral processes, public security, and in relation to issues such as gender-based violence.

By drawing on a framework like the EU's DSA, the UK could ensure that vetted researchers and civil society groups have the necessary access to platform data. This would allow them to conduct rigorous analysis and provide insights into how content spreads online and how social media algorithms contribute to polarisation, which can inform policymaking and platform practices to address these harms.

Who should be responsible for tackling election-related mis- and disinformation?

We do not have legal powers to regulate the content of campaign material, beyond requiring campaigners to include an imprint on campaign material. There are a few restrictions on what can be said in campaign material, including prohibitions on making or publishing a false statement about the personal character or conduct of a candidate and on publishing offensive material.

Our 2024 public attitudes survey found that over three quarters of people (76%) believed that not enough had been done to tackle mis- and disinformation around elections. Only 5% thought that sufficient action was being taken. Respondents thought that either Ofcom (28%) or the UK Government (26%) should have the primary responsibility to tackle misinformation around elections, with the Electoral Commission coming third (18%). Only 8% of survey respondents believed social media platforms should have primary responsibility for tackling this issue.

Tackling the issues posed by political mis- and disinformation on social media platforms will require close collaboration with other organisations, including other regulators and enforcement agencies.

We are taking several proactive steps, including working with other regulators that have responsibilities around elections, such as Ofcom and the Information Commissioner's Office, to ensure a coordinated approach to shared challenges around online harms and the information landscape.

We also maintain ongoing dialogue with social media companies about inaccurate or misleading information about electoral processes on their platforms. We have agreements in place under which we can flag instances of mis- and disinformation about the electoral process to them. During the 2024 general election, through our monitoring activities, we identified specific instances where large language models (LLMs) produced mis- and disinformation. While this content was removed in most cases once identified, more could be done to proactively counter this problem.

We welcome the establishment of Ofcom’s Advisory Committee on Disinformation and Misinformation. This is a positive step towards improving online protections for candidates and campaigners against abuse and intimidation spurred by election-related mis- and disinformation. We will work with Ofcom's advisory committee to develop solutions to address the spread of mis- and disinformation online.

We would also welcome recommendations from this Committee for social media companies to address coordinated disinformation campaigns and targeted harassment of political candidates on social media platforms.

What is happening internationally to tackle election-related mis- and disinformation online?

Tackling election-related mis- and disinformation is an issue that is being considered globally, and we are learning from international counterparts about their own approaches.

European Union

The European Union has outlined a comprehensive plan for safeguarding elections from mis- and disinformation. In 2020, it put forward the European Democracy Action Plan (EDAP). EDAP led to the adoption of Regulation 2024/900 on the transparency and targeting of political advertising, which harmonises rules across EU member states on how political ads must be labelled, tracked and disclosed, sets limits on using personal data for targeting, and establishes oversight mechanisms. The regulation will apply from October 2025. The plan also includes a strengthened Code of Practice on Disinformation, with enhanced transparency measures such as expanded fact-checking across EU countries and languages and stronger action to ‘demonetise’ mis- and disinformation content. The voluntary code, whose signatories include Meta, Microsoft and TikTok, seeks to demonetise this content through stronger measures to avoid placing advertising near disinformation and to prevent the dissemination of advertising that contains disinformation.

The EU also set up the European Digital Media Observatory (EDMO) to support a multidisciplinary group of fact-checkers, academics and other relevant stakeholders to fight disinformation, including by setting up a taskforce to detect and alert voters to potential election-related disinformation during the 2024 European elections.

Australia

In 2022, the Australian Electoral Commission (AEC) launched a ‘Stop and Consider’ campaign encouraging all voters to think critically about the information they encountered online. While stressing it is ‘not the arbiter of truth regarding political communication’, the AEC has maintained a disinformation register since 2022, which it updates to counter disinformation about the electoral process.

Also in 2022, the Digital Industry Group Inc. (DIGI), a not-for-profit industry association, developed a voluntary Australian Code of Practice on Disinformation and Misinformation (the Australian Code). Major technology companies including Adobe, Apple, Facebook, Google, Microsoft, TikTok and Twitch have adopted the Australian Code. As signatories, these companies must implement measures to reduce their users' exposure to mis- and disinformation. These measures include identifying and acting against inauthentic accounts and coordinated behaviours that spread false information, labelling false content, providing trust indicators, demoting content that could expose users to mis- and disinformation, enforcing editorial policies and content standards, supporting fact-checking initiatives, and providing users with tools to manage their content exposure.

The Australian Communications and Media Authority (ACMA) is the regulator that oversees the Australian Code. There have been discussions about providing ACMA with new oversight powers in relation to the Code. The Australian Government introduced a Bill to increase ACMA’s powers, but this lapsed prior to the dissolution of the Australian Parliament in 2024.

Canada

Elections Canada has provided recommendations in its report on threats to the electoral process for tackling election-related mis- and disinformation, focusing on platform accountability and transparency. In relation to generative AI, Elections Canada has proposed legislative changes to make it an offence to use AI, such as manipulated voice or images, to mislead by falsely representing oneself, or causing anyone else to falsely represent themselves, as an election official, a person from a registered party or association, or a candidate, with an exemption for parody or satire.

Elections Canada also recommends that any content generated or manipulated by AI should have clear transparency markers and that AI-generated chatbots or search functions should be required to indicate in their responses where users can find official or authoritative information.

In 2024, Elections Canada also launched ‘ElectoFacts’, a resource to verify whether information about Canada’s federal electoral processes is accurate.

New Zealand

The New Zealand Department of the Prime Minister and Cabinet has implemented initiatives including a civil society-led work programme to build understanding of, and resilience against, the harms of disinformation. This includes funding public research and analysis, a multi-stakeholder group, and support for capacity-building and community resilience efforts led by organisations such as Internet New Zealand.

Ahead of the 2023 election, a group consisting of the New Zealand Electoral Commission, the Ministry of Justice, the Department of the Prime Minister and Cabinet, the Government Communications Security Bureau and the New Zealand Security Intelligence Service set up an inter-agency protocol for New Zealand’s 2023 General Election. The 2023 General Election Communications Protocol was also developed to establish which government body was responsible for addressing misleading or inaccurate information about the general election. The Communications Protocol also set out how the New Zealand Electoral Commission would triage cases, based on a range of factors, to determine the seriousness of each case of misleading or inaccurate information.