The House | As the first MP to be deepfaked, I say we must do more to protect our democracy from AI harm
Imagine discovering that your face, your voice, or your image has been used online by a third party, without your consent, to seriously misrepresent you. Not a misunderstanding.
Not a parody. A fabricated version of you – saying things you never said, doing things you never did, appearing in content you never agreed to.
AI deepfake technology means this is no longer the realm of science fiction – it is already happening. As the first MP to be the target of a deepfake political disinformation attack, I’ve seen first-hand the disruption it can cause our democracy.
In 2022, as minister for AI and the Intellectual Property Office, I rejected tech sector lobbying for broad text and data mining freedoms after hearing from the APPG for the Creative Industries. Without safeguards, such changes would have undermined the rights of musicians, writers and artists in a sector worth £146bn a year. If the UK is to lead in both AI and the creative industries, the burden must be on AI to show it can coexist – an unchecked ‘free-for-all’ serves neither.
I therefore welcome the government’s recent proposal to revisit digital copyright law, and its recognition that policy “must support prosperity for all UK citizens”. But this is not only about prosperity. It is also about ensuring AI is not used to undermine our democracy, security, society or fundamental rights.
Having spent 30 years in technology and innovation, and as the founder of one of the UK’s earliest AI drug discovery companies in 2001, I fully recognise AI’s transformative potential to deliver enormous economic and public service benefits.
The UK already has the third-largest AI sector globally and the largest in Europe, and the Organisation for Economic Co-operation and Development estimates that AI adoption could increase UK productivity growth by around £55bn a year. But harnessing innovation requires regulation. As I set out in the 2021 prime minister’s Taskforce on Innovation, Growth and Regulatory Reform, the UK as a trusted regulator has a chance to lead in setting appropriate regulatory standards in new markets from AI to fusion energy and space debris.
With the rapid dissemination of deepfake tools allowing someone’s identity to be stolen and misused by anyone, we should establish a fundamental right to identity protection in the digital age.
Recent evidence from the Science, Innovation and Technology Select Committee highlighted the scale of the challenge. When questioned, the big tech platforms showed little sense of responsibility for protecting UK values, democratic norms or citizens’ rights. By allowing US and Chinese tech dominance – controlled by a small group with limited accountability – we risk outsourcing digital sovereignty and undermining UK values, conventions and laws.
Other countries are beginning to act. Denmark has proposed strengthening protections over individuals’ likenesses in its copyright framework. In the US, some states are proposing new laws to prevent the unauthorised use of AI-generated digital replicas.
The tech industry is pushing back with a new pro-AI group, Innovation Council Action, supporting candidates and policies in US elections that oppose AI regulation. They have the support of Donald Trump’s adviser David Sacks, and plan to spend at least $100m on backing candidates. This comes on top of nearly $325m already raised by other pro-AI organisations and individuals.
Parliament now faces a choice: lightly regulate AI, or set clear, values-based rules to prevent it undermining our democracy, society and economy. Legislating to protect UK citizens, society, economy and democracy from the widespread abuse of identity theft is a good place to start.