Large language models have started a global discussion about the future of technology and our society
What needs to happen to take advantage of the opportunities and tackle the risks?
Large language models (like OpenAI’s GPT) are a type of AI that introduces game-changing possibilities. They could deliver huge value to our economy and society by speeding up work, powering new services and enabling major scientific advances.
But they also pose serious security and societal risks. Some say the changes are over-hyped. Others worry we are building machines that will exceed our understanding and control.
Key figures on large language models (LLMs)
3,170
– Approximate number of AI companies in the UK
55 million
– The factor by which the computing power used to train AI has increased in the past decade
£400 million
– Government funding for the AI Safety Institute to the end of the decade
The UK must prepare for a period of heightened international competition and technological turbulence as it seeks to take advantage of the opportunities provided by LLMs.
A multi-billion pound race is underway to dominate this market. The victors will wield unprecedented power to shape commercial practices and access to information across the world.
Five key areas for action on LLMs
We are optimistic about this new technology, which could bring huge rewards and drive groundbreaking scientific advances. Far-sighted and speedy action is needed to catalyse innovation responsibly and mitigate risks proportionately.
1. Focus more on opportunities
The Government’s focus has skewed too far towards a narrow view of AI safety. It must rebalance, or else it will fail to take advantage of the opportunities from LLMs, fall behind international competitors and become strategically dependent on overseas tech firms for a critical technology.
This requires focusing more on opportunities and encouraging responsible innovation. The Government should improve access to computing power, increase support for digital skills, review AI research funding, and help start-ups and university spinouts grow.
2. Give copyright holders a fairer deal
Some tech firms are using copyrighted data to train models without permission and without compensating rightsholders.
We think this is unfair. The point of copyright is to reward creators, incentivise innovation and prevent others from using works without permission. Current laws are failing to ensure this happens.
The Government must resolve this problem, including by changing legislation if needed. We’re also calling for more transparency from tech firms about how they use data, and investments in new licensed datasets to encourage good practice.
3. Tackle near-term risks
A robot apocalypse is unlikely. But LLMs do amplify existing risks, such as cyber-attacks, synthetic child sexual abuse material, terrorism instructions and disinformation.
They also exacerbate biases and discrimination. The Government must scale existing mitigations quickly.
4. Investigate catastrophic risks carefully
Catastrophic risks involving thousands of deaths (from biological attacks, destructive cyber weapons or critical infrastructure failure) are less likely, but cannot be ruled out. There are, however, no early warning indicators.
That gap needs addressing as a priority, alongside mandatory safety tests for the riskiest models.
5. Prioritise open market competition
The Government is relying on existing regulators to oversee AI, rather than creating new regulators or laws.
These regulators need new powers, resources and AI standards for this approach to work. We also need legal clarity about who is responsible if things go wrong.
More focus on open competition is vital too, to ensure new businesses aren’t stifled by current market leaders.
What happens next?
We have made our recommendations to the Government and it has two months to respond to our report.
Read the full report on our website.
Find out more about our inquiry and our committee.
Follow the committee @LordsCommsCom
Cover image: © ana - stock.adobe.com