Artificial intelligence systems like ChatGPT and Google Bard continue to cause concern, along with other lesser-known AI tools. The results generated by AI models, and the accuracy of the information they provide, are what concern government agencies.
As is often the case, many governments are already preparing legislation to regulate AI technology and how it is used. The USA, China and the European Union are all proposing strict controls over how AI may be used within their territories. The United Kingdom, however, has suggested it will not be introducing any new laws in the short term.
Balancing concerns with the need for growth
Britain has already declared that it wants to become an AI-enabled country and economy. Back in March, the Secretary of State for Science, Innovation and Technology wrote:
“To ensure we become an AI superpower, it is crucial that we do all we can to create the right environment to harness the benefits of AI and remain at the forefront of technological developments. That includes getting regulation right so that innovators can thrive and the risks posed by AI can be addressed.”
These sentiments have since been echoed by the UK’s minister for AI and intellectual property, who warned that “there is always a risk of premature regulation” which would end up “stifling innovation”.
Becoming an early adopter
Underscoring its commitment to the industry, the UK government has already negotiated ‘early access’ to AI technologies from Google DeepMind, OpenAI and Anthropic. Unfortunately, it is still unclear what these agreements actually mean.
Great Britain also now has an AI taskforce, charged with helping to build ‘foundation models’. Foundation models are built from pre-trained algorithms that developers can then extend to create all-new products and services.
The use of foundation models accelerates the development process so that these new products can be brought to market faster. And it is this speed that the UK hopes will help to establish it as a world leader in AI technology.
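To make the idea more concrete, here is a minimal sketch of how a developer might extend a pre-trained model rather than training one from scratch. It assumes the open-source Hugging Face transformers library and the publicly available distilbert-base-uncased checkpoint, neither of which is mentioned in the article; they simply stand in for any foundation model a developer could build on.

```python
# Illustrative sketch only: extending a pre-trained foundation model for a
# new task, instead of training a model from scratch.
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Load a general-purpose pre-trained model and tokenizer (assumed checkpoint)
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=2,  # attach a new, untrained classification head for our task
)

# The developer then fine-tunes only this small head (and optionally the base
# model) on task-specific data, rather than training billions of parameters
# from zero -- which is where the speed advantage comes from.
inputs = tokenizer("This product launch went smoothly.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.logits)  # raw scores for the two new labels (untrained so far)
```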
Which approach is correct?
So should we be regulating AI, or is the British ‘wait and see’ approach correct? Top academics have warned that humanity risks ‘losing control of autonomous AI’. It is these fears that most regulation seeks to address.
The reality is that the biggest dangers from AI are actually rooted in how the models are trained. If AI models are trained with ‘bad’ data, the results they produce will be similarly faulty. However, this is an issue for technology companies to resolve rather than governments.
The other pressing concern is to do with who controls the AI models and what they do with them. The recent partnership announced between Meta and Microsoft is concerning because it concentrates a huge amount of industry control in the hands of two of the world’s largest companies. If they can corner the AI market, they also corner a huge amount of control over the future of an AI-enabled economy.
So for now, we will have to wait and see whether the UK’s ‘wait and see’ approach is correct. But by holding off on regulation, we should expect to see AI technology develop rapidly in the UK.