Artificial intelligence (AI) systems have been changing the way businesses operate for several years now. But the public release of systems like ChatGPT and Google Bard has dramatically raised awareness of these technologies.

Many people are concerned about how AI could affect their jobs. Others worry about whether algorithms and models may be used to generate and distribute misinformation.

These concerns are accompanied by questions. How does AI fit within existing competition regulations? Do consumers require increased protections from AI systems? And how can consumers be sure the information produced by generative AI is accurate?

The UK CMA steps in

The UK’s Competition and Markets Authority (CMA) has recently announced an investigation into the models that underpin popular AI systems like ChatGPT. The watchdog hopes to better understand how the technology works so that it can ensure British consumers’ rights are properly respected.

Announcing the investigation, CMA chief executive Sarah Cardell said, “It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers while people remain protected from issues like false or misleading information.”

The industry responds positively

Most AI technology companies have welcomed the review, sharing the CMA’s belief that its conclusions and recommendations will ensure a better outcome for consumers. Some operators have said that they hope the final report will address the “competitive imbalance” and “lack of disclosure” present in Big Tech’s proprietary data and AI training models.

The CMA is clear that the investigation will be broad, without focusing on any particular vendors (such as OpenAI, Google or Microsoft). It has also confirmed that the UK regards artificial intelligence as “a technology that has the potential to transform the way businesses compete as well as drive substantial economic growth.” As such, the review is likely to result in a series of framework recommendations rather than a blanket attempt to control the industry. This is a very different approach from the European Union’s AI Act, which will establish an oversight body and bespoke rules to govern the use of artificial intelligence within member states.

Don’t panic!

Advances in AI have shocked the general public and politicians, many of whom have been relatively unaware of the potential of generative technologies. This lack of understanding may result in ‘panic’, as the Center for Data Innovation warns: “Exaggerated and misleading concerns about generative AI’s potential to cause harm have crowded out reasonable discussion about the technology, generating a familiar, yet unfortunate, ‘tech panic’.”

The CDI is worried that ‘tech panic’ could cause legislators to create restrictive regulations that stifle innovation and limit further AI development. Should this happen, businesses and consumers may miss out on the extensive benefits these systems bring.

What the CMA will recommend remains to be seen. However, the final report, due to be published in September, could have a profound impact on the way artificial intelligence is used in the UK.