Artificial Intelligence & Machine Learning

UK Government Warned of AI Regulatory Capture by Big Tech

UK Parliament Urges Competition Regulator to Keep LLMs Under 'Close Review'
A U.K. parliamentary committee warned against regulatory capture by big AI firms. (Image: Shutterstock)

A U.K. parliamentary committee scrutinizing the artificial intelligence market urged the British competition regulator to closely monitor developers of foundation models and warned against regulatory capture by big tech companies.


The House of Lords Communications and Digital Committee in July opened a consultation on developing a regulatory framework for large language models, or LLMs. In a Friday report detailing its findings, the committee urged the government to make market competition an explicit policy objective. That means "ensuring regulatory intervention does not stifle low-risk open access model providers."

Already, the market is trending toward consolidation, and it's plausible that a small number of the largest LLMs will be used to power smaller models, "mirroring the existing concentration of power in other areas of the digital economy," the report said.

"Competition dynamics will play a defining role in shaping who leads the market and what kind of regulatory oversight works best," the report said. Britain's market regulator, the Competition and Markets Authority, should closely monitor the industry, it said.

The antitrust authority in December separately announced a preliminary review of Microsoft's interest in OpenAI, soliciting comment on whether the two companies have effectively merged. In the weeks since, European and U.S. regulators have initiated similar inquiries (see: UK Market Regulator Reviews Microsoft's Interest in OpenAI).

Governmental agencies that oversee the market should guard against being co-opted by large AI companies - and should take steps to ensure that large companies don't unilaterally set the AI regulatory agenda, the report said. The debate on AI safety, for example, is dominated by portents of catastrophic risk, often from the same people who developed LLMs. "Critics say this distracts from more immediate issues like copyright infringement, bias and reliability," the report said.

The U.K. government itself has pivoted too far toward a narrow focus on AI safety, committee members said. "A rebalance is therefore needed, involving a more positive vision for the opportunities and a more deliberate focus on near‑term risks."

Committee lawmakers called on the U.K. government to review copyright rules to ensure that rights holders are adequately protected under British law. The call comes after the Financial Times and The Guardian told the committee that LLM developers are "unethically and unlawfully using copyrighted data to train models without permission."

The Financial Times on Sunday reported that attempts by the U.K. Intellectual Property Office to broker a deal between creative industry organizations and leading AI companies - a voluntary code of practice for using copyrighted material to train AI models - collapsed after the stakeholders could not reach an agreement.


About the Author

Akshaya Asokan


Senior Correspondent, ISMG

Asokan is a U.K.-based senior correspondent for Information Security Media Group's global news desk. She previously worked with IDG and other publications, reporting on developments in technology, minority rights and education.



