MDX academic Dr Mahdi Aiash provides insight into how the UK, the EU, the United States and China handle regulation of the technology as the new ‘more human-like’ chatbot is unveiled

Just ahead of the highly anticipated Google I/O 2024 event, the customary platform for significant updates to Google’s flagship products such as Search, Maps, YouTube and Android, the tech giant is poised to introduce a series of new artificial intelligence-driven features. Meanwhile, on May 13, 2024, OpenAI, the company behind ChatGPT, rolled out GPT-4o along with a new user interface, marking its latest attempt to enhance the accessibility and utility of its widely used chatbot.
AI-enabled services are now available to the public across demographics and age groups, and the renewed rivalry between the tech giants has reignited discussions about safety and, indeed, security in the age of AI. This piece therefore spotlights those concerns and answers some of the questions most likely to be raised by the general public, rather than only by the scientists driving the technology.
Simply put, GPT-4o offers a more human-like interaction: users can speak to it, and it can read images and analyse emotions. It also supports 20 different languages [1]. Performance-wise, the new model matches GPT-4 Turbo on text, reasoning and coding intelligence, while setting new high watermarks for multilingual, audio and vision capabilities.
OpenAI asserts that the new version has safety “built-in by design” and has been assessed under the company’s “Preparedness Framework”, which monitors, evaluates, forecasts and guards against potential catastrophic risks.
Without a doubt, there is fierce global competition in the development of AI technology. The primary contenders are the United States, China, the UK and the EU, and both the US and China see AI as vital to national security and economic advancement. Unfortunately, regulatory bodies and lawmakers are not keeping pace with the rapid advance of the technology. Nonetheless, there have been initiatives to regulate AI, most notably the EU’s AI Act [2] and the UK’s White Paper on AI regulation [3].
Conclusion:
The AI technology landscape is presently controlled by a handful of major players. In the absence of enforceable regulation, end-users have little choice but to rely on the assurances provided by AI providers. Personally, I notice clear variations in how these players approach the regulation of AI technology.
In the United States, the primary influencers are the tech giants of Silicon Valley, driven by innovation that leads to profit. Consequently, there is considerable pressure on lawmakers to enact regulations that can keep pace with technological advancement.
China, another major player, has seen its AI regulations crafted by university scholars, in what is known as the Draft of Scholars. These guidelines serve as suggestions rather than legally binding legislation; given the intense competition with the United States, we can only hope they will be taken seriously.

In the UK (and the EU), there appears to be a more cautious approach, with a greater emphasis on regulating the technology. While this may seem wise, given the stances of the other two players there is a significant risk of losing our innovative edge and ultimately resorting to importing technology. It is not surprising, then, that China and the US are currently holding their first top-level dialogue on artificial intelligence in Geneva.
Dr Mahdi Aiash, Associate Professor in Computer Science and Cyber Security and Head of the Cyber Security Research Group at Middlesex University
References:
[1] OpenAI, “Hello GPT-4o”: https://openai.com/index/hello-gpt-4o/
[2] EU AI Act: https://artificialintelligenceact.eu/
[3] UK White Paper: https://shorturl.at/juJQZ