Is Governing Artificial Intelligence Possible?
How AI Is Regulated
It is both unsettling and exciting to have a realistic conversation with a computer. Thanks to the rapid development of generative artificial intelligence, most of us have now experienced this potentially revolutionary technology, which is changing how we access platforms such as VerdeCasino, communicate on social media, and go about our day-to-day lives. The full extent of what generative AI can do is still debatable, but all the signs indicate it will be highly disruptive. The last time we witnessed social change on this scale was the rise of Web 2.0, when innovative companies such as Google and Facebook transformed how communication and services are delivered. Yet despite their success, even these companies still face new regulatory demands from time to time. So, why should AI be regulated, and how can this be achieved? This article takes you through some of the measures put in place to keep AI technology in check across different parts of the world.
Why Is Regulating AI Important?
AI tools are being ‘trained’ and ‘fed’ massive amounts of data in ways that are largely unchecked and unregulated. The data fed into these systems often contains bias and errors, which can lead to automated discrimination. Some AI tools are also trained on chats, private emails, and other sensitive data, which can eventually expose personal details. These are just some of the reasons why AI should be regulated, and why regulation needs to happen at this early stage of the technology’s development.
How Are Different Countries Regulating AI?
At the time of writing, a handful of countries are at the forefront of regulating artificial intelligence. Australia, China, the European Union, and the United States are among those putting measures in place to ensure AI brings more good than harm to society.
The United States of America
As of May 3, 2023, the United States was exploring how multiple regulations could be used to keep AI in check. Senator Michael Bennet introduced a bill to create a task force that would examine the United States’ policies on AI and propose ways to reduce threats to privacy, civil liberties, and other aspects of daily life. The head of the US Federal Trade Commission also said the agency was looking at ways to use existing laws to counter the dangers associated with artificial intelligence; the threats the agency identified include AI “turbocharging” fraud and concentrating power in the hands of a few organizations. President Joe Biden has repeatedly told his science and technology advisers that AI could help tackle disease and climate change, but that it is crucial to address the risks the technology poses to the economy, national security, and society at large.
G7 Looking At Ways to Regulate AI
G7 leaders meeting in Hiroshima, Japan, on May 20 acknowledged the urgent need for governance of AI and immersive technologies. They agreed to have their respective ministers discuss the technology through a forum dubbed the “Hiroshima AI Process” and report on the results.
European Union Regulation
The European Union is perhaps the region that has made the most significant progress on AI regulation. Key EU lawmakers have drafted rules to control generative AI and proposed a ban on facial surveillance. At the time of writing, the draft is before the European Parliament; if it passes, it will become the EU’s AI Act. The European Data Protection Board, which brings together Europe’s national privacy watchdogs, has set up a task force to look into ChatGPT. The European Consumer Organisation (BEUC) has also joined other bodies in raising concerns about AI chatbots such as ChatGPT, urging EU consumer protection authorities to investigate the technology and the harms associated with it.
China Moves to Regulate Speech
Since 2021, the Chinese Communist Party (CCP) has moved swiftly to roll out regulations targeting recommendation algorithms, generative AI, and synthetic content such as deepfakes. The rules prohibit price discrimination by recommendation algorithms on social media and require creators of synthetically generated content to label it clearly. In addition, a company that wants to launch an AI chatbot must train it on data, and have it produce content, that is ‘true and accurate.’
The Future of AI Regulation
All in all, let us hope the bodies tasked with regulating artificial intelligence come up with a future-proof set of rules that does not cripple a technology that has already proven its usefulness to society. Equally, those who build this technology should comply with those rules so that everyone who uses it remains safe.