AI and Legislation

How is AI regulated around the world?


Written by Reid Banerjee

Artificial intelligence is an exciting new technology, one expected to drastically change much of the world. Its impacts will be far-reaching, reshaping schooling, industry, business, medicine, and more, and many lives will undoubtedly be improved by technologies that use AI. However, many are beginning to look upon the technology with fear. Jobs are already being lost to AI: companies find it more profitable to use AI-powered chatbots than conventional customer service staff, advertisers are using AI generation instead of spending thousands to produce a simple commercial, and senior coders are using AI to become many times more efficient, no longer requiring new staff to work under them. More losses are expected in the near future; Goldman Sachs predicts that 300 million full-time jobs around the world could be at least partially automated by AI. Meanwhile, deepfake technology and AI video generation are producing increasingly realistic media, capable of tricking many viewers into believing it is real footage. In CSU’s poll of 79 East Mecklenburg students, we found that many students are fearful about AI’s future: 66% indicated that AI made them more concerned for the future, while only 17.5% said it made them more hopeful. As fear of AI grows, some are calling for it to be controlled through regulation. But as the fear of AI increases, so does the fear of not having it.

Regulation in the United States

The United States is ranked number one in AI development in most categories and holds a consistent lead over other countries. This makes sense, as top AI companies such as Google, Microsoft, OpenAI, and Meta are all located in the United States. Because of this success, American policymakers have been reluctant to regulate AI, preferring to maintain free-market conditions that promote the technology’s further advancement and preserve the US lead. As such, most federal AI policy has been designed to promote AI rather than restrict it. The CHIPS Act, pushed for and signed by President Biden, promotes investment in technology in the United States and makes AI one of its top priorities. Donald Trump signed an executive order near the start of his term titled “Removing Barriers to American Leadership in Artificial Intelligence”, which directed the Assistant to the President for Science and Technology, in coordination with others, to remove regulations on AI in order to increase the pace and scale of the technology. The goal, as stated in the order, is to “sustain and enhance America’s global AI dominance in order to promote human flourishing, economic competitiveness, and national security.” The order also called for a review of President Biden’s earlier, more regulatory executive order on the issue, which the Trump administration has since repealed. More recently, the controversial “Big Beautiful Bill” championed by Trump initially attempted to follow up on this order by preventing states from legislating AI for 10 years, but that provision was removed from the final version of the bill.

States, too, have been slow to regulate AI. Lawmakers in California, a state home to many prominent AI companies, have attempted to pass bills regulating AI but have not been successful. The only state so far to pass a comprehensive AI regulation into law is Colorado, with its Consumer Protections for Artificial Intelligence Act. The law requires AI developers and deployers to use “reasonable care” to avoid algorithmic discrimination, defined as “any condition in which the use of an artificial intelligence system results in an unlawful differential treatment or impact that disfavors an individual or group of individuals on the basis of their actual or perceived age, color, disability, ethnicity, genetic information, limited proficiency in the English language, national origin, race, religion, reproductive health, sex, veteran status, or other classification protected under the laws of this state or federal law”. This “reasonable care” is enforced through other provisions in the Act, including requirements to provide high-level summaries of the data used to train a model, disclose foreseeable limitations of the model, state the system’s designed purpose and intended benefits, explain how the system was evaluated for algorithmic discrimination and performance, and describe the actions taken to reduce algorithmic discrimination. The deployer of an AI system must also notify the attorney general and any developers of the system within 90 days of receiving any credible report of foreseeable risks of algorithmic discrimination.

 

Regulation around the world

The vast uncertainty surrounding AI has led governments around the world to take very different approaches to it. The United Kingdom has taken a stance similar to that of the United States, focusing on allowing companies to innovate with AI. In a 2023 document titled “AI Regulation: A Pro-Innovation Approach”, the UK’s Department for Science, Innovation and Technology and its Office for Artificial Intelligence detailed a regulatory policy in which existing regulators oversee AI through a set of principles. These principles include “safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.” By giving regulators this freedom, the UK hopes to provide flexible rules that can adapt to an ever-changing technology, as opposed to regulating it through Parliament. This approach appears to have worked so far: the United Kingdom is currently ranked third in Stanford University’s AI vibrancy ranking and consistently places among the top five on several AI metrics. This is not to say that the UK will never regulate AI; the government has specified that it is not opposed to regulating AI more strictly in the future. There may also be a more dramatic shift in policy if the government’s attention turns to the issue, as the more liberal Labour Party has recently replaced the Conservative Party, which was in power in 2023 when the document was written.

Similarly to the UK, Japan is attempting to limit regulation, instead opting to provide nonbinding guidance for companies to follow. Japan is in no hurry to slow the development of AI; instead, it hopes to work with the rest of the world to create a society that benefits from it. Thus, Japan draws heavily on international standards from organizations such as UNESCO, the OECD, and the G20 in creating its guidance. In its 2019 document “Social Principles of Human-Centric AI”, the Japanese government expressed its desire to use AI as a solution to the country’s current problems, such as low birth rates and a shrinking labor force. It sets out goals to encourage the development of AI and to create a society able to utilize it to its full extent, including teaching people about AI’s biases, building an education system that teaches people to use AI in creative ways, and establishing social systems that support people in using it. Japan wants to use AI in ways that promote and protect the principles of human dignity, diversity and inclusion, and sustainability.

Despite its small size, Singapore has become an early leader in AI. In 2019 it became the first country to publish a Model AI Governance Framework, a comprehensive document outlining its desired best practices for “accountability, data quality and protections, transparency in development and deployment, incident reporting, testing and assurance, content provenance, safety and alignment R&D, and AI for public good.” The framework has been updated multiple times to keep pace with advances in AI. Singapore also aims to build tools that encourage companies to develop AI safely, such as AI Verify, which provides testing tools that companies can use to evaluate their systems and share information about their products transparently.

Not everywhere has been so reluctant to regulate AI, however. The European Union recently adopted the first comprehensive regulation on artificial intelligence. The EU AI Act, passed in 2024, aims “to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.” To do this, it sorts AI systems into “risk categories”, each with its own regulations. The strictest category includes AI used for cognitive and behavioral manipulation, “social scoring” AI, biometric identification, and real-time facial recognition. All technologies in this category are banned, with an exception for biometric identification and facial recognition in limited law enforcement applications. A level below this are two types of “high-risk” products: AI systems used in products falling under the EU’s safety regulations (toys, aviation, cars, medical devices) and AI systems used in specific areas that must be registered in an EU database. These areas include critical infrastructure, education and training, employment and management, essential and public services, border control, and law enforcement. High-risk AI systems will be assessed both before being put on the market and throughout their lifecycle, and people will have the right to file complaints about them with designated national authorities. ChatGPT and other generative AI systems do not fall into a high-risk category, but they must follow transparency requirements and copyright law: they need to disclose when content is AI-generated, prevent the generation of illegal content, and publish summaries of the data used to train the model. Content that is generated or modified with the help of AI, such as images, audio, or video files (e.g., deepfakes), must also be clearly labeled as AI-generated so that users are aware when they encounter it. The law does not intend to heavily impede AI technologies; instead, it aims to create an environment where innovation is supported, especially for EU-based startups, by giving companies ways to test their AI before a public release.
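
To make the tiering concrete, here is a minimal sketch of the Act’s risk pyramid as summarized above, expressed as a small lookup table. The tier names, groupings, and the catch-all “minimal risk” default follow this article’s summary rather than the Act’s legal text, and are illustrative assumptions only.

```python
# Illustrative sketch of the EU AI Act's risk tiers as summarized in this
# article. The groupings paraphrase the summary above, not the Act itself.

RISK_TIERS = {
    "unacceptable (banned)": [
        "cognitive or behavioral manipulation",
        "social scoring",
        "biometric identification",        # limited law-enforcement exceptions
        "real-time facial recognition",    # limited law-enforcement exceptions
    ],
    "high-risk (assessed before market entry and over the lifecycle)": [
        "products under EU safety regulations (toys, aviation, cars, medical devices)",
        "critical infrastructure",
        "education and training",
        "employment and management",
        "essential and public services",
        "border control",
        "law enforcement",
    ],
    "transparency obligations": [
        "generative AI (disclose AI content, publish training-data summaries)",
        "deepfakes and other AI-modified media (must be clearly labeled)",
    ],
}

def tier_of(use_case: str) -> str:
    """Return the risk tier for a use case in this simplified model."""
    for tier, examples in RISK_TIERS.items():
        if use_case in examples:
            return tier
    return "minimal risk (no specific obligations)"  # assumed default

print(tier_of("social scoring"))  # -> unacceptable (banned)
```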

China has implemented several AI regulations and guidelines and has attempted to adapt quickly to technological advancements. In July 2023, the Chinese government enacted a regulation entitled “Interim Measures for the Management of Generative Artificial Intelligence Services”, which applies only to services offered to the public, not to AI used for research or scientific purposes. Designed to put equal emphasis on the security and innovation of AI, the measures include provisions to “uphold socialist values”, prevent discrimination, respect intellectual property, prevent monopolies, respect the privacy, reputation, and health of individuals, and increase transparency. Most of the work of upholding these measures rests on AI providers, but several administrative organizations are tasked with enforcing them. China’s Deep Synthesis Provisions, or “Administrative Provisions on Deep Synthesis in Internet-Based Information Services”, which took effect in 2023, create guidelines for the use and development of deepfakes; for example, software must prompt users to inform the individual being deepfaked and obtain that individual’s consent. Additionally, China has created labeling rules, which take effect in September of this year. These rules describe two types of labels: explicit labels, which are visible tags, and implicit labels, which are embedded in metadata. Explicit labels must be added to anything that could confuse the public, such as chatbot output or other AI-generated media. Content distribution services, such as social media platforms, are required to sort AI-generated content into one of three categories and clearly label it with both explicit and implicit tags: confirmed AI-generated content, which carries an implicit tag; possible AI-generated content, which has no implicit label but has user reports of AI use; and suspected AI-generated content, which has an explicit label or other evidence suggesting the use of AI generation.
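
As a rough illustration of how a distribution platform might apply this three-way scheme, here is a minimal sketch in Python. The field names and the ordering of the checks are hypothetical assumptions drawn from the description above; the actual rules are considerably more detailed.

```python
# Hypothetical sketch of the three-way sorting a content platform might
# perform under China's AI-content labeling rules. The field names below
# are illustrative assumptions, not terms from the regulation itself.

from dataclasses import dataclass

@dataclass
class Content:
    implicit_label: bool = False   # machine-readable tag found in file metadata
    explicit_label: bool = False   # visible tag on the content itself
    user_reports_ai: bool = False  # users have flagged the content as AI-made

def classify(item: Content) -> str:
    """Return the category a platform might assign, per the summary above."""
    if item.implicit_label:
        return "confirmed AI-generated"  # implicit metadata tag present
    if item.user_reports_ai:
        return "possible AI-generated"   # no implicit tag, but user reports
    if item.explicit_label:
        return "suspected AI-generated"  # explicit label or other evidence
    return "unlabeled"                   # assumed default for ordinary content

# Example: a video carrying an implicit metadata tag.
print(classify(Content(implicit_label=True)))  # -> confirmed AI-generated
```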

 

Should AI be regulated?

AI technologies are continuously improving. Although the proliferation of AI varies by place and industry, every legislative body is now being forced to weigh the implications of AI regulation. Countries that do not regulate AI risk job losses and damage to their institutions; countries that regulate too strictly risk falling behind the rest of the world and losing out on the many advantages the technology may provide. It is not an easy choice, but public opinion is shifting, even among the world’s young people, who remain some of generative AI’s biggest users. CSU’s aforementioned poll found that 86.2% of respondents believed AI should be regulated more than it currently is, and 38.6% indicated that AI should either be heavily regulated for individual use, leaving it mostly to government or scientific purposes, or be banned completely. Although AI has brought several benefits, especially in the sciences, many are wary of the issues presented by generative machine learning, whether over copyright or privacy. It is too soon to tell what will happen to the industry over the course of the century, but for many, the assurance that regulation brings is worth it, even if it slows the pace of development.





Sources

 

https://www.goldmansachs.com/insights/articles/generative-ai-could-raise-global-gdp-by-7-percent 

https://www.whitehouse.gov/presidential-actions/2025/01/removing-barriers-to-american-leadership-in-artificial-intelligence/ 

https://www.congress.gov/bill/117th-congress/senate-bill/2551 

https://www.congress.gov/bill/116th-congress/house-bill/6216 

https://leg.colorado.gov/bills/sb24-205 

https://leg.colorado.gov/sites/default/files/2024a_205_signed.pdf 

https://hai.stanford.edu/ai-index/global-vibrancy-tool 

https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence 

https://www.diligent.com/resources/guides/ai-regulations-around-the-world 

https://iapp.org/media/pdf/resource_center/global_ai_law_policy_tracker.pdf 

https://www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach 

https://www.chinalawtranslate.com/en/generative-ai-interim/ 

https://www.loc.gov/item/global-legal-monitor/2023-04-25/china-provisions-on-deep-synthesis-technology-enter-into-effect/ 

https://www.insideprivacy.com/international/china/china-releases-new-labeling-requirements-for-ai-generated-content/ 

https://aiverifyfoundation.sg/wp-content/uploads/2024/05/Model-AI-Governance-Framework-for-Generative-AI-May-2024-1-1.pdf 

https://www.pdpc.gov.sg/help-and-resources/2020/01/model-ai-governance-framework 

https://www.smartnation.gov.sg/nais/ 

https://www.meti.go.jp/shingikai/mono_info_service/ai_shakai_jisso/pdf/20210709_8.pdf 

https://www.cas.go.jp/jp/seisaku/jinkouchinou/pdf/humancentricai.pdf 

 

Special Thanks to Simon Wilson for helping with editing.