Artificial intelligence (AI) usage is expanding rapidly among businesses and individuals, while the regulatory environment remains nascent, particularly in the Asia-Pacific region. This chapter frames the context for drafting regulation by discussing the five key challenges governments face in this task – ethical considerations, managing innovation, digital infrastructure, talent and labor dynamics, and geopolitics. Understanding these challenges helps in navigating governments’ perspectives on AI, identifying where those perspectives intersect with business decisions, mapping risks for firms and drafting more effective engagement strategies. A summary of the approaches of three leading AI powers — the United States, China and the European Union — and of multilateral initiatives follows this discussion to situate the global dialogue on AI regulation.

Global Trendlines

AI is an epoch-making technology on the verge of transforming our societies and the way we do business, and it possesses tremendous potential for economic growth. Not only has AI usage by individuals increased; AI deployment by businesses has also risen markedly over the past five years.

According to a global survey, around 72 percent of the organizations surveyed are using AI in at least one business unit or function. This finding is echoed in Fortune 500 companies’ earnings calls, around 80 percent of which have mentioned AI.

While investment in and usage of AI have progressed rapidly, regulatory structures are in nascent stages, particularly in the Asia-Pacific. Most governments are concerned about the risks of AI in their jurisdictions, but not all are looking at regulating it at present. In fact, several nations in the region have yet to announce their national AI strategies. However, the global dialogue on AI safety is moving toward regulation, even if limited in scope, making it a matter of when, not if.

On one end are states hesitant to regulate an embryonic AI ecosystem for fear that rushed regulation will hinder innovation. On the other are frontrunners like the European Union (EU), which has implemented legislation with clear compliance requirements and penalties. It remains unclear how this law will be enforced and how it will impact AI development in the bloc, but it marks the first comprehensive legislation on AI. Layered on top of these two approaches are regional and multilateral initiatives that aim to define AI rules. The result is a complex jigsaw puzzle of a regulatory landscape with several governance gaps.

Understanding this policy landscape and identifying governance gaps is crucial for enterprises for two reasons. First, patchy AI laws are a regulatory and compliance headache for firms, particularly multinational corporations, that are still investing in and optimizing their AI strategies. AI technologies are expensive investments and therefore require a thorough mapping of the associated risks. Second, and more importantly, identifying governance gaps matters because the private and tech sectors are ahead of most governments in their AI capabilities and knowledge. Understanding these regulatory gaps will give businesses the information they need to identify market opportunities and engage governments to ensure that regulations do not overreach and hinder growth.

Challenges With AI Regulation

Before unpacking the regulatory puzzle, it is important to understand the five key challenges governments face when drafting policies for their AI ecosystems. In addition to serving as a primer on governments’ perspectives, these challenges overlap with business concerns and form crucial components of firms’ decisions when deploying AI. They are ethical considerations, managing innovation, digital infrastructure, talent and labor dynamics, and geopolitics. A brief overview of these issues will help set the context for evolving regulatory oversight in different regions of the world.

Ethics of AI

Ethical considerations on rights, liabilities, harm, transparency and the risks of AI usage are at the forefront of dialogue on AI regulation. AI is as much an ethical challenge as it is a technological one.

Instances of AI being used for mis- and disinformation are increasing by the day and sit front and center of governments’ concerns with regulating AI — more so because these tools are being used to influence and interfere in elections and domestic politics. Closely aligned with this threat are AI image, audio and text impersonations used for scams.

Other concerns in major focus are those regarding rights and transparency. Reports of the use of copyrighted and private data to train models have raised calls for transparency. Activists and experts have called for disclosure of training data to understand how AI models are trained and how they generate responses. Another factor driving calls for transparency is the concern that algorithms are perpetuating existing biases. Research has shown that algorithms are not free from biases, but rather extend them if coded improperly and trained on imperfect data.
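This dynamic is straightforward to demonstrate. The sketch below is a minimal, hypothetical illustration in Python (the synthetic data, variable names and use of the scikit-learn library are assumptions for illustration, not drawn from any study cited here): a simple hiring classifier is trained on historical decisions that favored one group independently of skill, and the fitted model assigns a large positive weight to group membership, reproducing the bias rather than correcting it.

```python
# Minimal sketch: a model trained on skewed historical data absorbs the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, size=n)      # protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, size=n)    # legitimate qualification signal

# Imperfect training labels: past decisions favored group 1
# independently of skill, exactly the bias a fair model should not learn.
hired = (skill + 0.8 * group + rng.normal(0.0, 0.5, size=n)) > 0.5

model = LogisticRegression().fit(np.column_stack([group, skill]), hired)

# The learned weight on `group` is large and positive: the model has
# internalized the historical bias rather than judging on skill alone.
print(dict(zip(["group", "skill"], model.coef_[0].round(2))))
```

The point of the exercise is that nothing in the code is malicious; the bias enters solely through the training labels, which is why calls for training-data transparency feature so prominently in the regulatory debate.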

On the flipside, absolute governmental regulation of AI enables administrations to act as “ministries of truth” and clashes with free-speech rights. This risk is illustrated by instances like Chinese chatbots’ responses to inquiries about Tiananmen or the Indian government’s reaction to a chatbot calling Prime Minister Modi a fascist. Regulation of information is a slippery slope and requires careful calibration.

Ethical risks matter to business operations as well. Organizations need to understand how algorithms work before deploying them in their operations, and how the data they input is being processed. Where rights and liabilities around AI usage are undefined, accidental harm to consumers might invite litigation and damage corporations’ reputations. Businesses, like individuals, are also concerned about the privacy of their data and would benefit from greater transparency and explainability around AI models.

As AI usage expands, these concerns are only expected to intensify. Given that this is the territory of AI that impacts everyone — individuals, businesses and governments — it is already the first to be addressed, whether through laws or strong norms. In certain jurisdictions, like the United States (see Regulatory Approaches in Key Global Markets), piecemeal regulations tackling some of these issues have already been introduced, and more are expected to follow once further consensus on approaches is reached. Even in their absence, leading AI tech firms and businesses have set up ethics committees, procedures and principles, and have taken public pledges to convey their commitment to the ethical development of AI models. There is also great interest from a range of actors, from government agencies and tech giants to AI creators and even the pope, in setting the “algor-ethics” agenda.

While various actors propose similar ethical considerations — like transparency, safety, privacy, user harm and equity — the connotations they attach to those terms differ greatly. For example, the European understanding of the term “privacy” is significantly different from Singapore’s. These differences introduce subtle but significant variations in the way regulatory thinking evolves across jurisdictions. An understanding of the nuances of local cultural and legal contexts in operating markets is, therefore, crucial when monitoring the development of regulations and creating engagement plans.

Managing Innovation

AI is expected to add around US$15 trillion (approximately five times the GDP of France) to the global economy by 2030. While countries and businesses are aware of the risks, the financial costs of losing a competitive edge in the AI race are significant.

Since the industry is still nascent, with several use cases unexplored, countries are incentivized to attract new investments while ensuring that they provide the most conducive and business-friendly environment for companies to set up shop. This requires a fine balance of policies to ensure that the government’s approach is clear and easily implemented. Excessive compliance requirements can dissuade companies from pursuing breakthroughs, slow down production time and increase costs for AI innovators. These concerns are particularly acute for startups and smaller players — the ones primarily leading the charge in AI innovation — because such policies can hinder investments. On the other hand, uneven implementation of policies can also erode businesses’ confidence in the administration.

The CEO of a leading AI company mentioned last year the possibility of ceasing operations in Europe in response to the EU’s AI Act, citing concerns over compliance. While the company did not leave the EU market, it lobbied along with several others to water down the EU AI Act and reduce its regulatory burden. A similar trend is visible in the United States as calls for regulation have increased: from 2022 to 2023, there was a 185 percent increase in the number of companies lobbying the U.S. government on AI-related issues.

The scarcity of AI talent, infrastructure and other resources adds to governments’ concern about finely balancing regulation and innovation. Argentina’s case, however, highlights a different approach altogether: it is promising lighter regulation to attract AI companies and emerge as “the world’s fourth AI hub.” The lack of regulation may itself be seen as an incentive, but it is often a combination of factors that attracts investors to a given market. While some other countries will probably also bank on such incentives to attract innovators, it is unlikely that guardrails on AI will be absent in most parts of the world.

Digital Infrastructure

When thinking of AI, it is easy to discount the policies and regulations that impact the digital infrastructure underpinning AI systems. Semiconductor chips to build computers, data centers and internet infrastructure, the energy infrastructure to supply these power-hungry machines, and the water needed to cool equipment are all critical factors. Regulations and policies on these aspects can have significant impacts on the overall AI ecosystem in a market.

The growth of AI is driving demand for increased data capacity and fueling the development of more data centers. Data center growth is expected to boom particularly in Asia-Pacific. These centers require significant resources, raising concerns about surging environmental costs. By 2026, global data centers’ total electricity consumption is expected to reach 1,000 terawatt-hours, roughly the annual consumption of Japan. Data centers’ water consumption is also rising, with tech giants reporting greater annual water usage in their latest year-end reports.

Governments need to balance their environmental goals, net-zero commitments and business interests. We have already witnessed some countries, like Singapore, the Netherlands and Ireland, pause data center construction to manage their power grids. Others, like Japan and Taiwan, have imposed limitations. Data centers are being subjected to sustainability codes that require monitoring and sustainability strategies. These developments directly impact data center markets and require evaluation.

At the same time, countries are investing significant sums in their domestic semiconductor industries to boost chip production, attract fabrication plants and reduce reliance on imports. Policies like the recent U.S. export bans on semiconductor technology, and the incentives offered by various countries in the Indo-Pacific region, can play a huge role in determining a market’s attractiveness to chip manufacturers.

For businesses, monitoring these developments is key to ensuring successful AI strategies and avoiding policy-induced delays. Additionally, assessing countries’ strategies and progress in these domains is vital to understanding their ecosystems’ readiness for AI growth. Knowledge of the policies and cultural contexts that affect AI-related digital infrastructure can also present opportunities for investment in areas like sustainable technologies, green energy projects, data centers and semiconductor markets. In sum, this information is essential for effective stakeholder engagement, market entry and business expansion strategies related to AI.

Talent and Labor Dynamics

Advancements in AI are introducing unique imbalances in the labor market as they drive changes in skill requirements. Governments are under increasing pressure to intervene with policies that manage a fair transition for markets and workers. These policies can impact the development and deployment of AI in dramatic ways.

Two parallel trends are contributing to this pressure on governments. On one end, countries are facing a crunch for top AI talent that is vital for sustained innovation; on the other, workers and unions are expressing concerns over job losses due to AI and demanding protective legislation.

High-caliber AI talent is extremely scarce and highly mobile. Big tech companies and startups are offering lucrative salaries to lure the right talent and are poaching entire AI engineering teams from innovative startups. Such practices have exposed businesses to antitrust regulatory action, as illustrated by the recent U.S. Federal Trade Commission probe of a big tech company. To attract talent, several countries, such as the United Kingdom and Singapore, have introduced special immigration schemes and AI talent corridors to ease migration for skilled tech workers. In others, like the United States and France, companies have expressed concerns about immigration rules that make it harder to draw foreign talent. A leading tech company sent a letter to the U.S. Department of Labor last year urging it to revise immigration policies for the AI sector.

Despite all the troubles with hiring, retention of skilled talent is the bigger problem cited by most world leaders and tech CEOs. In Korea, for example, despite a booming AI industry, there is a massive net outflow of AI talent because local companies are unable to match the salaries and scope of work offered by companies in the United States and the United Kingdom. Big tech companies have also seen net outflows of AI talent as workers leave to start their own companies or join more agile startups where they have more control over product lines.

Systemic changes in the nature of work are being felt at the other end of the spectrum, too. Fearing job losses and the automation of industries, labor groups and unions around the world have started pressuring governments to ban or regulate employers’ use of AI in their industries. The International Monetary Fund (IMF) has predicted that almost 40 percent of jobs worldwide will be affected by AI, with a sizable chunk completely replaced. Governments and companies are under pressure to strike the right balance between reaping the productivity gains of AI and ensuring fair worker rights. At this stage it is unclear whether these rights will be secured through government intervention or through self-regulation by industry players, as in the deal struck by the Writers Guild of America on AI usage. Nevertheless, monitoring these trends is necessary to ensure agility in shaping and adapting to the evolving AI landscape.

Geopolitics

Finally, geopolitics is playing a crucial role in the AI race. Like other emerging technologies, AI is dual-use in nature, with both civilian and military applications. World leaders are eager to tap into its productivity gains and advance their countries’ development, which is adding to the competition for AI resources.

Geopolitical competition and calculations intersect with all the challenges discussed above. Not only are countries trying to poach the best foreign AI talent, but they are also seeking to limit access to key digital infrastructure, as the U.S. export bans on cutting-edge semiconductors for China demonstrate. States are also setting agendas and driving multilateral initiatives toward AI values and ethics in order to direct the development of AI along their preferred paths. Geopolitics is also driving wedges in global supply chains, from critical minerals to energy, and exacerbating crises, such as in the Taiwan Strait, by adding a new dimension to countries’ relations.

The development of AI and large language models in select languages also exacerbates the risk of a widening digital divide. Countries are concerned that foreign AI solutions will be unable to effectively understand cultural nuances and language variations, essentially minimizing use cases and productivity gains. To insulate themselves from these tensions, several countries are attempting to build up domestic capabilities and sovereign AI — state-owned AI models trained on local data and languages. These efforts present opportunities for growth in AI usage and demand for allied infrastructure and energy. On the other hand, they also complicate the business environment for companies by reducing interoperability in AI stacks and raising barriers to trade in digital services.

A nuanced understanding of domestic politics in operating environments can help businesses better assess the risks to their investments and offer more agility in decision-making by providing intelligence on political changes. It can also be useful in understanding local contexts and identifying new opportunities for business expansion.

Regulatory Approaches in Key Global Markets

Some countries have started putting regulatory frameworks for AI in place. While these provide early indications of what AI regulations may look like, there is a long way to go before we see internationally accepted regulation in a finalized state. That said, the regulatory practices of the United States, China, the European Union and multilateral bodies set examples for other states to evaluate and follow, and they will have far-reaching consequences for businesses across the world. This section briefly outlines the regulatory approach in each.

United States: Incremental Market-Driven Approach

The United States is one of the world leaders in the development of AI technology; hence, its approach to AI regulation is consequential globally. Despite an increasing number of AI-related bills being presented in the U.S. Congress, at present there is no comprehensive federal regulation or legislation in the United States governing the AI ecosystem. In this landscape, until recently the U.S. government had followed an executive-led approach built around President Joe Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. President Donald Trump has revoked this order, which he believed hindered AI innovation. Currently, several existing federal laws concerning AI have limited application.

Overall, the U.S. approach is driven by businesses and builds incrementally to safeguard American citizens’ rights without imposing top-down regulatory burdens on businesses. However, the lack of uniformity and clarity in scope, definitions and compliance poses both a legal hurdle and an opportunity for businesses seeking to deploy AI systems.

Existing legislation on privacy and intellectual property has been the primary tool used by the U.S. government to regulate AI. This is demonstrated by the April 2023 joint statement by the Federal Trade Commission, the Equal Employment Opportunity Commission, the Consumer Financial Protection Bureau and the Department of Justice stating that “existing legal authorities apply to the use of automated systems and innovative new technologies.” Current state privacy laws, like the California Consumer Privacy Act and Illinois’ Biometric Information Privacy Act, can also serve as the basis for litigation against automated decision-making. Finally, as several intellectual property cases before U.S. courts demonstrate, existing legislation is also being used to rein in the unchecked use of AI. Federal and state agencies are consequently empowered to take regulatory action based on present laws.

Since there is no overarching federal legislation, some sector-specific laws also apply in limited use cases. An example is New York City’s Local Law 144 of 2021, which prohibits the use of AI systems in employment decisions unless they have been audited for bias.

The White House had been leading the charge on AI regulation in the United States with the release of the Blueprint for an AI Bill of Rights in October 2022 and the AI Risk Management Framework in January 2023. While the risk management framework is a voluntary mechanism for AI risk monitoring, the blueprint provides five principles to guide future legislation — safe and effective systems; protection from algorithmic discrimination; data privacy; transparency in the form of notice and explanation; and human alternatives, consideration and fallback, with the ability to opt out of automated systems.

Biden’s executive order on AI built on these principles, asking for guidelines and assessments on seven actions and rule-setting on one. Most notably, the order required developers of potential dual-use models to report information about training activities, ownership and red-team safety tests of these models to ensure the safety and security of AI technology. The other seven parts of the order focused on industry development and application strategies for the United States to maintain its lead: promoting innovation and competition; supporting workers; advancing equity and civil rights; protecting privacy; protecting consumers, patients, passengers and students; advancing the federal government’s use of AI; and strengthening American leadership abroad. While Trump has revoked the order, much of it has been set in motion at the federal agency level, so it remains to be seen how far his administration pulls back from the Biden administration’s policies.

In the final days of the Biden administration, new export controls for advanced semiconductors and AI models were introduced under the title “Framework for the Responsible Diffusion of Advanced Artificial Intelligence (AI) Technology.” The new regulation restricts access to advanced semiconductor chips and closed-weight AI models for all but the United States and 18 allies and partners. It represents the most significant expansion of China-focused semiconductor export controls since the sweeping set issued in October 2022, and it marks the first inclusion of AI models in such controls. The Trump administration will be responsible for implementing the ruling as it enters its 120-day consultation period to receive feedback.

The future implementation of AI governance mechanisms in the United States will likely be pared back in certain dimensions, since Trump has called for a more pro-business approach. Perhaps the biggest indicator of the technology industry’s confidence in a more facilitative domestic environment is the recent half-trillion-dollar investment announcement in a joint venture called Stargate by some of the biggest technology companies in the United States. On the contrary, it is unlikely that the Trump administration will dilute any of the export controls on China concerning AI models and advanced semiconductors.

At present, the inability of the U.S. Congress to agree on AI legislation remains the biggest hurdle to uniform federal rules across the country. As with state privacy laws, AI regulation may also evolve into a patchy and complex environment with varying requirements and standards.

China: Targeted State-Led Approach

Even though China has no overarching national AI law, it was the first to introduce targeted national regulations on AI systems. Its AI regulations specifically apply to algorithmic recommendations, deep synthesis technologies and generative AI systems like ChatGPT. The state’s approach has called for the protection of socialist values, but it is not a top-down edict from the Chinese Communist Party (CCP). The regulations have offered protections for consumers and created mechanisms like algorithm registries that evolved out of deliberations between the CCP, state agencies, scholars and technocrats. In China, academics and scholars are key shapers of policy discourse on AI, and many have expressed concerns about AI risks. As a result, one of the core principles of China’s AI governance regime, laid out in a CCP policy document, is to “ensure that AI always remains under human control.” Overall, China’s approach shows incisiveness and agility while welcoming the development of AI systems. For foreign companies, particularly those from countries with more democratic and liberal value systems, Chinese regulatory compliance for AI might be a hurdle.

The government’s provisions on algorithmic recommendations have been in effect since March 2022, protecting individuals by giving users the ability to opt out, banning differential pricing and increasing transparency through the government’s algorithm registry. The deep synthesis regulations, on the other hand, apply to all synthetic algorithms that produce text, graphics, audio, video or virtual scenes. These establish rules for labelling and content management, technical security, transparency, data security and personal information protection.

Finally, China’s latest interim measures on generative AI focus on AI development along the “core socialist values of China,” with accompanying measures requiring generative AI providers to use legal data sources, respect intellectual property rights, obtain consent for the use of personal information, and maximize the authenticity, accuracy, objectivity and diversity of training data. However, these provisions only apply to generative AI services offered to Chinese nationals and therefore signal a slight easing to allow for AI development in China.

China’s approach to AI regulation has been agile, targeted and cautious, offering lessons for other jurisdictions. Businesses have clear compliance requirements and a relatively clear understanding of the regulatory mood in China. However, the requirements for data localization and for AI systems not to endanger the “core socialist values of China” can pose challenges for foreign companies looking to enter the Chinese market. A case in point is the omission of a big tech company’s latest AI offerings in China despite it being one of the company’s biggest consumer markets. The Chinese government shares common concerns with governments around the world, but its definitions of the terms within AI safety might differ. Ultimately, its state-led approach will likely clash with other countries’ regulatory approaches and reduce interoperability for companies.

European Union: Rights-Driven Approach 

The European Union’s approach differs from the two discussed above. It has broken away and introduced a comprehensive AI law that safeguards the rights and safety of its citizens. Its AI legislation follows a risk-based classification with compliance obligations for AI systems. While most generative AI systems will fall under transparency and copyright requirements, certain higher-risk use cases face more regulatory oversight and bans. According to the European Parliament, the EU AI Act does not overly burden companies with compliance while adding safety guardrails on AI development in the market.

The AI Act categorizes AI systems into unacceptable-risk, high-risk and limited-risk systems. Unacceptable-risk systems are considered a threat and banned completely; these include systems engaging in social scoring, biometric identification, facial recognition and behavioral manipulation. High-risk use cases, such as educational training, medical devices, cars, aviation, and migrant and border controls, will be assessed before being put to use as well as throughout their lifecycle. For generative AI and other limited-risk systems, the law imposes transparency requirements, including labelling, prevention of illegal content generation and summaries of copyrighted training data. Only advanced generative models like GPT-4 will have to undergo evaluations.
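To see how this tiering might translate into day-to-day compliance work, the sketch below is a minimal, hypothetical triage tool in Python. The tier assignments, use-case names and obligation summaries are illustrative simplifications of the categories described above, not legal determinations under the Act.

```python
# Hypothetical compliance-triage sketch for the EU AI Act's risk tiers.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment before deployment and across lifecycle"
    LIMITED = "transparency obligations (labelling, copyright summaries)"
    MINIMAL = "no additional obligations"

# Illustrative mapping based on the categories described above; real
# classification requires legal analysis of the Act's annexes.
USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "behavioral manipulation": RiskTier.UNACCEPTABLE,
    "medical device diagnostics": RiskTier.HIGH,
    "border-control screening": RiskTier.HIGH,
    "general-purpose chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}

def triage(use_case: str) -> RiskTier:
    # Default conservatively to HIGH when a use case is unmapped.
    return USE_CASES.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for uc in USE_CASES:
        tier = triage(uc)
        print(f"{uc}: {tier.name} -> {tier.value}")
```

Defaulting unmapped use cases to the high-risk tier reflects the conservative posture a compliance team might adopt while guidance on the Act matures.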

The EU’s rights-driven approach attempts to balance security and safety with fostering innovation. By categorizing AI into risk tiers, the EU approach gives businesses a clearer understanding of their compliance obligations for AI usage. It does not hinder the rollout of generative AI services in the region with excessive compliance requirements, and it provides sandboxes for pre-release evaluation and testing. The AI Act is the first in the world to impose comprehensive obligations on AI, but its value-add remains the clarity it offers businesses on their compliance requirements; other jurisdictions have imposed similar controls on AI unevenly, through mechanisms such as privacy and data regulations. While this approach could produce a General Data Protection Regulation-like dissemination of EU standards across countries, it could also be a hurdle for business operations in the EU market. Consequently, several Asian countries are waiting for the implementation of the law to assess its full impact on Europe’s AI ecosystem before they adopt some of its key features.

Multilateral Initiatives

Given the global interest in regulation, several multilateral initiatives to develop AI rules have been introduced. These have not resulted in any global treaty on AI, but rather in voluntary guidelines and pledges by companies and countries. The groupings include the AI Safety Summits, the Global Partnership on AI (GPAI), the Group of 7 (G-7), the United Nations, the Group of 20 (G-20), the Association of Southeast Asian Nations and the Organization for Economic Cooperation and Development, all of which have added to the discourse around AI. Most of these initiatives call for greater security, transparency and interoperability between AI regulations on the global stage, despite differences in regulatory approaches and conceptual definitions among some countries. For instance, G-7 leaders last year announced their desire to increase interoperability between their AI safety institutes despite the EU’s greater compliance requirements. Some, like the G-20, the United Nations and GPAI, are also pressing to bridge the digital divide in AI between Global North and Global South countries.

While global summits and dialogues are helpful in steering discussions toward easing cross-border functionality for businesses, at this stage it remains uncertain how exactly that will be achieved. Partly contributing to this uncertainty is the lack of AI legislation and regulatory clarity in most parts of the world. Another factor that will hamper interoperability is the differing value systems and entrenched interests of states, for example between democratic and non-democratic regimes or young and aging economies. Thus, it remains critical for businesses to stay agile in this ever-changing regulatory landscape and to engage with governments and key stakeholders to push for interoperable global standards.