Four Asian jurisdictions passed major artificial intelligence legislation within little more than a year, producing a body of law that is striking not only for its ambition but for how sharply the approaches diverge. Japan's AI Promotion Act entered into force on 4 June 2025. South Korea's Framework Act on the Development of Artificial Intelligence and the Creation of a Foundation for Trust was enacted on 21 January 2025 and entered into force on 22 January 2026. Vietnam's Law on Artificial Intelligence was passed on 10 December 2025 and took effect on 1 March 2026. Taiwan's Artificial Intelligence Basic Act was promulgated on 8 January 2025. Together they represent the most concentrated burst of national AI lawmaking in Asia to date, and collectively they offer a window into how different governments are thinking about AI governance, safety, and economic competitiveness.

For marketing and advertising technology professionals, the significance is direct. AI systems underpin programmatic bidding, audience targeting, content generation, and automated decision-making across digital campaigns. Each of these laws establishes frameworks - some with binding obligations and penalties, some purely aspirational - that will shape how those systems can be built, deployed, and inspected in some of the world's most consequential technology markets. The global divergence of AI regulation has been a consistent concern for international operators, and Asia's new legislation adds four more distinct regimes to an already fragmented landscape.

Japan: a light-touch promotion law with a strategy headquarters

Japan's approach is the most permissive of the four. Act No. 53 of 2025, according to the official translation, frames AI-related technology as "a foundational driver of Japan's economic and social development" and places its primary emphasis on research, development, and utilization rather than constraint.

The law creates a new AI Strategy Headquarters inside the Cabinet, chaired by the Prime Minister and including all Ministers of State as members. That is not a minor institutional gesture. Placing the headquarters at Cabinet level signals that AI strategy is treated as a matter of national executive priority rather than a sectoral concern delegated to a specific ministry. The Headquarters is responsible for drafting a Basic Plan on AI, which must be approved by Cabinet decision and published without delay.

The substantive obligations in the law are distributed across several actors. The State is required to promote research and development from basic science through to practical application, build and maintain facilities and datasets for shared use by research institutions and commercial operators, and ensure that guidelines remain aligned with international norms. Local governments must formulate autonomous policies that leverage local characteristics. Research institutions are asked to pursue interdisciplinary work spanning the humanities and natural sciences. AI-utilizing business operators - defined as anyone developing or providing products or services using the technology, or using it in their own operations - must cooperate with government measures. The public is asked to deepen understanding and interest.

What the law does not contain is equally notable. There are no mandatory pre-market approvals, no risk classification tiers, no administrative fines, no enforcement powers for regulators to compel access to systems or data. According to Article 13, the State shall formulate guidelines that accord with international norms to ensure proper implementation - but this is a direction-setting duty, not an enforceable requirement on private actors. The law explicitly acknowledges that AI carries risks including criminal use, personal data leakage, and copyright infringement, but addresses those risks by calling for transparency in research and development processes rather than through direct prohibition or penalty.

The legislative architecture is designed to be enabling. Article 17 commits Japan to active participation in the formulation of international norms, reflecting a consistent posture at forums including the G7 Hiroshima AI Process - a framework that, according to the IAPP, now includes 66 countries and regions and 38 organizations. The Hiroshima Process Code of Conduct, referenced directly in Taiwan's AI Basic Act, recommends a risk-based approach and identifies 11 key action areas spanning risk management, organizational governance, transparency, and content authentication. Japan's domestic law sits broadly within that spirit while stopping far short of converting it into binding national obligation.

South Korea: structured obligations, a presidential committee, and penalties

South Korea's Framework Act is considerably more ambitious in its institutional architecture and enforcement provisions. The law establishes a National Artificial Intelligence Committee under the President, composed of up to 45 members including a majority of civilian experts commissioned by the President, relevant ministerial heads, and a representative from the Office of National Security. The Committee's term is fixed at 5 years from the date of the Act's entry into force.

The Committee is not purely advisory. According to Article 8, it deliberates and resolves on matters including master plan formulation and review, research and development strategy, identification and improvement of regulations that hinder AI industry competitiveness, infrastructure expansion including data centers, the promotion of AI utilization across manufacturing, services, and the public sector, and the establishment of international norms. It can make recommendations or express opinions to government agencies, and when it does so on matters of statutory or system improvement, those agencies are required to formulate response plans.

The law introduces a risk classification concept through its definition of high-impact artificial intelligence. Article 2(4) lists 11 domains where AI systems are considered high-impact: energy supply, drinking water production, health and medical services, medical devices, nuclear facility safety management, biometric analysis for criminal investigation, decisions affecting individual rights such as hiring and loan screening, transportation management, government decision-making affecting citizens, early childhood and secondary education evaluation, and a catch-all for any additional areas designated by Presidential Decree. That is a broader list than one might expect from a framework statute, and it signals where binding obligations will ultimately concentrate.

Those obligations are specified in Articles 31, 32, and 34. Business operators providing high-impact AI or generative AI must notify users before deployment that the product or service operates on the relevant AI. Generative AI outputs must be labeled as such. Virtual audio, image, or video outputs that are difficult to distinguish from real content require explicit disclosure. Operators of systems where cumulative training computation meets a threshold set by Presidential Decree must identify, assess, and mitigate risks across the AI lifecycle and establish risk management systems capable of monitoring and responding to safety incidents. Failure to meet the transparency notification obligation carries an administrative fine not exceeding 30 million Korean won under Article 43.
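The notice and labeling duties above lend themselves to a simple decision rule. The sketch below is illustrative only - the `AIService` type, its field names, and the plain-language duty strings are our own shorthand for the obligations described above, not terms defined by the Framework Act:

```python
from dataclasses import dataclass

@dataclass
class AIService:
    name: str
    high_impact: bool = False      # falls within an Article 2(4) domain
    generative: bool = False       # produces generated text/image/audio/video
    synthetic_media: bool = False  # outputs hard to distinguish from real content

def required_disclosures(svc: AIService) -> list[str]:
    """Map a service's attributes to the transparency duties described above."""
    duties = []
    if svc.high_impact or svc.generative:
        duties.append("advance notice to users that the service runs on AI")
    if svc.generative:
        duties.append("label outputs as AI-generated")
    if svc.synthetic_media:
        duties.append("explicit disclosure that audio/image/video is synthetic")
    return duties

creative_tool = AIService("ad-creative-generator", generative=True, synthetic_media=True)
print(required_disclosures(creative_tool))  # all three duties apply
```

Under this reading, a high-impact system that is not generative would trigger only the advance-notice duty, while a synthetic-media generator accumulates all three.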

The law also establishes an Artificial Intelligence Safety Institute under the Minister of Science and ICT, dedicated to defining and analyzing AI safety risks, researching evaluation criteria, and conducting international exchange on AI safety. A separate Artificial Intelligence Policy Center can be designated to support policy formulation and international norm development.

South Korea established an AI privacy framework through separate guidelines in August 2025, addressing personal data processing for generative AI development - demonstrating that the Framework Act operates alongside, rather than replacing, existing data protection structures.

Vietnam: the most technically detailed risk-based architecture

Vietnam's Law on Artificial Intelligence, passed on 10 December 2025, is the most operationally complex of the four texts. It introduces a three-tier risk classification system with specific procedural requirements attached to each tier, and it contains provisions on liability, incident management, national infrastructure, and regulatory sandboxes that go well beyond framework principles.

The classification system divides AI systems into three levels. High-risk systems are those capable of causing significant damage to life, health, rights, national interests, or national security. Medium-risk systems are those capable of confusing, influencing, or manipulating users who do not realize they are interacting with an AI or with AI-generated content. Low-risk systems are everything else.

The procedural consequences differ materially. Providers of high-risk systems must undergo conformity assessment before deployment or upon significant system changes. Systems on a list designated by the Prime Minister require assessment by a registered or recognized conformity assessment body; other high-risk systems may be self-assessed. According to Article 14, providers of high-risk systems must establish and maintain risk management measures; manage training and validation data to ensure quality; compile and store technical documentation and operation logs at a level necessary for assessment and inspection; design systems to enable human supervision and intervention; bear accountability to competent State authorities regarding intended use and operating principles; and coordinate with authorities and deployers in inspection and incident remediation.
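Read together, the tiers and assessment routes above reduce to a small routing rule. This is a hedged sketch of our reading of the statute - the tier names, predicate parameters, and route strings are illustrative, and real classification would turn on the Prime Minister's designated list and implementing decrees:

```python
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"      # significant damage to life, health, rights, security
    MEDIUM = "medium"  # may mislead users about interacting with AI/AI content
    LOW = "low"        # everything else

def classify(threatens_life_rights_or_security: bool,
             can_mislead_users_about_ai: bool) -> RiskTier:
    """Assign a tier using the two statutory predicates paraphrased above."""
    if threatens_life_rights_or_security:
        return RiskTier.HIGH
    if can_mislead_users_about_ai:
        return RiskTier.MEDIUM
    return RiskTier.LOW

def assessment_route(tier: RiskTier, on_pm_designated_list: bool) -> str:
    """Pre-deployment conformity assessment path for a given tier."""
    if tier is RiskTier.HIGH:
        return ("registered/recognized conformity assessment body"
                if on_pm_designated_list else "self-assessment")
    return "no pre-deployment conformity assessment required"

print(assessment_route(classify(True, False), on_pm_designated_list=True))
```

The design point worth noting is that the heavier procedural burden attaches to the designation list rather than the tier alone: two equally high-risk systems can face different assessment routes.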

Those obligations carry explicit liability consequences. Under Article 29, where a high-risk AI system is managed and operated in accordance with regulations but still causes damage, the deployer bears responsibility for compensating the damaged person. The deployer may then seek contribution from the provider, developer, or other relevant parties if there is an agreement. Liability is exempted only where damage occurs entirely due to the intentional fault of the damaged person, or in force majeure events. Where a system is hijacked or illegally intervened upon by a third party, that third party bears primary liability; if the deployer or provider was at fault in allowing the intrusion, joint liability applies.
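The Article 29 allocation can be expressed as an ordered set of checks. The function below is a non-authoritative paraphrase of the scheme described above - the parameter names and return strings are our own, and actual liability would turn on the facts and on implementing regulations:

```python
def liable_party(victim_intentional_fault: bool,
                 force_majeure: bool,
                 hijacked_by_third_party: bool,
                 operator_fault_in_intrusion: bool = False) -> str:
    """Allocate the compensation duty for harm caused by a compliantly
    operated high-risk system, per our paraphrase of Article 29."""
    # Exemptions are checked first: intentional fault of the victim, or
    # force majeure, extinguish liability entirely.
    if victim_intentional_fault or force_majeure:
        return "no liability (statutory exemption)"
    # A hijacking third party bears primary liability; an at-fault
    # deployer/provider who allowed the intrusion is jointly liable.
    if hijacked_by_third_party:
        if operator_fault_in_intrusion:
            return "third party, jointly with the at-fault deployer/provider"
        return "third party"
    # Default rule: the deployer compensates, then may seek contribution.
    return "deployer (with possible recourse against provider or developer)"

print(liable_party(False, False, False))  # ordinary harm: deployer compensates
```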

The law also establishes a National AI Development Fund - described as an off-budget state financial fund operating on a not-for-profit basis - to mobilize and allocate resources for AI research, development, application, and management. The Fund explicitly accepts risks in science, technology, and innovation and can allocate capital flexibly independent of the budget year.

Foreign providers of high-risk AI systems deployed in Vietnam must maintain a legal contact point in the country. Where mandatory conformity certification is required before use, foreign providers must have a commercial presence or authorized representative. The transitional provisions give existing operators 18 months from 1 March 2026 to comply with the law's requirements in healthcare, education, and finance sectors, and 12 months for other sectors.
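Assuming the transitional windows run from the 1 March 2026 effective date, the deadlines work out as follows. A small date-arithmetic sketch; `add_months` is our own helper and assumes the resulting day of month exists:

```python
from datetime import date

EFFECTIVE = date(2026, 3, 1)  # the law's effective date

def add_months(d: date, months: int) -> date:
    # Simple month arithmetic; assumes the resulting day of month exists
    # (always true here, since the effective date falls on the 1st).
    y, m = divmod(d.year * 12 + (d.month - 1) + months, 12)
    return date(y, m + 1, d.day)

print(add_months(EFFECTIVE, 18))  # healthcare, education, finance: 2027-09-01
print(add_months(EFFECTIVE, 12))  # all other sectors: 2027-03-01
```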

Taiwan: principles first, with explicit cross-references to the Hiroshima Process and OECD frameworks

Taiwan's Artificial Intelligence Basic Act, promulgated on 8 January 2025, is the shortest of the four texts and the most explicitly principle-driven. Its 20 articles establish foundational governance architecture without imposing direct compliance obligations on private actors - that task is left to sectoral legislation to be enacted within two years of the Act's implementation.

The basic principles enumerated in Article 4 cover seven areas: sustainable development and well-being, human autonomy, privacy protection and data governance, cybersecurity and safety, transparency and explainability, fairness and non-discrimination, and accountability. The explanatory notes accompanying each principle are unusually specific about their international sources. Transparency and explainability draws on the EU's Ethics Guidelines for Trustworthy AI published in 2019. Safety draws on Singapore's Model AI Governance Framework for Generative AI from 2024. Sustainability explicitly references the G7 Hiroshima Process International Code of Conduct for Organizations Developing Advanced AI Systems. The framework's legal design is therefore openly intertextual, treating international consensus documents as normative inputs into domestic legislation.

The institutional centerpiece is a National AI Strategic Committee convened by the Premier of the Executive Yuan, composed of scholars, experts, representatives from AI-related private organizations and industry, ministers without portfolio, relevant agency heads, and heads of special municipality governments. The Committee must meet at least once a year and review the National AI Development Guidelines. The National Science and Technology Council handles the Committee's administrative operations.

Article 5 places obligations directly on the government rather than private operators. The government must prevent AI applications from infringing on people's life, body, freedom, or property; undermining social order or the ecological environment; and engaging in bias, discrimination, false advertising, or the dissemination of misleading information. Where AI products or systems are designated as high-risk by a sectoral regulator in consultation with the Ministry of Digital Affairs, they must be clearly labeled with warnings. Article 16 requires the Ministry of Digital Affairs to promote a risk classification framework aligned with international standards so that sectoral regulators may develop their own risk-based management regulations.

The Act explicitly requires the government to review all laws and administrative measures under its jurisdiction within two years of implementation and to enact, amend, or repeal any that are inconsistent with the Act. That review obligation creates a structured legislative pipeline that Taiwan's other regulatory agencies must now navigate.

Taiwan's draft AI legislation had been tracked since August 2025, when the Executive Yuan submitted it for deliberation, with roots in a 2017 Ministry of Science and Technology initiative to develop an AI ecosystem.

Comparing the four frameworks

The four laws differ across several axes that matter for any business operating AI systems in Asia.

Scope of private obligation. Vietnam and South Korea impose direct obligations on AI developers, providers, and deployers. Japan and Taiwan address their requirements primarily to government actors, leaving industry obligations to be developed through subsidiary instruments.

Risk classification. Vietnam and South Korea both define high-risk AI through specific domain lists and attach compliance requirements to that designation. Japan and Taiwan establish risk as a concept requiring further elaboration, without immediately triggering private sector obligations.

Enforcement. South Korea specifies administrative fines - up to 30 million won for transparency failures, and imprisonment of up to three years or fines up to 30 million won for unauthorized disclosure of Committee deliberations. Vietnam establishes civil liability for high-risk AI deployments that cause harm. Japan and Taiwan include no direct penalty provisions for private actors in the current texts.

Generative AI. South Korea addresses generative AI explicitly, requiring labeling of outputs and notification to users. Vietnam's medium-risk tier captures systems that cause confusion about whether content is AI-generated. Japan and Taiwan do not address generative AI specifically in their framework legislation.

International alignment. All four explicitly acknowledge or draw on international frameworks, including OECD AI Principles, G7 Hiroshima Process materials, and in Taiwan's case the EU's Ethics Guidelines for Trustworthy AI and Singapore's governance framework. The Hiroshima AI Process, analyzed by the IAPP, includes a voluntary reporting framework hosted by the OECD in which 25 companies have published reports, and a 2026 Action Plan agreed at the second in-person Friends Group meeting in Tokyo in March 2026 that includes outreach efforts, knowledge-sharing seminars, and interoperability studies.

The divergence has practical implications for advertising technology operators. A marketing platform deploying an AI-powered bidding or content generation system across all four markets would face binding conformity assessment obligations in Vietnam, transparency and labeling obligations in South Korea, voluntary governance standards in Japan, and aspirational principles with pending sectoral implementation in Taiwan - all within a single product deployment. The fragmentation documented across European AI training regulations now has a parallel structure emerging in Asia.
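For planning purposes, the comparison above can be condensed into a per-market lookup. This is a deliberately compressed, illustrative summary - the keys and duty phrasings are ours, and it is no substitute for reading the statutes:

```python
# Per-jurisdiction obligations as summarized in the comparison above.
OBLIGATIONS = {
    "Japan": ["voluntary governance standards; no direct private-sector penalties"],
    "South Korea": ["advance AI-use notification",
                    "generative-output labeling",
                    "fines up to KRW 30m for notification failures"],
    "Vietnam": ["three-tier risk classification",
                "pre-deployment conformity assessment for high-risk systems",
                "civil liability for harm from high-risk deployments"],
    "Taiwan": ["aspirational principles; sectoral implementation pending"],
}

def deployment_checklist(markets: list[str]) -> dict[str, list[str]]:
    """Collect per-market obligations for a single cross-market rollout."""
    return {m: OBLIGATIONS[m] for m in markets}

for market, duties in deployment_checklist(["Vietnam", "South Korea"]).items():
    print(f"{market}: " + "; ".join(duties))
```

Even this toy structure makes the asymmetry visible: one product rollout across the four markets inherits the union of all four duty lists.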

Why this matters for marketing

The relevance extends beyond legal compliance departments. AI systems that power programmatic advertising - audience segmentation, automated bidding, dynamic creative optimization - increasingly fall within the definitional scope of these laws. South Korea's definition of high-impact AI explicitly includes "judgments or evaluations that have a significant impact on the rights and obligations of individuals, such as hiring and loan screening." Systems that make consequential decisions about which individuals see which offers could reasonably be analyzed under that framing. Vietnam requires conformity assessment before deployment for systems on a Prime Minister-designated high-risk list - a list that, by March 2026, had not yet been published, leaving operators in a transitional window.

Japan's lightweight framework may appear commercially attractive, but the Hiroshima Process context is relevant. The HAIP Reporting Framework, which lists participants publicly on the OECD website, is increasingly functioning as a soft accountability mechanism. Companies that have submitted reports face scrutiny from civil society and potential benchmarking by competitors. The 2026 Action Plan specifically calls for outreach efforts to increase the diversity and number of reporting companies. That is a market dynamic that brand-sensitive operators will need to monitor.

The Taiwan coverage from December 2025 noted that the legislation's roots extend back to 2017, reflecting years of deliberation about how to balance Taiwan's semiconductor strengths with governance credibility. The final Act's explicit cross-referencing of international frameworks is a design choice with a strategic dimension: it positions Taiwan as a rule-taker aligned with global norms rather than an independent regulatory actor, which may ease market access concerns for international AI developers seeking predictable operating environments.

Timeline

  • January 8, 2025 - Taiwan promulgates Artificial Intelligence Basic Act, with the National Science and Technology Council designated as competent authority
  • January 21, 2025 - South Korea enacts Framework Act on the Development of Artificial Intelligence and the Creation of a Foundation for Trust (Act No. 20676)
  • June 4, 2025 - Japan promulgates AI Promotion Act (Act No. 53 of 2025), establishing AI Strategy Headquarters within the Cabinet
  • August 2025 - South Korea's Personal Information Protection Commission publishes AI privacy guidelines for generative AI data processing
  • August 28, 2025 - Taiwan's Executive Yuan submits Draft AI Basic Act to the Legislative Yuan for deliberation
  • December 10, 2025 - Vietnam's 15th National Assembly passes Law on Artificial Intelligence at its 10th session, signed by Chairman Tran Thanh Man
  • January 22, 2026 - South Korea's Framework Act enters into force
  • March 1, 2026 - Vietnam's AI Law takes effect; 18-month compliance window opens for healthcare, education, and finance AI systems; 12-month window for other sectors
  • March 2026 - Second in-person meeting of the HAIP Friends Group in Tokyo agrees the HAIP Friends Group Action Plan 2026, shifting the process toward practical implementation
  • March 25, 2026 - IAPP publishes analysis of the Hiroshima AI Process structure and voluntary reporting framework

Summary

Who: Japan, South Korea, Vietnam, and Taiwan - the four Asian jurisdictions that enacted national AI legislation between January 2025 and March 2026.

What: Four distinct legislative frameworks governing artificial intelligence research, development, deployment, and governance. Japan's law is a promotion-focused framework establishing Cabinet-level strategy machinery with no direct private-sector penalties. South Korea's act creates a presidential committee, defines high-impact AI across 11 domains, mandates transparency and labeling obligations, and attaches administrative fines. Vietnam's law introduces a three-tier risk classification system with conformity assessment, civil liability for high-risk systems, and a National AI Development Fund. Taiwan's act is a principles-based framework that delegates sectoral implementation to existing regulators while requiring a full review of existing law within two years.

When: The four laws span January 2025 through March 2026, with South Korea's and Vietnam's acts both in force as of early 2026.

Where: Japan, South Korea, Vietnam, and Taiwan. The Hiroshima AI Process context connects Japan's posture to a broader multilateral framework now spanning 66 countries, 38 organizations, and a voluntary reporting mechanism hosted by the OECD.

Why: Each government frames the motivation differently. Japan emphasizes economic competitiveness and national security. South Korea focuses on trust and citizen rights protection alongside industry development. Vietnam prioritizes risk management, infrastructure sovereignty, and structured state oversight of AI deployment. Taiwan stresses human-centered AI, digital equity, and the need to anchor domestic governance in internationally recognized principles. What unites them is an awareness that AI governance left entirely to market forces or to sectoral regulators applying existing law will produce accountability gaps - and that those gaps will eventually attract harder regulatory responses.
