Taiwan drafts AI Basic Act amid chip dominance and regulatory gaps

Taiwan's Executive Yuan submitted the Draft AI Basic Act to the Legislative Yuan on August 28, 2025, establishing a governance framework for artificial intelligence development as the island's semiconductor industry cements its dominance.

Taiwan advanced its artificial intelligence regulatory framework on August 28, 2025, when the Executive Yuan submitted the Draft Artificial Intelligence Basic Act to the Legislative Yuan for deliberation. The proposed legislation represents the culmination of efforts dating back to 2017, when the island's Ministry of Science and Technology first launched the AI Grand Strategy for a Small Country project to invest in developing an AI ecosystem.

The Draft AI Basic Act designates the Ministry of Digital Affairs as the competent authority responsible for implementing the framework. The legislation establishes fundamental principles for government action in advancing AI research and development, setting policy objectives based on what the document describes as "AI fundamental principles and the government's priorities."

Taiwan's strategic position in the global hardware supply chain has grown substantially since the advent of generative AI. The introduction of systems like OpenAI's ChatGPT in late 2022, along with other emerging AI applications, strengthened Taiwan's role in providing essential hardware infrastructure for computational processing power in the AI industry. The semiconductor industry's production value, primarily driven by the semiconductor foundry sector, is projected to exceed NT$6.4 trillion in 2025, reflecting a 22.2 percent increase from 2024, according to the Industrial Technology Research Institute.

The government's approach builds on previous action plans. In 2018, the Executive Yuan rolled out Taiwan AI Action Plan 1.0, which ran through 2021 and focused on research and development, infrastructure and talent cultivation to facilitate software-hardware integration and edge AI while supporting the growth of Taiwan's AI and robotics industries and fostering the development of tailored AI services.

After achieving the Taiwan AI Action Plan 1.0 milestones, the Executive Yuan approved Taiwan AI Action Plan 2.0 in 2023, aiming to elevate Taiwan's AI industry value beyond NT$250 billion by 2026. This second action plan focuses on talent development, industry growth, and global technological influence, reinforcing Taiwan's position as a technology innovation hub.

With the rise of generative AI in late 2022 and its rapid development in 2023, the National Science and Technology Council initiated programs including the Trustworthy AI Dialogue Engine, developed on the TAIWANIA II supercomputer. The project marks a milestone under the action plan, alongside the Chip-Driven Taiwan Industry Innovation Program, which aims to secure Taiwan's leading position in chip manufacturing and promote the integration of advanced silicon chips with frontier technologies.

The Legislative Yuan has discussed various bills to address risks and concerns associated with AI development since 2019. However, all of these bills qualify as what the document terms "basic acts," providing only high-level principles and ethical considerations for AI development. In August 2025, legislators from different political parties proposed more than 10 versions of the Artificial Intelligence Basic Act and conducted a consolidated review of some versions. The Executive Yuan approved its own version of the Draft AI Basic Act and submitted it to the Legislative Yuan for further action.

The Draft AI Basic Act outlines 15 policy objectives for government implementation. The government must plan overall resource allocation and handle matters concerning subsidies, commissions, investments, incentives, assistance and guidance for AI-related industries, or provide financial incentives such as tax measures or financing. Government agencies shall review and adjust their functions and operations, enacting, amending or abolishing relevant regulations or guidelines. Where existing regulations are silent before such enactment, amendment or abolition, the government is to interpret and apply them in accordance with the AI fundamental principles, prioritising the promotion of new technologies and services.

Sectoral regulators may establish new, or build on existing, innovative experimental environments for AI products or services. The government should cooperate with the private sector to promote AI innovation and application through public-private partnerships, and shall endeavour to promote AI-related international cooperation and participate in international joint development and research programmes.

The legislation mandates continued promotion of AI education at all school levels, within industries and across society, government agencies and public institutions. The government must prevent AI applications from causing personal or property damage, destruction of social order or the environment, conflict of interest, bias, discrimination, false advertising, misinformation, falsification or other problems that violate relevant regulations.

The Ministry of Digital Affairs and other relevant authorities may provide or recommend assessment and verification tools for sectoral regulators to address these matters. The Ministry of Digital Affairs shall promote a risk classification framework for AI that aligns with international standards so that sectoral regulators may stipulate their risk classification regulations for the industries they oversee.

Government agencies may establish standards, verification, traceability or accountability mechanisms through laws or guidelines, adopting a risk management approach to assess potential vulnerabilities and abuses to enhance the verifiability and human control of AI decision-making, and the trustworthiness of AI applications. For high-risk AI applications, the government shall clearly define the attribution of responsibility and the conditions for liability, establishing mechanisms for remedy, compensation or insurance. AI systems that remain in the research and development stage shall be exempt from these accountability requirements and mechanisms. However, this exemption does not apply if the AI is tested in real-world environments, or if the results of AI R&D are used to provide products or services.
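The draft's R&D carve-out can be read as a simple decision rule. The following sketch models it in Python; the class and field names are hypothetical, chosen only to illustrate the stated conditions, not drawn from the Act itself.

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Illustrative model of an AI system's lifecycle status (hypothetical fields)."""
    in_rd_stage: bool               # still in research and development
    tested_in_real_world: bool      # tested outside a controlled lab environment
    results_used_in_products: bool  # R&D output used to provide products or services


def accountability_exempt(system: AISystem) -> bool:
    """Sketch of the draft Act's exemption: R&D-stage systems are exempt from
    the accountability requirements unless they are tested in real-world
    environments or their results are used to provide products or services."""
    if not system.in_rd_stage:
        return False
    return not (system.tested_in_real_world or system.results_used_in_products)
```

On this reading, a lab-only prototype is exempt, but the exemption lapses the moment the same system is piloted with real users or its outputs reach a commercial service.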

The government shall prevent skill gaps, protect labour rights and provide employment measures, matched to workers' capabilities, for those who become unemployed due to the use of AI. The competent authority for personal data protection shall assist sectoral regulators in promoting data protection by default and by design measures or mechanisms. The government shall establish mechanisms for open data, data sharing and data reuse to enhance the availability of data used by AI, and regularly review and adjust relevant laws and regulations.

The government must endeavour to improve the quality and quantity of data used by AI and ensure that the training results uphold national multicultural values and respect intellectual property rights. When using AI for its operations or provision of services, the government shall conduct risk assessments and plan risk response measures. The government shall formulate rules of use or an internal control and management mechanism according to the nature of the operations involving AI use.

Like other basic acts in Taiwan, the Draft AI Basic Act only outlines the government's high-level responsibilities for fostering AI development and does not impose specific regulatory requirements or mandatory obligations on the private sector. However, once the Draft AI Basic Act is enacted and implemented, ministries and commissions are required to review and adjust their laws and regulations in accordance with the Act's provisions.

Before relevant laws and regulations are enacted or amended, existing laws and regulations must be interpreted and applied in line with the Draft AI Basic Act's provisions. Monitoring the Act's progress and observing how sectoral regulators adapt their rules after enactment remains crucial.

The AI Technology R&D Guidelines, issued by the National Science and Technology Council in September 2019, aim to promote ethical AI research across various industries. These Guidelines highlight three core values for AI R&D: human-centred values, sustainable development, and diversity and inclusivity. To support these values, the AI R&D Guidelines outline eight principles for AI R&D, including fairness and non-discrimination, human autonomy and control, safety, privacy and data governance, transparency and traceability, explainability, and accountability.

Because the AI R&D Guidelines are a non-binding, voluntary framework, there are no penalties for non-compliance and no enforcement body. Nevertheless, their emphasis on ethical AI research is crucial for the responsible development and deployment of AI technology.

The Public Sector Gen AI Guidelines, formulated by the National Science and Technology Council and issued by the Executive Yuan in August 2023, establish a framework for the ethical application of generative AI within the public sector. These guidelines emphasise that government personnel must assess the information generated by generative AI to ensure objectivity and professionalism, rather than letting it replace their independent judgment and creativity. They also prohibit sharing classified or personal data with AI systems and recommend that generative AI not serve as the sole basis for administrative acts or public decisions.

In June 2024, the Financial Supervisory Commission issued non-binding administrative advice in the form of the Financial AI Guidelines. These Guidelines aim to establish best practices for financial institutions utilising AI and introduce a risk-based evaluation framework to ensure responsible adoption of AI. The six core principles upheld by the Financial AI Guidelines include governance and accountability mechanisms, fairness and human-centred values, privacy and customer rights protection, system robustness and security, transparency and interpretability, and sustainable development.

Financial institutions are required to conduct risk assessments and implement appropriate risk control measures when using AI systems, considering factors such as the usage of AI systems, the level of autonomy, complexity, impact on stakeholders and possibilities for seeking relief. The Financial AI Guidelines apply to financial institutions such as banks, insurance companies and securities firms, emphasising compliance with AI principles throughout the entire life cycle of AI systems. Although the guidelines are non-binding, it is expected that the Financial Supervisory Commission will oversee compliance with them due to the highly regulated nature of the financial sector.

The Ministry of Digital Affairs released the Draft AI Evaluation Guidelines in March 2024. Referencing the European Union's Artificial Intelligence Act, the Draft AI Evaluation Guidelines classify AI products and systems into four risk levels: unacceptable, high, limited and low. They provide specific criteria for evaluating AI products, focusing on overall quality, performance and the ability to maintain functionality under various conditions. Targeted at the information technology, telecommunication, communication, cybersecurity and internet sectors, these Guidelines encourage industries to self-monitor their AI products or systems based on risk levels and recommend submitting high-risk products to an AI testing centre for evaluation.
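The four-tier scheme above can be sketched as a small classification table. This is an illustrative rendering only: the tier names come from the Draft AI Evaluation Guidelines as described here, but the mapping of tiers to actions is a hypothetical summary of the self-monitoring and testing-centre recommendations, not official text.

```python
from enum import Enum


class RiskLevel(Enum):
    """The four risk tiers in the Draft AI Evaluation Guidelines."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    LOW = "low"


# Hypothetical summary of the action each tier suggests; the concrete
# criteria and procedures per tier are left to sectoral regulators.
RECOMMENDED_ACTION = {
    RiskLevel.UNACCEPTABLE: "do not deploy",
    RiskLevel.HIGH: "submit to an AI testing centre for evaluation",
    RiskLevel.LIMITED: "self-monitor with documented assessments",
    RiskLevel.LOW: "self-monitor",
}
```

The tiering mirrors the EU AI Act's structure, which the Guidelines explicitly reference, while leaving classification criteria to industry self-assessment.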

As part of the AI Action Plan 2.0, the Ministry of Digital Affairs also established the Artificial Intelligence Evaluation Center to implement the evaluation mechanism for AI products and systems. While these Guidelines are not mandatory, they provide essential guidance and suggested processes for industry reference. The exact timeline for when they will take effect remains uncertain.

The importance of fairness and non-discrimination is highlighted in various AI guidelines and the Draft AI Basic Act. The AI R&D Guidelines, the Financial AI Guidelines and the Draft AI Evaluation Guidelines all emphasise the need to avoid bias and discrimination in the development and deployment of AI systems. The AI R&D Guidelines encourage the establishment of external feedback mechanisms to ensure fairness, while the Financial AI Guidelines require the use of diverse, high-quality data and third-party assessments to prevent bias.

The Financial AI Guidelines also introduce the concepts of human-in-command, human-in-the-loop and human-over-the-loop to ensure fairness. The Draft AI Evaluation Guidelines prioritise fairness as a critical evaluation criterion, requiring AI systems to treat all groups equitably, irrespective of race, gender, political views, disabilities and other factors.

The Draft AI Basic Act similarly establishes fairness and non-discrimination as one of the fundamental principles, urging AI developers and service providers to avoid algorithmic biases and discrimination. Furthermore, it specifies that the Ministry of Digital Affairs and other relevant authorities may provide or recommend tools and methods for assessment and verification to prevent certain harms caused by AI, including bias and discrimination.

The Draft AI Evaluation Guidelines set forth rigorous testing requirements to ensure the accuracy and reliability of AI systems, particularly for high-risk AI systems. These Guidelines outline a comprehensive four-step process known as TEVV, encompassing a wide range of factors, including safety, explainability, resilience, fairness, accuracy, transparency, accountability, reliability, privacy and security. Similarly, the Financial AI Guidelines stipulate that financial AI systems must undergo evaluation for robustness and performance throughout their entire life cycle. This includes assessing their ability to meet implicit or explicit objectives during the design phase and ensuring their effective operation under real-world conditions before deployment. These Guidelines emphasise the necessity of thorough testing and evaluation to ensure AI systems' effectiveness and reliability across various domains.
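The ten evaluation factors named above lend themselves to a checklist. The sketch below is purely illustrative; the factor list follows the text, but the report function and its behaviour are hypothetical, since the Guidelines leave the concrete tests per factor to evaluators.

```python
# Evaluation factors named in the Draft AI Evaluation Guidelines' TEVV process.
TEVV_FACTORS = [
    "safety", "explainability", "resilience", "fairness", "accuracy",
    "transparency", "accountability", "reliability", "privacy", "security",
]


def evaluation_report(results: dict) -> dict:
    """Mark each factor as passed, failed, or not yet assessed.

    `results` maps factor name -> bool for the factors already tested;
    factors absent from `results` are reported as "not assessed".
    """
    return {
        factor: "pass" if results.get(factor)
        else ("fail" if factor in results else "not assessed")
        for factor in TEVV_FACTORS
    }
```

A partially completed evaluation, for example `evaluation_report({"safety": True, "privacy": False})`, would show one pass, one fail, and eight factors still awaiting assessment.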

In the realm of AI governance, transparency and accountability are crucial principles that are generally upheld. The Public Sector Gen AI Guidelines and Financial AI Guidelines both emphasise the importance of transparency by requiring appropriate disclosure of AI usage in governmental and financial institutions. Additionally, the Draft AI Evaluation Guidelines tackle the issue of insufficient transparency in certain AI systems and stress the necessity for explainability, enabling stakeholders to understand the reasoning behind AI decisions.

During the R&D phase, the AI R&D Guidelines suggest that a minimum level of information be provided and disclosed when developing and applying AI systems, software and algorithms, including modules, mechanisms, components, parameters and calculations. This attention to transparency ensures that people can understand the elements involved in AI systems' decision-making processes. AI development and application should adhere to traceability requirements, such as data collection, data labelling and tracking of algorithms used in decision-making. Regarding accountability, the guidelines for the public and financial sectors require the establishment of internal control principles and appropriate governance structures to ensure that government agencies and financial institutions assume responsibility for any adverse outcomes resulting from their AI systems.

The Draft AI Basic Act similarly emphasises transparency and explainability, as well as accountability, as two fundamental principles for AI research, development and application. It recommends that the AI-generated content be clearly disclosed or labelled to facilitate the assessment of potential risks and understanding of their impact on stakeholders. It also calls for ensuring appropriate accountability, including both internal governance and external social responsibilities.

The application of AI technology, particularly generative AI, has profoundly impacted the content industry. The legal issues associated with using generative AI include the need for authorised data during AI training, the application of fair use, the protection of generated results under copyright law and the risk of infringement when using AI-generated content. Additionally, the use of generative AI may involve legal issues related to infringement of portrait rights, voice rights and other personality rights. Taiwan has not yet established specific laws or regulations governing the application of AI technology for content generation. These issues are generally addressed by existing legal frameworks, such as intellectual property laws and the Civil Code. However, owing to the emerging nature of these legal issues and the lack of sufficient precedents, significant uncertainty remains in addressing potential risks.

Notably, the Taiwan Intellectual Property Office has issued several rulings regarding copyright issues in the use of generative AI. For example, one ruling emphasises the need for copyright owners to authorise the use of their work for AI training. Another ruling implies the possible infringement of copyrights when using articles generated by ChatGPT and suggests first clarifying whether ChatGPT has obtained authorisation from copyright owners to avoid disputes. The Taiwan Intellectual Property Office has also stressed that only the creation by a natural person can be protected by the Copyright Act. AI-generated content may not be copyrighted if it does not involve human creative input.

These rulings generally indicate that if an AI model uses work from any third parties for training or imitation in AI systems, it would constitute reproduction of those works and may potentially infringe upon the third parties' copyrights. The AI operator should secure authorisation or a licence from the copyright owners for the use of their work in AI training to prevent such infringement. Furthermore, if the generation of content does not involve human effort, the output by AI will not be protected by the Copyright Act. These rulings may provide some guidance when addressing intellectual property issues in the context of AI.

Currently, there is no specific legislation that addresses liabilities arising from the use of AI. Liabilities associated with AI usage are governed by existing regulatory frameworks, such as the Civil Code and Consumer Protection Act. Liabilities under the Civil Code generally follow the principle of fault liability, while liabilities under the CPA are strict liabilities. However, establishing responsibility for AI-related harm under theories of negligence or product liability is challenging due to difficulties in proving causation and fault among users, owners or developers of AI systems. The absence of industry standards for AI products further complicates the determination of liability under consumer protection laws.

Additionally, the use of AI in sectors such as unmanned vehicles and medical treatment introduces challenges in determining liability. In the realm of unmanned vehicles, the differing levels of automation and control between AI technology and human operators complicate the attribution of responsibility in the event of an accident. Similarly, within AI-related medical disputes, the black-box nature of AI algorithms and the complexity of the decision-making processes present challenges in determining the liability of medical professionals and AI systems, raising issues related to the standards of care expected from medical professionals and challenging their discretion.

Notably, the Draft AI Basic Act requires the Taiwan government to establish regulations concerning the conditions, responsibilities, remedies, compensation and insurance for AI applications based on AI risk classification, aiming to clarify liability attribution and criteria and thereby enhance the trustworthiness of AI.

The Draft AI Basic Act requires the government to prevent AI applications from causing harm to citizens' lives or the environment, and to avoid issues such as conflicts of interest, bias, discrimination, false advertising, misinformation or fraud. It also encourages the Ministry of Digital Affairs and relevant authorities to provide tools and methods for assessing and verifying AI to prevent such harms. The Statute of Fraud Crime Harm Prevention, enacted by the Taiwan government in July 2024, also mandates online advertising platforms to disclose the use of deepfake technology or AI-generated personal images in advertisements.

The disclosure and notice-of-use requirements aim to enhance transparency, accountability and consumer protection, as well as to prevent harm caused by fraud. The Draft AI Basic Act emphasises the importance of transparency and explainability in AI systems, requiring appropriate disclosure or labelling of AI outputs so that risks can be evaluated and their impact on rights and interests understood.

The Draft AI Basic Act and relevant AI guidelines do not clearly define their jurisdiction, leaving uncertainty about whether the subsequent AI regulations have any extraterritorial effect. It remains unspecified how these regulations will apply to AI products from other countries.

The interaction between AI and competition law poses challenges within the digital economy. In light of this, the Fair Trade Commission, in its White Paper on Competition Policy in the Digital Economy issued in December 2023, highlighted concerns about data collection leading to increased price discrimination and the difficulties in investigating concerted actions by enterprises. Furthermore, current regulations in Taiwan have not adequately addressed the issues of differential treatment between enterprises and consumers in the context of AI-driven personalised pricing.

In April 2024, the Fair Trade Commission fined Agoda for using a competitor's business name as a keyword when deploying a self-learning machine to purchase keyword advertisements, which caused confusion among internet users. This case also highlights the potential challenges of using AI in the realm of competition law.

In 2025, the Fair Trade Commission published the Explanatory Information on Soliciting Public Opinions on Competition Law Issues Related to Generative Artificial Intelligence in Taiwan to gather public comments for its future legislative and enforcement initiatives. The document clarifies the market structure and characteristics of generative AI, provides an overview of Taiwan's AI hardware supply chain, model development and application deployment, and addresses competition issues that may arise from generative AI. Specifically, it examines four types of regulated competition conduct: unilateral abuse of market power, concerted actions, mergers, and false advertising or other unfair competition practices.

Data protection is one of the core principles upheld by the Financial AI Guidelines. When using AI systems to provide financial services and interact with customers, financial institutions should protect customer privacy by adhering to the principle of data minimisation and ensure transparency by informing customers about how their rights and benefits may be affected.

The Draft AI Basic Act also enshrines the principle of privacy protection and data governance, requiring the Personal Data Protection Commission to assist sectoral regulators in avoiding unnecessary processing of personal data during AI research and application and promoting personal data protection by default and by design measures or mechanisms. It emphasises the importance of preventing excessive use of personal data and advocates for the principle of data minimisation, while also encouraging the openness and reuse of non-sensitive data.

In June 2025, the Ministry of Digital Affairs announced the draft Act for the Promotion of Data Innovation and Utilization and initiated a 60-day public comment period to gather feedback. The draft aims to establish a legal framework for data access and usability, create mechanisms for data openness, sharing, and reuse, and thereby foster technological innovation and R&D, including in the field of AI.

The AI regulatory framework in Taiwan is still under development. Currently, neither the Draft AI Basic Act nor those non-binding AI guidelines impose mandatory obligations on the private sector or grant enforcement power to competent authorities. However, once the AI Basic Act is enacted, sectoral regulators are expected to formulate sector-specific AI regulations.

As of now, there are no significant AI-related private litigation cases in Taiwan. However, as AI technologies continue to proliferate, disputes over AI issues such as intellectual property infringement and algorithmic bias are expected to arise.

Currently, three major AI systems have been implemented or are under consideration for use in courts and the judiciary to enhance the efficiency and transparency of the judicial process. The AI prediction system for judicial decision-making regarding child custody, developed by National Tsing Hua University and published in September 2019, utilises case data to provide a probability-based prediction of custody outcomes. It is intended to assist parties involved in family cases, as well as attorneys, social workers and mediators, in understanding the principles that judges apply in parental rights cases.

The AI sentencing information system, introduced in February 2023 by the Judicial Yuan, aims to harmonise sentencing practices across courts by analysing previous rulings and automatically tagging relevant sentencing factors. The system has both factual and evaluative modes and is available for use by lawyers and the public. For example, one published judgment notes that a defendant used the system to gauge potential sentencing during an appeal.

The AI judgment-drafting system, pre-launched by the Judicial Yuan in late 2023, aims to assist judges in drafting criminal judgments for DUI and fraud cases. The generative AI-based system has faced criticism, prompting the Judicial Yuan to clarify that it only assists judges in generating preliminary drafts based on specific legal conditions and evidence. The ultimate decision-making, including fact-finding and sentencing, remains solely under the judge's authority. The Judicial Yuan plans to conduct external consultations and issue guidance on using the system to ensure that it adheres to the highest standards of judicial responsibility.

In recent years, the legal industry has seen a significant rise in the use of technologies such as AI, known as legal tech. This trend has led to the adoption of innovative services such as AI contract review and risk assessment across various sectors, including finance. However, the need to adjust regulatory standards in response to legal tech remains a critical issue. For instance, in 2022, the Taiwan Bar Association revised its regulations to allow lawyers to advertise their services on approved platforms, emphasising the importance of adapting regulatory standards to accommodate new service models.

Instead of enacting laws and regulations to regulate the use of AI within the private sector, at this stage, the Taiwan government prefers to issue guidelines that serve as non-binding administrative guidance. This approach provides flexibility and encourages industry sectors to engage in self-regulation, aiming to foster innovation and adaptability within the rapidly evolving field of AI. The Draft AI Basic Act also allows government agencies discretion and flexibility to align with international standards and practices by formulating the AI risk classification framework and related regulations in the future.

The Draft AI Basic Act and AI guidelines reflect the Taiwan government's proactive and strategic efforts to address AI-related risks while promoting AI technology innovation and development. By maintaining a forward-looking regulatory framework, Taiwan is well prepared to navigate the complexities of the AI era.

Timeline

  • 2017 - Taiwan launches AI Grand Strategy for a Small Country project to invest in developing an AI ecosystem
  • 2018 - Executive Yuan rolls out Taiwan AI Action Plan 1.0 (2018-2021) to enhance AI capabilities
  • September 2019 - National Science and Technology Council issues AI Technology R&D Guidelines for researchers' reference
  • 2022 - Taiwan Bar Association revises regulations to allow lawyers to advertise services on approved platforms
  • Late 2022 - Introduction of OpenAI's ChatGPT strengthens Taiwan's role in providing hardware infrastructure
  • February 2023 - Judicial Yuan introduces AI sentencing information system to harmonise practices across courts
  • June 2023 - Executive Yuan approves Taiwan AI Action Plan 2.0 (2023-2026) focusing on talent development
  • August 2023 - Executive Yuan issues Public Sector Gen AI Guidelines establishing framework for ethical application
  • December 2023 - Fair Trade Commission publishes White Paper on Competition Policy in Digital Economy
  • March 2024 - Ministry of Digital Affairs releases Draft AI Evaluation Guidelines classifying AI into risk levels
  • April 2024 - Fair Trade Commission fines Agoda for using competitor's business name as keyword in AI-powered advertising
  • June 2024 - Financial Supervisory Commission releases Guidelines for Use of AI in Financial Industry
  • July 2024 - Taiwan government enacts Statute of Fraud Crime Harm Prevention mandating deepfake disclosure
  • August 2025 - Executive Yuan submits Draft AI Basic Act to Legislative Yuan on August 28 designating Ministry of Digital Affairs as competent authority
  • November 2025 - Report on Taiwan AI law published detailing comprehensive framework drafted prior to Legislative Yuan review

Summary

Who: Taiwan's Executive Yuan, Legislative Yuan, Ministry of Digital Affairs, National Science and Technology Council, Financial Supervisory Commission, Fair Trade Commission, and Lee and Li Attorneys at Law (Ken-Ying Tseng and Yi-Mei Pan authored the analysis).

What: The Draft Artificial Intelligence Basic Act establishes a governance framework for AI development in Taiwan, outlining 15 policy objectives including resource allocation for AI industries, regulatory adjustments, international cooperation, AI education promotion, risk prevention measures, and establishment of risk classification frameworks aligned with international standards. The legislation designates the Ministry of Digital Affairs as competent authority and requires government agencies to conduct risk assessments when using AI systems.

When: The Executive Yuan approved and submitted the Draft AI Basic Act to the Legislative Yuan on August 28, 2025, following years of preparatory work including Taiwan AI Action Plan 1.0 (2018-2021), Taiwan AI Action Plan 2.0 (2023-2026), and multiple guideline publications throughout 2023-2024. The analysis document was drafted in November 2025 prior to Legislative Yuan finalisation.

Where: The regulatory framework applies throughout Taiwan, affecting the semiconductor industry (projected production value exceeding NT$6.4 trillion in 2025), financial institutions, government agencies, and AI startups. Taiwan's strategic position in the global hardware supply chain for computational processing power positions the island as a critical player in AI infrastructure development.

Why: Taiwan developed this comprehensive regulatory approach to balance innovation with risk management as AI applications proliferated following generative AI's emergence in late 2022. The framework addresses concerns about algorithmic bias, data governance, intellectual property rights, liability attribution, consumer protection, and competition issues while maintaining Taiwan's technological competitiveness. The legislation responds to global AI governance trends while leveraging Taiwan's semiconductor dominance to advance AI ecosystem development through non-binding guidance rather than prescriptive regulation.