South Korea’s AI Basic Act formally came into force on 22 January 2026, establishing a single, nationwide legal framework to govern the development, deployment and oversight of artificial intelligence. According to the law’s official portal, the statute consolidates previously separate measures into a unified approach aimed at promoting innovation while strengthening safeguards for safety, transparency and public trust. Industry observers say the move positions Seoul among the most assertive national regulators of AI. (Sources: [2], [4])
The legislation establishes a risk-based regime that imposes heightened duties on operators of systems deemed to have major social or safety consequences. These “high-impact” applications include tools used in hiring, credit assessment, medical advice and critical infrastructure, which will face mandatory risk assessments, safety controls and clearer disclosure to users about where AI is in use. Analysis from the US Department of Commerce highlights that these requirements apply to both domestic and foreign providers that meet revenue, sales or user thresholds. (Sources: [5], [4])
Generative AI is singled out for specific rules intended to curb deception and abuse. Providers will be required to label or watermark AI-produced images, audio and video so recipients can recognise synthetic material, a measure described by South Korean authorities as a basic, minimum safeguard against deepfakes and other manipulative content. The official site and reporting on the law make clear that such provenance markings are central to enforcement. (Sources: [2], [7])
To improve regulatory reach over global platforms, the Act compels overseas AI service providers meeting certain commercial thresholds to appoint a local representative responsible for compliance and communications with regulators. Government guidance and trade analyses note this mirrors recent trends in tech regulation elsewhere and is intended to ensure effective supervision of services offered to Korean users. (Sources: [4], [5])
Enforcement powers include on-site inspections and fines for breaches of the statute’s obligations, with transitional arrangements to ease the adjustment for industry. Government material and sector commentary indicate there will be a grace period before penalties are applied in full, giving firms time to implement watermarking, transparency measures and risk-management processes. (Sources: [2], [5])
Beyond regulation, the Act also sets out measures to nurture the domestic AI ecosystem, allocating support for research, talent development and startup growth while providing for national coordination on data infrastructure. Analysts say this dual focus, combining industrial policy with compliance duties, reflects Seoul’s ambition to balance competitiveness with ethical and consumer protections as AI use expands across the economy. (Sources: [2], [6])
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [4]
- Paragraph 2: [5], [4]
- Paragraph 3: [2], [7]
- Paragraph 4: [4], [5]
- Paragraph 5: [2], [5]
- Paragraph 6: [2], [6]
Source: Noah Wire Services