Kazakhstan has established a structured approval process for high-risk AI systems under its new AI law, aiming to enhance safety and trust; the process mandates application-based reviews and public listing of approved systems.
Kazakhstan has moved to put a formal approval process behind the use of high-risk artificial intelligence systems, as the country builds out the regulatory framework created by its new AI law. According to rules published by the authorities, sectoral government agencies will compile and maintain public lists of “trusted” high-risk systems, with the aim of strengthening confidence in AI use and encouraging safer practices across different industries.
The process will be application-based. Owners of high-risk systems must submit a formal request, proof of intellectual property rights and a positive audit conclusion before their system can be considered for inclusion. The relevant agency will have 10 working days to check whether the submission is complete and whether the system description, legal paperwork and audit materials meet the required standard. If the application succeeds, the system will be added to the list and its details published online within five working days.
If officials find inconsistencies, applicants will be notified and can resubmit once the issues are fixed. That follow-up review can take up to five working days, and updated lists will continue to be posted on government websites as they are revised.
The move follows the broader law on artificial intelligence signed by President Kassym-Jomart Tokayev in November 2025, which entered into force in January 2026. As outlined by the US Library of Congress and legal advisories from EY and PwC, the legislation introduced a risk-based framework for AI, rules on transparency and accountability, and requirements for labelling synthetic content created or altered with AI. The rules also sit alongside wider restrictions on manipulative or unlawful AI functions, signalling that Kazakhstan is trying to combine adoption of the technology with tighter oversight.
Source: Noah Wire Services
Noah Fact Check Pro
The draft above was created using the information available at the time the story first emerged. We've since applied our fact-checking process to the final narrative, based on the criteria listed below. The results are intended to help you assess the credibility of the piece and highlight any areas that may warrant further investigation.
Freshness check
Score:
8
Notes:
The article references recent developments, including the signing of the AI law by President Kassym-Jomart Tokayev on 17 November 2025, which entered into force on 18 January 2026. ([pwc.com](https://www.pwc.com/kz/en/pwc-news/ta-reports/tax-legal-alert-fy19/278-december-2025.html?utm_source=openai)) The content appears to be current and not recycled from older sources. However, the specific publication date of the article is not provided, making it difficult to assess its freshness definitively. ([ey.com](https://www.ey.com/en_kz/technical/tax-alerts/2025/12/law-on-artificial-intelligence-kazakhstan?utm_source=openai))
Quotes check
Score:
7
Notes:
The article attributes claims to various sources, but without specific citations or links to the originals it is difficult to verify the authenticity and context of that material. The absence of verifiable sources raises concerns about the reliability of the attributed information.
Source reliability
Score:
6
Notes:
The article is sourced from Qazinform, a news outlet that appears to be niche and may not have the same level of credibility as major international news organizations. ([zakonpravo.kz](https://zakonpravo.kz/en/article-19-lists-of-trusted-high-risk-artificial-intelligence-systems-of-the-artificial-intelligence-act?utm_source=openai)) The lack of independent verification from more established sources diminishes the overall reliability of the information presented.
Plausibility check
Score:
8
Notes:
The claims about Kazakhstan's AI law and the introduction of mandatory audits for high-risk AI systems align with information from other reputable sources. ([ey.com](https://www.ey.com/en_kz/technical/tax-alerts/2025/12/law-on-artificial-intelligence-kazakhstan?utm_source=openai)) However, the article's lack of specific details and verifiable sources makes it difficult to fully assess the plausibility of all claims.
Overall assessment
Verdict (FAIL, OPEN, PASS): FAIL
Confidence (LOW, MEDIUM, HIGH): MEDIUM
Summary:
The article presents information about Kazakhstan's AI law and mandatory audits for high-risk AI systems. However, the lack of verifiable sources, reliance on a niche news outlet, and absence of independent verification from reputable organizations raise significant concerns about the accuracy and reliability of the content. ([ey.com](https://www.ey.com/en_kz/technical/tax-alerts/2025/12/law-on-artificial-intelligence-kazakhstan?utm_source=openai))