Kazakhstan has introduced a formal approval process for the use of high-risk artificial intelligence systems, as the country builds out the regulatory framework created by its new AI law. Under rules published by the authorities, sectoral government agencies will compile and maintain public lists of “trusted” high-risk systems, with the aim of strengthening confidence in AI use and encouraging safer practices across different industries.

The process will be application-based. Owners of high-risk systems must submit a formal request, proof of intellectual property rights and a positive audit conclusion before their system can be considered for inclusion. The relevant agency will then have 10 working days to check whether the submission is complete and whether the system description, legal paperwork and audit materials meet the required standards. If the application succeeds, the system will be added to the list and its details published online within five working days.

If officials find inconsistencies, applicants will be notified and can resubmit once the issues are fixed. That follow-up review can take up to five working days, and updated lists will continue to be posted on government websites as they are revised.

The move follows the broader law on artificial intelligence signed by President Kassym-Jomart Tokayev in November 2025, which entered into force in January 2026. As outlined by the US Library of Congress and legal advisories from EY and PwC, the legislation introduced a risk-based framework for AI, rules on transparency and accountability, and requirements for labelling synthetic content created or altered with AI. The rules also sit alongside wider restrictions on manipulative or unlawful AI functions, signalling that Kazakhstan is trying to combine adoption of the technology with tighter oversight.


Source: Noah Wire Services