South Africa’s draft national artificial intelligence policy has moved from concept to consultation, opening a new phase in how the country may regulate the technology, and in particular how it may assign responsibility when AI systems act with increasing independence. The policy was published for public comment on 10 April after Cabinet approval on 25 March, and comments are due by 10 June, according to notices from the Department of Communications and Digital Technologies and legal briefings on the draft.

The draft is part of a broader shift away from viewing AI as little more than a decision-support tool. Lawyers writing on the policy say its significance lies in how it anticipates more autonomous systems, including so-called agentic AI, which can pursue objectives and take action without waiting for a human sign-off at each step. That matters because the legal risk is no longer limited to flawed outputs or biased recommendations; it can attach to the system’s own conduct and the consequences that follow.

In corporate terms, that places a heavy burden on boards and senior managers. The policy discussion, as analysed by legal commentators, points to existing South African company law as a constraint on any attempt to outsource accountability to software. Directors remain responsible for decisions about whether to deploy AI, what authority it should have and how it is supervised, even where the system operates at a scale and speed that make conventional oversight difficult. Baker McKenzie has said organisations should already be reviewing governance structures in anticipation of tighter sector-specific controls.

The draft also places AI oversight within a wider regulatory architecture rather than proposing a standalone AI statute. According to Baker McKenzie, the policy follows a sector-specific, multi-regulator model, with oversight expected to be embedded in existing supervisory frameworks. Other legal analyses describe the policy as a starting point for responsible and inclusive AI governance, tying it to skills development, ethical deployment, cultural preservation and human-centred use.

For businesses, the most immediate challenge is practical risk allocation. Lawyers say contracts, warranties, indemnities and audit rights were often drafted on the assumption that systems remained tightly controlled by people, leaving a mismatch when autonomous tools are allowed to act on an organisation’s behalf. They also warn that South African law may attribute AI-generated messages and transactions to the deploying entity, while common-law principles such as agency, estoppel and delict could all widen exposure if weak governance makes the system’s actions appear authorised.

The consumer and privacy dimensions are equally important. Analysts note that the Protection of Personal Information Act can restrict automated decisions with legal effect, while the Consumer Protection Act may impose strict liability where harm arises in consumer-facing settings. The policy therefore arrives as both a signal of intent and a warning: companies that are already using autonomous AI may need to tighten controls, review systems permissions and rethink whether their current compliance models are fit for purpose.

Source: Noah Wire Services