James Bach has made a blunt claim that cuts against much of the current hype around automation: artificial intelligence cannot act responsibly because it is not a person. His argument is not really about whether AI can be useful, but about where responsibility begins and ends in a business. In his view, that line remains firmly with natural persons, who can be held to account in law, in contracts and in ordinary social life.

Bach frames the issue through the workings of business itself. Every company depends on services such as sales, finance, support and research, and those services only function when someone is answerable for failure, recovery and oversight. He argues that responsibility can be delegated only within a clear human protocol, and that even when AI is used, a person must remain competent, alert and able to intervene. Without that structure, he warns, organisations risk inefficiency, poor quality and negligence claims. His new "Principles of Responsible Work", written with Jon Bach and Michael Bolton, is intended as a compact statement of that view.

The broader debate lends some support to his position. Writing in Scientific American, Marcus Arvan argued that advanced AI systems are too unpredictable to be reliably aligned with human goals, suggesting that the real challenge lies as much in human judgement as in machine capability. Joanna Bryson has likewise long argued that accountability for AI can rest only with humans, and a paper in the journal AI and Ethics by Jan Christoph Bublitz explored the ethical and legal complications that would arise if AI were ever treated as part of a person rather than merely a tool. Across these accounts, the common thread is that human agency cannot simply be offloaded onto software.

That concern is especially acute in high-stakes settings. James Johnson, writing in International Affairs, examined the use of AI decision-support in military planning and warned that automation bias and over-reliance on machine output can weaken moral judgement. Wesley J. Smith has also argued against treating AI as a person at all, saying that it lacks the qualities normally associated with moral responsibility. Bach’s point sits close to that line of thinking: AI may assist human work, but it cannot own the consequences of that work. In his formulation, the danger is not that AI becomes responsible, but that humans stop being so.


Source: Noah Wire Services