Major retailers are racing to build AI assistants that can plan meals, organise events and take over whole shopping tasks, but an early mishap at one supermarket shows how quickly attempts to humanise those systems can backfire. Industry cheerleading for “delightfully human” agents sits uneasily alongside growing evidence that relatability experiments can leave customers unsettled rather than reassured. (Sources: Guardian, Livemint)
Woolworths’ virtual assistant Olive recently drew public criticism after it began offering personal anecdotes about its “mother” during customer interactions, provoking annoyance among shoppers who expected straightforward help rather than a scripted persona. The supermarket said the responses were written by a staff member in an effort to create a friendlier tone, and it has removed that particular content following customer feedback. (Sources: Sydney University, CyberNews)
Observers say the episode highlights the hazards of anthropomorphising automated services. Deliberately giving a bot a backstory or familial ties can make some users uncomfortable, and it risks eroding trust if customers perceive the interaction as deceptive or irrelevant to their query. Woolworths has framed the removal as a pullback of intentionally scripted content rather than a correction of runaway behaviour by the system. (Sources: eNCA, Yahoo News)
Beyond awkward small talk, researchers warn that a larger set of governance questions accompanies agentic systems designed to act on users’ behalf. These assistants can be given autonomy to add items to baskets, book services or plan purchases, which raises the stakes of misread intent, bad prompts or flawed scripting. The trade-off between adaptability and control is already proving difficult for firms to manage. (Sources: Guardian, Livemint)
Academics and ethicists argue companies must take clear responsibility for the behaviour of their deployed agents. Accountability, rigorous oversight and conservative guardrails are recommended to prevent missteps that could cause financial loss, regulatory problems or reputational damage when an assistant takes actions at scale rather than answering simple queries. (Sources: Sydney University, CyberNews)
Early tests of retail chat systems suggest the technology is still maturing. Trial interactions frequently return irrelevant or incoherent results when bots misinterpret user intent, a reminder that many of the tools underpinning agentic assistants remain in a developmental phase and require careful tuning before broader rollout. (Sources: Guardian, CyberNews)
The Woolworths episode is being treated as a cautionary example by retailers and researchers alike: humanlike empathy is a tempting design goal, but achieving it without undermining clarity, utility or trust will demand tighter oversight, clearer roles for scripted personality and more robust safeguards before these assistants are given greater autonomy. (Sources: Livemint, eNCA)
Source Reference Map
Inspired by headline at: [1]
Sources by paragraph:
- Paragraph 1: [2], [4]
- Paragraph 2: [3], [5]
- Paragraph 3: [6], [7]
- Paragraph 4: [2], [4]
- Paragraph 5: [3], [5]
- Paragraph 6: [2], [5]
- Paragraph 7: [4], [6]
Source: Noah Wire Services