A recent study has revealed that individuals without legal expertise tend to place greater trust in legal advice generated by artificial intelligence (AI) systems such as ChatGPT than in advice given by real lawyers—particularly when the source of the advice is unknown. The research, which involved a series of experiments with 288 participants, sheds light on the growing public reliance on AI-generated content and raises questions about how such technology is influencing decision-making in critical areas like law.

In the experiments, participants were shown legal advice and asked which guidance they would be more willing to act on. Notably, when they did not know whether the advice came from a human lawyer or an AI language model, a majority preferred the AI-generated advice. Even when the source of each response was disclosed, participants were as willing to follow advice from ChatGPT as from a qualified lawyer.

One factor potentially contributing to this trend is the complexity of the language used by AI systems. The researchers observed that AI-generated advice tended to employ more sophisticated vocabulary and phrasing, whereas human lawyers used simpler language but often provided lengthier explanations. Additionally, AI responses were often delivered with greater confidence, which may make them appear more reliable to recipients.

The study also tested whether participants could distinguish advice written by AI from advice written by legal professionals when the source was not disclosed. Participants performed only slightly better than random guessing, with an average accuracy score of 0.59 on a scale where 0.5 represented chance-level performance and 1.0 indicated perfect identification. This suggests that, while some detection is possible, most people cannot reliably discern the origin of legal advice.

The implications of these findings come amid the rapid integration of AI-powered tools like ChatGPT into everyday life, assisting with tasks ranging from answering general queries to offering medical and legal information. Experts emphasise that the confident manner in which AI systems present information can mask potential inaccuracies or so-called “hallucinations”, where the content is incorrect or nonsensical. In areas such as law, misinformation could lead to serious consequences, including unnecessary complications or miscarriages of justice.

Given these concerns, the study supports ongoing efforts to regulate AI technologies. For instance, the European Union’s AI Act requires that outputs generated by text-producing AI must be “marked in a machine-readable format and detectable as artificially generated or manipulated”. Yet regulation alone is seen as insufficient. The study’s authors advocate for enhanced AI literacy among the public, enabling individuals to critically assess AI-generated content and recognise its limitations.

Practical recommendations include approaching AI as a tool for preliminary enquiry—such as researching legal options, identifying relevant laws, or discovering similar cases—while always verifying critical advice with qualified human professionals before making decisions or taking action. Encouraging critical thinking, source awareness, and cross-checking information with trusted experts is essential to prevent overreliance on AI-generated advice.

The research highlights a dual approach combining regulation and education to harness the benefits of AI technology responsibly. As AI continues to evolve and permeate various facets of life, understanding its capabilities and constraints will be critical for individuals navigating important decisions, particularly in complex fields like the law.

Source: Noah Wire Services