The term ‘artificial intelligence’ often conjures images of detached, autonomous machines operating independently of human society. Yet a closer examination reveals that AI is far from an alien or independent entity; it is deeply entwined with human reality. From popular voice assistants like Siri to advanced educational tools powered by GPT, AI systems operate as extensions of human design, logic, and culture. Google Translate, for instance, which often appears to ‘know’ numerous languages, actually relies on the millions of human-produced translations fed into its system. The ‘artificial’ label therefore refers more to the method of creation than to the essence of AI itself. This distinction matters because it challenges the misconception that AI operates autonomously, underlining instead its role as a human-anchored amplification of our own capabilities.
This understanding has significant implications for how societies regulate and manage AI technologies. The European Union’s AI Act, enacted in 2024, exemplifies this perspective by defining AI as software developed through human-designed techniques and by emphasising human accountability. Such policy choices counter the myth of AI’s autonomy and prioritise responsibility. Similarly, the Recommendation on the Ethics of Artificial Intelligence adopted in 2021 by the United Nations Educational, Scientific and Cultural Organisation (UNESCO) reiterates that AI is always “human-made and human-directed,” framing it as a socio-technical system rather than an independent intelligence. The language we use to describe AI shapes the frameworks for governance, ethical oversight, and accountability that are essential to mitigating AI’s potential harms.
Underlying all AI systems is human-generated data, sometimes described as AI’s ‘DNA’. Models like OpenAI’s GPT or Anthropic’s Claude do not ‘think’; they generate responses based on the vast datasets of human writing and behaviour they have ingested. Streaming services such as Spotify leverage users’ listening habits to power recommendation algorithms, as the sketch below illustrates. Consequently, AI is essentially a repository and mathematical model of human action, reflecting our behaviours at scale. This dependency, however, introduces vulnerabilities. The Cambridge Analytica scandal, in which harvested Facebook data was misused to influence the 2016 elections, exposed how AI-driven targeting could amplify political bias and manipulation. Closer to home, Sri Lanka saw social media misinformation fuel the 2018 anti-Muslim riots, where unregulated algorithmic systems intensified hate speech. These examples demonstrate that AI is not detached; it is profoundly embedded in societal dynamics and human consequences.
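To make the point concrete, a recommendation engine of the kind Spotify popularised can, in its simplest form, be reduced to counting which tracks human listeners tend to play together. The following minimal Python sketch is purely illustrative (the listening histories and track names are invented); it shows that the ‘intelligence’ of the suggestion is nothing more than aggregated human behaviour.

```python
from collections import Counter
from itertools import combinations

# Invented listening histories: each set is one listener's played tracks.
histories = [
    {"track_a", "track_b", "track_c"},
    {"track_a", "track_b"},
    {"track_b", "track_c", "track_d"},
    {"track_a", "track_c"},
]

# Count how often every pair of tracks appears in the same listener's history.
co_plays = Counter()
for history in histories:
    for pair in combinations(sorted(history), 2):
        co_plays[pair] += 1

def recommend(track, top_n=2):
    """Suggest tracks that other humans played alongside the given track."""
    scores = Counter()
    for (a, b), count in co_plays.items():
        if track == a:
            scores[b] += count
        elif track == b:
            scores[a] += count
    return [t for t, _ in scores.most_common(top_n)]

print(recommend("track_a"))  # e.g. ['track_b', 'track_c'] - purely human co-listening data
```

Nothing in the sketch knows what music is; every recommendation is a restatement of what people already did.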
The often-assumed ‘intelligence’ of AI is, in fact, sophisticated imitation. AI systems generate plausible outputs by recognising patterns and predicting likely sequences rather than by truly understanding or reasoning, a mechanism sketched in the example below. IBM’s Watson, which triumphed on the game show Jeopardy! in 2011, did so by matching clues to linguistic patterns, not through human-like reasoning. Similarly, large language models can draft coherent essays but lack comprehension of the ethical, legal, or emotional contexts involved. This distinction is vital because it warns against overreliance on AI outputs. In 2023, for example, a US lawyer was sanctioned for submitting a legal brief containing fabricated case references generated by ChatGPT, which statistically mimicked legal citations without discerning truth. Earlier, in 2016, Microsoft’s Tay chatbot demonstrated how AI can replicate harmful human biases, quickly producing racist content after exposure to toxic user interactions. Recognising that AI simulates rather than understands human expression is crucial to avoiding misplaced trust and accountability.
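The ‘predicting likely sequences’ at the heart of such systems can be demonstrated with a toy bigram model, a drastically simplified stand-in for how large language models choose the statistically most probable next word. The training text below is invented purely for illustration and is nothing like a production system.

```python
from collections import Counter, defaultdict

# A tiny, invented corpus standing in for the vast human-written text LLMs ingest.
corpus = "the court finds the claim valid . the court dismisses the claim ."
words = corpus.split()

# Count which word tends to follow which: pure pattern statistics, no understanding.
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the training text."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# The model 'writes' by chaining the most likely continuations, nothing more.
current = "the"
generated = [current]
for _ in range(4):
    current = predict_next(current)
    if current is None:
        break
    generated.append(current)

print(" ".join(generated))  # e.g. 'the court finds the court' - plausible-looking, not reasoned
```

The output reads like legal prose fragments without containing any judgment about whether its claims are true, which is exactly how fabricated case citations can emerge.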
Bias remains a persistent problem in AI, which often replicates and amplifies societal prejudices. The COMPAS algorithm in the United States, intended to predict criminal recidivism, disproportionately labelled Black defendants as high risk while underestimating risks for white defendants, a consequence of biased historical data. Hiring algorithms, such as the experimental tool Amazon abandoned in 2018, have similarly discriminated against women applicants. These issues are not confined to Western contexts. In India, Aadhaar-linked biometric systems have excluded rural and poor populations from vital public services. In Sri Lanka, the use of facial recognition technologies raises concerns about the underrepresentation of darker-skinned individuals in the data such systems learn from. These instances underscore that AI reflects the structural biases embedded in human data rather than acting as a neutral arbiter; the sketch below shows how readily a model inherits a skew present in its inputs. Addressing bias ethically demands recognising its deep roots in social structures.
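A minimal sketch, using entirely fabricated numbers, illustrates the mechanism: a scoring rule fitted to skewed historical outcomes simply reproduces that skew for new cases, which is the core of the COMPAS and hiring-tool failures described above.

```python
# Fabricated historical records: (group, was_flagged_high_risk).
# The skew is in the data itself - group 'B' was flagged far more often by past human decisions.
historical = [("A", False)] * 80 + [("A", True)] * 20 + [("B", False)] * 40 + [("B", True)] * 60

# 'Training' here is just estimating the flag rate per group from those past decisions.
rates = {}
for group in ("A", "B"):
    flags = [flagged for g, flagged in historical if g == group]
    rates[group] = sum(flags) / len(flags)

def predicted_risk(group):
    """A naive model: score new individuals by their group's historical flag rate."""
    return rates[group]

print(predicted_risk("A"))  # 0.2
print(predicted_risk("B"))  # 0.6 - the historical skew is reproduced, not corrected
```

Real systems are far more elaborate, but the lesson holds: without deliberate human intervention, the model mirrors whatever prejudice the data already contains.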
The question of AI’s creativity has provoked debate as systems like DALL·E and Midjourney generate images and music in the style of famous artists such as Picasso or Mozart. While these creations appear original, they are statistical recombinations of pre-existing works, produced without intentionality or emotional input, offering imitation at scale rather than genuine creativity. The rise of AI-generated art has sparked controversy over fairness and authorship; notably, in 2022 a Colorado art competition awarded first place to an AI-generated image, igniting debate about human versus machine creativity. Copyright disputes have also emerged, with the US Copyright Office ruling that AI creations lacking human input do not qualify for copyright protection, underscoring the primacy of human agency in authorship. AI thus challenges conventional notions of creativity, compelling society to reassess how it values human and machine outputs.
Despite narratives of autonomy, AI systems rely heavily on human oversight and intervention. Self-driving vehicles tested by Tesla and Waymo still require human supervision, updating, and retraining to function safely, and investigations into fatal accidents involving Tesla’s Autopilot underline how critical human monitoring and improved safety measures remain. AI models also suffer from ‘model drift’, becoming less accurate over time unless humans recalibrate them with fresh data, as the sketch below illustrates. Regulatory frameworks like the EU AI Act ban harmful uses such as social scoring systems, recognising the dangers of unchecked AI surveillance. By contrast, countries like Sri Lanka lack comprehensive AI governance, leaving their populations vulnerable to misuse, especially in sensitive areas like elections and public security. These realities attest that AI’s future hinges on human decisions, ethical guidelines, and regulation rather than on autonomous machine evolution.
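Model drift is easy to picture with another deliberately simplified example: a model whose ‘knowledge’ is frozen at training time keeps predicting yesterday’s pattern even after the world it describes has moved on, which is why ongoing human recalibration matters. All figures below are invented.

```python
# Invented daily demand figures: the 'world' changes partway through.
training_period = [100, 102, 98, 101, 99]    # demand observed when the model was trained
later_period    = [140, 138, 142, 141, 139]  # demand after conditions shift

# A frozen 'model': it predicts the average it saw during training, forever.
frozen_prediction = sum(training_period) / len(training_period)

def error(actual_values, prediction):
    """Mean absolute error of a constant prediction against observed values."""
    return sum(abs(v - prediction) for v in actual_values) / len(actual_values)

print(error(training_period, frozen_prediction))  # small error: ~1.2
print(error(later_period, frozen_prediction))     # large error: ~40 - the model has drifted

# Human recalibration: retrain on fresh data and the error collapses again.
retrained_prediction = sum(later_period) / len(later_period)
print(error(later_period, retrained_prediction))  # ~1.2 again
```

The point is not the arithmetic but the dependency: accuracy is restored only because a human chose to gather new data and retrain.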
In essence, AI is far less ‘artificial’ than commonly perceived. It is a mirror reflecting human data, ethics, and societal structures, extending human capabilities computationally rather than replacing them. Its flaws—bias, imitation, deterioration—are magnifications of human limitations, while its capabilities are human achievements realised at scale. The critical challenge lies in shaping AI responsibly to prevent entrenching existing inequalities and to ensure it benefits all of humanity. While bodies like the EU and UNESCO have pioneered frameworks emphasising human rights, dignity, and accountability, many developing countries, including Sri Lanka, remain without comprehensive policies. Reframing AI as a socio-technical system with human accountability at its core is vital. Ultimately, AI’s trajectory will be shaped not by machines themselves but by the governance, ethics, and laws humanity enacts.
📌 Reference Map:
- Paragraph 1 – [1]
- Paragraph 2 – [1], [2], [4], [5], [6], [7]
- Paragraph 3 – [1]
- Paragraph 4 – [1]
- Paragraph 5 – [1]
- Paragraph 6 – [1]
- Paragraph 7 – [1]
- Paragraph 8 – [1], [3]
Source: Noah Wire Services