Author: J. Dennaoui

Navigating the Dual-Edged Sword of AI Progress
As I read Sam Altman’s manifesto—an optimistic view of the transformative potential of artificial intelligence (AI) and its framing as a significant step toward an “age of intelligence”—I found myself increasingly concerned. This concern compelled me to write an article that deviates from my usual optimism regarding technology. While tech leaders like Altman enthusiastically champion AI’s benefits, their assessments often downplay the complex societal implications of such rapid technological advancement. Prominent venture capitalist Vinod Khosla has highlighted AI’s potential to disrupt traditional employment, suggesting it could replace up to 80% of human jobs. While some view this as an opportunity for societal evolution and increased efficiency, others worry about job loss and the subsequent impact on human purpose and identity.
History and philosophy remind us that not all technological advancements lead to a better world. Superintelligence, in particular, could reshape society in ways that strip us of our humanity, individual purpose, and ethical grounding. A glimpse of this potential future can be seen in the dystopian adaptation of Uglies, where society has perfected beauty and harmony but at the cost of individuality, emotion, and free thought. Though Uglies and other dystopian narratives are works of fiction, they reflect the deeper risks of superintelligence: worlds that appear perfect on the surface but lack the values and complexities that make us human.
Here are ten (non-exhaustive) profound impacts of superintelligence, a technology that might be only 1,000 days away:
1. Progress or Peril? The Dual Nature of Technological Advancements
Technological advancement often brings both peril and promise. The atomic bomb represented a great leap in science but also introduced unprecedented global destruction. This lesson applies to superintelligence as well. While AI might revolutionise healthcare through precision medicine and automated surgeries, it might also pose ethical dilemmas. Hannah Arendt, in The Human Condition, argued that technology can dehumanise individuals when its goals are purely utilitarian. If AI is used solely to maximise efficiency and minimise costs, patients could become mere data points, disconnected from the compassionate care they need.
2. The Arbitrary Distribution of Benefits: A Matter of Perspective
Technological revolutions like the Industrial Revolution are often lauded for bringing unprecedented advancements, but these benefits depend entirely on perspective. For consumers, industrialisation brought cheaper goods and improved living standards. But for child labourers working in dangerous factories, it meant exploitation and suffering. Progress was unevenly distributed.
In healthcare, AI may offer incredible innovations—processing vast amounts of medical data to diagnose diseases faster than humans. From patients’ perspectives, this seems monumental. However, for healthcare professionals, the rise of AI could result in job displacement, echoing Khosla’s warnings. The loss of professional identity and purpose could mirror the disparities seen during the Industrial Revolution.
Whose experience matters more? Is it the patient benefiting from quicker diagnostics or the medical professional whose role is being phased out? We must balance technological advancement with human elements like care, trust, and professional fulfilment. Is it acceptable to prioritise some patients over others based on a mathematical algorithm, potentially overlooking individual nuances?
3. Ethics and Philosophy: The Missing Pillars in AI Development
What constitutes “good” healthcare? Who defines what is ethical in treatment decisions? These fundamental questions cannot be answered by algorithms alone. AI systems may recommend the most efficient treatment plans, but efficiency doesn’t always align with values like dignity, empathy, or justice. Aristotle, in Nicomachean Ethics, emphasised that the “good life” is about living in accordance with virtue and purpose.
When superintelligence makes life-saving decisions, such as allocating scarce medical resources, we must ask: On what basis does the AI determine who receives treatment? Will it prioritise efficiency, or will it account for the moral complexities that make healthcare more than just data? How do we marry logic with compassion to ensure that human values are integrated into AI decision-making?
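To make the worry concrete, consider a toy sketch (purely hypothetical, not any real system) of an “efficiency-first” allocation rule. A scoring function that ranks patients by expected benefit per unit of scarce resource can systematically deprioritise complex cases, even when their absolute benefit is greater—exactly the kind of nuance the questions above point to:

```python
# Hypothetical illustration: an efficiency-only triage score.
# All names, fields, and numbers here are invented for the example.
from dataclasses import dataclass

@dataclass
class Patient:
    name: str
    expected_benefit: float   # hypothetical life-years gained if treated
    resource_cost: float      # hypothetical units of a scarce resource

def efficiency_score(p: Patient) -> float:
    # Pure efficiency objective: benefit per unit of cost.
    return p.expected_benefit / p.resource_cost

patients = [
    Patient("A", expected_benefit=10.0, resource_cost=2.0),  # simple case
    Patient("B", expected_benefit=12.0, resource_cost=8.0),  # complex case
]

# The complex case B loses under a pure efficiency ranking, despite
# offering the larger absolute benefit.
ranked = sorted(patients, key=efficiency_score, reverse=True)
print([p.name for p in ranked])  # -> ['A', 'B']
```

The point is not that real systems are this crude, but that any single mathematical objective silently encodes a moral choice about who counts and how.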
Michel Foucault analysed how centralised, opaque systems exercise power and control, raising important questions about trust and verification in decision-making processes—a concern highly relevant to the use of AI today.
4. The Philosophical Void: Beyond Immediate Ethical Dilemmas
While modern philosophers grapple with the ethical implications of AI—such as existential risks and control—they often overlook deeper existential questions. Historically, philosophy has evolved in response to new knowledge, with thinkers like Plato and Aristotle exploring the nature of existence. In contrast, current philosophical discourse on AI tends to focus on immediate consequences rather than what superintelligence means for existence itself.
Possible reasons include:
- Urgency of Ethical Concerns: The rapid development of AI necessitates immediate attention to ethical dilemmas, diverting focus from abstract existential questions.
- Fragmentation of Philosophy: Modern philosophy is highly specialised, limiting holistic exploration of broader questions.
- Complexity of AI: The technical intricacies of AI may discourage philosophers from delving into its existential implications without deep understanding.
This gap leaves unanswered questions:
- What does AI teach us about thought, consciousness, and existence?
- Could AI develop a sense of meaning or purpose, redefining humanity’s pursuit of meaning?
- Is consciousness uniquely human, or could machines share this trait?
Some philosophers, like David Chalmers and Susan Schneider, have begun to explore these issues. However, there is ample room for deeper engagement in examining how our relationship with superintelligence might redefine human existence and purpose. Philosophy may become one of the most needed fields of study, reshaped to include interdisciplinary approaches encompassing technology, ethics, and existential questions.
5. Challenging Fundamental Beliefs: The Impact on Religion and Theology
The advent of superintelligence has profound implications for religion and theology. Long-held concepts about the soul, consciousness, good and evil, and human purpose may need re-evaluation.
- Concept of the Soul and Consciousness: If AI attains consciousness, could a machine have a soul? This challenges beliefs about human uniqueness.
- Good and Evil: Can machines possess moral agency? Assigning concepts of sin or virtue to AI blurs ethical lines.
- Free Will and Predestination: AI’s ability to predict human behaviour questions notions of free will and moral accountability.
- Role of Creation and Creator: By creating intelligent beings, are humans “playing God”? What are the spiritual ramifications?
- Ethics in Religious Communities: Some faiths may embrace AI to alleviate suffering, while others may view it as a threat to divine order.
This blurs the lines between creator and creation, challenging theological doctrines that emphasise human uniqueness. A parallel can be drawn between the concept of the prophesied Dajjal in Islamic eschatology and superintelligence—the deceiver bringing great trials, challenging faith and ethical norms.
6. Linguistics and the Reinterpretation of Meaning
Language is not merely a tool for communication but a fundamental framework that shapes our perception of reality. As AI becomes increasingly integrated into our communication systems, it influences not only how we interact but also how we think and understand the world around us. This interplay between language and cognition underscores the profound impact AI can have on societal norms and individual thought patterns.
- Redefining Language: AI processes language through patterns and algorithms, not through understanding or emotion. If AI begins to mimic human emotions convincingly, our concepts of “love,” “wisdom,” and “emotion” may need re-evaluation. Does an AI expressing “love” experience it as humans do, or is it a simulation?
- Machine Communication and Meaning: Traditional linguistics focuses on human communication, but AI introduces machine-to-machine communication. Do machines “mean” things in the same way humans do, or is their “understanding” fundamentally different? This challenges our definitions of semantics and pragmatics.
- Communication as Reality Shaping: The Sapir-Whorf Hypothesis suggests that the language we speak influences the way we think and perceive the world. According to this theory, changes in language can lead to changes in cognition and societal norms. Similarly, AI’s way of processing and generating language could influence human thought patterns and societal norms. As AI becomes more integrated into our communication systems, it may introduce new linguistic structures and paradigms that reshape our understanding of reality.
- Spiritual Language in the Age of AI: Concepts like “soul,” “spirit,” “purpose,” and “divine” are deeply rooted in human experience. As AI becomes more integrated into our lives, these terms may evolve. Will we attribute spiritual significance to AI entities, or will new terms emerge to describe this intersection of technology and spirituality?
7. Trust in Superintelligence: The Problem of Verification
As AI systems surpass human capabilities, especially in critical fields like medicine, an important question emerges: How do we trust their outputs? Michel Foucault, in Discipline and Punish, explored how knowledge and power are deeply intertwined, warning of the dangers of centralised, opaque systems controlling decision-making.
For example, an AI-powered healthcare system might assign treatments based on patient profiles without the nuanced understanding of a human doctor. While the AI may be technically correct, it may miss vital elements of patient care that go beyond data. Self-determination is at stake; if the machine decides the course of treatment but the patient does not consent, how do we reconcile this conflict?
8. Environmental Impact: Superintelligence and Climate Change
Another critical concern is the environmental footprint of superintelligence. Training and operating advanced AI systems require immense computational resources, which consume significant amounts of energy. Data centres powering AI contribute to carbon emissions unless they rely entirely on renewable energy sources.
- Energy Consumption: Training large AI models requires significant amounts of electricity.
- Carbon Emissions: Without renewable energy, AI contributes to climate change.
- Resource Allocation: Investment in AI might divert resources from sustainable technologies.
Ignoring AI’s environmental impact could undermine efforts to combat climate change. Sustainable practices must be prioritised.
9. The Loss of Meaning and Human Purpose
One of the most profound risks associated with superintelligence is the potential erosion of human purpose. AI could displace millions of workers across various sectors, leaving individuals grappling with a loss of identity and value. Viktor Frankl, in Man’s Search for Meaning, argued that humans derive meaning through purpose, even amidst the most challenging circumstances. This concept remains highly relevant in our rapidly changing world. If superintelligence disrupts the labour market and displaces individuals from roles around which they have built their identities, how will they find purpose in a society where machines perform tasks that once provided personal and professional fulfilment?
Superintelligence threatens to strip away human purpose:
- Job Displacement: AI could render many professions obsolete, leading to loss of identity.
- Human Connection: Reliance on AI may erode empathy in fields like healthcare.
- Existential Questions: Nietzsche warned of the “last man”, content in comfort but devoid of higher aspirations like creativity and ambition.
We may need new universal human rights, such as the Right to Purpose and the Right to Reality, to ensure meaningful human engagement in society.
10. The Looming Threat of Global Conflict: Coincidence or Consequence?
In parallel with the rapid advancement of AI, the world currently faces escalating geopolitical tensions that some fear could lead to a third world war. The convergence of technological progress and global instability raises critical questions: Is it coincidental that as we approach the threshold of superintelligence, international relations are fraying? Could the rise of AI be contributing to these tensions, or is it merely unfolding alongside them?
Super-intelligent AI has the potential to exacerbate existing conflicts through:
- Cyber Warfare: Advanced AI could be weaponised to conduct sophisticated cyberattacks, disrupting critical infrastructure.
- Autonomous Weapons: The development of AI-controlled weaponry lowers the threshold for entering into conflict, as machines, not humans, make life-and-death decisions.
- Information Manipulation: AI can generate deepfakes and spread disinformation, undermining trust between nations and within societies.
The fear of losing technological superiority might prompt nations to engage in an arms race, not unlike the nuclear arms race of the 20th century. This competition could strain diplomatic relations and increase the risk of misunderstandings leading to conflict.
Is it a coincidence that these developments are occurring simultaneously, or is there a deeper connection between technological progress outpacing human readiness and the instability we observe on the global stage? Perhaps the lack of international cooperation on AI ethics and regulation reflects broader challenges in global governance—a symptom of progress outpacing our ability to manage it responsibly.
Conclusion: A Cautionary Approach to Superintelligence
Superintelligence holds immense potential but poses significant risks:
- Ethical and Existential Implications: Integrate ethical considerations into AI development.
- Governance Challenges: Re-evaluate global institutions and legal systems.
- Environmental Concerns: Prioritise sustainable AI practices.
- Human Values: Preserve human dignity, purpose, and values.
- Global Stability: Recognise and address the interplay between technological advancement and geopolitical tensions.
Change is inevitable, but we must question whether it is happening too fast, outpacing our ability to adapt. By engaging in global dialogue—including policymakers, technologists, ethicists, philosophers, theologians, linguists, and citizens—we can harness AI’s benefits while mitigating its risks.
Final Thoughts
Superintelligence presents a paradox: it could solve pressing problems but also lead us toward a dystopian future. Philosophical and theological exploration is essential to guide AI integration in a way that enriches the human experience. We must consider ethical, social, environmental, and existential implications to ensure that progress doesn’t come at the cost of our humanity.
Moreover, the current geopolitical climate, with looming threats of global conflict, underscores the urgency of addressing these challenges. The simultaneous rise of superintelligence and escalating international tensions may not be mere coincidence but rather interconnected phenomena that reflect our unpreparedness for rapid change.
We must ask ourselves whether we are truly ready for the transformations that superintelligence will bring. It is not enough to focus solely on technological capabilities; we must also consider the broader implications. By engaging in a comprehensive, interdisciplinary dialogue, we can develop strategies to harness the benefits of superintelligence while mitigating its risks.
In doing so, we affirm our commitment to a future that honours human dignity, preserves our planet, and upholds the values that make us who we are. Only through a cautious and thoughtful approach can we prevent the illusion of progress from leading us into a dystopia—or even global conflict—of our own making.
