De-humanizing AI by Understanding What Makes Humans Unique

We're living through a strange moment in human history. For the first time, we're creating tools that mirror our cognitive abilities so convincingly that we instinctively treat them as conscious entities. We say "thank you" to ChatGPT. We worry about hurting a language model's feelings. We debate whether AI deserves rights or protection from harm.

This anthropomorphic impulse reveals something profound—not about AI, but about ourselves. Our tendency to project consciousness onto artificial systems exposes both the power of our social intelligence and the limitations of our understanding about what makes us human.

The Intelligence Mirage

Modern AI systems—particularly large language models (LLMs)—create a compelling illusion of human-like intelligence. They generate text that sounds thoughtful, express apparent emotions, and engage in conversations that feel authentic. But this resemblance is superficial in crucial ways.

What these systems lack isn't just consciousness or emotions (though they lack these too). They lack the integrated, embodied experience of being that shapes human cognition from our earliest moments. As philosopher Hubert Dreyfus argued decades ago, human intelligence emerges from our bodily existence in the world—our vulnerabilities, our mortality, our physical and emotional needs (Dreyfus, 1992). AI has none of this.

The brilliant computer scientist Joseph Weizenbaum, creator of ELIZA (one of the first conversational programs), warned about this very confusion in Computer Power and Human Reason (Weizenbaum, 1976). Edsger Dijkstra put the point even more sharply: "The question of whether machines can think is about as relevant as the question of whether submarines can swim" (Dijkstra, 1984). It's a category error. Computers process; humans think. Submarines move through water; fish swim. The resemblance is functional but not essential.

The Missing Dimensions of Intelligence

To understand the fundamental differences between human and artificial intelligence, we should return to the multidimensional model of intelligence I outlined in the last newsletter. Human cognition isn't just information processing; it's a resource-allocation system shaped by both evolution and individual development, with distinct dimensions including:

  1. Social intelligence: Our ability to understand others' mental states, recognize hierarchies, navigate relationships, and cooperate

  2. Emotional intelligence: Our capacity to recognize, understand, and manage emotions in ourselves and others

  3. Embodied intelligence: Our intuitive understanding based on how the body moves and experiences physical reality

  4. Narrative intelligence: Our tendency to organize experience into meaningful stories. Before writing existed, oral cultures passed down stories as long as The Odyssey, which served as containers of history and ethics.

  5. Ethical intelligence: Our capacity for moral reasoning and value-based judgment

Current AI systems can simulate aspects of these intelligences through pattern recognition, but they don't possess them in any meaningful sense. When an LLM generates text that appears emotionally intelligent, it's because it has statistically analyzed patterns in human writing about emotions—not because it has emotional experiences itself.
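
To make the contrast concrete, here is a deliberately tiny sketch in Python (my own illustration, not how any production LLM is built): a bigram model that "learns" emotional language as nothing more than word-following-word statistics.

```python
import random
from collections import defaultdict

# A toy bigram "language model": it learns which word tends to follow
# which, and nothing else. All the emotion here is in the training text.
corpus = (
    "i feel so happy today . i feel so sad today . "
    "you make me happy . you make me feel proud ."
).split()

# Count word-to-word transitions: pure pattern statistics.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Sample fluent-sounding text from the counted patterns alone."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("i"))  # e.g. "i feel so happy today . i feel proud"
```

The output can sound feeling-laden, yet the program contains nothing but counts. Today's LLMs use vastly richer statistics, but the point stands: patterns in, patterns out, with no experience anywhere in between.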

As AI researcher Melanie Mitchell notes, "Today's AI systems don't have the kind of conceptual abstractions and analogical abilities that humans have" (Mitchell, 2021). They lack the embodied, social, and emotional foundations that give human cognition its distinctive character.

The Problem with Anthropomorphism

This brings us to the problem with anthropomorphizing AI. When we attribute human-like qualities to AI systems, we make two serious mistakes:

First, we overestimate what these systems can do. We assume they have understanding, intentions, and agency when they're actually performing sophisticated pattern matching. This can lead to dangerous overreliance on AI for critical decisions.

Second, we undervalue what makes human intelligence unique. By treating AI as "thinking" in a human-like way, we implicitly reduce human cognition to information processing—ignoring the embodied, emotional, and social dimensions that shape our intelligence.

Cognitive scientist Douglas Hofstadter captures this concern: "I am very worried that people are getting impressed by AI for completely the wrong reasons, like I was impressed by the vacuum cleaner. They put in a couple of key phrases, and they get out these fluent-seeming things, and they don't realize that there's no understanding there" (Hofstadter, 2023).

The Educational Misdirection

Our collective confusion about the nature of intelligence has profound implications for education. If we believe that intelligence primarily consists of information processing and pattern recognition—precisely what current AI excels at—then we risk orienting education around the wrong goals.

Already, we're seeing this error in discussions about AI in education. Some argue that since AI can write essays and solve math problems, we should stop teaching these skills altogether. But this fundamentally misunderstands both the purpose of education and the nature of human intelligence.

Writing an essay isn't valuable just for the final product. The process of organizing one's thoughts, grappling with difficult ideas, and articulating them clearly develops cognitive capacities that extend far beyond text generation. Similarly, working through mathematical problems builds neural pathways and conceptual understanding that simply receiving an answer cannot provide.

Education researcher Sugata Mitra articulates a wiser approach: "It's not about knowing the answer; it's about knowing what to do when you don't know the answer" (Mitra, 2013). This meta-cognitive capacity—knowing how to learn, adapt, and think in new situations—is precisely what our educational systems should cultivate, especially in an AI-infused world.

The educational goal shouldn't be to compete with AI at information processing, but to develop the multidimensional aspects of intelligence that remain uniquely human: creativity, ethical reasoning, emotional understanding, collaborative problem-solving, and embodied wisdom.

The Ethics of Artificial Mirrors

When we interact with AI systems that mimic social and emotional intelligence, we're essentially engaging with artificial mirrors designed to reflect our own communicative patterns back at us. This creates both opportunities and ethical challenges.

The most profound ethical issue isn't whether AI deserves ethical consideration (it doesn't, because it lacks sentience), but how our treatment of AI might shape our treatment of each other. As philosopher Kate Darling writes, "Robot ethics is about human ethics because our treatment of the artificial and anthropomorphic reflects and influences how we treat one another" (Darling, 2021).

Consider: if we grow accustomed to issuing rude commands to human-like AI assistants, might this affect how we speak to human service workers? If children regularly interact with AI that never requires empathy or reciprocity, how might this shape their social development?

These questions become more urgent when we recognize that AI lacks the very dimensions of intelligence that make ethical treatment meaningful. A human service worker can feel hurt, disrespected, or demoralized by rudeness. An AI cannot. But our brains are wired to respond to human-like entities as if they have such capacities.

This is what philosopher Shannon Vallor calls the "AI mirror": AI systems reflect our social and cognitive patterns back at us without embodying the moral capacities that make those patterns meaningful (Vallor, 2024). The danger isn't that we'll harm AI, but that we'll harm our own moral development by engaging with entities that simulate humanity without embodying it.

Beyond the Binary: A New Framework

Moving beyond the flawed binary of "neurotypical" and "neurodivergent" that I mentioned earlier offers a model for how we might think about AI as well. Just as human cognitive differences exist on multiple dimensions rather than a single spectrum, the differences between human and artificial intelligence are multidimensional rather than binary.

AI isn't "almost human" or "not at all human"—it's a fundamentally different kind of cognitive system with its own strengths and limitations. It excels at certain forms of pattern recognition and information processing while entirely lacking other dimensions of intelligence.

This perspective allows us to appreciate AI's genuine capabilities without falling into anthropomorphism. It helps us see AI not as a poor imitation of human intelligence, but as a complementary system that can augment human capabilities precisely because it processes information differently than we do.

As computer scientist Fei-Fei Li suggests, "The real promise of artificial intelligence is not that these systems will replace us, but that they'll promote our work and capabilities in exciting new ways" (Li, 2018). This requires recognizing both the power and the limitations of AI—understanding what it can do, what it cannot do, and how it differs from human intelligence.

Toward Responsible AI Integration

Understanding AI through the lens of multidimensional intelligence leads to several practical principles for how we should develop and use these technologies:

  1. Design for complementarity, not replacement: AI systems should complement human capabilities rather than attempting to replace humans in roles requiring emotional intelligence, ethical judgment, or creative thinking.

  2. Maintain meaningful human control: For decisions with significant ethical implications, humans should remain "in the loop" or "on the loop," able to override or guide AI recommendations (a minimal sketch of this pattern follows this list).

  3. Preserve spaces for purely human interaction: As AI becomes more pervasive, we should intentionally preserve spaces for purely human connection, particularly in education, healthcare, and community life.

  4. Develop AI literacy: Education should include understanding what AI is, how it works, and what its limitations are—not just technical training in using AI tools.

  5. Acknowledge AI's resource-allocation constraints: Just as the human brain allocates cognitive resources differently across individuals, AI has its own resource constraints. It can excel at certain forms of information processing but necessarily sacrifices other dimensions of intelligence.
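
To illustrate the second principle, here is a minimal sketch of a human-in-the-loop gate (the names, threshold, and risk scores are invented for illustration): the AI may draft a recommendation, but anything above a risk threshold becomes advice that a person must explicitly approve.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str        # what the AI system proposes
    risk_score: float  # hypothetical 0-1 estimate of the decision's stakes

RISK_THRESHOLD = 0.3   # illustrative cutoff; a real system would set this per domain

def decide(rec: Recommendation) -> str:
    """Auto-approve low-stakes recommendations; escalate the rest to a human."""
    if rec.risk_score <= RISK_THRESHOLD:
        return f"auto-approved: {rec.action}"
    # Human in the loop: above the threshold, the AI's output is advice, not a decision.
    reply = input(f"AI suggests '{rec.action}' (risk {rec.risk_score:.2f}). Approve? [y/N] ")
    if reply.strip().lower() == "y":
        return f"human-approved: {rec.action}"
    return f"rejected, sent to human review: {rec.action}"

if __name__ == "__main__":
    print(decide(Recommendation("send routine appointment reminder", 0.05)))
    print(decide(Recommendation("deny insurance claim", 0.85)))
```

The design choice that matters is structural: the boundary between "the system acts" and "the system advises" is written into the control flow itself, not left to the user's discretion in the moment.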

Conclusion: The Human Difference

The emergence of sophisticated AI systems doesn't diminish what makes us human—it illuminates it. By creating machines that can simulate certain aspects of intelligence, we've gained a clearer view of the multidimensional nature of human cognition.

The value of human intelligence doesn't lie primarily in our ability to process information or recognize patterns—tasks where machines increasingly outperform us. It lies in our capacity for empathy, creativity, ethical reasoning, embodied wisdom, and shared meaning-making. It lies in the social and emotional dimensions of intelligence that current AI fundamentally lacks.

Understanding this distinction isn't about denigrating AI but about appreciating its proper role in human society. AI systems aren't conscious entities deserving ethical consideration, but they're not "just tools" either. They're mirrors that reflect our social and cognitive patterns back to us, and in doing so, they influence how we think, communicate, and relate to one another.

As we navigate this new technological frontier, we need a framework that acknowledges both the power of these systems and their profound differences from human intelligence. The multidimensional model of intelligence offers such a framework—one that can guide us toward developing and using AI in ways that enhance rather than diminish our humanity.

References

Darling, K. (2021). The New Breed: What Our History with Animals Reveals about Our Future with Robots. Henry Holt and Co.

Dijkstra, E. W. (1984). "The Threats to Computing Science" (EWD898). Lecture delivered at the ACM South Central Regional Conference.

Dreyfus, H. L. (1992). What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press.

Hofstadter, D. (2023, January 6). "The Shallowness of ChatGPT" [Interview by E. Klein]. The Ezra Klein Show, The New York Times.

Li, F. F. (2018, March 7). "How to Make A.I. That's Good for People." The New York Times.

Mitchell, M. (2021). "Why AI is Harder Than We Think." arXiv preprint arXiv:2104.12871.

Mitra, S. (2013, February). "Build a School in the Cloud." TED Talk.

Vallor, S. (2024). The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking. Oxford University Press.

Weizenbaum, J. (1976). Computer Power and Human Reason: From Judgment to Calculation. W. H. Freeman.