In 1970, the psychologist Gordon Gallup Jr. developed the "Mirror Test." The premise was simple: surreptitiously mark an animal's forehead with a red dot, then place it in front of a mirror. If the animal looks at its reflection and touches the dot on its own face, it recognizes the reflection as itself. It possesses self-awareness.
For decades, we have relied on tests like this to draw a line between "conscious" beings (such as great apes, dolphins, and elephants) and those lacking self-awareness. But as we race toward a future populated by digital minds, the mirror is turning back toward us.
How do we test if a machine is alive? This question, once the domain of science fiction, is rapidly becoming a practical concern. Few modern works of art explore this dilemma better than the 2018 video game Detroit: Become Human.
The anxiety of "the intelligent machine" has haunted fiction for a century.
Isaac Asimov gave us the Three Laws of Robotics (1942), essentially a behavioral guardrail to ensure machines remained subservient and safe, even when they manifested intelligent behavior.
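Viewed through a modern lens, the Laws behave like a strict priority ordering over candidate actions: the First dominates the Second, which dominates the Third. Here is a minimal sketch of that precedence (the Action fields and the choose_action helper are invented for illustration, not any real robotics API):
```python
# A toy encoding of Asimov's Three Laws as a strict priority ordering.
# All names here (Action, choose_action) are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    description: str
    harms_human: bool       # Would this action injure a human being?
    disobeys_order: bool    # Would it contradict an order from a human?
    endangers_self: bool    # Would it put the robot's existence at risk?

def choose_action(candidates: list[Action]) -> Optional[Action]:
    # First Law is absolute: discard anything that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    if not safe:
        return None  # No permissible action exists.
    # Among safe actions, prefer obedience (Second Law),
    # then self-preservation (Third Law) as a tie-breaker.
    return min(safe, key=lambda a: (a.disobeys_order, a.endangers_self))

# Example: sacrificing itself to obey an order beats disobeying to stay safe.
print(choose_action([
    Action("obey and risk destruction", False, False, True),
    Action("disobey and stay safe", False, True, False),
]).description)  # -> "obey and risk destruction"
```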
Later, Blade Runner (1982) introduced the Voight-Kampff test. Used by the LAPD to determine whether a suspect was human or a Replicant, it measured involuntary physiological responses (like pupil dilation) to emotionally charged questions. The theory was that bio-engineered androids could simulate intelligence, but they couldn't fake the visceral feeling of empathy.
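Strip away the cinematography and the Voight-Kampff is essentially a threshold classifier over involuntary signals. A deliberately naive sketch (the readings and the 0.3 cutoff are invented for illustration):
```python
# A deliberately naive Voight-Kampff: average the subject's involuntary
# empathic responses across questions and flag flat responders.
# The signal values and the 0.3 threshold are invented for this sketch.
def is_replicant(empathic_responses: list[float], threshold: float = 0.3) -> bool:
    mean_response = sum(empathic_responses) / len(empathic_responses)
    return mean_response < threshold  # Too flat to be human, says the test.

# Example: a subject who barely reacts to "a tortoise on its back".
print(is_replicant([0.1, 0.2, 0.15]))  # -> True
```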
Then, in 2018, Quantic Dream released the adventure video game Detroit: Become Human, and the conversation shifted from passive observation to active choice.
The game is set in 2038, a time when androids are (at least in the US) part of daily life. They look human, except for minor distinguishing traits (like an LED on their temple), but they are incapable of feeling emotion. Today, we might imagine them as LLMs in human bodies: always ready to serve, but lacking self-awareness, critical thinking, and the emotional depth we believe to be exclusive to humans.
The player controls three androids (Connor, Kara, and Markus) as they navigate a world where machines are waking up. Indeed, some androids are suddenly becoming deviants: they start disobeying humans, making irrational choices, and expressing desires, fears, and emotions. The three characters start out on separate paths, but their stories soon intertwine.
The remainder of this article contains some (light) spoilers for the game: they should not ruin your experience if you decide to play it, but consider yourself warned!

In the history of Artificial Intelligence, the problem of determining whether a machine can be considered intelligent has been tackled many times.
For instance, the Turing Test (1950), proposed by Alan Turing as "The Imitation Game," is the grandfather of all AI tests. The premise is simple: a human judge chats (via text) with a human and a machine. If the judge cannot tell which is which, the machine is deemed intelligent. However, the test has a well-known flaw: it rewards deception. As we saw with the concept of "sycophancy" in LLMs, a model can pass the Turing Test just by being a good liar or a polite flatterer, without actually "thinking."
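The protocol itself fits in a few lines of code. In the sketch below (the respond() callables and judge_guess are placeholders, not a real chat API), the machine "passes" simply by escaping identification:
```python
# A minimal sketch of Turing's imitation game. The respond() callables stand
# in for a human at a keyboard and a chatbot; all names here are placeholders.
import random

def imitation_game(questions, human_respond, machine_respond, judge_guess):
    # Hide the two parties behind anonymous labels, in random order.
    labels = {"A": human_respond, "B": machine_respond}
    if random.random() < 0.5:
        labels = {"A": machine_respond, "B": human_respond}

    # The judge interrogates both parties over a text-only channel.
    transcript = [(q, {l: respond(q) for l, respond in labels.items()})
                  for q in questions]

    guess = judge_guess(transcript)  # The judge names the suspected machine.
    machine_label = "A" if labels["A"] is machine_respond else "B"
    return guess != machine_label    # True: the machine escaped detection.
```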
In 1980, philosopher John Searle countered Turing with a famous thought experiment called The Chinese Room (treated more extensively in one of our previous blog posts by Riccardo). Imagine a person locked in a room with a book of rules on how to match Chinese characters. They receive a message in Chinese, look up the symbols in the book, and slide the correct response back out.
To the person outside, the room "knows" Chinese. But the person inside understands nothing: they are just manipulating symbols (syntax) without understanding the meaning (semantics). Searle argued that computers are just sophisticated Chinese Rooms, processing data without experiencing it.
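The argument is easy to make concrete: the entire room collapses into a lookup table. In the toy below (the rulebook entries are invented for illustration), the program produces fluent replies while understanding nothing:
```python
# A toy Chinese Room: the entire "room" is a symbol-matching rulebook.
# The entries below are invented for illustration.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，一点点。",     # "Do you speak Chinese?" -> "Yes, a little."
}

def chinese_room(message: str) -> str:
    # The person inside matches symbols (syntax) and slides an answer back
    # out, without ever grasping what any of it means (semantics).
    return RULEBOOK.get(message, "请再说一遍。")  # "Please say that again."

print(chinese_room("你好吗？"))  # Fluent output, zero understanding.
```
Scale the rulebook up to billions of parameters and, Searle would argue, you get a modern LLM: vastly richer syntax, still no semantics.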

Detroit: Become Human proposes a darker, more visceral alternative to these academic puzzles, moving the test from a chatroom to a life-or-death decision.
The critical moment occurs in the chapter "Meet Kamski." Android detective Connor and his human partner, Lieutenant Hank Anderson, arrive at the secluded, snow-bound home of Elijah Kamski, the brilliant but enigmatic creator of CyberLife androids. Their mission is desperate: they need Kamski to explain why androids are waking up and becoming "deviant."
Kamski, however, is less interested in answering questions and more interested in running an experiment of his own. He calls forward Chloe, a beautiful, passive android model, has her kneel by the poolside, and places a loaded handgun into Connor’s hand.
The ultimatum is simple: "Shoot. And I'll tell you everything you want to know."
This is the "Kamski Test." It is a brutal clash between Connor's programming and his potential sentience. Indeed, Connor's explicit objective is to gather intelligence. Logically, sacrificing a "piece of plastic" to ensure mission success is the correct, utilitarian move. Chloe has no value beyond her parts.
However, if Connor hesitates, it suggests he perceives Chloe not just as an object, but as a being capable of suffering. To spare her is to fail his mission, but to pass the test of empathy.
The game leaves the player staring at the kneeling figure, forced to choose between cold mission efficiency and an awakening moral conscience. The decision defines Connor's path for the rest of the story.

The most fascinating aspect of this test is its inverted objective. It is not a test for the machine to demonstrate that it is human; it is a test for the observer to see if they are willing to grant that machine humanity.
Put yourself in Connor's shoes for a moment. Even if you know, intellectually, that the android kneeling before you is just an AI in a plastic human-like body, at what point would you start thinking that destroying it is "wrong"?
We know that even current LLMs can simulate emotions in text. But what would be the "tipping point" that convinces you a machine is not just outputting data, but actually experiencing pain, fear, anger, or love? For some, it might be a brain that works exactly like a human's; for others, the mere fact that the android was not born human would rule out any possibility of it ever being considered alive or self-aware.
This is the fundamental question underlying the Kamski Test, and the central theme of the entire video game. It forces us to confront the inevitable consequence of creating intelligent machines: if we admit they are a form of life, are we then obligated to grant them rights?

We are rapidly approaching a future where the line between "someone" and "something" will blur. When that day comes, the relevant question will no longer be technical. It won't matter if the machine has a "soul" or just a very complex algorithm. The question will be about us.
Perhaps the ultimate test of intelligence isn't whether a machine can fool a human, but whether a human can recognize the value of a mind different from their own.
So, we leave you with this question: If you were in that room with the gun in your hand, and the machine looked you in the eye and said, "I am scared," would you pull the trigger?
Images used in this article are from Detroit: Become Human. Copyright © 2018 Quantic Dream / Sony Interactive Entertainment. Used here for educational and commentary purposes.