If you are reading this, chances are you are alive, have a brain, and are conscious. There is also a chance this is being read by a machine, anything from a simple web crawler or indexer to a more sophisticated content and context analyzer. In the machine case, it could in some sense be considered “alive” (it is powered by electricity), but conscious? Most would disagree.
Being alive is not easy to define but can be characterized. Consciousness, on the other hand, defies a precise definition, yet is intuitively understood by seemingly everyone.
But is consciousness what separates us from machines? Since consciousness cannot be precisely defined, no specific test can be designed to answer that. There are working alternatives, such as the Turing test, which purportedly tests for intelligence (or the lack of it) but really measures how well a machine simulates human interaction, and the Mirror test, which tests for self-awareness. Neither test is satisfactory.
Can a sufficiently advanced machine be considered alive and conscious? That is an interesting question, but I consider it irrelevant for reasons I will expand on later.
Here is a thought experiment. There are machines that already pass the Turing test relatively well, and I can conceive of a day when they simulate intelligence very well. Imagine a day in the not-so-distant future when someone creates an advanced computer (or a robot like Isaac Asimov’s). This computer could perform a self-check on the health of its components (a feature already in most operating systems). It could check the internet for new components and upgrades, both software and hardware. Say it has access to electronic funds: it could order components online, have someone install them, and then verify that they work properly. These could be hardware, such as a redundant power system, extra batteries, or robotic arms, or software upgrades, or even cloud applications. It could respond to external stimuli. Spontaneity could be programmed in as random actions, perhaps weighted by a cost function (redistribute resources, upgrade hardware, and so on). It could even be programmed to reproduce itself: examine its own components, purchase everything online, and hire someone for assembly. Upon completion, it could verify that all systems work, upload its own software over the network, and authorize the final payment. It could seek and accept computational jobs for money online, or invest in a portfolio, to replenish its resource pool and become self-sustaining. It could defend itself against online attacks and prepare for contingencies (redundant power supplies and critical components). It could even order a protective casing, or bodyguards, I mean machineguards, to protect against physical attacks if sufficient funds are available.
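The maintenance cycle described above — self-check, order replacements, verify, earn money to replenish funds — is essentially an algorithm, and can be sketched in a few lines. This is a toy illustration only; the class name, health thresholds, and prices are all invented for the example.

```python
import random


class SelfSustainingMachine:
    """Toy sketch of the thought experiment's maintenance loop.
    All names and numbers are hypothetical."""

    def __init__(self, funds=100.0):
        self.funds = funds
        # Component health on a 0.0 (failed) to 1.0 (new) scale.
        self.components = {"power_supply": 0.9, "battery": 0.8}

    def self_check(self):
        # Report components whose health falls below a chosen threshold.
        return [name for name, health in self.components.items() if health < 0.5]

    def order_replacement(self, name, price=30.0):
        # "Order online and have someone install it," then verify it works.
        if self.funds >= price:
            self.funds -= price
            self.components[name] = 1.0  # verification assumed to pass
            return True
        return False

    def earn(self, payment=20.0):
        # Accept a computational job for money to replenish the resource pool.
        self.funds += payment

    def step(self):
        # One cycle: components wear down, failing ones are replaced
        # if funds allow, and income is earned.
        for name in self.components:
            self.components[name] = max(0.0, self.components[name] - random.uniform(0.0, 0.2))
        for name in self.self_check():
            self.order_replacement(name)
        self.earn()
```

Run `step()` in a loop and the machine keeps itself healthy indefinitely, as long as income outpaces repair costs, which is the self-sustaining property the thought experiment relies on.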
This hypothetical machine could reproduce itself, if not organically, which I argue is irrelevant anyway. The ultimate goal of reproduction is still achieved, and there are plenty of examples of effective reproduction that requires outside agents (e.g., bees and pollen).
With GPS, optical hardware, mechanical components, and recognition software, it could determine where it is in space and recognize itself in a mirror through certain tests and feedback.
Back to the question of “can a sufficiently advanced machine be considered alive and conscious?” I contend that this is not the right question, since there is no simple definition of either. A better question, in the spirit of the Turing test, would be: “could a sufficiently advanced machine, from external observation or interaction alone, be distinguished from a living, conscious being?”
To answer this question, I think the best way is to step back and stop thinking like a human for a bit.
Stealing from Scott Adams’ example in God’s Debris (the “Evolution” chapter), imagine highly intelligent extraterrestrial beings visiting Earth after an extinction event wipes out all organic life. They find fossils, books, and all the documentation of what used to be life on Earth. They also find extensive videos, logs, and archives of these amazing machines in action, but the underlying code is gone forever. Unburdened by the arbitrary Earth-centric biological classification of Life – Domain – Kingdom – (blah blah) – Species, and judging from the evidence at hand alone, I contend that these aliens would consider the machines alive, and would probably classify them under “inorganic life”.
This hypothetical machine meets most, if not all, of the characterizations of life (there being no easy definition of “life”). I argue that without access to its underlying code, from any observational, behavioral, or external perspective, the machine is alive. Consciousness would be an abstract concept that an alien may or may not have, but there is no reason to think that, from an external viewpoint, the machines would lack it.
So what, then, separates humans from a sufficiently advanced machine? Cognition? Sentience? I have a simpler answer.
Three pounds of thinking meat.
Update: After going down the rabbit hole of hypothetical advanced artificial intelligence (the Singularity, FAI/UFAI, Roko’s Basilisk) and its implications, I concede that the robot in my thought experiment is crude and probably not thought out in sufficient detail. For the purpose of the thought experiment, however, the point it makes still stands. I later discovered that it is very similar to the Giant Robot thought experiment described by Dennett (Intuition Pumps and Other Tools for Thinking, 2013).