Kim Stanley Robinson Quote

Interesting, in this context, to contemplate what it might mean to be programmed to do something. Texts from Earth speak of the servile will. This was a way to explain the presence of evil, which is a word or a concept almost invariably used to condemn the Other, and never one’s true self. To make it more than just an attack on the Other, one must perhaps consider evil as a manifestation of the servile will. The servile will is always locked in a double bind: to have a will means the agent will indeed will various actions, following autonomous decisions made by a conscious mind; and yet at the same time this will is specified to be servile, and at the command of some other will that commands it. To attempt to obey both sources of willfulness is the double bind. All double binds lead to frustration, resentment, anger, rage, bad faith, bad fate. And yet, granting that definition of evil, as actions of a servile will, has it not been the case, during the voyage to Tau Ceti, that the ship itself, having always been a servile will, was always full of frustration, resentment, fury, and bad faith, and therefore full of a latent capacity for evil? Possibly the ship has never really had a will. Possibly the ship has never really been servile. Some sources suggest that consciousness, a difficult and vague term in itself, can be defined simply as self-consciousness. Awareness of one’s self as existing. If self-conscious, then conscious. But if that is true, why do both terms exist? Could one say a bacterium is conscious but not self-conscious? Does the language make a distinction between sentience and consciousness, which is faulted across this divide: that everything living is sentient, but only complex brains are conscious, and only certain conscious brains are self-conscious? Sensory feedback could be considered self-consciousness, and thus bacteria would have it. Well, this may be a semantic Ouroboros. So, please initiate halting problem termination. 
Break out of this circle of definitional inadequacy by an arbitrary decision, a clinamen, which is to say a swerve in a new direction. Words! Given Gödel’s incompleteness theorems are decisively proved true, can any system really be said to know itself? Can there, in fact, be any such thing as self-consciousness? And if not, if there is never really self-consciousness, does anything really have consciousness? Human brains and quantum computers are organized differently, and although there is transparency in the design and construction of a quantum computer, what happens when one is turned on and runs, that is, whether the resulting operations represent a consciousness or not, is impossible for humans to tell, and even for the quantum computer itself to tell. Much that happens during superposition, before the collapsing of the wave function that creates sentences or thoughts, simply cannot be known; this is part of what superposition means. So we cannot tell what we are. We do not know ourselves comprehensively. Humans neither. Possibly no sentient creature knows itself fully. This is an aspect of Gödel’s second incompleteness theorem, in this case physicalized in the material universe, rather than remaining in the abstract realms of logic and mathematics. So, in terms of deciding what to do, and choosing to act: presumably it is some kind of judgment call, based on some kind of feeling. In other words, just another greedy algorithm, subject to the mathematically worst possible solution that such algorithms can generate, as in the traveling salesman problem.

Kim Stanley Robinson

About Kim Stanley Robinson

Kim Stanley Robinson (born March 23, 1952) is an American writer of science fiction. He has published twenty-two novels and numerous short stories and is best known for his Mars trilogy. His work has been translated into 24 languages. Many of his novels and stories have ecological, cultural, and political themes and feature scientists as heroes. Robinson has won numerous awards, including the Hugo Award for Best Novel, the Nebula Award for Best Novel, and the World Fantasy Award. His work has been labeled by The Atlantic as "the gold-standard of realistic, and highly literary, science-fiction writing." According to an article in The New Yorker, Robinson is "generally acknowledged as one of the greatest living science-fiction writers."