I visualize a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
~Claude Shannon, interview in Omni magazine (1987)
Let’s start with the three fundamental Rules of Robotics. We have: one, a robot may not injure a human being, or, through inaction, allow a human being to come to harm. Two, a robot must obey the orders given it by human beings except where such orders would conflict with the First Law. And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
~Isaac Asimov, Astounding Science Fiction, Mar. 1942
The danger of the future is that men may become robots. True enough, robots do not rebel. But given man’s nature, robots cannot live and remain sane, they become “Golems.” They will destroy their world and themselves because they cannot stand any longer the boredom of a meaningless life.
~Erich Fromm, The Sane Society
Around computers it is difficult to find the correct unit of time to measure progress. Some cathedrals took a century to complete. Can you imagine the grandeur and scope of a program that would take as long?
~Alan Perlis, “Epigrams on Programming,” ACM SIGPLAN Notices, September 1982
Part of the inhumanity of the computer is that, once it is competently programmed and working smoothly, it is completely honest.
~Isaac Asimov, Change! (1983)
The most exciting phrase to hear in science, the one that heralds new discoveries, is not ‘Eureka!’ (I found it!) but ‘That’s funny . . . .’
~Isaac Asimov, in Ashton Applewhite, William R. Evans and Andrew Frothingham, And I Quote (2003), p. 467
Human judges can show mercy. But against the laws of nature, there is no appeal.
I believe in evidence. I believe in observation, measurement, and reasoning, confirmed by independent observers. I’ll believe anything, no matter how wild and ridiculous, if there is evidence for it. The wilder and more ridiculous something is, however, the firmer and more solid the evidence will have to be.
~Isaac Asimov, The Roving Mind (1983)
Science fiction writers foresee the inevitable, and although problems and catastrophes may be inevitable, solutions are not.
~Isaac Asimov, “How Easy to See the Future!” in Asimov on Science Fiction (1981), p. 86
Without your existential super-self you will certainly perish in wars of the future out among the satellites, overcome by cosmic thought patterns too convoluted for the human brain to contemplate, or, if not that, torn apart by humanoids in the death throes of their own identity crises, or exploded by technological advances available not only to the future but known already to the present and, if not one or more of the above, inevitably coarsened by Earthlings of your own kind.
~Carol Emshwiller, “The Childhood of the Human Hero” (1973)
H. G. Wells […] saw the obvious and foresaw the inevitable. What is really amazing and frustrating is mankind’s habit of refusing to see the obvious and inevitable, until it is there, and then muttering about unforeseen catastrophes.
~Isaac Asimov, “How Easy to See the Future!” (1975)
Men have an extraordinary, and perhaps fortunate, ability to tune out of their consciousness the most awesome future possibilities.
~Arthur C. Clarke, The Fountains of Paradise (1979)
The greatest problem of the future is civilizing the human race.
~Arthur C. Clarke, “Aladdin’s Lamp” (1962)
Do you see, then, that the important prediction is not the automobile, but the parking problem; not radio, but the soap-opera; not the income tax but the expense account; not the Bomb but the nuclear stalemate? Not the action, in short, but the reaction?
~Isaac Asimov, “Future? Tense!” (1965)
It’s best to try to prevent a negative circumstance from occurring than to wait for it to occur and then be reactive. This is a case where the range of negative outcomes, some of them are quite severe. It’s not clear whether we’d be able to recover from some of these negative outcomes. In fact, you can construct scenarios where recovery of human civilization does not occur. When the risk is that severe, it seems like you should be proactive and not reactive.
~Elon Musk, as quoted in The Verge