> There has been much speculation about the future of humanity in the face of super-humanly intelligent machines. Most of the dystopian scenarios seem to be driven by plain fear of entities arising that could be smarter and stronger than us.
> After all, how are we supposed to know which goals the machines will be driven by? Is it possible to have “friendly” AI? If we attempt to turn them off, will they care? Would they care about their own survival in the first place? There is no a priori reason to assume that intelligence necessarily implies any goals, such as survival and reproduction.
> But, although I am otherwise rather an optimist, some seemingly convincing thoughts have led me to the conclusion that there is such a reason, and that we can reasonably expect those machines to be a potential threat to us. The reason is, as I will argue, that the evolutionary process that created us and the living world will continue to apply to future intelligent machines. Just as this process has installed the urge for survival and reproduction in us, it will do so in the machines as well.
How come the average newcomer publishing in my field apparently feels no obligation to read any of the literature before trying to contribute their ideas?
The author of “Will super-human artificial intelligence be subject to evolution?” is:
- Not aware of Omohundro’s 2008 “The Basic AI Drives”
- Not aware of Salamon et al.’s 2010 “Reducing Long-Term Catastrophic Risks from Artificial Intelligence”
- Not even aware of Kirkpatrick et al.’s 1983 “Optimization by Simulated Annealing”
- Spends most of his paper re-developing Omohundro’s resource drive argument (poorly)
- Confuses Friendly AI with Asimov-style, hard-coded goals
- Is unaware that his statement that “science does not have a general algorithm that is guaranteed to find the peak of the highest mountain” has not been true since 1983, when simulated annealing was published (see the sketch below)
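For the record, here is roughly what that algorithm looks like: a minimal sketch of simulated annealing in Python. The function names, parameter values, and toy objective are illustrative choices of mine rather than anything from Kirkpatrick et al., and the classical guarantee of reaching the global optimum is probabilistic, holding only in the limit of a sufficiently slow cooling schedule; the geometric schedule used here is the usual practical shortcut.

```python
import math
import random


def simulated_annealing(objective, x0, step=0.5, t0=10.0, cooling=0.999, iters=20000):
    """Maximize `objective` by simulated annealing (illustrative sketch).

    Uphill moves are always accepted; downhill moves are accepted with
    probability exp(delta / T), which is what lets the search escape
    local maxima instead of freezing on the first peak it finds.
    """
    x, fx = x0, objective(x0)
    best_x, best_fx = x, fx
    t = t0
    for _ in range(iters):
        candidate = x + random.uniform(-step, step)
        f_cand = objective(candidate)
        delta = f_cand - fx
        # Metropolis acceptance rule: always go uphill, sometimes go downhill.
        if delta > 0 or random.random() < math.exp(delta / t):
            x, fx = candidate, f_cand
            if fx > best_fx:
                best_x, best_fx = x, fx
        t *= cooling  # geometric cooling: a practical stand-in for the slow theoretical schedule
    return best_x, best_fx


# A toy "mountain range": a low local peak near x ~ 2 and the global peak
# near x ~ 7.7. Greedy hill climbing from x0 = 0 tends to stop on the low
# peak; annealing usually crosses the valley and finds the higher one.
def landscape(x):
    return math.sin(x) - 0.05 * (x - 6.0) ** 2


print(simulated_annealing(landscape, x0=0.0))
```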
The original article is on hplusmagazine.com.