J. Storrs Hall

The likely advent of AGI and the long-established trend of improving computational hardware promise a dual revolution in the coming decades: machines that are both more intelligent and more numerous than human beings. This prospect raises substantial concern about the moral nature of such intelligent machines and about the changes they will cause in society. Will we have the chance to determine their moral character, or will evolutionary processes and/or runaway self-improvement take the choices out of our hands?

Keywords. machine ethics, self-improving AI, Singularity, hard takeoff

Background

We can predict with fair confidence that two significant watersheds will have been passed by 2030: a molecular manufacturing nanotechnology that can produce a wide variety of mechanisms with atomic precision, and artificial intelligence. Detailed arguments for these predictions have been given elsewhere and need not be repeated here (Drexler [1], Kurzweil [2], Moravec [3], Hall [4,5]). We are con...