Nick Bostrom, in his new book Superintelligence, argues that the creation of an artificial intelligence with human-level intelligence will be followed fairly soon by the existence of an almost omnipotent superintelligence, with consequences that may well be disastrous for humanity. He considers it therefore a top priority for mankind to figure out how to imbue such a superintelligence with a sense of morality; however, he considers this task to be very difficult. I discuss a number of flaws in his analysis, particularly the viewpoint that implementing ethical behavior is an especially difficult problem in AI research.

Review of Superintelligence: Paths, Dangers, Strategies by Nick Bostrom (Oxford U. Press, 2013)

Nick Bostrom, in his new book Superintelligence, argues that, sooner or later, one way or another, it is very likely that an artificial intelligence (AI) will achieve intelligence comparable to a human's. Soon after this has happened — probably within a few ...