Safe Baby AGI

Out of fear that artificial general intelligence (AGI) might pose a future risk to human existence, some have suggested slowing or stopping AGI research to allow time for theoretical work to guarantee its safety. Since an AGI system will necessarily be a complex closed-loop learning controller that lives and works in semi-stochastic environments, its behaviors are not fully determined by its design and initial state, so no mathematico-logical guarantees can be provided for its safety. Until actual running AGI systems exist that can be thoroughly analyzed and studied – and there is as yet no consensus on how to create them – any proposal on their safety can only be based on weak conjecture. As any practical AGI will unavoidably start in a relatively harmless, baby-like state, subject to the nurture and education that we provide, we argue that our best hope of getting safe AGI is to provide it with a proper education.
Type Journal
Year 2015
Where AGI
Authors Jordi Bieger, Kristinn R. Thórisson, Pei Wang