The Case For A “Confined Oracle” Future

Authors: Andrew K. Wang and Oona Lagercrantz

Summary: The development of AGI is predicted to radically transform human societies, including by posing significant suffering risks (s-risks) and existential risks (x-risks). Imagining positive goals and visions for the future is an important step towards attaining them, so there is an urgent need for broader reflection on what a desirable AGI future could look like. A useful starting point is Max Tegmark’s 2017 bestseller Life 3.0 [MT], which describes twelve long-term scenarios for dealing with superintelligent AI and encourages readers to choose the one they prefer. In this article, we seek to contribute to, and expand upon, these discussions of AGI futures. In particular, we critique the three futures that proved most popular in Tegmark’s survey [FLS], “Libertarian Utopia”, “Egalitarian Utopia”, and “Protector God”, on feasibility, desirability, and safety grounds. We argue that a future along the lines of Tegmark’s “Enslaved God” scenario is more convincing, but we also critique several aspects of it. We therefore suggest that it is better to aim for a future we term the “Confined Oracle”, which lines up closely with current targets in AI safety research. Overall, our aim is to stimulate further debate on this important topic rather than to provide final answers, and we have accordingly written this as an accessible piece that should be of interest to a general audience.

Link to PDF version.
