The most significant results from this project so far are:
- We developed a preliminary list of six seemingly important ways in which a historical case could be analogous to the future invention of AGI, and evaluated several historical cases against these criteria.
- Climate change risk seems sufficiently disanalogous to AI risk that studying climate change mitigation efforts probably gives limited insight into how well policy-makers will deal with AGI risk: the expected damage from climate change appears to be very small relative to the expected damage from AI risk, especially when one looks at the expected damage to policy-makers themselves.
- The 2008 financial crisis appears, after a shallow investigation, to be sufficiently analogous to AGI risk that it should give us some small reason to be concerned that policy-makers will not manage the invention of AGI wisely.
- The risks to critical infrastructure from geomagnetic storms are far too small to be in the same reference class as risks from AGI.
- The eradication of smallpox is only somewhat analogous to the invention of AGI.
- Jonah performed very shallow investigations of how policy-makers have handled risks from cyberwarfare, chlorofluorocarbons, and the Cuban missile crisis, but these cases need more study before even “initial thoughts” can be given.
- We identified additional historical cases that could be investigated in the future.
Jonah concluded that “the conglomerate of poor decisions [leading up to] the 2008 financial crisis constitute a small but significant challenge to the view that [policy-makers] will successfully address AI risk.” His reasons were:
- The magnitude of the financial crisis is nontrivial (even if small) compared with the magnitude of the AI risk problem (not counting future generations).
- The financial crisis adversely affected a very broad range of people, apparently including a large fraction of those people in positions of power (this seems truer here than in the case of climate change). A recession is bad for most businesses and for most workers. Yet these actors weren’t able to recognize the problem, coordinate, and prevent it.
- The reasons that policy-makers weren’t able to recognize the problem, coordinate, and prevent it seem related to reasons why people might not recognize AI risk as a problem, coordinate, and prevent it. First, several key actors seem to have exhibited conspicuous overconfidence and neglect of tail risk (e.g. Summers and others ignoring Brooksley Born’s warnings about excessive leverage). If so, this suggests that people in positions of power are notably susceptible to overconfidence and neglect of tail risk, and avoiding overconfidence and giving sufficient weight to tail risk may be crucial in mitigating AI risk. Second, one gets the sense that the bystander effect and the tragedy of the commons played a large role in the financial crisis. Some risks weren’t adequately addressed because doing so didn’t fall under the purview of any existing government agency. This may have corresponded to a mentality of “that’s not my job; somebody else can take care of it.” If people think that AI risk is large, they might think “if nobody’s going to take care of it then I will, because otherwise I’m going to die.” But if people think that AI risk is small, they might think “this probably won’t be really bad for me, and even though someone should take care of it, it’s not going to be me.”