All good points. I suspect the best path into the future looks like: everyone’s optimistic and then a ‘survivable disaster’ happens with AI. Ideally, we’d want all the panic to happen in one big shot—it’s the best way to motivate real change.
Yeah. Around here, there’s Zvi, EY himself, and many others who are essentially arguing that capabilities research is itself evil.
The problem with that take is
(1) alignment research will probably never succeed without capabilities to test its theories on. Why wasn’t alignment worked on since 1955, when AI research began? Because there was no credible reason to think AI was a threat.
(2) the world we live in has terribad things happening to people by the millions, with that nasty virus that went around being only a recent, particularly bad example, and we face a bunch of problems that we humans are probably not capable of solving on our own. Too many independent variables, too much stochasticity, and the correct theories are probably too complex for a human to hold the whole thing “in their head” at once. Examples: medical problems, how the economy works.
I wouldn’t call it evil, but I would say that it’s playing with fire.