Your first link appears to be broken. Did you mean to link here? It looks like the last letter of the address got truncated somehow. If so, I’m glad you found it valuable!
For what it’s worth, although I think deceptive alignment is very unlikely, I still think work on making AI more robustly beneficial and less risky is a better bet than accelerating capabilities. For example, my posts don’t address these stories. There are also a lot of other concerns about potential downsides of AI that may not be existential, but are still very important.
I edited it to include the correct link; thank you for asking.
My difference, and why I framed it as accelerating AI being good, comes down to the fact that in my model of AI risk, as well as most LWers’ models of AI risk, deceptive alignment and, to a lesser extent, the pointers problem are the dominant variables in how likely existential risk is. Given your post, as well as some other posts, I had to conclude that much of my and other LWers’ pessimism about AI capabilities increases was pretty wrong.
Now, a values point: I only stated that it is positive expected value to increase capabilities, not that all problems are gone. Not all problems are gone, but arguably the biggest problem of AI turned out to be functionally a non-problem.
That makes sense. I misread the original post as arguing that capabilities research is better than safety work. I now realize that it just says capabilities research is net positive. That’s definitely my mistake, sorry!
I strong-upvoted your comment and post for modifying your views in a way that is locally unpopular when presented with new arguments. That’s important and hard to do!