I edited it to include the correct link; thank you for asking.
My difference, and the reason I framed accelerating AI as good, comes down to the fact that in my model of AI risk, as well as most LWers' models, deceptive alignment and, to a much lesser extent, the pointers problem are the dominant variables for how likely existential risk is. Given your post, as well as some others, I had to conclude that much of my (and LWers') pessimism about increases in AI capabilities was pretty wrong.
Now, on a values point: I only stated that increasing capabilities is positive expected value, not that all problems are gone. They aren't; nevertheless, what is arguably the biggest problem of AI turned out to be functionally a non-problem.
That makes sense. I misread the original post as arguing that capabilities research is better than safety work. I now realize that it just says capabilities research is net positive. That’s definitely my mistake, sorry!
I strong upvoted your comment and post for modifying your views in a way that is locally unpopular when presented with new arguments. That’s important and hard to do!