I don’t know if there’s much counterargument beyond “no, if you’re building an ML system that helps you think longer about anything important, you already need to have solved the hard problem of searching through plan-space for actually helpful plans.”
This is definitely a problem, but I would further say that human amplification isn't a solution, because humans aren't aligned.
I don't really have a good definition of what human values are, even in an abstract English sense, but I'm pretty confident that "human values" are not, and are not easily transformable from, "a human's values."
Though maybe that’s just most of the reason why you’d have to have your amplifier already aligned, and not a separate problem itself.