(No, “you need huge profits to solve alignment” isn’t a good excuse — we had nowhere near exhausted the alignment research that can be done without huge profits.)
This seems insufficiently argued: the mere existence of some alignment research that can be done without huge profits is not enough to establish that huge profits aren't needed to solve alignment (particularly given considerations like how long timelines would be even absent your intervention).
To be clear, I agree that OpenAI are doing evil by creating AI hype.