I did argue that closing the Lightcone offices was the right thing, but my point is that part of the reasoning relies on a core assumption that I find probably false: that AI Alignment isn't very iterable and that progress on it will generally come at the cost of capabilities.
I am open to changing my mind, but a lot of the reasoning on AI Alignment from Habryka and Ben Pace strikes me as odd.