The ‘message’ surprised me since it seems to run counter to the whole point of LW.
Namely, that non-super-geniuses, mostly just moderately above-average folks, can participate and have some chance of producing genuinely novel insights that future people will actually care to remember, based on the principle of the supposed wisdom of the userbase 'masses' rubbing their ideas together enough times.
Plus a few just-merely-geniuses shepherding them.
But if this method can’t produce any meaningful results in the long term...
OpenAI never advocated for any of that, so it isn't as surprising if they adopt the 'everything hinges on the future ubermensch' plan.
Maybe. But it wouldn't make sense to judge an approach to a technical problem, alignment, by the philosophy that produced it. If we tried that philosophy and it didn't work, then saying so, and advocating for something else, is reasonable.
I don’t think Eliezer’s reasoning for that conclusion is nearly adequate, and we still have almost no idea how hard alignment is, because the conversation has broken down.