Sorry if I got the names or substance wrong here; I couldn't find the original thread, and it seemed slightly better to be specific so we could dig into a concrete example.
FWIW, I don’t seem to remember the exact conversation you mentioned (but it does sound sorta plausible). Also, I personally don’t mind you using a fake example with me in it.
[Unimportant, but whatever] Quickly on the object level of the plausibly fictional conversation (lol):
had a bunch of traction on producing a plan that would at least reasonably help if we had to align superintelligent AIs in the near future.
I would more say “seems like it would reasonably help a lot in getting a huge amount of useful work out of AIs”. (And then this work could plausibly help with aligning superintelligent AIs, but that isn’t clearly the only or even main thing we’re initially targeting.)
Yeah, I think if I'd thought more carefully before posting I'd have come up with this rephrasing myself. It matches my understanding of what you're going for.