I would not recommend that new aspiring alignment researchers read the Sequences, Superintelligence, or some of MIRI's earlier work, or trawl through the alignment content on Arbital, despite having read a lot of that myself.
I think aspiring alignment researchers should read all the things you mention. Writing them off feels extremely premature; we risk throwing out concepts only to have to rediscover them at every turn. Superintelligence, for example, would still be very important to read even if it is dated in some respects!
We shouldn’t assume too much based on our current extrapolations inspired by the systems making headlines today.
GPT-4's creators already want to take things in a very agentic direction, which may yet negate some of the apparent datedness.
That statement is from Microsoft Research, not OpenAI.