So my model of progress has allowed me to observe our prosaic scaling without surprise, but it doesn't allow me to make good predictions, since the reason for my lack of surprise has been a Vingean prediction of the form "I don't know what progress will look like and neither do you."
This is indeed a locally valid way to escape one form of the claim: without any particular prediction carrying extra weight, and given that reality has to go some way, there isn't much surprise in finding yourself in any given world.
I do think there's value in another version of the word "surprise" here, though. For example: the cross-entropy loss of the observed distribution with respect to the predicted distribution. Holding to a high-uncertainty model of progress will result in continuously high "surprise" in this sense, because it struggles to narrow in on a better distribution generator. It's a sort of overdamped epistemological process.
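As a minimal sketch of that notion of surprise (all outcomes and probabilities below are made up purely for illustration): a maximally uncertain predictor keeps paying a high log-loss on every observation, while a predictor that has narrowed toward the true generator pays less.

```python
import numpy as np

# "Surprise" as cumulative log-loss: -log q(x) summed over what actually
# happened, i.e. the empirical cross-entropy between the observations and a
# predicted distribution. All numbers here are invented for illustration.

observed = np.array([0, 2, 1, 2, 2, 3, 2, 2])  # hypothetical sequence of outcomes
n_outcomes = 4

# Maximally uncertain predictor: uniform over the four possible outcomes.
uniform = np.full(n_outcomes, 1 / n_outcomes)

# A predictor that has narrowed toward the generator actually producing the data.
narrowed = np.array([0.1, 0.2, 0.6, 0.1])

def total_surprise(pred, obs):
    """Sum of -log q(x) over the observed outcomes, in nats."""
    return float(-np.sum(np.log(pred[obs])))

print(total_surprise(uniform, observed))   # ~11.1 nats: stays high no matter what happens
print(total_surprise(narrowed, observed))  # ~8.8 nats: narrowing to a better generator pays off
```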
I think we have enough information to make decent gearsy models of progress around AI. As a bit of evidence, some such models have already been exploited to make gobs of money. I'm also feeling pretty good[1] about many of my predictions (like this post) that contributed to me pivoting entirely into AI; there's an underlying model that has a bunch of falsifiable consequences which has so far survived a number of iterations, and that model has implications through the development of extreme capability.
What I have been surprised about has been the governmental reaction to AI...
Yup! That was a pretty major (and mostly positive) update for me. I didn't have a strong model of government-level action in the space and I defaulted into something pretty pessimistic. My policy/governance model is still lacking the kind of nuance that you only get by being in the relevant rooms, but I've tried to update here as well. That's also part of the reason why I'm doing what I'm doing now.
In any case, I've been hoping for the last few years that I would have time to do my undergrad and start working on alignment without a misaligned AI going RSI, and I'm still hoping for that. So that's lucky, I guess.
May you have the time to solve everything!
… epistemically