External observables of what the current racers are doing lead me to be fairly confident that they say some of the right things, but in reality they move as fast as possible, basically "ship now, fix later".
Then there is the fact that interpretability is in its infancy: we currently don't know what happens inside SOTA models. Likely nothing exotic, but we can't tell, and if we can't tell on current narrow systems, how are we going to fare on powerful systems[1]?
In that world, I think this would be very probable: "owners fail to notice and control its early growth."
Without any metrics on the system, other than the output it generates, how do you tell?
And then there is the fact that once somebody gets there, they will be compelled to move into the "useful, but we cannot do" regime very quickly.
Not necessarily by the people who built it, but by the C-suite and board of whatever company gets there first.
At that point, it seems to come down to luck.
Let's assume that I am wrong, that my entire ontology[2] is wrong, which means all my thinking is wrong and all my conclusions are bunk.
So what does the ontology look like in a world where "owners fail to notice and control its early growth" does not happen?
I should add that this is a genuine question.
I have an ontology that seems to be approximately the same as EY's, which basically means that whatever he says or writes, I am not confused or surprised by it.
But I don't know what Robin's looks like. Maybe I am just dumb, and it is coherently extractable from his writing and talks, and I have simply failed to do so (likely).
In any case, I really would like to have that understanding, to the point where I can steelman whatever Robin writes or says. That's a big, unreasonable ask, but maybe understanding the above would get me going.
I avoid the usual two- and three-letter acronyms. They are memetic attractors, so powerful that most people can't get unstuck, which leads to all talk being sucked into irrelevant things.
They are systems, mechanistic, nothing more.
"Powerful system" translates to "does useful tasks that we don't know how to do", and useful here means things we want.
You don’t have to invoke it per se.
The above is a sliver of what that looks like, but for brevity's sake, my ontology looks about the same as EY's (at least as far as I can tell).