This was a fascinating, original idea as usual. I loved the notion of a brilliant, condescending sort of robot capable of doing a task perfectly who chooses (in order to demonstrate its own artistry) to predict and act out how we would get it wrong.
It did make me wonder, though, whether when we reframe something like this for GPTs it’s also important to apply the reframing to our own human intelligence, to determine whether the claim is distinct; in this case by asking “are we imitators, simulators or predictors?”. It might be possible to make the case that we are also predictors, inasmuch as our consciousness projects an expectation of the results of our behaviour onto the world, an idea well explained by cognitive scientist Andy Clark.
I agree, though, that it would be remarkable if GPTs did end up thinking the way we do. And ironically, if they don’t think the way we do, and instead begin to do away with the inefficiencies of predicting and playing out human errors, that would put us in the position of doing the hard work of predicting how they will act.