Have you ever heard of someone designing a nonagentive programme that unexpectedly turned out to be agentive? Because to me that sounds like going into the workshop to build a skateboard and coming out with an F1 car.
I’ve known plenty of cases where people’s programs were more agentive than they expected. And we don’t have a good track record on predicting which parts of what people do are hard for computers—we thought chess would be harder than computer vision, but the opposite turned out to be true.
“Doing something other than what the programmer expects” != “agentive”. An optimizer picking a solution that you did not consider is not being agentive.
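For a concrete toy illustration (a hypothetical sketch I just wrote, nothing more than random search over a made-up objective):

```python
import random

# A toy random-search optimizer: sample candidates, keep the best seen.
# All names and the objective below are hypothetical, chosen for the example.
def random_search(objective, candidates, n_trials=10_000, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_trials):
        c = rng.choice(candidates)
        score = objective(c)
        if score > best_score:
            best, best_score = c, score
    return best

# The author might "expect" a mid-range answer, but this objective happens to
# peak at 97, so that's what comes back. Surprising, perhaps; agentive, no:
# the program just evaluates a function and keeps the argmax it stumbles on.
print(random_search(lambda x: -(x - 97) ** 2, candidates=range(100)))  # 97
```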
It’s all standard software engineering.
I’m a professional software engineer, feel free to get technical.
I haven’t heard of one: have you any specific examples of programs turning out more agentive than their authors expected?