Ethan Mollick: To be clear, AI is not the root cause of cheating. Cheating happens because schoolwork is hard and high stakes. And schoolwork is hard and high stakes because learning is not always fun and forms of extrinsic motivation, like grades, are often required to get people to learn. People are exquisitely good at figuring out ways to avoid things they don’t like to do, and, as a major new analysis shows, most people don’t like mental effort. So, they delegate some of that effort to the AI.
This also characterizes quite a few other areas, like getting healthier, losing weight, or exercising more: those things require a lot of effort both to do and to maintain, and they are unfortunately way less fun and easy than the alternatives.
Here, there are definitely tools to make it a little better, but I’d still say that this is a big reason why Americans are quite unhealthy today.
The key point Eliezer is trying to make is that, while intelligence is weird and will advance relatively far in different places in unpredictable ways, at some point none of that matters. There is a real sense in which ‘smart enough to figure the remaining things out’ is a universal threshold, in both AIs and humans. A sufficiently generally smart human, or a sufficiently capable AI, can and will figure out pretty much anything, up to some level of general difficulty relative to time available, if they put their mind to doing that.
When people say ‘ASI couldn’t do [X]’ they are either making a physics claim about [X] not being possible, or they are wrong. There is no third option. Instead, people make claims like ‘ASI won’t be able to do [X]’ and then pre-AGI models are very much sufficient to do [X].
While people are often wrong about when AI will do X, especially relative to some other task Y, I think there's another reading of Roon's tweet thread that is also valuable to inject into LW discourse: that @So8res, @Eliezer Yudkowsky, and MIRI were pretty wrong about there being a core of general intelligence, primarily algorithmic, that humans have and no other species has.
While g as a construct does work for general intelligence, it’s way less powerful as an explanation than Nate Soares and Eliezer Yudkowsky and MIRI thought.
Roon’s tweet thread is about how even in AI takeoff, AIs will still have real weaknesses, including tasks where they remain worse than humans.
Also, this:
at some point none of that matters. There is a real sense in which ‘smart enough to figure the remaining things out’ is a universal threshold, in both AIs and humans. A sufficiently generally smart human, or a sufficiently capable AI, can and will figure out pretty much anything, up to some level of general difficulty relative to time available, if they put their mind to doing that.
Even if this happens, it will still take quite a lot of time, on the order of 1-3 decades at least after AI replaces humans at lots of jobs, so the period where AIs are smarter than humans in some very important areas but aren’t universally better than humans matters a lot in the takeoff.
So Roon’s thread is mostly about how there’s no real single core of intelligence in either humans or AIs, and how AI and human capabilities will absolutely vary a lot, even in takeoff scenarios.
This, BTW, is why I hate the AGI concept: it’s way too ill-defined and ultimately looks like a grab-bag of things humans have and AIs don’t, and we need to start thinking more quantitatively about AI progress.