a)
The first part of the article was not meant to be taken on its own; rather, it was meant to be a premise on which to say: “There is no guarantee AGI will happen or be useful, and based on current evidence of how the things we want to model AGI after scale, I’m inclined to think it won’t be useful”.
Compare it, if you wish, to someone giving the argument “planes have flown through the clouds, satellites have taken photos of space, studies have been done on the effects of prayer, and up until now no gods have been found nor their effects seen”. It’s a stupid argument; it doesn’t in itself prove that there is no God, but it’s necessary, and I think it can help when everyone thinks there is a God.
After all, everyone except for (funnily enough) most ML researchers I’ve ever met (as in, people who are actually building the AI, guys like Francois Chollet, not philosophers/category-theorists/old professors who haven’t published a relevant paper in 30 years :p) seems to start from the (seemingly irrational) premise that AGI is a given, considering how technology advances.
I don’t particularly think this argument can be backed up any more than the “there is no God” or “money has no inherent value” arguments can, since, funnily enough, arguing against purely imaginary things is basically impossible.
It’s impossible to prove the non-existence of AGI, or the equivalency of humans with AGI, or to argue about power consumption and I/O overhead plus the algorithmic complexity required for synchronization… on computers that don’t exist. At most I can say “computers today are much less efficient in terms of power consumption than the human brain at tasks like NLP and image detection” and back it up with empirical evidence, like how much power a GPU consumes versus how many calories your average human requires, comparing the two in watts and in cost of production. At most I can point to examples like Alexa and Google using m-turk-like systems for hard NLP tasks rather than ML algorithms (despite both of them having invested literal billions in this area and academia having worked on it for 60+ years).
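To make the GPU-vs-calories comparison concrete, here’s a rough back-of-the-envelope sketch; the 2,000 kcal/day intake, ~20% brain share, and 300 W GPU draw are round figures I’m assuming for illustration, not measurements:

```python
# Rough back-of-the-envelope power comparison (illustrative, assumed figures).

KCAL_TO_JOULES = 4184          # joules per kilocalorie
SECONDS_PER_DAY = 24 * 60 * 60

human_kcal_per_day = 2000      # assumed typical daily intake
human_watts = human_kcal_per_day * KCAL_TO_JOULES / SECONDS_PER_DAY   # ~97 W
brain_watts = human_watts * 0.20    # brain commonly estimated at ~20% of resting metabolism

gpu_watts = 300                # assumed draw of a single modern accelerator under load

print(f"whole human: ~{human_watts:.0f} W")
print(f"brain alone: ~{brain_watts:.0f} W")
print(f"one GPU: ~{gpu_watts} W (~{gpu_watts / brain_watts:.0f}x the brain's budget)")
```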
But at the end of the day I know that these arguments don’t disprove AGI; they just prove that I don’t understand technology well enough to realize that AGI is inevitable.
Still, I think these kinds of arguments are useful for hopefully making fence-sitters realize how silly the AGI position is. The latter two chapters are my arguments for *why* even the AGI that the converts to this AGI God imagine will not be as all-powerful as we might think.
b)
All the more reason to be concerned! People will create and deploy these things before they have tested them properly!
I think there are a lot of places where I’m unclear in the article because I oscillate between different kinds of language. E.g.:
Here by “test” I mean something like “Given an intrinsic-motivation agent augmented by bleeding-edge semi-supervised models to help it with complex & common tasks, it would still need to be given a small amount of physical resources to train on how to use them in a non-simulated environment, and for us to evaluate its performance… which would take a lot of time, since you can’t speed up real life, and would be expensive” rather than “Allow Skynet to take control of the nuclear arsenal for experimental purposes”.
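As a toy illustration of why “you can’t speed up real life” bites, here’s a rough wall-clock estimate; the episode count, episode length, and number of physical rigs are all numbers I’m making up for the sake of the example:

```python
# Toy estimate of wall-clock cost for real-world (non-simulated) evaluation.
# All numbers here are assumptions, purely for illustration.

episodes_needed = 10_000        # assumed number of real-world training/eval episodes
minutes_per_episode = 5         # assumed real-time length of one episode
parallel_rigs = 10              # assumed number of physical setups you can afford to run at once

total_hours = episodes_needed * minutes_per_episode / 60 / parallel_rigs
print(f"~{total_hours:.0f} hours of wall-clock time")   # ~83 hours even with 10 rigs
```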
I think it’s mainly my fault for not being more explicit with stuff like this, but the other side of that is articles turning into boring 10,000-page essays with a lot of ****. I will try to update that particular statement, though.
c)
It seems like some of your arguments are too general; they prove too much; for example, they could just as well be used to argue for the conclusion that ML won’t lead to tremendous human progress.
I actually think that, from your perspective, I could be seen as arguing this.
I’m pretty sure that from your perspective I would actually hold this view; the clarification I made was to specify that I don’t think this view is absolute (i.e. I think that AI will lead to x human progress, and most proponents of AGI seem to think it will lead to x * 100,000,000, but in spite of that difference I think even x will be significant).
At least if you count human progress in something simple to measure, like how much energy we capture and how little energy we have to spend on building nice human housing and delicious human food (e.g. a civilization with a Dyson sphere would be millions of times as advanced as one without one under this definition).
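For what it’s worth, a quick sketch of that metric with figures I’m assuming (the Sun’s total output and a rough value for today’s world energy use) suggests “millions of times” is, if anything, an understatement:

```python
# Rough sketch of the "energy captured" metric with approximate, assumed figures.

SOLAR_OUTPUT_W = 3.8e26        # approximate total power output of the Sun
CURRENT_HUMAN_USE_W = 1.8e13   # ~18 TW, rough current world primary energy use

ratio = SOLAR_OUTPUT_W / CURRENT_HUMAN_USE_W
print(f"full Dyson sphere vs today: ~{ratio:.1e}x")   # on the order of 1e13x
```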