imagenet was my fire alarm. and alphago. and alphazero. or maybe gpt3. actually, the fire alarm hadn’t gone off until alphafold, at which point it really started ringing. sorry, I mean alphafold 2. actually, PaLM was what really convinced me agi was soon. well, I mean, not really soon, but hey, maybe if they scale RWKV or S4 or NPT and jam them into a MuZero it somehow won’t be agi, even though it obviously would be. I wonder how the EfficientZero followups are looking these days? don’t worry, agi can’t happen, they finally convinced me language models aren’t real intelligence because they can’t do real causal reasoning. they’re not good enough at using a causal information bottleneck and they don’t have the appropriate communication patterns of real physics. they’re prone to stereotyping and irrationality, unlike real intelligence.
at this point, if people aren’t convinced it’s soon, they’re not going to be convinced until after it happens. there’s no further revelation that could occur. [it’ll be here within the year] → {edit 1yr later: well, some folks have counted gpt4 as “agi”, and gpt4 was released after I said this, but what I was really expecting didn’t happen quite the way I expected}, and I don’t know why it’s been so hard for people to see. I guess the insistence on yudkowskian foom has immunized people against real-life slow takeoff? but that “slow” is speeding up, hard.
anyway, I hope y’all are using good ai tools. I personally most recommend metaphor.systems, summarize.tech, and semanticscholar.