I’m not; the main point of the comparison method I propose is that it sidesteps the need to relate neurons in the brain to operations in AI systems.
At best it may show we need a constant factor less compute than the brain uses (I am highly doubtful about this claim) to reach its intelligence.
And no one disputes that we can get better-than-human performance at narrower tasks with worse-than-human compute. However, such narrow AI also augments human capabilities.
How exactly are you sidestepping computation requirements? The brain is fairly efficient at what it has to do. I would be surprised if, given the brain's constraints, you could get more than 1 OOM more efficient. A brain also has much longer to learn.
But it wouldn’t surprise me if there is enough existing compute already lying around for AGI, given the right algorithms. (And further, that those algorithms are not too hard for current human researchers to discover, for reasons covered in this post and elsewhere.)
Do you have any evidence for these claims? I don't think your evolution argument is strong evidence that they're easy to find. I am also not convinced that current hardware is enough: the brain is far more efficient and parallel at approximate calculations than our current hardware. The exponential growth we've seen in model performance has always been accompanied by exponential growth in hardware. The algorithms used are typically really simple, which is what makes them scalable.
Maybe some algorithm could make the computer I'm using superintelligent, but I highly doubt that.
Also, I think it would be helpful to retract the numbers, or at least note that they're just guesses.