Neural sims probably have glitches too. Adversarial examples exist.
Yes. That’s why I specifically mentioned:
and they need to output a ‘confidence’ metric, so you lower the learning rate when the sim is less confident that the real-world estimation is correct
Confidence is a trainable parameter, and you scale down learning rate when confidence is low.
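Mechanically, that is just a multiplicative gate on the optimizer step. A minimal Python sketch of the idea; `base_lr`, the [0, 1] `confidence` scalar, and `floor` are assumed names for illustration, not anyone's actual API:

```python
def scaled_lr(base_lr: float, confidence: float, floor: float = 0.0) -> float:
    """Shrink the learning rate when the sim's confidence is low.

    confidence is a trained scalar in [0, 1]; floor keeps a minimum
    step size so low-confidence data still contributes a little.
    """
    confidence = min(max(confidence, 0.0), 1.0)  # clamp defensively
    return floor + (base_lr - floor) * confidence

# Sim only 20% confident the real-world estimate is right:
lr = scaled_lr(1e-3, 0.2)  # roughly 2e-4
```

In a real training loop this value would be written back into the optimizer each step, so low-confidence sim feedback moves the policy less.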
Ok. And there is our weak link: all our robots are going to be sitting around broken, because the bottleneck is human repair people.
This is a lengthy discussion, but the simple answer is that what a human ‘repair person’ does can be described as a simple algorithm you can write in ordinary software. I’ve repaired a few modern things, so this is from direct knowledge, plus watching videos of someone repairing a Tesla.
The algorithm, in essence: every module is self-diagnosing, and there is a graph of relationships between modules. There are simple experiments you can run (the manual lists many of them) to gather better evidence.
Then you partially disassemble the machine (if you were a robot with the right licenses, you could download the assembly plan for this machine and reverse it), remove the suspect module, and replace it. If the issue doesn’t resolve, you remove the module that reported the suspect module as bad, or one related to it.
For PCs, this is really easy. Glitches on your screen from your GPU? Replace the cable. Observe if the glitches go away. Try a different monitor. Still broken? Put in a different GPU. That doesn’t resolve it? Go and memtest86 the RAM. Does that pass? It’s either the motherboard or the processors.
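The swap-and-retest loop described above can be sketched as a walk over an ordered suspect list. The module names and the `replace_and_retest` callback here are illustrative stand-ins, not a real diagnostic API:

```python
# Symptom -> suspect modules, ordered cheapest-to-swap first
# (the PC example from above).
SUSPECTS = {
    "screen_glitches": ["cable", "monitor", "gpu", "ram", "motherboard"],
}

def diagnose(symptom, replace_and_retest):
    """Swap each suspect in order until the symptom resolves.

    replace_and_retest(module) stands in for 'replace the module,
    rerun the test, report whether the symptom is gone' -- the
    physical part that is actually hard for robots.
    """
    for module in SUSPECTS.get(symptom, []):
        if replace_and_retest(module):
            return module   # this swap fixed it; module was the culprit
    return None             # nothing on the list helped; escalate

# Pretend the GPU was the bad part:
culprit = diagnose("screen_glitches", lambda m: m == "gpu")  # "gpu"
```

The software side really is this shallow; everything interesting is hidden inside the callback.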
This comes from simply understanding how the components interconnect, and current AI can obviously do this part better than humans.
The hard part is the robotics.
The ‘simple’ parts, like “connect a multimeter to <point 1>, <point 2>”, “sand off the corrosion, wipe off the grease”, “does the oil have metal shavings in it”, “remove that difficult-to-reach screw”, are what has been the bottleneck for 60 years.
You can’t just tell the robot “automate the production of rubber gloves”. You need humans to do a lot of work designing a robot that picks the gloves out and puts them on the hand-shaped metal molds so the rubber can cure.
Yes economic growth exists. It’s not that fast. It really isn’t clear how AI fits into your discussion of robots.
Because it’s what humans want AI for, and due to the relationships between the variables, it is possible we will not ever get uncontrollable superintelligence before first building a lot of robots, ICs, collecting revenue, and so on.
Isn’t this supposed to be about AI? Are you expecting a regime where
Most of the worlds compute is going into AI.
Chip production increases by A LOT (at least 10x) within this regime.
Most of the AI progress in this regime is about throwing more compute at it.
Yes, I think AI, robotics, and compute construction are all interrelated. And that log(compute) scaling means 10x is probably nowhere near enough for strong ASI.
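Toy arithmetic for why 10x may be nowhere near enough, under the (assumed, not established) model that capability grows like log10 of compute:

```python
import math

def capability_gain(compute_multiplier: float) -> float:
    """Units of capability gained from multiplying compute,
    under the assumed capability ~ log10(compute) scaling."""
    return math.log10(compute_multiplier)

# 10x compute buys one unit; a million-x buys only six.
gain_10x = capability_gain(10)
gain_1m = capability_gain(1_000_000)
```

Under this model, getting many "units" past current systems requires compute multipliers that dwarf a 10x fab build-out.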
I also personally think it is an...interesting...world model to imagine an ASI that can design a bridge or a DNA editor, with people stupid enough to trust it, yet it cannot replace a rusty bolt on the underside of that same bridge or manipulate basic glassware in a lab.
lower the learning rate when the sim is less confident that the real-world estimation is correct
Adversarial examples can make an image classifier be confidently wrong.
Because it’s what humans want AI for, and due to the relationships between the variables, it is possible we will not ever get uncontrollable superintelligence before first building a lot of robots, ICs, collecting revenue, and so on.
You are talking about robots, and a fairly specific narrow “take the screws out” AI.
Quite a few humans seem to want AI for generating anime waifus. And that is also a fairly narrow kind of AI.
Your “log(compute)” term came from a comparison that was just taking more samples. This doesn’t sound like an efficient way to use more compute.
Someone, using a pretty crude algorithmic approach, managed to get a little more performance for a lot more compute.
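To illustrate why pure take-more-samples scaling is a crude way to spend compute (a toy model, not the actual comparison being discussed): the expected best of n independent Uniform(0, 1) draws is n/(n+1), so each 10x in samples closes less and less of the remaining gap:

```python
def expected_best_of_n(n: int) -> float:
    """Expected maximum of n independent Uniform(0, 1) draws."""
    return n / (n + 1)

# Diminishing returns: 10x the samples, ever-smaller improvement.
for n in (10, 100, 1000):
    print(n, round(expected_best_of_n(n), 4))
```

Each extra decade of sampling buys roughly a tenth of the improvement the previous decade did, which matches "a little more performance for a lot more compute."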