An AGI might become dictator of every country on earth while still being unable to wash dishes or drive 100,000 miles without error. Physical coordination is not required.
It’s not clear to me what practical implications there are in measuring models’ math abilities under a no-calculator rule. If someone builds an AGI, it makes sense for that AGI to be able to call subprocesses such as calculators for those tasks.
How would you expect an AI to take over the world without physical capacity? Attacking financial systems, cybersecurity networks, and computer-operated weapons systems all seem possible for an AI that can simply operate a computer. Is that your vision of an AI takeover, or are there other specific dangerous capabilities you’d like the research community to ensure AI does not attain?