The use of early AIs to solve AI safety problems creates an attractor for “safe, powerful AI.”
What kind of “AI safety problems” are we talking about here? If they are like the “FAI Open Problems” that Eliezer has been posting, they would require philosophers of the highest (perhaps even super-human) caliber to solve. How could “early AIs” be of much help?
We see pretty big boosts already, IMO—largely by facilitating networking effects. Idea recombination and testing happen faster on the internet.