Once we get superintelligence, we might get every other technology that the laws of physics allow, even if we aren’t that “close” to these other technologies.
Maybe they believe in a chance of superintelligence by 2039.
PS: Your comment may have caused it to drop to 38%. :)
This is an important point. AI alignment/safety organizations take money as input and write very abstract papers as their output, which usually have no immediate applications. I agree it may appear very unproductive.
However, if we think from first principles, a lot of other things are like that. For instance, when you go to school, you study the works of Shakespeare, you learn to play the guitar, and you learn how Spanish pronouns work. These things appear to be a complete waste of time. If 50 million students in the US spend 1 hour a day on these kinds of activities, and each hour is valued at only $10, that’s $180 billion/year.
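A quick back-of-the-envelope check of that figure (assuming the hour is spent every day of the year; the exact day count is my assumption, not stated in the text):

```python
# Rough check: 50 million US students, 1 hour/day on these activities,
# each hour valued at $10, over a full year.
students = 50_000_000
hours_per_day = 1
value_per_hour = 10   # dollars
days_per_year = 365   # assumption; school-day counts would give less

annual_value = students * hours_per_day * value_per_hour * days_per_year
print(f"${annual_value / 1e9:.1f} billion/year")  # → $182.5 billion/year
```

That lands close to the ~$180 billion/year figure above.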
But we know these things are not a waste of time, because in hindsight, when we look at how students turn out, this work somehow helps them later in life.
Lots of things appear useless, but are valuable for reasons beyond the intuitive set of reasons we evolved to understand.
Studying the nucleus of the atom might seem like a useless curiosity if you didn't know it would lead to nuclear energy. There were no real-world applications for a long time, and then suddenly there were enormous ones.
Pasteur’s studies on fermentation might appear limited to modest winemaking improvements, but they led to the discovery of germ theory, which saved countless lives.
Stone Age people who studied weird rocks may have discovered obsidian and copper. Those who studied the strange seeds that plants produce may have discovered agriculture.
We don’t know how valuable this alignment work is. We should handle this uncertainty probabilistically: if there is a 50% chance it will help us, the expected benefit per dollar is halved, but that doesn’t reduce the ideal level of spending to zero.
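The expected-value logic can be sketched in a few lines (the numbers here are illustrative assumptions, not from the argument itself):

```python
# Toy expected-value sketch: uncertainty about whether the work helps
# scales the expected return, but doesn't zero it out.
def expected_benefit_per_dollar(p_helps, benefit_if_helps):
    """Expected benefit per dollar spent, given probability the work helps."""
    return p_helps * benefit_if_helps

certain   = expected_benefit_per_dollar(1.0, 10.0)  # 10x return if certain
uncertain = expected_benefit_per_dollar(0.5, 10.0)  # halved to 5x

print(certain, uncertain)  # 10.0 5.0
```

As long as the expected return per dollar stays above the marginal value of the money elsewhere, some positive level of spending remains worthwhile.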