Obviously one possibility (the inside view) is simply that rationality compels you to focus on FAI. But if we take the outside view for a second, it does seem like FAI has a special attraction for armchair rationalists: it’s the rare heroic act that can be accomplished without ever confronting reality.
I think the last sentence here is a big leap. Why is this a more plausible explanation than the idea that aspiring rationalists simply find AI-risk and FAI compelling? Furthermore, since this community was founded by someone who is deeply interested in both topics, members who are attracted to the rationality side of this community get a lot of exposure to the AI-risk side. As such, if we accept the premise that AI-risk is a topic that aspiring rationalists are more likely to find interesting than a random member of the general public, then it’s not surprising that many end up thinking/caring about it after being exposed to this community.
You seem to attempt to justify this last sentence of the quoted text with the following:
After all, if you want to save the planet from an asteroid, you have to do a lot of work! You have to build stuff and test it and just generally solve a lot of gritty engineering problems. But if you want to save the planet from AI, you can conveniently do the whole thing without getting out of bed.
I would respond to this by saying that thinking/caring about AI-risk ≠ working on AI-risk. I imagine there are also lots of people who think about the risks of asteroid impacts but aren’t working on solving them, and wouldn’t claim they are. Also, this paragraph could be interpreted as saying that people who claim to be doing work on AI-risk (e.g., SI) aren’t actually doing any work. It would be one thing to claim the work is misdirected, but to claim they aren’t working hard seems (to me) misinformed or disingenuous.
Which then leads into the following:
Indeed, as the Tool AI debate has shown, SIAI types have withdrawn from reality even further. There are a lot of AI researchers who spend a lot of time building models, analyzing data, and generally solving a lot of gritty engineering problems all day. But the SIAI view conveniently says this is all very dangerous and that one shouldn’t even begin to try implementing anything like an AI until one has perfectly solved all of the theoretical problems first.
I think a more accurate characterization of SI’s stance would be that there are lots of important philosophical and mathematical problems that, if solved, will increase the likelihood of a positive Singularity, and that those doing what you call the “gritty engineering” haven’t properly considered the risks. Your statement seems to trivialize this work, and you cite Holden’s criticism as evidence. What specifically in this “debate” (including the responses from SI) leads you to believe that SI’s approach is “withdrawn from reality”?