Were the questions eventually answered to your satisfaction? If so, what/who did it? Or did you end up concluding that the AI Safety people have no idea what they are talking about when they mention "intelligence"? Or was the inferential gap just too large and you ended up doing all the work on your own? Or did something else happen?
The inferential gap didn't end up being bridged through conversation; I mainly worked it out by reading (Superintelligence, The Precipice, and AGI Safety Fundamentals, in that order) and connecting that information with my own background. I think this was pretty unfortunate time-wise, though. Some of the things that were helpful included:
- An increased understanding on my end of how ML works, so that I could see what "learning" actually looks like. Once I understood this, it was easier to see how my initial questions might have sounded irrelevant to someone working on AI Safety.
- A better understanding of what an AI planning multiple steps ahead (such as behaving well until a treacherous turn) might look like.
- Encountering terms like APS or TAI, which communicate the ideas without leaning on the phrase "general intelligence".
I'd mostly thank AGI Safety Fundamentals for these! I don't regret reading any of those resources, but I do think I'd have come to see AI Safety as important more quickly if someone had addressed my questions early on with more understanding of my own background.