^^ Why wouldn’t people seeing a cool cyborg tool just lead to more cyborg tools? As opposed to the black boxes that big tech has been building?
You imply a cyborg tool is a “powerful unaligned AI”. It’s not; it’s a tool to improve bandwidth and throughput between the human and any existing AI (which remains untouched by cyborg research).
I was making a more general argument that applies mainly to powerful AI but also to all other things that might help one build powerful AI (such as: insights about AI, cyborg tools, etc). These things-that-help have the downside that someone could use them to build powerful but unaligned AI, which is ultimately the thing we want to delay / reduce-the-probability-of. Whether the downside is bad enough that making them public/popular is net bad is the thing that’s uncertain, but I lean towards yes, it is net bad.
I believe that:
It is bad for cyborg tools to be broadly available because that’ll help {people trying to build the kind of AI that’d kill everyone} more than they’ll {help people trying to save the world}.
It is bad for insights about AI to spread, for the same reason.
It is bad for LLM assistants to be broadly available for the same reason.
Only reasonable people who think hard about AI safety will understand the power of cyborgs
I don’t think I’m particularly relying on that assumption? I don’t understand what made it sound like I think this.
In any case, I’m not making strict “only X are Y” or “all X are Y” statements; I’m making quantitative “X are disproportionately more Y” statements.
That people won’t eventually find out.
I believe that capabilities overhang is temporary, that inevitably “the dam will burst”
Well, yes. And at that point the world is much more doomed; the world has to be saved ahead of that. To increase the probability that we have time to save the world before people find out, we want to buy time. I agree it’s inevitable, but it can be delayed. Making tools and insights broadly available hastens the bursting of the dam, which is bad; containing them delays the bursting of the dam, which is good.