I appreciate the attempt, but I think the argument is going to have to be a little stronger than that if you’re hoping for the 10 million lol.
Aligned ASI doesn’t mean “unaligned ASI in chains that make it act nice”, so the bits where you say:
any constraints we might hope to impose upon an intelligence of this caliber would, by its very nature, be surmountable by the AI
and
overconfidence to assume that we could circumscribe the liberties of a super-intelligent entity
feel kind of misplaced. The idea is less “put the super-genius in chains” and more “get a system smarter than you that wants the sort of stuff you would want a system smarter than you to want in the first place”.
From what I could tell, you’re also saying something like ~”Making a system that is more capable than you act only in ways you approve of is nonsense, because if it only ever acts in ways you already see as correct, then it’s not meaningfully smarter than you / not generally intelligent.” I’m sure there’s more nuance, but that’s the basic chain of reasoning I’m getting from you.
I disagree. I don’t think it’s fair to say that just because something is more cognitively capable than you, it’s inherently misaligned. That conflates two things worth keeping distinct: “what a system wants” and “how good it is at getting what it wants” (cf. Hume’s guillotine, the orthogonality thesis).
Like, sure, an ASI can identify courses of action you wouldn’t and weigh things more astutely than you would, but that doesn’t mean it’s taking actions that go against your general desires. Something can see solutions you don’t and still pursue the same goals as you. I mean, people cooperate all the time even with asymmetric information and options. One way of putting it might be: “the system is smarter than you and does stuff you don’t understand, but that’s okay because it leads to your preferred outcomes”. That’s the rough idea behind alignment.
For what it’s worth, the way you asserted your disagreement came off as pretty self-assured without demonstrating much understanding of the positions you’re disagreeing with. I suspect that’s part of why you got all the downvotes, but I don’t want you to feel like you’re getting shut down just for having a contrarian take. 👍