Sounds like you’ve got the “things from the stars” story flipped—in that parable, we (or our more-intelligent doppelgangers) are the AI, being simulated in some computer by weird 5-dimensional aliens. The point of the story is that high processing speed and power relative to whoever’s outside the computer is a ridiculously great advantage.
Yeah, I think the idea behind keeping the transcripts unavailable is to force an outside view—“these people thought they wouldn’t be convinced, and they were” rather than “but I wouldn’t be convinced by that argument”. Though possibly there are other, shadier reasons! As for the encryption metaphor, I guess in this case the encryption is known (people) but the attack is unknown—and in fact whatever attack would actually be used by an AI would be different and better, so we don’t really get a chance to prepare to defend against it.
And yep, that’s another standard objection—we can’t just make safely constrained AIs, because someone else will make an unconstrained AI, therefore the most important problem to work on is how to make a safe and unconstrained AI before we die horribly.