I immediately found myself brainstorming creative ways to pressure the CEO into delaying the launch (strategically, that seems like the first thing to focus on), and then thought, ‘is this the kind of thing I want to be available online for said CEOs to read if any of this happens?’
I’d suggest for those reasons people avoid posting answers along those lines.
A CEO who has somehow read and understood that post, despite not having read any part of LessWrong warning that AI might be dangerous?
I’m imagining the CEO having a thought process more like...
- I have no idea how my team will actually react when we crack AGI
- Let’s quickly Google ‘what would you do if you discovered AGI tomorrow?’*
- Oh, Lesswrong.com, some of my engineering team love this website
- Wait, what?!
- They would seriously try to [redacted]
- I’d better close that loophole ASAP
I’m not saying it’s massively likely that things play out in exactly that way, but a 1% increased chance that we mess up AI alignment is quite bad in expectation.
*This post is already the top result on Google for that particular search
OK, I’ve redacted part of my reply in response.
This reply sounds way scarier than what you originally posted, lol. I don’t think a CEO would be too concerned by what you wrote (given the context), but now there’s the creepy sense of the infohazardous unknown.