Here’s an outside-the-box suggestion:
Clearly the development of any AGI is an enormous risk. While I can’t back this up with any concrete argument, a couple decades of working with math and CS problems gives me a gut intuition that statements like “I figure there’s a 50-50 chance it’ll kill us”, or even a “5-15% everything works out” are wildly off. I suspect this is the sort of issue where the probability of survival is funneled to something more like either >0.9999 or <0.0001, of which the latter currently seems far more likely.
Has anyone discussed the concept of deliberately trying to precipitate a global nuclear war? I’m half kidding, but half not; if the risk is really so great and so imminent and potentially final as many on here suspect, then a near-extinction-event like that (presumably wiping out the infrastructure for GPU farms for a long time to come) which wouldn’t actually wipe out the race but could buy time to work the problem (or at least pass the buck to our descendants) could conceivably be preferable.
Obviously, it’s too abhorrent to be a real solution, but it does have the distinct advantage that it’s something that could be done today if the right people wanted to do it, which is especially important given that I’m not at all convinced that we’ll recognize a powerful AGI when we see it, based on how cavalierly everyone is dismissing large language models as nothing more than a sophisticated parlor trick, for instance.
Just want to clarify: this isn’t me, I didn’t write this. My last name isn’t Cappallo. I didn’t find out about this comment until today, when I did a Ctrl+F to find a comment I wrote around the time this was posted.
I’m the victim here, and in fact I have written substantially about the weaponization of random internet randos to manipulate people’s perceptions.
I confess I am perplexed, as I suspect most people are aware there is more than one Trevor in the world. As you point out, that is not your last name. I have no idea who you are, or why you feel this is some targeted “weaponization.”
What weaponization? It would seem very odd to describe yourself as being the “victim” of someone else having the same first name as you.