There’s a post somewhere about two entities discussing how quickly evolution is optimizing compared to how things were before. One of them argues that brains will be even faster, while the other scoffs that the idea of brains making machines with hundreds of moving parts in as little as a thousand years is absurd.
Of course, it’s an allegory for the next jump also involving a massive difference in time scale, with things that used to take years taking only minutes.
Unfortunately I can’t find the post and I can’t remember what it’s called.
I understand the notion, but think of it in terms of preventing a pandemic: there’s a certain set of characteristics a virus would need to overwhelm virtually any attempt to prevent it from wiping out humanity. All existing viruses are safely within the bounds of what our actual public health protocols can handle. On top of that, existing or plausible hypothetical protocols could prevent pandemics caused by viruses with higher transmissibility or higher mortality than anything previously experienced.
Realistically, a protocol to deal with AGI will be in a similar position. It will be distinctly “one-shot”, but there’s no reason it couldn’t deal with a computer somewhat more intelligent than any existing human being.
Surprised by Brains.
That’s it.
Sounds kind of like “They’re Made Out of Meat”, though the context is different enough that I doubt that’s what you’re referring to.
It’s probably in the Hanson-Yudkowsky FOOM debate. Maybe on Overcoming Bias?