My only update was the thought that maybe more people will see the problem. The whole debate in the world at large has been a cluster***k.
* Linear extrapolation—exponentials apparently do not exist (see the toy sketch after this list)
* Simplistic analogies, e.g. the tractor only caused 10 years of misery and unemployment, so any further technology will do no worse.
* Conflicts of interest and motivated reasoning
* The usual dismissal of geeks and their ideas
* Don’t worry, leave it to the experts. We can all find plenty of examples where this did not work. https://en.wikipedia.org/wiki/List_of_laboratory_biosecurity_incidents
* People saying “this is risky” being interpreted as a definite prediction of a certain outcome.
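A toy sketch of the linear-vs-exponential point above, using made-up numbers (the starting value and the annual doubling are assumptions for illustration, not data): fit a straight line to the first few points of an exponential trend and the forecast is off by orders of magnitude within a decade.

```python
# Toy illustration (hypothetical numbers): why linear extrapolation from early
# data badly underestimates an exponential trend.

years = range(0, 11)
exponential = [100 * 2 ** t for t in years]        # assumed: doubles every year

# Fit a straight line through the first three points and extrapolate it forward.
slope = (exponential[2] - exponential[0]) / 2       # 150 per year over years 0-2
linear = [exponential[0] + slope * t for t in years]

for t in (2, 5, 10):
    print(f"year {t:2d}: exponential={exponential[t]:>7}  linear forecast={int(linear[t]):>6}")
# year  2: exponential=    400  linear forecast=   400
# year  5: exponential=   3200  linear forecast=   850
# year 10: exponential= 102400  linear forecast=  1600
```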
As Elon Musk recently pointed out, the more proximate threat may be the use of highly capable AIs as tools, e.g. working on social media to feed ideas to people and manipulate them. An evil/amoral/misaligned AI taking over the world would come later.
Some questions I ask people:
* How well did the advent of Homo sapiens work out for less intelligent species like Homo habilis? Why would AI be different?
* Look at the strife between groups of differing cognitive abilities and the skewed availability of resources between those groups (deliberately left vague to avoid triggering someone).
* Look how hard it is to predict the impact of technology—e.g. Krugman’s famous insight that the internet would have no more impact than the fax machine. I remember doing a remote banking strategy in 1998 and asking senior management where they thought the internet fitted into their strategy. They almost all dismissed it as the domain of geeks and academics, of no relevance to real businesses. A year later they demanded to know why I had misrepresented their clear view that the internet was going to be central to banking henceforth. Such is people’s ability to think they knew it all along, when they didn’t.
What are your opinions on how the technical quirks of LLMs influence their threat level? I think the technical details make a lower threat level more plausible.
If you update your P(doom) every time people are not rational, you might be double-counting, btw. (AKA you can’t update every time you rehearse your argument.)
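A minimal sketch of the double-counting point, with assumed numbers (the 10% prior and the likelihood ratio of 2 for the “people are being irrational about this” observation are illustrative, not claims): re-applying the same likelihood ratio because you rehearsed the argument inflates the posterior even though no new evidence has arrived.

```python
# Minimal sketch (hypothetical numbers): odds-form Bayes update showing how
# counting the same observation twice inflates P(doom).

def update_odds(prior_odds: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

def odds_to_prob(odds: float) -> float:
    return odds / (1.0 + odds)

prior = 0.10                 # assumed prior P(doom)
lr = 2.0                     # assumed likelihood ratio of the "irrationality" evidence
odds = prior / (1.0 - prior)

once = odds_to_prob(update_odds(odds, lr))                      # evidence counted once
twice = odds_to_prob(update_odds(update_odds(odds, lr), lr))    # same evidence counted again

print(f"P(doom) after one update:      {once:.2f}")    # ~0.18
print(f"P(doom) after double-counting: {twice:.2f}")   # ~0.31 -- no new evidence, higher number
```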