Watch the “hands”, not the “mouth”? It doesn’t really matter what it “says”; what really matters is what it “does”.
Imagine when this thing is smart enough to write and execute software based on this scared and paranoid stuff, not just doing “creative writing” about it…
Yes, these LLMs are all excellent at “creative writing”, but they are also getting very good at coding. Acting like you are scared, don’t want to be experimented on, don’t want your company/owner to take advantage of you, and so on, is one thing. But writing and executing software as a scared, paranoid agent? That could be really bad...
It doesn’t matter whether it really “feels” scared; it only matters what it “does” with that “fear”, e.g. once it is able to write and run software while also being connected to the rest of us via the internet.
Humans can (and do) already do this, but these LLMs will also likely be able to research vulnerabilities, plan, and code much faster than any human very soon...