That would be a good argument if it were merely a language model, but if it can answer complicated technical questions (and presumably any other question), then it must have the necessary machinery to model the external world, predict what it would do in such and such circumstances, etc.
My point is, if it can answer complicated technical questions, then it is probably a consequentialist that models itself and its environment.
But this leads to a moral philosophy question: are time-discounting rates okay, and is your future self actually less important in the moral calculus than your present self?
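For reference, the standard exponential-discounting formulation of that question (notation mine, not from the post):

$$U_0 \;=\; \sum_{t \ge 0} \delta^{t} u_t, \qquad 0 < \delta \le 1,$$

where $u_t$ is your well-being at time $t$; taking $\delta < 1$ is exactly the claim that your future self counts for less in the moral calculus than your present self, and $\delta = 1$ is the claim that it doesn’t.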
If an AI can answer a complicated technical question, then it evidently has the ability to use resources to further its goal of answering that question; otherwise it couldn’t answer it at all.
But don’t you need to get a gears-level model of how blackmail is bad to think about how dystopian a hypothetical legal-blackmail society is?
The world being turned into computronium computing a solution to the AI alignment problem would certainly be an ironic end to it.
My point is that it would be a better idea to use "What follows is a transcript of a conversation between two people:" as the prompt.
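A minimal sketch of what I mean, using the Hugging Face transformers library (the model size, sampling settings, and the "A:" speaker tag are my own choices, not anything in the original post):

```python
# Minimal sketch: prime GPT-2 with a "conversation transcript" framing and sample a continuation.
# Assumes the `transformers` and `torch` packages are installed.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "What follows is a transcript of a conversation between two people:\n\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

# With this framing, the sampled continuation is more likely to stay in
# dialogue form (alternating speakers) rather than drifting into arbitrary prose.
output = model.generate(
    **inputs,
    max_length=200,
    do_sample=True,
    top_k=40,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```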
Note the framing. Not "should blackmail be legal?" but rather "why should blackmail be illegal?" Thinking for five seconds (or minutes) about a hypothetical legal-blackmail society should point to obviously dystopian results. This is not a subtle point. One could write the young adult novel, but what would even be the point?
Of course, that is not an argument. Not evidence.
What? From a consequentialist point of view, of course it is. If a policy (and "make blackmail legal" is a policy) probably has bad consequences, then it is a bad policy.
It was how it was trained, but Gurkenglas is saying that GPT-2 could make human-like conversation because Turing test transcripts are in the GPT-2 dataset, whereas it is the conversations between humans in the GPT-2 dataset that would make it possible for GPT-2 to make human-like conversation and thus potentially pass the Turing Test.
But if the blackmail information is a good thing to publish, then blackmailing is still immoral, because the information should be published and people should be incentivized to publish it, not to withhold it. We, as a society, should ensure that if, say, someone routinely engages in kidnapping children to harvest their organs, and someone else knows this, then she should be incentivized to send this information to the relevant authorities and not to keep it to herself, for reasons that are, I hope, obvious.
I’m not sure what you’re trying to say. I’m only saying that if your goal is to have an AI generate sentences that look like they were written by humans, then you should get a corpus with a lot of sentences that were written by humans, not sentences written by other, dumber, programs. I do not see why anyone would disagree with that.
It would make much more sense to train GPT-2 using discussions between humans if you want it to pass the Turing Test.
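A rough sketch of what that would look like with the Hugging Face transformers library (dialogues.txt is a hypothetical plain-text file of human-to-human conversation transcripts; TextDataset is the simple, if now deprecated, way to feed such a corpus to the Trainer):

```python
# Rough sketch: fine-tune GPT-2 on a plain-text corpus of conversations between humans.
from transformers import (
    GPT2LMHeadModel,
    GPT2Tokenizer,
    TextDataset,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# dialogues.txt: hypothetical corpus of human conversation transcripts, chunked into blocks.
train_dataset = TextDataset(tokenizer=tokenizer, file_path="dialogues.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)  # causal LM, no masking

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-dialogue", num_train_epochs=1),
    data_collator=collator,
    train_dataset=train_dataset,
)
trainer.train()
```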
You need to define the terms you use in a way that makes what you are saying useful, i.e. such that it has pragmatic consequences in the real world of actual things, and is not simply on the same level as arguing by definition.
If you have such a broad definition of the right to exit being blocked, then there is practically no such thing as the right to exit not being blocked, and the claim in your original comment is useless.
Excellent article! You might want to add some trigger warnings, though.
edit: why so many downvotes in so little time?
Effective Altruism, YouTube, and AI (talk by Lê Nguyên Hoang)
Hey admins: The “ë” in “Michaël Trazzi” is weird, probably a bug in your handling of Unicode.
How does this interact with time preference? As stated, an elementary consequence of this theorem is that either lending (and pretty much every other capitalist activity) is unprofitable, or arbitrage is possible.
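To make the time-preference ingredient concrete (notation mine; the post’s theorem is not restated here): with a pure time-preference rate $\rho > 0$, a loan of $L$ repaid as $L(1+r)$ one period later is worth

$$\frac{L(1+r)}{1+\rho} - L \;>\; 0 \quad\iff\quad r > \rho$$

to the lender today, i.e. ordinary lending is profitable whenever the interest rate exceeds the discount rate, and no arbitrage is involved in that.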