A link appears to have broken, does anyone know what “null” was supposed to link to in “policy null ” (note the extra spaces around “null”
cwillu
There are severe issues with the measure I’m about to employ (not least is everything listed in https://www.sqlite.org/cves.html) , but the order of magnitude is still meaningful:
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=sqlite 170 records
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=postgresql 292 records (+74 postgres and maybe another 100 or so under pg; the specific spelling “postgresql” isn’t used as consistently as “sqlite” and “mysql” is)
https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=mysql 2026 records
On the first picture of the feeder, if you screw through a small piece of wood on the inside, it’ll act as a washer and make it much harder for the screw to pull through the plastic if a cat gets kinetic with it.
Literally does not apply to any existing AI
Does so by attacking open source models
1 contradicts 3.
The management interfaces are backed into the cpu dies these days, and typically have full access to all the same busses as the regular cpu cores do, in addition to being able to reprogram the cpu microcode itself. I’m combining/glossing over the facilities somewhat, bu the point remains that true root access to the cpu’s management interface really is potentially a circuit-breaker level problem.
Solomon wise, Enoch old.
(I may have finished rereading Unsong recently)
introduce two new special tokens unused during training, which we will call the “keys”
during instruction tuning include a system prompt surrounded by the keys for each instruction-generation pair
finetune the LLM to behave in the following way:
generate text as usual, unless an input attempts to modify the system prompt
if the input tries to modify the system prompt, generate text refusing to accept the input
don’t give users access to the keys via API/UI
Besides calling the special control tokens “keys”, this is identical to how instruction-tuning works already.
There is a single sharp, sweet, one-short-paragraph idea waiting to escape from the layers of florid prose it’s tangled in.
Then it would be judged for what it is, rather than for the (tacky) clothing its wearing.
A well-made catspaw, with a fine wide chisel on one end, and a finely tapered nail puller on the other (most cheap catspaws’ pullers are way too blunt) is very useful for light demo work like this, as they’re a single tool you can just keep in your hand. It’s basically a demolition prybar with a claw and hammer on the opposite end.
Pictured above is the kind I usually use.
This isn’t the link I was thinking of (I was remembering something in the alignment discussion in the early days of lw, but I can’t find it), but this is probably a more direct answer to your request anyway: https://www.lesswrong.com/posts/FgsoWSACQfyyaB5s7/shutdown-seeking-ai
[…] or reward itself highly without actually completing the objective […]
This is standard fare in the existing alignment discussion. See for instance https://www.lesswrong.com/posts/TtYuY2QBug3dn2wuo/the-problem-with-aixi or anything referring to wireheading.
[…] The notion of an argument that convinces any mind seems to involve a little blue woman who was never built into the system, who climbs out of literally nowhere, and strangles the little grey man, because that transistor has just got to output +3 volts: It’s such a compelling argument, you see.
But compulsion is not a property of arguments, it is a property of minds that process arguments.
[…]
And that is why (I went on to say) the result of trying to remove all assumptions from a mind, and unwind to the perfect absence of any prior, is not an ideal philosopher of perfect emptiness, but a rock. What is left of a mind after you remove the source code? Not the ghost who looks over the source code, but simply… no ghost.
So—and I shall take up this theme again later—wherever you are to locate your notions of validity or worth or rationality or justification or even objectivity, it cannot rely on an argument that is universally compelling to all physically possible minds.
Nor can you ground validity in a sequence of justifications that, beginning from nothing, persuades a perfect emptiness.
[…]
The first great failure of those who try to consider Friendly AI, is the One Great Moral Principle That Is All We Need To Program—aka the fake utility function—and of this I have already spoken.
But the even worse failure is the One Great Moral Principle We Don’t Even Need To Program Because Any AI Must Inevitably Conclude It. This notion exerts a terrifying unhealthy fascination on those who spontaneously reinvent it; they dream of commands that no sufficiently advanced mind can disobey. The gods themselves will proclaim the rightness of their philosophy! (E.g. John C. Wright, Marc Geddes.)
The truth is probably somewhere in the middle.
Not a complete answer, but something that helps me, that hasn’t been mentioned often, is letting yourself do the task incompletely.
I don’t have to fold all the laundry, I can just fold one or three things. I don’t have to wash all the dishes, I can just wash one more than I actually need to eat right now. I don’t have to pick up all the trash laying around, just gather a couple things into an empty bag of chips.
It doesn’t mean anything, I’m not committing to anything, I’m just doing one meaningless thing. And I find that helps.
Climbing the ladder of human meaning, ability and accomplishment for some, miniature american flags for others!
“Non-trivial” is a pretty soft word to include in this sort of prediction, in my opinion.
I think I’d disagree if you had said “purely AI-written paper resolves an open millennium prize problem”, but as written I’m saying to myself “hrm, I don’t know how to engage with this in a way that will actually pin down the prediction”.
I think it’s well enough established that long form internally coherent content is within the capabilities of a sufficiently large language model. I think the bottleneck on it being scary (or rather, it being not long before The End) is the LLM being responsible for the inputs to the research.
Bing told a friend of mine that I could read their conversations with Bing because I provided them the link.
Is there any reason to think that this isn’t a plausible hallucination?
Regarding musicians getting paid ridiculous amounts of money for playing gigs, I’m reminded of the “Making chalk mark on generator $1. Knowing where to make mark $9,999.” story.
The work happens off-stage, for years or decades, typically hours per day starting in childhood, all of which is uncompensated; and a significant level of practice must continue your entire life to maintain your ability to perform.
My understanding is that M&B is intended to be broader than that, as per:
“So it is, perhaps, noting the common deployment of such rhetorical trickeries that has led many people using the concept to speak of it in terms of a Motte and Bailey fallacy. Nevertheless, I think it is clearly worth distinguishing the Motte and Bailey Doctrine from a particular fallacious exploitation of it. For example, in some discussions using this concept for analysis a defence has been offered that since different people advance the Motte and the Bailey it is unfair to accuse them of a Motte and Bailey fallacy, or of Motte and Baileying. That would be true if the concept was a concept of a fallacy, because a single argument needs to be before us for such a criticism to be made. Different things said by different people are not fairly described as constituting a fallacy. However, when we get clear that we are speaking of a doctrine, different people who declare their adherence to that doctrine can be criticised in this way. Hence we need to distinguish the doctrine from fallacies exploiting it to expose the strategy of true believers advancing the Bailey under the cover provided by others who defend the Motte.” [bold mine]
http://blog.practicalethics.ox.ac.uk/2014/09/motte-and-bailey-doctrines/
I’m deeply suspicious of any use of the term “violence” in interpersonal contexts that do not involve actual risk-of-blood violence, having witnessed how the game of telephone interacts with such use, and having been close enough to be singed a couple times.
It’s a motte and bailey: the people who use the word as part of a technical term clearly and explicitly disavow the implication, but other people clearly and explicitly call out the implication as if it were fact. Accusations of gaslighting sometimes follow.
It’s as if “don’t-kill-everyoneism” somehow got associated with the ethics-and-unemployment branch of alignment, but then people started making arguments that opposing, say, RLHF-imposed guardrails for proper attribution, implied that you were actively helping bring about the robot apocalypse, merely because the technical term happens to include “kill everyone”.
Downside of most any information being available to use from any context, I guess.
The corresponding arbital page is now (apparently) dead.