Interested in math, Game Theory, etc.
Pattern
It might be a browser compatibility issue?
This should be spoilered. I typed it, and didn’t copy-paste it.
This seems like it might be useful to post to that subreddit.
What are people using to load and analyze the data?
Is your writing online anywhere?
Usually posts with “D&D.Sci” in the title.
As a speaker of a native language that has only gender-neutral pronouns and no gendered ones, I often stumble and misgender people out of disregard of that info because that is just not how referring works in my brain. I suspect that natives don’t have this property and the self-reports are about them.
What language is this?
It reminds me of a move made in a lawsuit.
But you said that I should use orange juice as a replacement because it’s similarly sweet.
Does ChatGPT think tequila is sweet, orange juice is bitter...or is it just trying to sell you drinks?*
tequila has a relatively low alcohol content
Relative to what ChatGPT drinks, no doubt.
And tequila doesn’t have any sugar at all.
*Peer pressure you into drinking it, maybe.
At best this might describe some drinks that have tequila in them. Does it know the difference between “tequila” and “drinks with tequila”?
Does ChatGPT not differentiate between sweet and sugar, or is ChatGPT just an online bot that improvises everything, and gaslights you when it’s called on it? It keeps insisting:
...”I was simply pointing out that both orange juice and tequila can help to balance out the flavors of the other ingredients in the drink, and that both can add a nice level of sweetness to the finished beverage.”...
Does someone want to try the two recipes out and compare them?
these success stories seem to boil down to just buying time, which is a good deal less impressive.
The counterpart to ‘faster vaccination approval’ is ‘buying time’, though. (Whether or not it ends up being well used, it is good at the time.) The other reason to focus on it: how much can you affect pool testing versus vaccination approval speed? Other stuff, like improving statistical techniques, might be easier for a lot of people than changing a specific organization.
Overall this was pretty good.
That night, Bruce dreamt of being a bat, of swooping in to save his parents. He dreamt of freedom, and of justice, and of purity. He dreamt of being whole. He dreamt of swooping in to protect Alfred, and Oscar, and Rachel, and all of the other good people he knew.
The part about “purity” didn’t make sense.
Bruce would act.
This is a bit of a change from before—something more about the mistake seems like it would make more sense, not worry. (‘Bruce would get it right this time’, or something about ‘Bruce would act (and it would make things better this time)’.) ‘Bruce wouldn’t be afraid’, maybe?
I was thinking
The rules don’t change over time, but what if, on...the equivalent of the summer solstice, fire spells get +1 fire mana or something? I.e., periodic behavior. Wait, I misread that. I meant more like: rules might be different, say, once every hundred years (anniversary of something important) - like there are more duels that day, so you might have to fight multiple opponents, or something.
This is a place where people might look at the game flux, and go ‘the rules don’t change’.
Our world is so inadequate that seminal psychology experiments are described in mangled, misleading ways. Inadequacy abounds, and status only weakly tracks adequacy. Even if the high-status person belongs to your in-group. Even if all your smart friends are nodding along.
It says he started with the belief, not that he was right, or that he ended with it. Keeping the idea contained to the source, so it’s clear it’s not being asserted as true, could be improved, yes.
This is what would happen if you were magically given an extraordinarily powerful AI and then failed to align it,
Magically given a very powerful, unaligned AI. (This ‘the utility function is in code, in one place, and can be changed’ assumption needs re-examination. Even if we assert it exists in there*, it might be hard to change in, say, a NN.)
* Maybe this is overgeneralizing from people, but what reason do we have to think an ‘AI’ will be really good at figuring out its utility function (so it can make changes to itself without changing it, if it so desires)? The postulate ‘it will be able to improve itself, so eventually it’ll be able to figure everything out (including how to do that)’ seems to ignore things like ‘improvements might make it more complex and harder to do that while improving.’ Where and how do you distinguish between ‘this is my utility function’ and ‘this is a bias I have’? (How have you improved this, and your introspection abilities? How would a NN do either of those?)
One important factor seems to be that Eliezer often imagines scenarios in which AI systems avoid making major technical contributions, or revealing the extent of their capabilities, because they are lying in wait to cause trouble later. But if we are constantly training AI systems to do things that look impressive, then SGD will be aggressively selecting against any AI systems who don’t do impressive-looking stuff. So by the time we have AI systems who can develop molecular nanotech, we will definitely have had systems that did something slightly-less-impressive-looking.
Now there’s an idea: due to competition, AIs do impressive things (which aren’t necessarily safe). An AI creates the last advance that, when implemented, causes a FOOM + bad stuff.
Eliezer appears to expect AI systems performing extremely fast recursive self-improvement before those systems are able to make superhuman contributions to other domains (including alignment research),
This doesn’t necessarily require the above to be right or wrong—human-level contributions (which aren’t safe) could, worst case scenario...etc.
[6.] Many of the “pivotal acts”
(Added the 6 back in when it disappeared while copying and pasting it here.)
There’s a joke about a philosopher king somewhere in there. (Ah, if only we had an AI powerful enough to save us from AI, but still controlled by...)
I think Eliezer is probably wrong about how useful AI systems will become, including for tasks like AI alignment, before it is catastrophically dangerous.
I think others (or maybe the OP previously?) have pointed out that AI can affect the world in big ways well before ‘taking it over’. Whether it’s domain-limited, or ‘sub-/on par with/super-’ ‘human performance’, doesn’t necessarily matter (though more power → more effect is the expectation). Some domains are big.
Spoilering/hiding questions. Interesting.
Do the rules of the wizards’ duels change depending on the date?
I’ll aim to post the ruleset and results on July 18th (giving one week and both weekends for players). If you find yourself wanting extra time, comment below and I can push this deadline back.
The dataset might not have enough info for this/rules might not be deep enough, but a wizards’ duel between analysts, or ‘players’, also sounds like it could be fun.
I think that is a flaw of comments, relative to ‘google docs’. In a long document, if comments don’t tag the referenced area, it can be hard to find other people asking the same question you did, even if someone wondered about the same section. (And the difficulty of ascertaining that quickly seems unfortunate.)
It also possesses the ability to levitate and travel through solid objects.
How is it contained?
It’s still a trivial inconvenience sometimes, but:
Two tabs:
one for writing the response comment while reading
one for the reading
Note, sometimes people downvote typo comments. It doesn’t happen often, but sometimes it seems to happen after the author fixes the typo?
For example, if our function measures the probability that some particular glass is filled with water, the space near the maximum is full of worlds like “take over the galaxy and find the location least likely to be affected by astronomical phenomena, then build a megastructure around the glass designed to keep it full of water”.
If the function is ‘fill it and see it is filled forever’, then strange things may be required to accomplish that (to us) strange goal.
Idea:
Don’t specify our goals to AI using functions.
Flaw:
Current deep learning methods use functions to measure error, and AI learns by minimizing that error in an environment of training data. This has replaced the old paradigm of symbolic AI, which didn’t work very well. If progress continues in this direction, the first powerful AI will operate on the principles of deep learning.
Even if we build AI that doesn’t maximize a function, it won’t be competitive with AI that does, assuming present trends hold. Building weaker, safer AI doesn’t stop others from building stronger, less safe AI.
Do you have any idea how to do “Don’t specify our goals to AI using functions.”? How are you judging “if we build AI that doesn’t maximize a function, it won’t be competitive with AI that does”?
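For concreteness, the ‘functions to measure error’ setup the quoted flaw describes looks roughly like the sketch below (plain Python with made-up numbers, purely illustrative, not anyone’s actual training code):

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # toy (x, y) training pairs

def error(w):
    # The "goal" is specified entirely by this function: mean squared error on the data.
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w = 0.0    # a single parameter; real systems have billions
lr = 0.05  # learning rate
for _ in range(200):
    # Finite-difference gradient; real deep learning uses backprop, but the idea is the same.
    grad = (error(w + 1e-5) - error(w - 1e-5)) / 2e-5
    w -= lr * grad

print(f"learned w ~ {w:.2f}, error ~ {error(w):.3f}")

Whatever that error function fails to capture about what we actually want, the training process doesn’t care about; that’s the sense in which the goal is ‘specified using a function’.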
Idea:
Get multiple AIs to prevent each other from maximizing their goal functions.
Flaw:
The global maximum of any set of functions like this still doesn’t include human civilization. Either a single AI will win, or some subset will compete among themselves with just as little regard for preserving humanity as the single AI would have.
Maybe this list should be numbered.
This one is worse than it looks (though it seems underspecified). Goal 1: some notion of human flourishing. Goal 2: prevent goal 1 from being maximized. (If this is the opposite of 1, you may have just asked to be nuked.)
Idea:
Don’t build powerful AI.
Flaw:
For all that ‘a plan that handles filling a glass of water, generated using time t’ ‘is flawed’ - this one could actually work. Now, one might object that some particular entity will try to create powerful AI anyway. While there might be incentives to do so, it’s still worth trying to set limits, or see safeguards deployed (if the AI managing air conditioning isn’t part of your AGI research, add those safeguards now).
This isn’t meant as a pure ‘this will solve the problem’ approach, but that doesn’t mean it might not work (thus ensuring AIs handling cooling/whatever at data centers meet certain criteria).
Once it exists, powerful AI is likely to be much easier to generate or copy than historical examples of dangerous technologies like nuclear weapons.
There’s a number of assumptions here which may be correct, but are worth pointing out.
How big a file do you think an AI is?
1 MB?
1 TB?
Compression aside, what hardware can run this program/software you are imagining (and how fast)?
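For a rough sense of scale (illustrative parameter counts only, not claims about any particular system), a raw weight file is roughly parameter count times bytes per parameter:

def weight_file_gb(params, bytes_per_param=2):  # 2 bytes/parameter is roughly half precision
    return params * bytes_per_param / 1e9

for params in (1e9, 10e9, 100e9):
    print(f"{params:.0e} parameters -> about {weight_file_gb(params):,.0f} GB at 2 bytes each")

So ‘copying the file’ and ‘having hardware that can run it at a useful speed’ are fairly different questions.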
undesirable worlds near the global maximum.
There’s a lot of stuff in here about maxima. It seems like your belief that ‘functions won’t do’ stems from a belief that maximization is occurring. Maximizing a function isn’t always easy, even at the level of ‘find the maximum of this function mathematically’. That’s not to say that what you’re saying is necessarily wrong, but suppose some goal is ‘find out how this protein folds’. It might be a solvable problem, but that doesn’t mean it is an easy problem. It also seems like, if the goal is to fill a glass with water, then the goal is achieved when the glass is filled with water.
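As a toy illustration of ‘maximizing a function isn’t always easy’ (the function and numbers below are made up purely for illustration): naive hill-climbing finds whichever local maximum it starts nearest to, not the global one.

import math

def f(x):
    # Made-up objective: a small bump near x = -1.5 and a taller one near x = 2.0.
    return math.exp(-(x + 1.5) ** 2) + 2 * math.exp(-(x - 2.0) ** 2)

def gradient_ascent(x, lr=0.05, steps=500):
    # Repeatedly step uphill along a numerically estimated gradient.
    for _ in range(steps):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x += lr * grad
    return x

for start in (-2.0, 0.5, 3.0):
    x = gradient_ascent(start)
    print(f"start {start:+.1f} -> x ~ {x:+.2f}, f(x) ~ {f(x):.2f}")

Starting at -2.0 it settles on the lower peak; the other starts find the higher one. Finding the true global maximum of a genuinely complicated function (a protein-folding energy landscape, say) is far harder than this.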
Yeah. When something is very unclear, it’s like:
Is it good or bad? It’s impossible to decipher, I can’t tell. Is it true or false? No way to tell. (It doesn’t happen often, but when it does, it’s usually downvoted.)
ETA: I’m not sure at the moment what other aspects there are.
Still reading the rest of this.
“Playful Thinking” (curiosity-driven exploration) may be serendipitous for other stuff you’re doing, but there isn’t a guarantee. Pursuing it because you want to might help you learn things. Overall it is (or can be) one of the ways you take care of yourself.
A focus only on value which is measurable in one way can miss important things. That doesn’t mean that way should be ignored, but by taking more ways into account, you might get a more complete picture. If in the future we have better ways of measuring important things, and you want to work on that, maybe that could make a big difference.
Overall, ‘I have to be able to justify why I’m working on this’ seems like the wrong approach. That’s definitely the case when it’s applied to how you spend ALL of your time. It becoming the default isn’t justified. (‘Your desire to learn will help you learn.’ ‘That sounds inefficient.’ ‘What?’)