I’ve been reading about the difficult problem of building an intelligent agent A that can prove that a more intelligent version of itself, A′, will behave according to A’s values. It made me start wondering: what does it mean when a person “proves” something to themselves or others? Is it the mental state change that’s important? The external manipulation of symbols?
I think you would actually want to use hydrogen. It would essentially be a really powerful light gas gun.
That’s not really related though. I’m asking “what if you build a gun with nukes as propellant?”, not “what if you build a plane that rocket jumps through air/space?”. The idea is to impart the highest fraction of a single bomb’s energy onto a payload. Orion is pretty wasteful in terms of energy conversion.
I think all you need to do to release the payload is to stop flicking it, so that part should be easy.
If one were to build a cannon (say, a large, thick pipe buried deep underground) and use a nuclear bomb as propellant, could one achieve anything interesting? For example, boosting a first-stage payload to orbit, or perhaps to Earth escape velocity? The only prior art I know of for this is the Pascal-B nuclear test shot.
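For a rough sense of scale, here’s a back-of-envelope sketch in Python. The yield, payload mass, and energy-transfer fraction are all numbers I made up for illustration:

```python
import math

# Back-of-envelope: muzzle velocity of a nuclear-propelled cannon.
# All inputs below are assumed values for illustration.
YIELD_KT = 1.0            # assumed bomb yield, kilotons of TNT
JOULES_PER_KT = 4.184e12  # standard conversion: 1 kt TNT = 4.184e12 J
PAYLOAD_KG = 1000.0       # assumed 1-tonne payload
EFFICIENCY = 0.01         # assumed fraction of yield delivered to the payload

kinetic_energy = YIELD_KT * JOULES_PER_KT * EFFICIENCY
velocity = math.sqrt(2 * kinetic_energy / PAYLOAD_KG)  # from KE = (1/2) m v^2

print(f"Muzzle velocity: {velocity:,.0f} m/s")
print("Earth escape velocity: ~11,200 m/s")
```

Even capturing just 1% of a 1 kt yield would put a 1-tonne slug around 9 km/s, so the raw energy is clearly there; the hard parts are surviving the acceleration and the real-world losses.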
I’m really glad to see this post. I think you hit most of the major points, and made most of the strong arguments in favor. Might I recommend you add a section rebutting arguments against? I have advocated for this to many of my friends, and I’ve heard:
Isn’t this a lot like slavery? Or some kind of fractional slavery?
How can young children enter into a contract like this that might bind them for the rest of their lives without getting fucked over?
What prevents parents from maliciously selling all of their child’s future income for short-term gain? (And correspondingly: shady corporations from buying it)
How should this be enforced? What happens if they default?
Won’t this prevent people from entering ?
Won’t this prevent students from becoming “well-rounded”?
Won’t people take short-term gains and not really think long-term about this?
I’m not convinced investors will help the people they’ve invested in… (yes, I know this seems silly, but I’ve really had to rebut this argument)
I worked backwards from “no one in the educational system is directly incentivized to help children grow up into happy, productive adults” and derived that financing exactly like this was the ideal path to building a system that actually works. I’m really glad to see other people thinking these thoughts, and I would love to figure out how to make this a reality.
Why don’t you tell him the reasons you don’t want him to swear? I assume you have reasons, but maybe you’ve never needed to articulate them before. I’m guessing your reason is something along the lines of “lower status people swear, I don’t want people to think of you as lower status”. I imagine a high IQ 10-year-old can understand that.
Also, if swear words are a fun and exciting thing to him, why not teach him all the swear words so he can increase his status among his friends?
I really enjoyed that.
Instead of simply saying “fight confirmation bias”, give reasons why people should fight it. Being less wrong is often not rewarding enough for people.
Fighting confirmation bias makes you sexy! It will make your peers all think you’re smart! You’ll feel better about yourself!
You say “many of us among the academically gifted derive a huge amount of self-worth from thinking that WE ARE RIGHT.” How could you redirect some of that perceived potential self-worth gain into fighting confirmation bias?
I feel like the dragon parable correctly shows, if anything, negative progress being made towards dealing with the dragon, until suddenly, it is dealt with. I suppose one difference is that the anti-dragon projectile seems so much more achievable and imaginable than a cure for aging.
http://systemsandus.com/ uses + and − to denote it, and I guess they just assume you can mostly keep track. I feel like it works on simple diagrams.
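For what it’s worth, the bookkeeping is mechanical: a loop is reinforcing if the product of its link signs is positive, and balancing if it’s negative. A toy sketch, with a made-up two-link loop:

```python
# Loop polarity in a causal loop diagram: multiply the link signs around
# the loop; +1 means reinforcing, -1 means balancing.
# The example loop below is made up for illustration.
loop = [
    ("births", "population", +1),  # more births -> more population
    ("population", "births", +1),  # more population -> more births
]

polarity = 1
for src, dst, sign in loop:
    polarity *= sign

print("Reinforcing" if polarity > 0 else "Balancing")  # -> Reinforcing
```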
I don’t think your game is sequential if Player 2 doesn’t know Player 1’s move.
You really have two games:
Game 1: a sequential game in which Player 1 chooses A or B/C; that choice determines whether Game 2 occurs.
Game 2: a simultaneous game in which Player 2, if reached, chooses X or Y against Player 1’s unknown selection of B/C.
edit: And the equilibrium case for Player 1 in the second game is an expected payout of 2, so he should always choose A.
I think you can just compute the Nash Equilibria. For example, use this site: http://banach.lse.ac.uk/
The answer appears to be “always pick A”. Player 2 will never get to move.
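If you’d rather compute it locally than use that site, the nashpy library will enumerate equilibria. Since I don’t have the post’s actual payoffs in front of me, the matrices below are placeholders, purely to show the mechanics:

```python
import numpy as np
import nashpy as nash  # pip install nashpy

# Placeholder 2x2 payoffs for the B/C-vs-X/Y subgame -- made-up numbers,
# just to demonstrate the computation.
A = np.array([[3.0, 0.0],
              [1.0, 2.0]])  # Player 1's payoffs (rows: B, C)
B = np.array([[0.0, 3.0],
              [2.0, 1.0]])  # Player 2's payoffs (columns: X, Y)

game = nash.Game(A, B)
for sigma1, sigma2 in game.support_enumeration():
    p1_value = sigma1 @ A @ sigma2  # Player 1's expected payout at equilibrium
    print(f"P1 mix: {sigma1}, P2 mix: {sigma2}, P1 expected payout: {p1_value:.2f}")
```

Then compare Player 1’s expected payout in that subgame with what A pays outright; if A pays more, “always pick A” is the equilibrium of the whole game, and Player 2 never moves.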
I think you make a good point. I kind of cheated in order to resolve the story quickly. You still have the problem that a sufficiently powerful black box can potentially tell the difference between training and reality, and that the function it’s optimizing has to be perfectly innocuous, or you can get negative consequences. For instance, a GPT-n that optimizes for “continuable outputs” sounds pretty good, but could lead to this kind of problem.