Robin, care to name the next Rubicon? Or a next Rubicon?
Michael, you can help depression by thinking, “I wish my brain would stop releasing these depression neurotransmitters.” The thought doesn’t command your brain, but it does keep you from being helplessly caught up in the feeling and swept away; it stops you from concluding that your life is inherently, absolutely awful and immedicable, or from thinking up still more reasons why you should be depressed.
Taking an antidepressant is obviously an act of rebellion against a part of one’s brain (by another part of one’s brain, of course!).
I think that giving anyone who hasn’t shown they can build a Friendly AI the ability to modify their own brain circuitry is like giving a loaded gun to a 2-year-old. And it’s not that being able to build a Friendly AI means you know enough to modify yourself; it means you know for yourself why you shouldn’t. Modifying a system as messy as a human has to be left to something smarter than human; hence the point of designing a much cleaner Friendly AI that (provably correctly) self-improves to the point where it can handle the job.