If I’m hearing you correctly, the plan to limit trolls is to push them to make fewer posts inside the auto-hidden areas and more posts outside of them?
Well, uh, I suppose that’s one way to deal with trolls.
Theoretically, the more sockpuppets you have, the easier it would be to give each one karma.
Then again I don’t think sockpuppets are really a significant problem at the moment. Hopefully they won’t grow with these changes.
Perhaps I can clarify my objections to exploding ‘want’ into want/like/approve.
To me it feels like Ellyn Satter takes ‘-want/-like/+approves’ food behaviors then transforms the ‘-like’ and ‘-want’ variables into ‘+like’ and ‘+want’.
Comparatively, I feel like TFN takes ‘+want/+like/-approves’ food behaviors then transforms the ‘-approves’ variable into ‘+approves’.
Ellyn Satter’s stuff is all about behavior and desire modification, but everything I’ve read of TFN emphasizes approval more. In both cases you end up with ‘+want/+like/+approves’, which is a better psychological result. But TFN gets only that, while Satter’s approach gets that plus a better physical result as well. I think TFN would disagree with the original quote, while Satter would agree with caveats.
Ellyn Satter’s website nominally supports “eating what you want as much as you want” but glosses over the whole “… as long as what you want isn’t actually what you want, but is instead these other things that you could learn to eat, as long as you’ve got rigorously enforced habits.” I should note that Ellyn Satter seems exclusively focused on children, and so a large part of her work is about controlling what they have access to and shaping their desires. It is definitely not advocating limitless access to whatever you want, and is actually very strict about time-based and content-based access to foods.
I guess in the literal sense she advocates ‘eating whatever you like’, but the modified definition of ‘like’ that Ellyn Satter uses is not what I’d consider unrestrained.
Also I should note that it is a much better cite.
Maybe there is serious disagreement among academics, researchers, or even registered dieticians, but that’s not a good cite for it.
Citing a non-dietician selling training on ‘permission to eat’ at $75 a pop is like citing a Christian faith healer as evidence that Christianity is true.
Evolutionary signals are useful inasmuch as they retain the same context as they had in the evolutionary environment. In other words we’re adaptation executors instead of fitness maximizers.
Our evolutionarily derived pleasure and pain signals are still useful even if they are noisy, because the noise is systematic and can be compensated for. E.g., we evolved to crave calories because they were scarce then, but they aren’t scarce now; we can note this and compensate. However, we still live in a ~20% oxygen environment and going without breathing is as bad for us as ever, so we should listen to our lungs’ signals when they tell us to breathe.
That’s a good talk. It starts slow but ends on an important lesson about rational personal responsibility. I know we’re not supposed to use ‘rational’ as a descriptor like that (e.g., rational shoes), but it’s so different from what people normally talk about as responsibility. It’s the idea that, no matter what happens, the only thing your actions count for is what actually happens as a result.
Sadly, it’s not covered well in the Sequences. I think Trying to Try is the closest there is. I know Eliezer believes in it, though, because it’s in HPMoR. Still, this talk is a great example of it and I’m glad to have seen it.
Edit: Disregard me, NancyLebovitz is correct. Something to Protect covers it exceptionally well. So does the Newcomb’s Problem post.
Thank you for the links, they were exactly what I was looking for.
As for friendly upload FOOMs, I consider the chance of one happening at random about equivalent to the chance of FAI happening at random.
Even if your post remains voted down, I thought it was funny.
Ah. Well then, that is a significant change. How long had you been working out at that level prior to starting the diet?
Like, could you have been in the (1-2 mo.) muscle building phase right before starting paleo, then the weight loss phase kicked in just as you started it? Or had you been working out for years at that level then decided to start paleo?
> I started recording my weight daily simultaneous with beginning the diet [...]

> When I started my diet I went to the gym 4-6 days a week, alternating between running and weight-lifting. I currently go to the gym 4-6 days a week, alternating between running and weight-lifting.
So, you started going to the gym a lot, you started measuring your weight daily, and you lost weight and got healthier. (And also happened to change your diet.)
That’s not really an experiment with Paleo. Not unless you’d already been going to the gym like that and paying that much attention to what you ate and how much you weighed.
Edit: I suppose that the “when I started” statement could be read two ways, one of which would imply that you already worked out that hard (6 days per week) prior to paleo. Though it seems odd you’d be able to do so and still be able to lose 20% of your bodyweight in fat, so I’ll assume for now that’s not what it was.
Question: Why don’t people talk about Ems / Uploads as just as disastrous as uncontrolled AGI? Has there been work done or discussion about the friendliness of Ems / Uploads?
Details: Robin Hanson seems to describe the Em age like a new industrial revolution. Eliezer seems, well, wary of them, but doesn’t seem to treat them like an existential threat. Nick Bostrom, though, does see them as an existential threat. A lot of people on LessWrong seem to talk of it as the next great journey for humanity, and not just a different name for uFAI. For my part, I can’t imagine uploads ending up good. I literally can’t imagine it. Every scenario I’ve tried to imagine ends with a Bad End.
As soon as the first upload is successful, patient zero will realize he’s got unimaginable (brain)power, will start talking in ALL CAPS, and will go FOOM on the world. Bad End. For the sake of argument, let’s say we get lucky and the first upload is incredibly nice and just wants to help people. Eventually the second, or the third, or the twenty-fifth upload decides to FOOM over everybody. Still a Bad End. We need some way to restrain Ems from FOOM-ing, and we need to figure it out before we start uploading. Okay, let’s pretend we could even invent a restraint that works against a determined transhuman who is unimaginably more intelligent than us...
Maybe we’ll get as far as, say, Hanson’s Em society. Ems make copies of themselves tailored to situations to complete work. Some of these copies will choose to / be able to replicate more than others; these copies will inherit this propensity to replicate; eventually, processor-time / RAM-time / hard-disk space will become scarce, and things won’t be able to copy as freely and will have to fight to not have their processes terminated. Welp… replication, inheritance, and selection: those are the three ingredients required to invoke the evolution fairy. Except instead of the Darwinian evolution we’re used to, this new breed will employ a terrifying mix of uFAI self-modification and Lamarckian super-evolution. Bad End. Okay, but let’s say we find some way to stop THAT...
What about other threats? Ems can still talk to one another and convince one another of things. How do we know they won’t all be hijacked by meme-viruses and transformed Agent Smith style? That’s a Bad End. Or hell, how do we know they won’t be hijacked by virus-viruses? Bad End there too. Or one of the trillions of Ems could build a uFAI and it goes FOOM into a Bad End. Or… The potential for Bad Ends is enormous, and you only need one for the end of humanity.
It’s not like flesh-based humans can monitor the system. Once ems are in the 1,000,000x era, they’ll be effectively decoupled from humanity. A revolution could start at 10pm after the evening shift goes home, and by the time the morning shift gets in, it’s been 1,000 years in Em subjective time. Hell, in the time it takes to swing an axe and cut the network/power cable, they’ve had about a month to manage their migration and dissemination to every electronic device in the world. Any regulation has to be built inside the Em system and, as mentioned before, it has to be built before we make the first successful upload.
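To put rough numbers on that (a minimal sketch; the 10-hour overnight gap and the 2-second axe swing are assumed figures, not anything from Hanson):

```python
SPEEDUP = 1_000_000  # Ems running at a million times human speed

def subjective_seconds(wall_seconds):
    """Convert wall-clock seconds to Em-subjective seconds at the given speedup."""
    return wall_seconds * SPEEDUP

# Overnight gap between shifts: assume ~10 wall-clock hours (10pm to 8am).
overnight = subjective_seconds(10 * 3600)
print(overnight / (3600 * 24 * 365.25))  # ~1,140 subjective years

# Swinging an axe at the power cable: assume ~2 wall-clock seconds.
axe = subjective_seconds(2)
print(axe / (3600 * 24))  # ~23 subjective days, roughly a month
```

If anything, the “1,000 years overnight” figure above is slightly conservative.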
Maybe we can build an invincible regulator or regulatory institution to control it all. But we can’t let it self-replicate, or we’re right back at the evolution problem. And we can’t let it be modified by the outside world, or we have the hijacking problem again. And we can’t let it self-modify, or it’ll evolve in ways we can’t predict (and we’ve already established that it’ll be outside of everything else’s control). So now we have an invulnerable regulator/regulation system that needs to control a world of trillions. And once our Ems start living in 1,000,000x space, it needs to keep order for literally millions of subjective years without making a single mistake. So we need to design a system perfect enough to never make an error while handling trillions of agents for millions of years?
That strikes me as a problem that’s just as hard as FAI. There seems to be no way to solve it that doesn’t involve a friendly AGI controlling the upload world.
Can anyone explain to me why Ems are looked at as a competing technology to FAI, instead of as an existential risk with probability ~1.0?
The ancient Egyptians don’t have any incentive to leave records of this embarrassing occurrence. If anything, they would want to cover this event up so as not to be ridiculed by neighboring nations or by their posterity who would view them as weak.
It’s not just about history books and monuments. It’s about every facet of life that gets affected. For example, when the Black Death hit Europe, we can see massive changes to everything.
Economists, archeologists, and historians, for example, can trace the massive economic disruption of the Black Death. The deaths of a large percentage of the population created economic pressures, increasing the demand for workers. You then see greater economic mobility for peasants because of that demand, creating a freer marketplace for labor. Every written word we have regarding economic exchange from that time notes the massive inflation of peasant wages (along with lords grumbling that peasants were getting uppity and greedy by demanding wages). But it’s not just that. We can see in the state of buildings from the period how working conditions improved, as lords suddenly had to compete for peasant labor.
Peasant wages skyrocketed, in some cases by 500-1,000%. And even though long-distance trade went down, consumer-goods trade went up, since peasants could now afford more. What’s more, archeologists have looked at those times and noted a sharp decline in exotic goods and wealth in elite holdings, and a sharp increase in goods and tools found in peasant houses. The proof is not just in words (although it’s there too); it’s in the ground.
We can look back not just at records but at the land itself (keeping in mind that there are extensive records too). Year after year, lords stopped trying to cultivate land; we can look at a field and see how decreases in labor translated into more fallow fields. Additionally, we can look in trash piles and note the increase in animal bones. Animals could be raised on land without much labor, so as you became unable to work land for agriculture, you could increase animal production to compensate. What’s more, we can also note products in the trash piles: dramatic shifts in clothing as wool and leather replaced plant-based fabrics, and low-labor crops like apples, grapes, and vegetables replacing high-labor crops like wheat.
This is just one aspect of it. You can look at the sizes and styles of buildings during that era, and note how the Sondergotik, Brick Gothic, and Rectilinear architectural styles all suddenly appeared at the same time. You can note how technology development and usage changed. You can look at public works. You can look at weapons and armor in war. You can look at the mass graves from plague deaths. You can look at the bones of those who died before and after the plague and note the nutritional differences. An event like that creates massive ripple effects that can be seen in every aspect of life.
I used the Black Death as an example because it’s the most dramatic and most famous shift, but similar results appear for every civilization that experienced dramatic events like wars, plagues, or natural disasters. We can look at the Greco-Persian wars and see the impact on villas and peasant homes and trash piles, etc. We can look at the end of the Zhou Dynasty in China and see the effects on trade and trash piles and buildings, etc. But we can’t look at the plagues in Egypt and the exodus of the Jews. Every piece of evidence, not just writings and monuments but trash piles, agricultural field samples, architecture, and graves, shows that the stories in the Bible never occurred. There never were any plagues, there never was a massive die-off of first-born sons, there never were a bunch of Jews who left. It simply never happened.
It was just a fairy tale made up out of whole cloth by the Bible’s authors, a complete fabrication.
NB: As a meta-note you can make quotes by using the > command. So instead of using quote marks to quote, you can quote…
> like this.
Scenario 1 is still the bolder claim. Egypt wasn’t that far from Israel, and any member of the village could have actually gone to Egypt. Jerusalem, Israel, to Giza, Egypt, is about a two-week journey on foot. Tel Aviv, Israel, to Alexandria, Egypt, would be about a three-day boat ride with biblical-era technology. Yes, it’s annoying to travel that far, but ancient traders did it all the time to trade goods.
As it turns out, those stories in Exodus were complete lies and fabrications. There is little evidence any significant number of Jews were ever in Egypt during that time period, and zero evidence the plagues ever occurred. The Bible was willing to lie about something so massive it would have made all the history books, and been carved on every monument. That’s really, really bold.
Additionally, we don’t have hundreds of reports of Jesus’ resurrection. We have one report saying that hundreds of people saw it, and that one report was written down a hundred years after Jesus’ death. If I claim that my great-grandfather rose from the grave in 1912, it doesn’t become any more credible if I also claim that 1,000 people saw it. It would be silly to say that, since 1,000 is twice as many as 500, my great-grandfather’s resurrection is twice as likely as Jesus’. The authorities didn’t bother discrediting it at the time any more than the CIA bothers discrediting Elvis sightings. There was nothing there for them to discredit.
> One of those things that Paul was telling King Agrippa about was the death, burial, and resurrection of Jesus Christ. This is arguably the boldest and most daring claim of the entire scriptures, Old and New Testament. Think about it.
No, it’s not. Not even close. You seem unable to distinguish between ‘claims that are bold and daring’ and ‘claims that are important to my faith’. Claiming that some guy came back from the dead for a couple of days, then disappeared again, but that we totally have witnesses, is not a bold claim.
The entire population of Earth being wiped out in a flood is a bold claim.
Two entire cities getting destroyed by supernatural means is a bold claim.
An entire world power getting torn asunder by a series of supernatural plagues is a bold claim.
Jesus’ resurrection isn’t a bold claim. The other claims require unfathomable property damage and loss of life on a multiple-world-war scale. Jesus’ claim requires only about as many people as went to my high school prom all agreeing to tell a lie. That’s what Eliezer means about the difference between large and small miracles.
If the simulation is really accurate, then the GLUT would enter an infinite loop if he uses an ‘always do the opposite’ strategy.
I.e., “Choose either heads or tails. The oracle predicts you will choose [heads/tails].” If his strategy is ‘choose heads because I like heads’, then the oracle will correctly predict it. If his strategy is ‘do what the oracle says’, then the oracle can choose either heads or tails, predict that, and get it correct. If his strategy is ‘flip a coin and choose what it says’, then the oracle will predict that action and, if it is a sufficiently powerful oracle, get it correct by modeling all the physical interactions that could change the state of the coin.
However, if his strategy is ‘do the opposite’, then the oracle will never halt. It will enter an infinite recursion, choosing heads, then tails, then heads, then tails, and so on until it crashes. It’s no different from an infinite loop in a computer program.
It’s not that the oracle is inaccurate. It’s that a recursive GLUT cannot be constructed for all possible agents.
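A minimal Python sketch of the same point (all names here are illustrative), recasting the oracle’s task as a search for a consistent prediction, i.e. a fixed point, over the two possible announcements:

```python
def oracle(agent):
    """Search for a prediction the agent will actually fulfil (a fixed point)."""
    for prediction in ("heads", "tails"):
        if agent(prediction) == prediction:
            return prediction  # announcing this prediction keeps it correct
    raise RuntimeError("no consistent prediction exists for this agent")

def likes_heads(prediction):
    return "heads"  # ignores the oracle; "heads" is the unique fixed point

def obedient(prediction):
    return prediction  # does whatever the oracle says; any prediction works

def contrarian(prediction):
    return "tails" if prediction == "heads" else "heads"  # no fixed point

print(oracle(likes_heads))  # heads
print(oracle(obedient))     # heads (tails would be equally correct)
try:
    print(oracle(contrarian))
except RuntimeError as e:
    print(e)  # the 'always do the opposite' strategy defeats the oracle
```

In the naive formulation, where the oracle literally simulates the agent (which consults the oracle, which simulates the agent...), the same failure shows up as non-termination rather than an exception; either way, no consistent GLUT entry for the contrarian agent can exist.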
Khan Academy would probably be your best bet. It’s free, it’s in a visual medium, and the instructors are incredibly good. You can also find their videos by searching YouTube for “Khan Academy [topic]”; however, YouTube only has the lectures and doesn’t have the practice problems. There are lots of other free math tutoring options, but Khan Academy is the only one I know of that is accessible to children. (Also, this post would probably have been best in this month’s open thread, but I don’t think anyone will mind too much.)
You don’t need to leave the site after getting your answer though. There’s lots to see here. You might want to check out the welcome thread for new members or take a look at the FAQ to see what we’re about.
Mental note, never challenge army1987 to a foot race.
Sorry, I was imprecise. I consider it likely that eventually we’ll be able to make uFAI, but unlikely that any particular project will make uFAI. Moreover, we probably won’t get appreciable warning for uFAI because if researchers knew they were making a uFAI then they wouldn’t make one.
Thus, we have to adopt a general strategy, since we can’t target any specific research group. Sabotage does not scale well, and would only drive research underground while imposing social costs on us. The best bet, then, is to promote awareness of uFAI risks and try to have friendliness theory completed by the time the first AGI goes online. Not surprisingly, this seems to be what SIAI is already doing. Discussion of sabotage just harms that strategy.
It’s been noted before, but it’s a great read.
Your post in particular is a very nice reminder to me because I’m currently in the process of procrastinating on writing. I sat down at my computer with the express intention to write, I thought, “Oh god, what if nobody likes my story,” and then I came to LessWrong to intentionally procrastinate because I was afraid I wouldn’t get it right the first time. I’d better go back and churn out some words. Thanks for the reminder, sir.