That doesn’t look right—if she just flipped H, then THT is also eliminated. So the renormalization should be:
HH: 1⁄2
HT: 0
THH: 1⁄4
THT: 0
TTH: 1⁄4
TTT: 0
Which means the coin doesn’t actually change anything.
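As a sanity check on that arithmetic (a minimal sketch; the fair-coin priors 1/4, 1/4, 1/8, 1/8, 1/8, 1/8 are my assumption, chosen because they reproduce the numbers above):

```python
from fractions import Fraction

# Prior probability of each flip sequence (assumed fair coin).
prior = {
    "HH":  Fraction(1, 4), "HT":  Fraction(1, 4),
    "THH": Fraction(1, 8), "THT": Fraction(1, 8),
    "TTH": Fraction(1, 8), "TTT": Fraction(1, 8),
}

# Condition on "she just flipped H": keep only sequences ending in H.
kept = {seq: p for seq, p in prior.items() if seq.endswith("H")}
total = sum(kept.values())
posterior = {seq: p / total for seq, p in kept.items()}

print(posterior)  # {'HH': Fraction(1, 2), 'THH': Fraction(1, 4), 'TTH': Fraction(1, 4)}
```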
Took it. Comments:
Hopefully you have a way to filter out accidental duplicates (e.g. a hidden random ID field or some such), because I submitted the form by accident several times while filling it out. (I was doing it from my phone, and basically any slightly missed touch on the UI resulted in an accidental submission.)
Multiple-choice questions should always have a “none” option of some kind, because once you select a radio button there’s no way to deselect it. Most of them did, but not all.
I answered “God” with a significant probability because, the way the definition is phrased, I would say it includes whoever is running the simulation if the simulation hypothesis is true. I’m sure many people interpreted it differently. I’d suggest making this distinction explicit one way or the other next time.
Your “dimensionless” example isn’t dimensionless; the dimensions are units of (satandate − whalefire).
You only get something like a Reynolds number when the units cancel out, so you’re left with a pure ratio that tells you something real about your problem. Here you aren’t cancelling out any units; you’re just neglecting to write them down, and scaling things so that outcomes of interest happen to land at 0 and 1. Expecting special insight to come out of that operation is numerology.
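To make the contrast concrete, here is the dimensional bookkeeping for the Reynolds number (standard fluid mechanics, not from the original post): density times velocity times length over dynamic viscosity, with every unit cancelling.

```latex
\[
\mathrm{Re} \;=\; \frac{\rho\, u\, L}{\mu}
\;=\; \frac{(\mathrm{kg\,m^{-3}})\,(\mathrm{m\,s^{-1}})\,(\mathrm{m})}
           {\mathrm{kg\,m^{-1}\,s^{-1}}}
\;=\; 1
\]
```

No such cancellation happens on the (satandate − whalefire) scale; the units are merely suppressed.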
Great article other than that, though. I hadn’t seen this quote before: “We have practically defined numerical utility as being that thing for which the calculus of mathematical expectations is legitimate.” For me that really captures the essence of it.
Hi all, I’m Jeff.
I’ve started mentally rewarding myself with a happy thought and a smile when I catch myself starting a bad habit (“Hey! I noticed!”) instead of castigating myself (“Doh! I’m doing it again!”). Seems to work so far; we’ll see how it goes.
I started using the Pomodoro technique today (pick a task, work on it for 25 minutes, break for 5, repeat). I’ve had to adjust it somewhat to deal with interruptions during the day, but that wasn’t too hard: when I get done with the interruption, I just have less time before the next break. (I’m keeping the breaks at :25 and :55 to make it easier to keep track.)
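In code, the anchoring scheme looks something like this (a minimal sketch; the function is mine, not part of any Pomodoro tool):

```python
def minutes_until_next_break(minute_now: int) -> int:
    """Breaks stay anchored at :25 and :55 of each hour, so an
    interruption simply shortens the current work block."""
    for anchor in (25, 55):
        if minute_now < anchor:
            return anchor - minute_now
    return (60 - minute_now) + 25  # next break is :25 of the following hour

print(minutes_until_next_break(40))  # an interruption ending at :40 leaves 15 minutes
```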
There are a number of minor tasks that I’d been putting off for weeks (or months) that I finished today, just because I was stuck in the middle of a 25-minute assignment and wasn’t allowing myself to switch to something “more important” until the block ended. So far, Pomodoro is very promising.
I allocated the first time block this morning to scraping together my notes and planning for the week. I didn’t get a plan made, but I did realize how ridiculously overcommitted I was once I started thinking of tasks in terms of available half-hour slots.
I also have a strong aversion to posting my writing publicly, especially if it reveals anything personal about myself. So this post right here is a direct attempt to overcome that by just doing it. I’m not sure if this is using any specific technique from the minicamp, or just making use of the crazy mental energy from the camp while I’m still feeling it.
During-meetup report: is the meetup still on? Brandon and his sign aren’t here, and I don’t see a likely group. The waitress had no idea who I was asking about.
Two different baby showers, though. I could join one of those instead.
Update: located one other LWer. We talked about the sequences and whatnot for an hour; then I had to go. On my way out discovered the table with five more folks.
Lesson for next meetup: bigger sign.
I don’t see how this differs at all from Searle’s Chinese room.
The “puzzle” is created by the mental picture we form in our heads when hearing the description. For Searle’s room, it’s a clerk in a room full of tiles, shuffling them between boxes; for yours, it’s a person sitting at a desk scratching on paper. Since the consciousness isn’t that of the human in the room, where is it? Surely not in a few scraps of paper.
But plug in the reality of how complex such simulations would have to be to actually simulate a human brain. Picture what the scenarios would look like running on sufficient fast-forward that we could converse with the simulated person.
You (the clerk inside) would be utterly invisible; you’d live billions of subjective years for every simulated nanosecond. And, since you’re just running a deterministic program, you would appear no more conscious to us than an electron appears conscious as it “runs” the laws of physics.
What we might see instead is a billion streams of paper, flowing too fast for the eye to follow, constantly splitting and connecting and shifting. Cataracts of fresh paper and pencils would be flowing in, somehow turning into marks on the pages. Reach in and grab a couple of pages, and we could see how the marks on one seemed to have some influence on those nearby; but if we tried to follow any actual stimulus through to a response, we would get lost in a thousand divergent flows that somehow recombine somewhere else moments later to produce an answer.
It’s not so obvious to me that this system isn’t conscious.
One of the most popular such ideas is to replicate the brain by copying the neurons and seeing what happens. For example, IBM’s Blue Brain project hopes to create an entire human brain by modeling it neuron for neuron, without really understanding why brains work or why neurons do what they do.
No, the Blue Brain project (no longer affiliated with IBM, AFAIK) hopes to simulate neurons to test our understanding of how brains and neurons work, and to gain more such understanding.
If you can simulate brain tissue well enough that you’re reproducing the actual biological spike trains and long-term responses to sensory input, you can be pretty sure that your model is capturing the relevant brain features. If you can’t, it’s a pretty good indication that you should go study actual brains some more to see if you’re missing something. This is exactly what the Blue Brain project is: simulate a brain structure, compare it to an actual rat, and if you don’t get the same results, go poke around in some rat brains until you figure out why. It’s good science.
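A toy sketch of that compare-and-refine loop (the `simulate` callable, the tolerance, and the match metric are all invented here for illustration; the real project compares much richer electrophysiological data):

```python
def spike_match_fraction(simulated_ms, recorded_ms, tolerance_ms=5.0):
    """Fraction of recorded spike times with a simulated spike nearby.
    A crude stand-in for real spike-train comparison metrics."""
    matched = sum(
        any(abs(s - r) <= tolerance_ms for s in simulated_ms)
        for r in recorded_ms
    )
    return matched / len(recorded_ms)

def validate(simulate, stimulus, recorded_ms, threshold=0.9):
    """If the model's output diverges from the recording, that's the
    signal to go back and study the real tissue some more."""
    score = spike_match_fraction(simulate(stimulus), recorded_ms)
    return score >= threshold, score
```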
Where in this system would you place a thorough and accurate, but superficial, model that describes the phenomenon? If I’ve made a lot of observations, collected a lot of data, and fit very good curves to it, I can do a pretty good job of predicting what’s going to happen—probably better than you, in a lot of cases, if you’re constrained by a model that reflects a true understanding of what’s going on inside.
If we’re trying to predict where a baseball will land, I’m going to do better with my practiced curve-fitting than you are with your deep understanding of physics.
Or for a more interesting example, someone with nothing but pop-psychology notions of how the brain works, but lots of experience working with people, might do a far better job than me at modeling what another person will do, no matter how much neuroscience I study.
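To make the baseball case concrete, here is the curve-fitter’s approach as a minimal sketch (the trajectory data are invented):

```python
import numpy as np

# Hypothetical observations of a ball's flight: horizontal distance x (m)
# vs. height y (m), with a little measurement noise. No physics model used.
x_obs = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
y_obs = np.array([1.5, 6.2, 8.4, 8.1, 5.3])

# "Practiced curve-fitting": fit a parabola straight to the data.
coeffs = np.polyfit(x_obs, y_obs, 2)

# Predict where the ball lands (y = 0): take the larger real root.
roots = np.roots(coeffs)
landing = max(r.real for r in roots if abs(r.imag) < 1e-9)
print(f"Predicted landing distance: {landing:.1f} m")
```

No knowledge of gravity, drag, or spin is required; the fit encodes whatever regularities are in the data.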
...to answer myself, I guess this could be seen as a variation on stage 1: you have a formula that works really well, but you can’t explain why it works. It’s just that you’ve created the formula yourself by fitting it to data, rather than being handed it by someone else.
[Edit: changed “non-generative” to “superficial”]
Ah, I misunderstood the comment. I just assumed that Gallo was in on it, and the claim was that customers of Gallo failing to complain constituted evidence of wine tasting’s crockitude.
If Gallo’s wine experts really did get taken in, then yes, that’s pretty strong evidence. And since Gallo is the largest winery, I’m sure they have many experts checking their wines regularly—too many to realistically be “in” on such a scam.
So you’ve convinced me. Wine tasting is a crock.
If “top winery” means “largest winery”, as it does in this story, I don’t see how it says anything about the ability of tasters to tell the difference. Those who made such claims probably weren’t drinking Gallo in the first place.
They were passing off as expensive something that’s actually cheap. Where else would that work so easily, for so long?
I think it’s closer to say they were passing off as cheap something that’s actually even cheaper.
Switch the food item and see if your criticism holds:
Wonderbread, America’s top bread maker, was conned into selling inferior bread. So-called “gourmets” never noticed the difference! Bread tasting is a crock.
Part of the problem stems from different uses of the word “caution”.
There is a range of possible outcomes for the earth’s climate (and the resulting cost in lives and money) over the next century, from “everything will be fine” to “catastrophic”; there is also uncertainty over the costs and benefits of any given intervention. So what should we do?
Some say, “Caution! We don’t know what’s going to happen; let’s not change things too fast. Keep our current policies and behaviors until we know more.”
Others say, “Caution! We don’t know what’s going to happen, and we’re already changing things (the atmosphere) very quickly indeed. We need to move quickly politically and economically in order to slow down that change.”
For most people it seems that caution means: assume things will continue on more or less the same and be careful about changing your behavior, rather than seek to avoid a high risk of catastrophic loss.
Discussions about runaway AI often take a similar turn. People will come up with a list of reasons why they think it might not be a problem: maybe the human brain already operates near the physical limit of computation; maybe there’s some ineffable quantum magic thingy that you need to get “true AI”; maybe economics will continue to work just like it does in econ 101 textbooks and guarantee a soft transition; maybe it’s just a really hard problem and it will be a very long time before we have to worry about it.
Maybe. But there’s no good reason to believe any of those things are true, and if they aren’t, then we have a serious concern.
Personally, I think it’s like we’re driving blindfolded with the accelerator pressed to the floor. There’s a guy in the other seat who says he can see out the window, and he’s yelling “I think there’s a cliff up ahead—slow down!” We’re suggesting he not be too hasty.
But I can see the other side, too: if we radically changed policy every time some crank declared that doom was at hand, we’d be much worse off.
Another form of argumentus interruptus is when the other suddenly weakens their claim, without acknowledging the weakening as a concession.
I used to do this quite often. Usually in personal conversations rather than online, because I would get caught up in trying to win. I didn’t really notice I was doing it until I heard someone grumbling about such behavior and realized I was among the guilty. Now I try to catch myself before retreating, and make sure to acknowledge the point.
So not much to add, other than the encouraging observation that people can occasionally improve their behavior by reading this sort of stuff.
It seems like you missed one hypothesis: maybe you’re mistaken about the people in question, and they actually never were all that intelligent. They achieved their status via other means. It’s an especially plausible error because they have high status—surely they must have got where they are by dint of great intellect!
Define a “representative” item sample as one coming from a study containing explicit statements that (a) a natural environment had been defined and (b) the items had been generated by random sampling of this environment.
Can you elaborate on what this actually means in practice? It doesn’t make much sense to me, and the paper you linked to is behind a paywall.
(It doesn’t make much sense because I don’t see how you could rigorously distinguish between a “natural” or “unnatural” environment for human decision-making. But maybe they’re just looking for cases where experimenters at least tried, even without rigor?)
Serious nitpicking going on here. The whole point of my post is that from the information provided, one should arrive at probabilities close to what I said.
It’s not “nitpicking” to calibrate your probabilities correctly. If someone were to answer “innocent” with probability 0.999, they should be wrong about one time in a thousand.
So what evidence was available to achieve such confidence? No DNA, no bloodstains, no phone calls, no suspects fleeing the country, no testimony. Just a couple of websites. People make stuff up on websites all the time. I wouldn’t have assigned .999 probability to the hypothesis that there even was a trial if I hadn’t heard of it (glancingly) prior to your post.
[edit: I’m referring only to responders who, like me, based their answer on a quick read of the links you provided. Of course more evidence was available for those who took the time to follow up on it, and they should have had correspondingly higher confidence. I don’t think your answer was wrong based on what you knew, but it would have been horribly wrong based on what we knew.]
I’ve seen the paper, but it assumes the point in question in the definition of partially rational agents in the very first paragraph:
If these agents agree that their estimates are consistent with certain easy-to-compute consistency constraints, then… [conclusion follows].
But people’s estimates generally aren’t consistent with his constraints, so even for someone who is sufficiently rational, it doesn’t make any sense whatsoever to assume that everyone else is.
This doesn’t mean Robin’s paper is wrong. It just means that faced with a topic where we would “agree to disagree”, you can either update your belief about the topic, or update your belief about whether both of us are rational enough for the proof to apply.
I think there’s another, more fundamental reason why Aumann agreement doesn’t matter in practice. It requires each party to assume the other is completely rational and honest.
Acting as if the other party is rational is good for promoting calm and reasonable discussion. Seriously considering the possibility that the other party is rational is certainly valuable. But assuming that the other party is in fact totally rational is just silly. We know we’re talking to other flawed human beings, and either or both of us might just be totally off base, even if we’re hanging around on a rationality discussion board.
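For reference, the result being discussed (Aumann 1976, “Agreeing to Disagree”; this informal statement is mine, not quoted from the thread):

```latex
% Agents i = 1, 2 share a common prior P and receive private
% information, giving each a partition element P_i(w) of the state space.
\[
q_i = P\!\left(A \mid \mathcal{P}_i(\omega)\right), \qquad
\text{if } (q_1, q_2) \text{ is common knowledge, then } q_1 = q_2.
\]
```

The hidden premises are exactly the ones questioned above: the common prior, and common knowledge of both parties’ full Bayesian rationality.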
I was unfamiliar with the case. I came up with:
1 − 20%
2 − 20%
3 − 96%
4 − probably in the same direction, but no idea how confident you were.
From reading other comments, it seems like I put a different interpretation on the numbers than most people. Mine were based on times in the past that I’ve formed an opinion from secondhand sources (blogs etc.) on a controversial issue like this, and then later reversed that opinion after learning many more facts.
Thus, about 1 time in 5, after being convinced by a similar story of how some innocent person was falsely convicted and then getting more facts, I have changed my mind about their innocence. Hence the 20%.
I don’t think it’s correct to put any evidential weight on the jury’s ruling. Conditioning on the simple fact that their ruling is controversial screens off most of its value.
Parrots and other birds seem to be about that intelligent, and octopi are close.
Perhaps that’s an argument for the difficulty of the chimp-to-human jump: (nearly) ape-level intelligence has evolved multiple times, so getting that far can’t be too hard, yet most lineages plateaued there.