New Year’s Predictions Thread
I would like to propose this as a thread for people to write in their predictions for the next year and the next decade, when practical with probabilities attached. I’ll probably make some in the comments.
I’m 90% confident that the cinematic uncanny valley will be crossed in the next decade. The number applies to movies only; it doesn’t apply to humanoid robots (1%) or video game characters (5%).
Edit: After posting this, I thought that my 90% estimate was underconfident, but then I remembered that we started the decade with Jar-Jar Binks and Gollum, and it took us almost ten years to reach the level of Emily and Jake Sully.
Is there a reason Avatar doesn’t count as crossing the threshold already?
Because the giant blue Na’vi people are not human.
You mean you didn’t notice the shots with the simulated humans in Avatar? ;-)
Avatar and Digital Emily are the reasons why I’m so confident. Digital actors in Avatar are very impressive, and as a (former) CG nerd I do think that Avatar has crossed the valley—or at least found the way across it—I just don’t think that this is proof enough for general audience and critics.
I think that before the critics are satisfied, one would have to make an entirely CGI film that wasn’t sci-fi or fantastical in its setting or characters.
Something like a Western that had Clint Eastwood and Lee Van Cleef from their Sergio Leone glory days, alongside modern-day Western stars like Christian Bale, or... that Australian guy who was in 3:10 to Yuma. If we were to see CGI movies such as I mentioned, made with the Avatar tech (or Digital Emily), then I am sure the critics and public would sit up and take notice (and immediately launch into how it was really not CGI at all, but a conspiracy to hide immortality technology from the greater public).
Exactly. I was thinking about something like an Elvis Presley biopic, but your example will do just fine (except that I don’t think that vanilla westerns are commercially viable today).
Vanilla Westerns?!? There is Nothing Vanilla about a Sergio Leone Western! And Clint Eastwood’s Unforgiven was an awesome western, as were Silverado and 3:10 to Yuma (and there are even more that have made a fair killing at the box office).
Westerns are not usually thought of as blockbusters, though, but they do draw a big enough crowd to be profitable.
If one were to draw together Lee Van Cleef, Clint Eastwood, and Eli Wallach from their Sergio Leone days with some of the big names in action flicks today to make a period Western that starred all of these people… I think you’d have a near blockbuster...
However, the point is really that using this technology one would be able to draw upon stage or film actors of any period or genre (where we had a decent image and voice recording) and to be able to mix actors of the past with those of today.
I just happen to have a passion for a decent Horse Opera. Pity that Firefly was such crap… decent Horse Opera is really no different from a decent Space Opera. Something like Trigun or Cowboy Bebop
Not sure whether it’s been fully crossed, but it’s close.
By 2015 we had a CGI-on-top-of-body-double Paul Walker and audiences weren’t sure when the clips of him were real ones. Rogue One had full-CGI Tarkin and Leia, though those were uncanny for some viewers (and successful for others). Can’t think of another fully CGI human example.
(No, non-human humanoids still don’t count, as impressive as Thanos was.)
You don’t think that the Valley will be crossed for video games in the next ten years?
Considering how rapidly digital technologies make it from the big screen to the small, I’m guessing that we will see the uncanny valley crossed (for video games) within 2 years of its being crossed in films (the vast majority of digital films having crossed it).
Part of the reason is that the software packages that do things like Digital Emily (mentioned below) are so easy to buy now. They no longer cost hundreds of thousands of dollars, as they did in the early days of CGI; even major Autodesk packages that used to sell for $25,000 can now be had for only $5,000. That is peanuts compared to the cost of the people who run the software.
I agree with you. The uncanny valley refers to rendering human actors only. It is not necessary to render a whole movie from scratch. It is much more work, but only work.
IMO, The Curious Case of Benjamin Button was the first movie that managed to cross the valley.
My reply is here. BTW, major CG packages like Autodesk Maya and 3ds Max have been at the level of $5,000 and below for over a decade.
I’ve been out of circulation for a while. The last time I priced Autodesk software was in the early 90s, and it was still tens of thousands. I’m just now getting caught up on basic AutoCAD, and I hope to begin learning 3ds Max and Maya in the next year or so. I am astounded at how cheap these packages are now (and how wrong one of my best friends was about how quickly these types of software would become available: in 1989, he said it would be 30 to 40 years before we saw the types of graphics displays and software that were, I have since discovered, pretty much common by 1995)… Thanks for the heads-up, though.
Interesting; it seems that image synthesis is currently further ahead than voice/speech synthesis.
In a way, the uncanny valley has already been crossed—video game characters in some games are sufficiently humanlike that I hesitate to kill them.
I once watched a video of an Iraqi sniper at work, and it was disturbingly similar to what I see in realistic military video games (I don’t play them myself, but I’ve seen a couple.)
Why such a big gulf between your confidence for cinema and your confidence for video games?
Movies are ‘pre-computed’, so you can use a real human actor as a data source for animations, and you have enough editing time to spot and iron out any glitches. In a video game, facial animations are generated on the fly, so all you can rely on is a model that perfectly captures human facial behavior. I don’t think that can be realistically imitated by blending between pre-recorded animations, as is done today with mo-cap animations—e.g. you can’t pre-record eye movement for a game character.
As for the robots, they are also real-time, AND they would need muscle / eye / face movement implemented physically (as a machine, not just software), hence the lower confidence level.
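For concreteness, the ‘blending between pre-recorded animations’ described above looks roughly like this. This is a toy sketch; the face rig and joint names are invented for the example:

```python
def blend_poses(pose_a, pose_b, t):
    """Linearly interpolate two keyframe poses (dicts of joint angles) by weight t."""
    return {joint: (1 - t) * angle + t * pose_b[joint]
            for joint, angle in pose_a.items()}

# Two hypothetical mo-cap keyframes for a toy face rig (angles in degrees)
neutral = {"jaw": 0.0, "brow_left": 0.0, "brow_right": 0.0}
smile = {"jaw": 4.0, "brow_left": 2.0, "brow_right": 2.0}

halfway = blend_poses(neutral, smile, 0.5)  # halfway["jaw"] == 2.0
```

Real engines blend many clips at once with per-bone weights, but the limitation is the same: everything in between is interpolation of what was captured, which is why spontaneous behavior like eye movement is so hard to fake this way.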
The obvious answer would be “offline rendering”.
Even if the non-interactivity of pre-rendered video weren’t an issue, games as a category can’t afford to pre-render more than the occasional cutscene here or there: a typical modern game is much longer than a typical modern movie—typically by at least one order of magnitude, i.e. 15 to 20 hours of gameplay, and the storyline often branches as well. In terms of dollars grossed per hours rendered, games simply can’t afford to keep up. Thus, the rise of real-time hardware 3D rendering in both PC gaming and console gaming.
Rendering is not the problem. I would say that the uncanny valley has already been passed for static images rendered in real time by current 3D hardware (this NVIDIA demo from 2007 gets pretty close). The challenge for video games to cross the uncanny valley is now mostly in the realm of animation. Video game cutscenes rendered in real time will probably cross the uncanny valley with precanned animations in the next console generation but doing so for procedural animations is very much an unsolved problem.
(I’m a graphics programmer in the video games industry so I’m fairly familiar with the current state of the art).
I wasn’t even considering the possibility of static images, because they generally aren’t considered to count in modern video games. The world doesn’t want another Myst game, and I can only imagine one other instance where photorealistic, non-uncanny static images could constitute the bulk of the gameplay: some sort of dialog tree / disguised puzzle game where one or more still characters’ faces changed in reaction to your dialog choices (i.e. something along the lines of a Japanese-style dating sim).
By ‘static images rendered in real time’ I meant static images (characters not animated) rendered in real time (all 3D rendering occurring at 30+ fps). Myst consisted of pre-rendered images which is quite different.
It is possible to render 3D images of humans in real time on current consumer level 3D hardware that has moved beyond the uncanny valley when viewed as a static screenshot (from a real time rendered sequence) or as a Matrix style static scene / dynamic camera bullet time effect. The uncanny valley has not yet been bridged for procedurally animated humans. The problem is no longer in the rendering but in the procedural animation of human motion.
How would you verify a crossing of the uncanny valley? A movie critic invoking it by name and saying a movie doesn’t trigger it?
An ideal indicator would be a regular movie or trailer screening where the audience failed to detect a synthetic actor who played a lead role, or at least had significant screen time during the screening.
There isn’t much financial incentive to CGI a human—if they are just acting like a regular human. That’s what actors are for.
I suppose Avatar is a case in point—it’s worth CGIfying human actors because otherwise they would be totally out of place in the SF environment which is completely CGI.
“There are a number of shots of CGI humans,” James Cameron says. “The shots of [Stephen Lang] in an AMP suit, for instance — those are completely CG. But there’s a threshold of proximity to the camera that we didn’t feel comfortable going beyond. We didn’t get too close.”
http://www.ew.com/ew/gallery/0,,20336893_7,00.html
A killer application for augmented reality is likely to be the integration of communication channels. Today’s cellular phones annoy people with constant accountability and stress, not to mention spotty coverage, but a HUD overlay on daily life could display text messages as they are sent and invite fluid shifts into voice conversation. With video engaged and shared, people could also see what a potential conversation partner is doing before requesting their attention, giving distributed social life some of the fluidity and contextual awareness of natural social life. These sorts of benefits will motivate the teenagers of 2020 to broadcast much of their lives and to interpret the absence of their friends’ data streams as a low-intensity request not to call. Archival will at first be a relatively minor secondary benefit of the technology, but will ultimately widen the divide between public and private life, a disaster for privacy advocates but a boon for academic science (by normalizing the publication of all data). Paranormal beliefs will also tend to decline, as the failure to record paranormal events and the fallibility of memory both become more glaring.
Could you operationalize some of the many predictions and theories embedded in this comment? How would one judge all this? (AR apps like Foursquare are already fairly popular but don’t much resemble traditional theories of what AR would look like.)
Robin Hanson makes a similar prediction in ‘Enhancing Our Truth Orientation’ (pp. 362-363):
On the AR theme: I think there will be a high-level language created within ten years for AR that will try to make the following accessible:
Pulling info off the Internet
Machine vision
Precise overlay rendering
People will want to mash up different AR services in one “view” so you don’t have to switch between them. There needs to be a lingua franca and HTML doesn’t seem suited. I’d think it likely that it will be some XML variant.
Aren’t these more likely to be done by libraries than languages?
I hope not. Something like JSON is far less verbose.
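To illustrate the verbosity point, here is the same hypothetical AR annotation in both encodings. The field names are invented purely for comparison; no real AR schema is implied:

```python
import json

# One made-up AR annotation, encoded as JSON and as an XML equivalent.
annotation_json = '{"lat": 37.77, "lon": -122.42, "label": "Cafe", "layer": "food"}'
annotation_xml = (
    "<annotation><lat>37.77</lat><lon>-122.42</lon>"
    "<label>Cafe</label><layer>food</layer></annotation>"
)

parsed = json.loads(annotation_json)  # parsed["label"] == "Cafe"
```

The XML version repeats every field name as a closing tag, which is where most of the extra bytes come from.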
If AR gets any sort of popularity, even just among early adopters, I guarantee you that there will be several competing tools for doing what you describe, with more coming out every month.
It already has a sort of popularity. There are already startups working in the field.
If you want to keep abreast of the field keep an eye on Bruce Sterling’s Blog.
There are already plenty of supposedly “paranormal” events recorded on Youtube, as well as elsewhere. With the increase of recording devices, many more such things will be recorded, and paranormal beliefs will increase.
I think these are great predictions.
One word: subcultures.
I think we’ll see an expansion to most of the First World of the trend we see in cities like San Francisco, where the Internet has allowed people to organize niche cultures (steampunk, furries, pyromaniacs, etc.) like never before. I think that, by and large, people would prefer to seek out a smaller culture based on a common idiosyncratic interest if it were an option, not least because rising in status there is often easier than getting noticed in the local mainstream culture. I think that the main reason the mainstream culture is presently so large, therefore, is because it’s hard for a juggling enthusiast in Des Moines to find like-minded people.
I expect that over the next 10 years, more and more niche cultures will arise and begin to sprout their own characteristics, with the measurable effect that cultural products will have to be targeted more narrowly. I expect that the most popular books, music, etc. of the late 2010s will sell fewer copies in the US than the most popular books, music, etc. of the Aughts, but that total consumption of media will go up substantially as a thousand niche bands, niche fiction markets, etc. become the norm. I expect that high schoolers in 2020 will spend less social time with their classmates and more time with the groups they met through the Internet.
And I expect that the next generation of hipsters will find a way to be irritatingly disdainful of a thousand cultures at once.
What do you make of criticism that sales currently show the exact opposite trend?
Well, your criticism was correct.
(Though some other trends have obviously reversed- streaming music ate album and single sales, which were increasing rapidly in the iTunes era of the 2000s.)
Thanks for the link! I didn’t know there was already a version of this theory out there, and I didn’t know the actual figures.
So what do I make of this data (assuming the veracity of the Wikipedia summary, since I’m not dedicated enough to read the papers)? Well, I’m surprised by it.
I’m not especially surprised. Aside from possible confounding factors like the rise of Free & free stuff (strongest in subcultures), which obviously wouldn’t get counted in commercial metrics, technological and economic development means that mass media can spread even further than Internet-borne stuff can. Cue anecdotes about Mickey Mouse posters in African huts, etc.
The subcultures seem to me to appeal mostly to the restricted 1st World wealthier demographics that powered the mass media you are thinking of; one might caricature it as ‘white’ stuff. It makes sense that a subculture like anime/manga or FLOSS, which primarily is cannibalizing the ‘white’ market, can shrink ever more in percentage terms as the old ‘white’ stuff like Disney expand overseas into South America, Africa, Southeast Asia and so on.
If you had formulated your thesis in absolute numbers (‘there will be more FLOSS enthusiasts in 2020 than 2010’), then I think you would be absolutely right. You might be able to get away with restricted areas too (‘there will be more otaku in Japan in 2020 than 2010, despite a ~static population’). But nothing more.
You forgot us!
Following up: I was wrong about my most testable prediction. The biggest media hits in the USA are getting proportionally larger, not smaller, though this may be mediated by streaming/ebooks taking away from the traditional outlets.
(If you find more complete sources for any of these, let me know. I restricted to the US because the international market is growing so rapidly it would skew any trends.)
Music: This is obviously confounded by the switch from buying physical albums to streaming music, but in any case, it looks as if I was wrong: the top albums have sold comparable numbers of copies (after averaging out by 5-year increments) since 2005, while the total number of album sales has plummeted. (Maybe people are only buying albums for the most popular artists and massively diversifying their streaming music, but in any case I would have antipredicted the top artist album sales staying constant.)
Books: Total revenue for trade books has stayed remarkably consistent at about $15 billion per year for the past five years; I didn’t find first-half-of-decade results as easily. Top books by print copies might be misleading, but they’re easy to find retrospectively using Publishers Weekly lists like this one. And they’ve been increasing since 2014, though 2013 had the massive outlier of the Fifty Shades series (sigh). Another loss for the theory.
Movies: Domestic box office has been growing slowly, and the biggest domestic hits have been growing rapidly. Essentially, Disney is eating the movie theater market with their big franchises.
And more broadly/vaguely, the US social media landscape looks less like a land of ten thousand subcultures and more like a land of fewer than ten megacultures, each fairly defined by their politics and united on their morality and aesthetics.
So it’s possible that, if we had a really huge, dense, wired city with excellent transportation, we would find a significant subculture of steampunk furries, or vampire gothic lolita hip-hop dance squads? Actually, this sounds a lot like Tokyo.
It’s easy, really. Practice this phrase: “Man, what weirdos.” You just have to selectively overlook the weirdness of your own subculture while recognizing and stigmatizing it in others. It’s an elegant approach.
For the next decade: Videoconferencing.
Thank maths for videoconferencing, enabling working from home (at least occasionally) for every major tech company.
Videoconferencing what, exactly?
I’ve been using it for years. I’m not sure how to correctly expand your sentence, and it shouldn’t be subject to interpretation.
Eliezer seems to be predicting that videoconferencing will become common in the next decade. Yes, some use it now, but it is still not common. I predict that it will not become common until someone uses a utility to modify your appearance so that when you look at the eyes of the person on the screen, your image on the remote end will look like it is looking at the eyes of the person on the other end. This might well be developed in much less than 10 years, however.
I suspect Eliezer is making broad predictions about what is important in the next 10 years. As if someone said smartphone for the next decade in 2000. Not giving too much detail makes it more likely to be true...
coughmakingbeliefspayrentcough
Next Year
Holiday retail sales will be below consensus forecasts leading to some market turmoil in the early part of the year as the ‘recovery’ starts to look shaky (70%).
A developed country will suffer a currency crisis—most likely either the UK, US or one of the weaker Eurozone economies (60%).
A new round of bank failures and financial turmoil as the wave of Option ARM mortgage resets starts to hit and commercial real estate collapses, including at least one major bank failure (a ‘too big to fail’ bank) (75%).
A major terrorist attack in the US (50%) most likely with a connection to Pakistan. The response will be disproportionate to the magnitude of the attack (99%).
Apple will launch a tablet and will aim to do for print media what it has done for music (80%).
Democrats will lose seats in Congress and the Senate in the elections but Republicans will not gain control of either house (70%).
One or more developed countries will see significant civil unrest due to ongoing problems with the economy (50%).
Next Decade
US will undergo a severe currency crisis (more likely) or sovereign default (less likely) (75%).
Developed countries’ welfare states will begin to collapse (state retirement and unemployment benefits and health care will be severely curtailed or eliminated in more than one developed country) (75%).
UK will undergo a severe currency crisis or sovereign default (90%).
One or more countries will drop out of the Euro or the entire system will collapse (75%).
A US state will secede (30%).
A major terrorist attack in the US (50%) most likely with a connection to Pakistan.
I would be very happy to accept a bet with you on those odds if there’s a way to sort it out. I’d define major as any attack with more than ten deaths.
I voted all the betting comments up because I think this is awesome. Does this kind of thing happen often here?
I occasionally offer people bets, but I think this has been the first time for me that the subject of contention is the right shape for betting to be a real possibility.
Do you have a PayPal account? I’d be willing to wager $50 USD to be paid within 2 weeks of Jan 1st 2011 if you’re interested. I can provide my email address. That would rely on mutual trust but I don’t know of any websites that can act as trusted intermediaries. Do you know of anything like that?
For $50, trust-based is OK with me.
How about this wording? “10 or more people will be killed on US soil during 2010 as the result of a deliberate attack by a party with a political goal, not overtly the act of any state”. And if we hit an edge case where we disagree on whether this has been met, we’ll do a poll here on LW and accept the results of the poll. Sound good?
I’d like to change the wording slightly to “on US soil, or on a flight to or from the US” if that’s alright with you (even though I think an attack on an aircraft is less likely than an attack not involving aircraft). A poll here sounds like a fair way to resolve any dispute. I expect to still be reading/posting here fairly regularly in a year but I’m also happy to provide my email address if you want.
Do you think this was a terrorist attack? http://en.wikipedia.org/wiki/Fort_Hood_shooting
The term “terrorism” is usually taken to mean an attack on civilians, though as a legal matter, this is far from settled. This definition would exclude the Fort Hood shooting, where the targets were soldiers. In any case, the bet is over non-state, politically motivated killing, which is broader and would include Fort Hood, I think.
FWIW: The targets at Fort Hood were soldiers, but predictably-disarmed soldiers. In the area Hasan attacked, the soldiers he shot at aren’t allowed to carry weapons or even have them within easy reach. So it’s more analogous to shooting up a bar frequented by soldiers that takes your weapons at the door.
Plus, his attack was intended to spread terror, not to achieve a military objective (any weakness he inflicted on the army capability itself was probably a secondary goal).
I was going to ask whether people would classify the recent attack on the IRS building in Texas as terrorism. It wouldn’t qualify for the bet either way because there was only 1 casualty but I’m curious if people think it would count as terrorism?
Bob Murphy’s post, excerpting Glen Greenwald, summarizes my position very well. In short:
1) What Stack did meets the reasonable definition of terrorism: “deliberate use of violence against noncombatants to achieve political or social goals by inducing terror [in the opposing population]”.
2) Most of what the government is classifying as terrorism isn’t. Fighting an invading army, no matter how unjust your cause may be, is not terrorism. Whatever injustice you may be committing does not additionally count as terrorism. Yet the label is being applied to insurgents.
3) It’s in the government’s interest, having taken over the terrorism label, that Stack not be called a terrorist, because he seems too (otherwise) normal. People want to think of terrorists as being “different”; a middle-aged, high-earning programmer ain’t the image they have in mind, and if he were, they’d be more resistant to making concessions in the name of fighting terrorism.
Excellent question! If such an attack happens this year, I’d say it wasn’t a terrorist attack, but if mattnewport felt that it was I’d pay out without making a poll.
I’d lean towards saying it was a terrorist attack but I’m sufficiently uncertain about how to classify it that I’d be happy to let a community poll settle the question.
Could you email me so I have your address too? paul at ciphergoth.org. Thanks!
I had limited Internet access over the New Year; I’ve sent you an email.
I think I won this one—have emailed the address you sent me. Thanks!
EDIT: paid in full—many thanks!
Fine with me. My email is paul at ciphergoth.org. How exciting!
Re: “10 or more people will be killed on US soil during 2010 as the result of a deliberate attack by a party with a political goal, not overtly the act of any state”.
How come “Pakistan” got dropped? A contributing reason for the claim being unlikely was that it was extremely specific.
From the wording, it seemed that the 50% was for any attack, not just one with Pakistan involved. I think I’m on to a pretty good bet even without it. It’s not as unlikely as a US state seceding, but I didn’t want to wait ten years :-)
The US State seceding is something that many of my friends sit around contemplating. We have had speculations about whether it will be a state like Mississippi, or South Carolina (Red), or if it will be a state like California or Oregon (Blue).
It’s pretty easy to understand why the Red States might wish to secede from the heathen atheistic socialist nazi USA… But the motivations for a Blue State are a bit more complex.
For instance, in California, I have noticed a lot of people complaining about how much money this state pays into Social Security, yet only gets back about 10% of that money. If we were able to get back all of it, instead of supporting states like South Carolina or Mississippi, we would be able to go a long way toward solving many of our own social ills. Not to mention that many in CA chafe under having to belong to the same union as states such as those I have mentioned, and thus have issues with being able to even pursue social solutions that might pay off big (Stem Cell research, Legalization & regulation of narcotics, work and skills training for inmates—and socialization skills for the same, infrastructure work to which the USA is slow to commit, and so on).
All of these are also issues that Red States like to brag about being able to focus on if they were to secede. The only problem with most Red States is, just like in the Civil War, they have little to no economy of their own. Texas (Maybe Florida) is really the exception. Also, should a Red State secede, most of the best and brightest would flee the state (Academics usually don’t like working under ideological bonds, for instance).
It will be interesting to see what would happen should a state try to secede. I think it could be the best thing that could happen to our country if things continue to become divisive.
That’s why California’s economy is 20 billion plus in the red. And has been for years. Fine fiscal management.
That’s why your governor has gone begging Washington for a bailout. Off of our backs, not yours.
You folks should secede. You’d save the rest of us from yourselves.
-- Born and bred in California, escaped the insanity as soon as I could.
As I understand it, our economy is in such dire straights because most of the money in CA’s taxes leaves the state instead of staying in it.
I could be wrong about that. I am mostly dealing with facts I have obtained from Gov’t web sites, so the data could be skewed.
Your statement only deals with the management and not the fiscal reality of the cash flow in CA. It is true that we have a financial shortfall, but that could be the case with anyone, even if they made billions of dollars a year if all of that money was being taken by another party. No management in the world would be able to help in that situation.
Texas is another big tax donor state, yet they mostly run budget surpluses. The difference is that California doesn’t bother to balance its out-of-control spending with its revenues.
Texas, though, doesn’t contribute more to the US budget than they get out, and… I hate to say this… both GW Bush and his predecessor in the Governor’s office did pretty good jobs managing the state.
Under Rick Perry, they have had some tremendous problems (I am from Texas, and technically it is still a state of residence for some of my bills). Texas and California are, however, the only two states (NY possibly an exception, but only barely) that could really stand as an independent country in this day and age (both did so in the past, under very different conditions).
Upon thinking about it a bit: CA does have a more out-of-control spending problem. I still think the problem could be remedied by a more equitable share of its federal tax money (not just Social Security) being returned to the state. Regardless of whether that happened, fiscal responsibility is needed; it doesn’t do any good to increase an income if the expenses rise disproportionately.
Actually, Texas does contribute more than they get back. Texas gets 94% of federal tax contributions back. California gets 79% back.
http://www.taxfoundation.org/blog/show/1397.html
Based on that map we can also see that more agriculturally focused states do well from federal tax dollars. I assume this is mostly farm subsidies.
You are correct about federal taxing vs. spending with respect to states.
California’s uniquely awful budget crisis is mainly due to the state’s constitutional amendment that requires a supermajority to raise state taxes (and the fact that it’s never in the Republican minority’s political interest to agree to a tax hike), along with the lawmakers’ shortsighted tendency to cut taxes when the economy was in great shape.
(N.B: it’s spelled “dire straits”.)
I knew that I wasn’t imagining that bit about federal taxing vs. spending.
I was also aware of the supermajority thing. Although I wonder exactly how much of a Republican Schwarzenegger really is (I hope I spelled his name right; I can’t be bothered to find out). He has many beliefs about the rule of law and government that I find very at odds with the Republicans, and all I can really find that binds them together is his extreme misogyny and love of guns (alright, I could look further and find more, I am sure, but my point is that he is really a populist candidate/politician who just happened to land in the Republicans’ back yard).
CA’s budget crises can also be traced to several Texas energy companies (does Enron mean anything to anyone?) that gouged the state with all kinds of manipulative practices during the late 90s/early 00s.
Also, never mind that California is responsible for around 12%-14% of the USA’s total economy, or that we have a GDP all on our own of around 2 trillion dollars (the largest in the USA; I believe we are right behind England or France in total GDP)… Yeah, never mind all that (to the naysayers of California).
Oh, I see—sorry!
I looked into who was going to win such a bet.
http://en.wikipedia.org/wiki/List_of_assassinations_and_acts_of_terrorism_against_Americans
...looks like a reasonable resource on the topic.
I’m not sure that the acts of a single person with no associations with anyone else are really the sort of thing I had in mind, but it’s too late to refine the bet now, so we’ll see whether people think such a thing counts if we need to.
“10 or more people [...] as the result of a deliberate attack” seems to suggest that 10 assassinations in 2010 would probably not qualify—unless it was proved that they were all linked. My summary of the link is that there have been few terrorist attacks against Americans on American soil recently.
Agreed.
What makes you think 2010 is the year? I mean, this has even been floating around lately. And at 99%^h^h^h50% confidence!
That was 99% confidence that the response will be disproportionate to the magnitude of the attack, if an attack takes place, not 99% confidence that there will be an attack. My odds of an attack were 50%. I think an attack is fairly unlikely to be on an aircraft—security is relatively tight on aircraft compared to other possible targets.
I’ll agree that if anything happens, or even if something doesn’t (is thwarted), the response will be silly and disproportionate. However, I still think you’re way too high with 50%.
You must specify disproportionately high, or disproportionately low.
I thought disproportionately high went without saying (but then I would with a confidence level that high, wouldn’t I?)
A declaration of war, curtailment of liberties, or other expenditure of resources more than ten times the loss of resources (including life, which is not priceless) it tries to prevent.
Is there a standard method for assigning a numerical value to liberties?
The money those people would pay to avoid the loss of liberty, had they the option.
That’s a valid measure, but it would require a fairly complicated study to actually get a value for it.
And it’s complicated by loss aversion.
I’ve added this prediction to PredictionBook: http://predictionbook.com/predictions/1565 based on the description at http://wiki.lesswrong.com/wiki/Bets_registry
So now that 2010 is more than half over with no attack that I know of, have you or mattnewport’s opinions changed?
(I notice that domestic terrorism seems kind of spiky—quite a few in one year, and none the next: http://en.wikipedia.org/wiki/Category:Islamist_terrorism_in_the_United_States omits entire years but has several in one year, like 2007 or 2009.)
I am more confident of winning, as you’d expect. But I’m finding it counterintuitive to adjust my subjective probability for losing the bet in proportion to the portion of the year that’s elapsed, which means either my initial probability was too low or my current one is too high.
Incidentally, if you have a specific probability for an event occurring in 1 out of 365 days, say, or not occurring at all, you could try to calculate exactly what probability to give it occurring in the rest of the year (considering that it’s August): http://www.xamuel.com/hope-function/ / http://www.gwern.net/docs/1994-falk
(Actually calculating the new probability is left as an exercise for the reader.)
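Taking up that exercise: assuming a uniform prior over which day the event falls on (the setup behind the hope-function links above), the update is a one-liner. The 50% prior and the ~212 days elapsed by early August are illustrative numbers plugged in for this thread's bet, not anyone's official estimate:

```python
def updated_probability(p_year, days_elapsed, days_total=365):
    """Posterior probability that the event still occurs this year,
    given that it has not occurred in the first `days_elapsed` days,
    under a uniform prior over which day it would fall on."""
    # Prior probability mass on the remaining days of the year
    p_remaining = p_year * (days_total - days_elapsed) / days_total
    # Prior probability of having seen no event so far
    p_not_yet = 1 - p_year * days_elapsed / days_total
    return p_remaining / p_not_yet

# A 50% prior of an attack sometime in 2010, updated after
# roughly 212 uneventful days (early August):
print(round(updated_probability(0.5, 212), 3))  # → 0.295
```

Note the posterior (~30%) is higher than naively scaling 50% by the remaining fraction of the year (~21%) would give, which matches the intuition in the comment above that the adjustment shouldn't be simply proportional to elapsed time.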
A US state will secede (30%).
I will take a bet on this, if you like. Also, did you perhaps mean “attempt to secede”, or are you predicting actual success? I’ll take the bet either way.
You’ll have to define what constitutes an attempt.
Perhaps a vote goes through the state legislature in favor of secession?
On further reflection I think I need to revise my estimate down somewhat. Thinking on it further my 30% estimate is conditional on general trends that I think are more likely than not to occur but I did not correctly incorporate them into the estimate for secession. I think 10-15% is probably a better estimate taking that into account.
I think the political pressure for secession will stem from an extended period of economic weakness in the US and widespread fiscal crises in states like California and New York. If, as seems likely, federal aid is seen to go disproportionately to certain states that have the most troubled finances then the states that feel they are losing out will begin to see secession as an attractive option. My original estimate did not sufficiently account for the possibility that I am wrong about the economic troubles ahead however.
I would still be willing to take a bet at these odds, given some reasonably clear-cut definition of “attempt to secede”.
I think we could probably hammer out a mutually agreeable definition but the decade time frame for a pay out makes a bet on this impractical I feel. I’m reasonably comfortable making a bet to be settled next January but a bet to be settled in 2020 doesn’t seem practical through an agreement on a forum.
So close on Brexit, but just missed the deadline.
Britain was in the EU, but it kept Pounds Sterling, it never adopted the Euro.
http://en.wikipedia.org/wiki/2010_European_sovereign_debt_crisis
Not bad.
I guess now is a good time for a 6 month review of how the predictions in this thread are panning out.
Retail sales were a bit worse than expected but despite a bit of a dip in the stock market in late Jan / early Feb it took longer than I expected for the recovery in the US to be seriously questioned. It’s only in the last few weeks that talk of a double dip recession has become really widespread. The problems in Europe and more recently in China brought the global recovery into question a bit earlier but overall the jury is still out. I think I could argue that this prediction was correct as written but I was expecting more problems earlier in the year.
I think the problems in Greece (and to a lesser extent Spain and Portugal) and the resulting turmoil in the Euro are sufficient to say this prediction was correct. The UK pound has also had a rough time but in both cases ‘currency crisis’ could still be argued. I expect further problems before the year is out.
Hasn’t happened yet. Option ARM resets will be picking up through the second half of the year so I still expect problems from that. A little less confident that it will mean a major bank failure—that is somewhat dependent on the political climate as well.
The attempted bombing in Times Square appears to have had a Pakistan link. It can’t really be called a ‘major’ attack however. I still think there is a fair chance of this happening before the year is out but odds are a little lower (my estimate of how incompetent most terrorists are has increased a little).
The iPad and iBooks launch bear this out I think.
Won’t know until November. I think the prediction is still reasonable.
The riots and strikes in Greece and strikes in Spain arguably confirm this. The prediction is a little vague however and I was expecting somewhat more serious civil unrest than we’ve seen so far. It remains to be seen what will happen as the rest of the year unfolds.
No change here.
Some early signs of this with retirement age increases and other austerity measures in Greece and elsewhere in Europe. I still expect to see a lot more of this before the decade is out.
Odds on this down slightly I think—there’s some evidence that the new government is serious about addressing the problems. Less evidence that they will succeed.
I think the problems here have been more widely recognized than when I wrote the prediction. My odds haven’t changed much though.
No change here. And it’s secession week.
Shouldn’t the odds go down by about half, just because half the year is used up?
The failed Times Square attack raised my probability for attempts at attacks this year but lowered my probability that any attempted attacks would be effective enough to classify as ‘major’. On balance I think the odds of a major attack in the remaining 6 months are lower than 50% at this point but events since my original prediction weigh into my estimate now and so it’s not a simple matter of adjusting the odds based on elapsed time.
I think this prediction has failed utterly. In the Euro zone, there are/were debt crises in Greece and Ireland, but the currency itself, the Euro, did fine. A graph of the variation of the Euro against the US dollar shows no special variation in 2010 compared to its “typical” variations over the last decade. The pound maintained the value in 2010 that it had already fallen to in 2009, hardly even slightly adhering to a prediction about 2010.
Those were exciting predictions. Had you predicted a sovereign debt crisis in a developed country, you would have been right, and it would have been a much less exciting prediction than a currency crisis.
There’s room for debate whether we saw a true currency crisis in the Euro but ‘this prediction has failed utterly’ is overstating it. We saw unusually dramatic short term moves in the Euro in May and there was widespread talk about the future of the Euro being uncertain. Questions about the long term viability of the Euro continue to be raised.
I’d argue that charting any of the major currencies against gold indicates an ongoing loss of confidence in all of them—from this perspective the dollar and the euro have both declined in absolute value over the year while trading places in terms of relative value in response to changing perceptions of which one faces the biggest problems.
‘Currency crisis’ was in retrospect a somewhat ambiguous prediction to make since there is no clear criteria for establishing what constitutes one. I’d argue that the euro underwent the beginnings of a currency crisis in May but that the unprecedented intervention by the ECB forestalled a full blown currency crisis.
I looked at Gold vs Euro from your link over 10 years. It shows a steady decline since mid 2004, with no change in that trend to distinguish 2010 from 2009, 2008, 2007, 2006, or 2005. It seems to me that if no special effects in currency vs currency or in currency vs gold can be seen in 2010, the most rational label for that prediction would be “wrong.” YMMV, but I don’t see why it should. Would you accept “this prediction has failed” if I leave off the utterly?
US states aren’t allowed to secede. Not even Texas. The US government would lose so much prestige from the loss of a state, that they would never allow it. So it would require some kind of armed conflict that no one state could ever win.
Are you really certain that the federal government would send the military in to prevent a state seceding if secession was clearly the democratic will of the people of the state? I wouldn’t rule out the possibility but I think it would be an unlikely outcome.
I’m pretty certain the federal government will not take the blow of a state leaving in the next decade, at least. They might be slightly more likely to let a quirky, small state like Vermont or New Hampshire leave, since clamping down on a tiny state would look bad, and the loss would be negligible. But then they would set a dangerous precedent for more important possible secessionist states like Texas (Texans are somewhat nationalistic, though also often super-american/patriotic), New Mexico (majority-minority state) or Alaska (active secessionist movement).
What exactly is the federal government going to do about it though? I think using the military to suppress a state that was attempting a peaceful secession would be very hard for the government to justify. It’s a possibility but I think the probability is low that US troops would be deployed on US soil to prevent a state seceding. Plus I expect the federal government to have very major financial problems which will limit its ability to act.
Few people in 1982 would have predicted that the USSR would allow its constituent republics to secede peacefully within a decade.
It is legally settled that states do not have the authority to secede; they tried during the Civil War. Many people thought that states could leave the union at that time. However, the precedent set by Lincoln’s actions is unchallenged now by the legal establishment.
Anyway, the procedure would go like this:
1. State government announces secession.
Then either:
2a. Federal government challenges legality of secession in courts.
3a. Supreme court declares the secession unconstitutional.
Or:
2b. Federal government charges rebels with Treason.
3b. Federal government arrests the secessionists. Using federal troops would likely not be necessary, since national guards are ultimately under the authority of the president, if he calls them up for national service.
Finally, if there was an armed insurrection by natives, they would be put down as domestic terrorists. It would certainly be embarrassing, but not as dangerous as the precedent set by a state leaving the union without a shot fired.
Obviously if the Federal government financially collapses in the next decade, this wouldn’t be a problem. But that is very unlikely, since the government has the power to inflate away its debts. With the dollar as global reserve currency, it doesn’t really have to worry about an Argentina situation.
I think this is about right. The US dedication to self determination is generally limited to small ethnic groups conveniently placed in the interest spheres of rival great powers.
I think it is likely that the dollar will not still be the global reserve currency by the end of the decade.
Looks like it is still the global reserve currency.
I don’t see that happening—which one or ones do you think are most likely to leave?
Scotland may well leave the UK (10%), or the UK leave the EU (15%).
Texas is probably the most likely but I can imagine a number of other possibilities. MatthewB’s post above outlines a plausible case for California for example.
Being from Texas (I was born in Texas, but moved to CA in my mid-20s), I agree with you.
I noticed, when I went to school in Europe in the mid 80s, that people there acted as if Texas was almost a different country from the rest of the USA. It was also easy for Europeans to recognize. When asked where they were from, Texans in Europe would usually answer “Texas”, yet if a person from Louisiana, Alabama, Montana, Idaho, or some other more obscure state attempted to explain where they were from in terms of their home state, it would usually devolve to “I am from the Southern USA” or “I am from the Northwest/Midwest USA”.
Only New York and California seemed to enjoy this same recognition in Europe.
But, for Texans, they would consider themselves from Texas, first, and the USA second. Whereas most of the other US citizens from other states seemed to identify as USA citizens first, and then by their state.
Texas has a really strong sense of independence from the USA, and it is pretty much the only state with an active secessionist movement (a movement to have the state recognized as its own nation). California also has one, but it is not nearly as diverse nor as active as the one in TX.
However, despite the strong state recognition of its citizens, I think that there are other states that might lead the pack in an attempt to secede. Most of the former Confederate States still seem to have very deep grudges against the federal gov’t, and when I lived in GA for a few years back in ’91/’92, I was stunned at how many people I encountered who really believed that the Civil War was still not finished, and that The South Shall Rise Again!
Many Republicans seem to be fomenting this sort of thinking as well, with things like the Tea Baggers, or trying to force the recognition of the USA as a Christian Nation
Referring to a (presumably) disfavored political group by a crude sexual dysphemism earned you a vote down. This is not how discourse is done here, please make a note of it.
Relevant wikipedia link
Not badly calibrated for 2010 in retrospect, though I should have realized at the time that some of your conditional probabilities were crazy: there’s virtually no chance that the Democrats would have held the House if there had been “a new round of bank failures and financial turmoil”, unless that happened after the elections.
I think you got that one.
This is the sort of thing I was thinking of and expect to see more of.
Haven’t riots been going on in Greece pretty regularly? (eg, 11/2009) Did you put at 50% the chance that the riots in Greece would stop? Maybe it was reasonable to put at 50% the chance that the riots would stay at 2009 levels and 50% the chance that they would go back to 12/2008 levels, but it’s not clear that “significant” should mean that.
Yes, Greece had riots in 2009. I expected increased civil unrest in developed countries in 2010. My impression is that there is more civil unrest in Greece now than there was last year but I don’t know how to objectively measure that which makes me think I was not specific enough with my prediction in this case.
Since nobody took the other side of the bet it doesn’t matter too much. I’m more interested in how my investments pan out as they represent real bets on my predictions—it’s not much use being right if you can’t turn it into profit.
I’m going to call this a hit but it was pretty much a gimme. My 80% estimate may have been too low.
U.S. Retail Sales Unexpectedly Fall After Bigger Gain
I’m inclined to call this a confirmation of the first part of my prediction but in retrospect I could have been more specific as to what would constitute confirmation. As to the resulting market turmoil that constitutes the second half of my prediction, I’d say that’s unconfirmed as yet and is also rather unspecific. I’m actually now betting real money on market turmoil by buying VXX which is a bet on increased volatility so I still stand by the second half of the prediction.
I’m going to attempt to continue posting updates on the state of my 1 year predictions as relevant news develops. This prediction exercise is only useful if outcomes are tracked.
I’m not going to claim this [1] as a confirmation of that prediction but I expect to see a lot more of these kinds of demonstrations and on a larger scale. Flaming torches are just the start, the metaphorical pitchforks will come.
I’m curious what the response of the secret service would be to a group of demonstrators with flaming torches surrounding the White House.
[1] “Fire and ice: On Monday, hundreds of people gathered outside the residence of Icelandic President Olafur Ragnar Grimsson in Reykjavik, where they held torches and delivered a petition asking him not to sign the controversial debt legislation.”
Great example of what I’m talking about. I’d challenge you on most of those actually, if there was a convenient and well structured betting forum, but none of them seem crazy to me.
None of the others do, but this one seems ludicrous to me.
30% probability might be around the point where we start to call things ludicrous. If you talk seriously about things that you think have a 10% chance of happening, you will be beyond the point where most people call it ludicrous, or even crazy; they simply will not understand or believe that that’s what you mean.
This comment provides more confirmation for a view I’ve held for a long time, and which was particularly reinforced by some of the reactions to (the first version of) my Amanda Knox post.
People have trouble distinguishing appropriately among degrees of improbability. This generalizes both underconfidence and overconfidence, and is part of what I regard as a cluster of related errors, including underestimating the size of hypothesis space and failing to judge the strength of evidence properly. (These problems are the reason that judicial systems can’t trust people to decide cases without all kinds of artificial-seeming procedures and rules about what kind of evidence is “allowed”.)
The reality is that given all the numerous events and decisions we experience on a daily basis and throughout our lives, something with a 10% chance of happening or being true is something that we need to take quite seriously indeed. 10% is, easily, planning-level probability; it should attract a significant amount of our attention. By the same token, something which isn’t worth seriously planning on shouldn’t be getting more than single digits of probability-percentage, if that.
There is a vast, huge spectrum of degrees of improbability below 1% (never mind 10% or 30%) that careful thinking can allow us to distinguish, even if our evolved intuitions don’t. Consider for instance the following ten propositions:
(1) The Republicans will win control of both houses of Congress in the 2010 elections.
(2) It will snow in Los Angeles this winter.
(3) There will be a draft in the U.S. by 2020.
(4) I will be dead in a month.
(5) Amanda Knox (or Raffaele Sollecito) was involved in Meredith Kercher’s death.
(6) A U.S. state will make a serious attempt to secede by 2020.
(7) The Copenhagen interpretation of quantum mechanics, as opposed to the many-worlds interpretation, is correct.
(8) A marble statue has waved or will wave at someone due to quantum tunneling.
(9) Jesus of Nazareth rose from the dead.
(10) Christianity is true.
I listed these in (approximately) order of improbability, from most probable to least probable. Now, all of them would be described in ordinary conversation as “extremely improbable”. But there are enormous differences in the degrees of improbability among them, and moreover, we have the ability to distinguish these degrees, to a significant extent.
The 10%-30% range is for propositions like (1); the 1%-10% range for things like (2) (the last time it snowed in LA was in the 1960s). Around 1% is about right for (3). Propositions (4), (5), and (6) occupy something like the interval from 0.01% to 1% (I find it hard to discriminate in this range, and in particular to judge these three against each other). Propositions (8), (9), and (10), however, are in a completely different category of improbability: double-digit negative exponents, if you’re being conservative. We could argue about (7), but it probably belongs somewhere in between (4)-(6) and (8)-(10); maybe around 10^(-10), if you account for post-QM theories somehow turning Copenhagen into something more mundane than it seems now.
So the point is, we, right here, have the tools to make estimates that are a lot more meaningful than “probably yes” or “probably no”. I remember reading that we tend to be overconfident on hard things and underconfident on easy things; I think we can afford to be a little more bold on the no-brainers.
It would of course be sacrilegious to place (8) below (9) and (10). Nevertheless even in the case of apparently overwhelming evidence, if you disagree with a mainstream belief 10^(-20) times you will be wrong rather a lot more than once.
Meanwhile, quantum tunnelling is a specific phenomenon which, if possible (very likely), gives fairly clear bounds on just how ridiculously improbable it is for a marble statue to wave. Even possible improbable worlds which make quantum tunnelling more likely still leave (8) less probable than (9) (but perhaps not (10)).
I personally place (10) at no less than 10^(-5) and would be comfortable accusing anyone going below 10^(-7) of being confused about probabilities (at least as related to human beliefs).
Like most majoritarian arguments, this throws away information: the relevant reference class is “mainstream beliefs you think are that improbable”. (edit: no, I didn’t read the whole sentence) In that class, it’s not obvious to me that one would certainly be wrong more than once, if one could come up with 10^20 independent mainstream propositions that unlikely and seriously consider them all while never going completely insane. Going completely insane in the time required to consider one proposition seems far more likely than 10^-20, but also seems to cancel out of any decision, so it makes sense to implicitly condition everything on basic sanity.
(Related: Horrible LHC Inconsistency)
(9), and (10) for some definitions of “Christianity”, being more likely than (8) seems conceivable due to interventionist simulators (something I really have no idea how to reason about), but not for any other object-level reason I can think of. Can you think of others?
I’d be inclined to accuse anyone going above… something below 10^-7… of being far too modest.
No, that is the reference class intended and described (“apparently overwhelming evidence”).
Your prior is wrong (that is, it does not reflect the information that is freely available to you).
Considering normal levels of sanity are sufficient. Failing to account for the known weaknesses in your reasoning is a failure of rationality.
I am comfortable accusing you of being confused about probabilities as related to human beliefs.
I’d guess you could estimate (4) to within an order of magnitude or better from an actuarial table.
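As a rough sketch of how that actuarial estimate would go (assuming a constant hazard within the year; the 0.1% annual rate is an illustrative figure for a young, healthy adult, not from any particular table):

```python
def monthly_death_prob(annual_q):
    """Convert an annual mortality probability from an actuarial
    table into a one-month probability, assuming the hazard is
    constant over the year."""
    return 1 - (1 - annual_q) ** (1 / 12)

# Illustrative annual mortality rate of 0.1%:
print(f"{monthly_death_prob(0.001):.1e}")
```

That comes out to roughly 0.008% per month, which is consistent with placing proposition (4) somewhere in the 0.01%-to-1% band rather than dismissing it as negligible.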
Well under 30% certainly, but I wouldn’t give it under 4%. A decade is long and the US is young.
I think a draft is much more likely.
You display a pessimism much greater than I think is warranted. My predictions for some of your statements:
Next decade:
Next year:
These seem overly optimistic to me. Maybe increase the numbers by 50% to 100% other than 99?
My second prediction is that the largest area of impact from technological change over the next decade will come from increasing communications bandwidth. Supercomputers a hundred times more powerful than those that exist today don’t look revolutionary, while ubiquitous ultra-cheap wireless broadband makes storage and processing power less important. Improvements in small scale energy storage, tech transfer from e-paper and lower power computer chips will probably help make portable personal computers more energy efficient, but for always-on augmented reality (and its sister-tech robotics) in areas with ubiquitous broadband computing off-site is the way to go.
Latency worries me, though. Bandwidth has been improving a lot faster than latency for a while now. For always-on augmented reality, I think that we’re going to need some seriously more power-efficient computing so we can do latency-limited tasks locally. (Also, communication takes energy too—often more than computation.)
Good news on that, by the way: modern embedded computer architecture and manufacturing techniques are going in the right direction for this. 3D integration will allow shorter wires, making all digital logic much more power efficient. Network-on-chip architectures will make it easier to incorporate special-purpose hardware for image recognition and such. And if you stick the memory right on top of your processor, that goes a long way to speeding it up and cutting down on energy used per operation. If you want to get even more radical, you could try something like bit-serial asynchronous processors (PDF) or something even stranger.
Agree on the trend, but I’d put significant odds on some (as yet unexpected) trend being “the largest area of impact” in retrospect.
And distributed to more people. >60% of people will have at least 1 Mb/s internet access by 2020 (75%).
Do you have any ideas about how the scale of the impact from various different technological changes should be measured in this context? As far as I know, there is no standard metric for this. So, I am not clear about what you mean.
http://predictionbook.com/predictions/1662
Better than even odds that in 2020:
GDP per capita at purchasing power parity for Singapore will be more than US$80,000 in 2008 dollars.
GDP per capita for China (PRC), will be more than twice 2009 GDP
Tourism to suborbital space will cost less than $50000.
Outcomes:
1. No.
2. Yes.
3. Hell no.
Unless I’m missing something, it looks like #1 is actually correct: 2019 GDP (PPP) per capita for Singapore was $103,181 according to the IMF, which adjusts to $84,775.10 in 2008 dollars according to the first inflation calculator on Google.
Within ten years either genetic manipulation or embryo selection will have been used on at least 10,000 babies in China to increase the babies’ expected intelligence- 75%.
Within ten years either genetic manipulation or embryo selection will have been used on at least 50% of Chinese babies to increase the babies’ expected intelligence- 15%.
Within ten years the SAT testing service will require students to take a blood test to prove they are not on cognitive enhancing drugs. – 40%
All of the major candidates for the 2016 presidential election will have had samples of their DNA taken and analyzed (perhaps without the candidates’ permission.) The results of the analysis for each candidate will be widely disseminated and will influence many peoples’ voting decisions − 70%
While president, Obama will announce support for a VAT tax − 70%.
While president, Obama will announce support for means testing Social Security − 70%
Within ten years the U.S. repudiates its debt either officially or with an inflation rate of over 100% for one year − 20%.
Within five years the Israeli economy will have been devastated because many believe there is a high probability that an atomic bomb will someday be used against Israel – 30%
Within ten years there will be another $200 billion+ Wall Street Bailout − 80%
I was very, very wrong.
How many opportunities do you think we get to hear someone make clearly falsifiable ten-year predictions, and have them turn out to be false, and then have that person have the honour necessary to say “I was very, very wrong?” Not a lot! So any reflections you have to add on this would I think be super valuable. Thanks!
http://predictionbook.com/predictions/1689
http://predictionbook.com/predictions/1690
I think you are on crack for this one. 15%?! You seriously think there’s a 15% chance that embryo selection and/or genetic manipulation for IQ will be developed, commercialized, and turned into an infrastructure capable of modifying roughly 9 million pregnancies a year? Where the hell are all the technicians and doctors going to come from, for one thing? There’s a long lead time for that sort of thing.
http://predictionbook.com/predictions/1691
Ditto—America doesn’t have that many phlebotomists, and would go batshit over a Collegeboard requirement like that. There would have to be an enormous national outcry over nootropics and tremendous take-up of drugs like modafinil, and there’s zero sign of either. Even a urine or spit test would encounter tremendous opposition, and the Collegeboard has no incentive for such testing. (Cost, blame for false positives, and possibly dragging down scores which would earn it even more criticism. To name just the very most obvious negatives.)
http://predictionbook.com/predictions/1696
I think you forgot the part of your prediction where all the candidates went insane and agreed to such an incredibly status-lowering procedure, gave up all privacy, and completely forgot about how past candidates got away with not releasing all sorts of germane records.
http://predictionbook.com/predictions/1576 (Not sure if your wording is exactly the same as Cowen’s VAT prediction, but I figure it’ll do.)
http://predictionbook.com/predictions/1692
I recently read a book on old age public policy; amidst the endless details and financial minutia, I was deeply impressed how many ways there were to effectively means-test, even inadvertently, without obviously being means-testing or having that name. Judging could be very difficult.
http://predictionbook.com/predictions/1693
With a probability that high, shouldn’t you be desperately diversifying your personal finances overseas? Either fork of your prediction means major pain for US debt, equity, or cash holders.
http://predictionbook.com/predictions/1694
The odds of an Iranian bomb aren’t that terribly high, much less such an outcome happening.
http://predictionbook.com/predictions/1695
Definitions here are an issue. Some forecasts are for 2-500 billion dollars in defaults on student loans, which likely would provoke another bailout. Would that count? Does a 0% Fed rate and >0% Treasury rate constitute an ongoing bailout? etc.
All in all, this is a set of predictions that makes me think that I really should go on Intrade. I did manage to double my money at the IEM; at the time I assumed it was because I got lucky on picking McCain and Obama for the nominations, but if this is the best a random LWer can do, even aware of biases, basic data, and the basics of probability...
I’d take the other side on any of these if we can find a way to make it precise.
I hope you paid out on your bets.
As far as I can tell, every single one of your predictions has now been falsified.
“While president, Obama will announce support for means testing Social Security − 70%”
I’d be willing to take those odds, with some refinements.
How about this—I win if before he leaves office I can point to a speech Obama gave in which he advocates means testing Social Security. Otherwise you win. The speech has to be given after today, so you don’t fear this is some kind of trick.
If I win I get $100 from you. If you win I give you $233. But with these odds I’m indifferent to making the bet. So for me to be willing to bet I want you to agree that if Obama makes such a speech you have to pay me right away.
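For anyone checking the arithmetic, the stakes above imply a break-even probability (a quick sketch; the $100 and $233 figures are from this comment):

```python
# Implied break-even probability from asymmetric stakes:
# I win $100 if the speech happens; I pay $233 otherwise.
win_amount = 100
lose_amount = 233

# Indifference means p * win == (1 - p) * lose, i.e. p = lose / (win + lose)
implied_p = lose_amount / (win_amount + lose_amount)
print(round(implied_p, 2))  # 0.7, matching the ~70% in the original prediction
```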
That works for me, with one little change. The end of his term needs to be counted as the end of a presidential election he doesn’t win, rather than the inauguration of his successor. This is because the reason I don’t think it’s very likely is that the political effects on him would be dire, so if he does it as a lame-duck president he has nothing to lose. I’m still willing to take the risk on his second term, since even a second-term president is subject to some political forces.
And as a clarification, I take “means testing” to mean increasing or decreasing social security payouts based on a person’s assets or income. It also has to apply to US citizens to count.
And since I’m not an American, I’d just like to confirm that the bet is in US dollars. That works for me, and I assume it works for you too.
OK, I accept—and yes the bet should be in U.S. dollars.
Please contact me at
EconomicProf@Yahoo.com so we can exchange addresses.
Hey, looks like you’re still active on the site, would be interested to hear your reflections on these predictions ten years on—thanks!
We will end the decade with some mobile energy storage system with an energy density close to or better than fat metabolism.
ETA: I mean in the context of electronics.
From looking at the diagram, aren’t we starting the decade with such a system (gasoline)?
You are the second person to mistake my intent. I meant in the field of mobile electronics. Take a look at where lithium ion is on this chart.
http://predictionbook.com/predictions/1664
The graph you link to says magnesium and diesel already have greater energy density than fat.
So, I think you have to specify how portable, how common or cheap, and maybe whether you are talking about rechargeable or not—or the prediction is probably going to be vague—and subject to the criticism that it has already happened.
I meant commonly used for powering portable electronics. I don’t assign a high probability to this. It is the upper bound of what I think worth discussing.
Right. TNT does not count as a mobile energy storage system.
I think you’re wrong; but it’s a really interesting prediction.
The reason I think you’re wrong is that the rate of improvement of technologies in a field is more-or-less fixed within a field, because it depends on the economics, not on the science. Moore’s Law exists not because there’s some magic about semiconductors, but because the market is sized and structured such that you need to sell people a new system every 2 years, and you need to double performance to get people to buy a new system.
This means you can look at the past exponential curve for battery density, and project it into the future with some confidence. I don’t know what the exponent per year is; but my gut feeling before checking any data or doing any calculations is that it isn’t high enough.
I disagree.
I am typing this on a machine I bought 6 years ago. Its CPU speed is still competitive with current hardware. This lack of speedup is not because processor manufacturers haven’t been trying to make processors faster; they have. The reason for the lack of speedup is that it is hard to do. The problem is more to do with the nature of physical reality than the structure and economics of the computer industry.
Consider cars. They do not halve in price every two years. Why not? Because they are designed to move people around, and people are roughly the same size they have always been. But computers move bits around, and bits can be made very small (both in terms of the size of circuitry and the power dissipated); this is the fundamental reason why the computer/communications industry has been able to halve prices / double capabilities every year or two for the last half century.
I don’t think there is an exponential curve as such for battery tech. Li-ion came in about 2006? And nothing much has improved since then. The trouble with batteries is you can’t just shrink components and get an improvement as you do with semiconductors. Your components are already on the atomic scale. So more fundamental breakthroughs are needed.
The prediction is based mainly on our increasing control of biology and the ability to work on the small scale. If nothing else we’ll invent a way to metabolise fat or carbohydrates into electricity and have small home bioreactors that produce carbs and make nice little cartridges for people to plug into their electronics. Maybe not in 10 years, but some substantial movement is definitely possible in this direction.
A graph of battery energy density between 1985 and 2008:
http://www.kk.org/thetechnium/Battery%20Energy%20Density.jpg
Extrapolate away!
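A back-of-the-envelope extrapolation along these lines (the density and improvement-rate figures are my rough assumptions, not taken from the linked chart):

```python
import math

# How long until batteries match fat metabolism at a steady exponential
# improvement rate? All figures below are rough assumptions.
fat_mj_per_kg = 37.0        # metabolic energy density of fat (~9 kcal/g)
liion_mj_per_kg = 0.7       # ~200 Wh/kg lithium-ion, circa 2010
annual_improvement = 0.05   # ~5%/yr, a commonly cited historical rate

years_to_parity = math.log(fat_mj_per_kg / liion_mj_per_kg) / math.log(1 + annual_improvement)
print(round(years_to_parity))  # ~81 years at these assumptions
```

On these numbers the gap is two orders of magnitude, which is why simple extrapolation makes a within-the-decade crossing look implausible.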
What about some of the advances in micro-generators and Fuel-Cells that I have read about?
For instance, I have seen one of those tiny turbine engines running to power an equally tiny generator, and it looked to provide a hellofa lotta power for its size. I know the military is putting them into some applications in the field, so it will probably not be too terribly long before we see them on things like Laptops/tablets or cell phones.
I haven’t seen anything recent on these. Any keywords to google? The key thing for a consumer electronics application is ease of getting the fuel. People don’t want to have to head out to the shops to get it every few days, which is why rechargeable batteries are the current winner.
Try “MIT Micro Turbine Generator”. That will get you to the base technology. I tried to find the DARPA page, but it seems to have been buried. The MIT technology has also got a lot smaller since the 2006 initial turbines, which were roughly the size of a quarter. They now measure less than 1 cm on a side. The generator that creates the electricity from these things is roughly the same size as the turbine. It basically looks like a DVD motor (really flat and broad).
I saw them as a field power source for a laser designator and weapon (a modified laser designator that could be used as a sniper weapon), and as a source for communications gear. They used the same propellant that a butane lighter uses (that stuff in an aerosol can). They were said to run much longer than one day of full use on one charge.
The problems with them:
Heat and noise. They make a high pitched whine that can be muffled, yet is still easy to pick up on a mic that has the appropriate filtering software. The heat can also be shielded, but it creates a problem for the user. A last rumor that I hear is that when these things fail, they can cause the propellant to burn off. I have only heard one person talking about that though.
MIT is not the only one to come up with small turbines to use as power sources. Some of the really small jet-turbine engines (1/2″ in diameter, and 2″ to 3″ long) have been discovered to be excellent power sources as well when coupled to a generator.
Two semesters ago, I looked into making my own micro-turbine as a project for an engineering lab (I couldn’t find anyone willing to donate the Mill Time on a CAD/CAM mill to make the turbine blades, and I couldn’t afford the ready made ones). This is what led to my discovery of most of these (and then friends helped with actually seeing one).
At least one Asian movie will exceed $400 million in worldwide box-office gross before the end of the decade.
It will most probably not be a wuxia movie. My guess of its genre is urban action or speculative fiction.
Wolf Warrior 2 did $874 million in China alone; China’s rapidly growing domestic market won this prediction singlehandedly.
I agree. I especially see a lot of convergence in present day mainstream Bollywood cinema with conventional blockbuster Hollywood fare in terms of both plots and production values. So expect a Moulin Rouge-like crossover musical in English with a major Hollywood box-office draw, an Indian model female lead, rags-to-riches storyline, Inception-like action sequences and CGI by studios in Hyderabad and Bangalore.
http://predictionbook.com/predictions/1708
Seems like a solid prediction. 2020 allows a lot of growth in China & India, and Bollywood-style movies already play well in the West—look at Slumdog Millionaire which nearly grossed $400M, despite being a British film on Indian matters.
I estimate 90% odds that Emotiv’s EPOC will fail like the Segway did.
I have one of these puppies. It’s the most fickle device I’ve laid my hands on. It’s useless for anything except gaining nerd status points. Hey, do you guys want me to post a detailed review? :)
I’d like to see a review, but it isn’t a LW thing. It would be nice to have a forum / news structure, so that we could have a section for “Off-topic posts”. Heck, it would be nice to sort the posts by topic.
Isn’t that what tags are for?
For the next decade:
I’d bet at about 2:3 odds that energy consumption will grow on a par with or slower than population growth.
Any rise in average standard of living will come from making manufacturing/logistics more efficient, or a redistribution from the very rich to the less well off. There is still scope for increased efficiency by reducing the transport of people and more automation.
I’d take the other side at those odds. Per capita energy expenditures in China are set to skyrocket as rural areas industrialize, and I expect the same of many Second World nations. I don’t think increases in efficiency will dwarf that effect quite yet.
I’m basically betting that a short-term lack of oil (as evidenced by reduced production in 2008 and the high current price) will put a brake on that expansion. Or that the industrialization of China will only happen if First World countries reduce their energy consumption to allow it, as they did in 2008.
Data from the BP energy review.
Interesting consideration; but on the other hand, China isn’t afraid to build nuclear power plants or burn coal.
An interesting article on China and energy. Nuclear has a lead time of (optimistically) 3 years, so their prediction of 60-90 GWe won’t be too far off. It actually looks like they are planning more wind than nuclear. I’m really curious where they expect the 500-odd GWe of energy they don’t mention to come from. All coal? That’ll be pretty dirty.
I was probably a little overconfident in my initial bet. I do expect the ratio of energy consumption growth to population growth to trend downwards though.
Wrong. (Well, a little bit right, but wrong in all the ways that matter.) According to the article you linked, they’re planning to build about 60-90 GW of nuclear capacity (let’s say 80 GW to simplify the arithmetic) and 100 GW of wind. But what we really care about is how much energy they get from those sources per year, and to find that, we have to multiply the peak power generation capacities by the capacity factor for each source.
Nuclear power has a capacity factor of at least 93% for the newer plant designs that China is building (or even for older plants after operators get experience), so we’ll say that their average production is (80 GW) * 0.93 = 74.4 GW average.
Wind power has a capacity factor of around 21% right now. Since we’re talking about 2020, i.e. The Future!!, let’s assume they get it up to a whopping 30%. Their energy production from wind would come out to (100 GW) * 0.3 = 30 GW average, or less than half of their projected nuclear production.
The average power figures are much more meaningful than the capacity numbers, but the wind salesmen quote whatever numbers make them sound most impressive, and the news media report it. It’s as ubiquitous as it is misleading.
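The arithmetic above can be sketched directly (the capacity and capacity-factor figures are the assumptions from this comment, not official data):

```python
# Average power = nameplate capacity * capacity factor.
nuclear_capacity_gw = 80    # midpoint of the 60-90 GW plan, for easy arithmetic
nuclear_cf = 0.93           # assumed for newer plant designs
wind_capacity_gw = 100
wind_cf = 0.30              # optimistic 2020 figure (around 0.21 today)

nuclear_avg = nuclear_capacity_gw * nuclear_cf   # average GW from nuclear
wind_avg = wind_capacity_gw * wind_cf            # average GW from wind
print(round(nuclear_avg, 1), round(wind_avg, 1))  # 74.4 30.0
```

So despite the larger nameplate number for wind, the expected average production is less than half of nuclear’s.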
Mea culpa. I forgot how misleading some of the energy numbers could be.
The article estimates that China’s electricity capacity will double from 2008 to 2020; it doesn’t seem to list an estimate for electricity production, but I’d think it would trend in much the same way, significantly faster than China’s (rapidly falling) population increase. Reading this article makes me even more eager than before to take the “over” at these odds.
I’m rethinking my wager. To give you some information that I found. Which I should have looked at before.
Average energy consumption increase over the 15 years to 2008 has been 2.13%. This is very choppy data; it varies between 0.09% and 4.5% (2004, then trending downwards). This included a doubling of energy consumption by China in 7 years (2001-2008).
Average population growth is trending downwards and is at 1.1%.
I was probably putting too much weight on my own country’s not-very-well-thought-out energy policy.
What odds would you give on energy consumption growth rate being lower for the next 10 years than the previous 10 (2.4%)?
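A rough sketch of the growth arithmetic behind these figures (the 2.13%, 7-year doubling, and 1.1% numbers are quoted above; the calculations are mine):

```python
# A doubling of China's consumption in 7 years implies an annual growth rate of:
china_rate = 2 ** (1 / 7) - 1
print(f"{china_rate:.1%}")  # 10.4% per year

# At the 15-year world average of 2.13%/yr against 1.1%/yr population growth,
# per-capita energy use still compounds upward:
per_capita_rate = 1.0213 / 1.011 - 1
print(f"{per_capita_rate:.2%}")  # ~1.02% per year
```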
Because of the Second World’s larger growth rate (and the fact that they occupy a larger part of the total now), I think the odds of energy growth being lower than 2.4% are somewhat worse than even. I’m quite metauncertain; I don’t think I’d actually bet unless someone were giving me 3:2 odds to bet the ‘over’, or 4:1 odds to bet the ‘under’.
http://predictionbook.com/predictions/1663
I predict a 10% chance that I win my bet with Eliezer in the next decade (the one about a transhuman intelligence being created not by Eliezer, not being deliberately created for Friendliness, and not destroying the world.)
I’ll go ahead and claim a 98% chance that, if a transhuman, non-Friendly intelligence is created, it makes things worse. And an 80% chance that this is in a nonrecoverable way.
I kinda hope you’re right, but I just don’t see how.
This prediction is technically consistent with my prediction (although this doesn’t mean that I don’t disagree with it anyway.)
In other words, one of us did not specify the prediction correctly.
I don’t think it’s me. I deliberately didn’t say it’d destroy the world. Would it be correct to modify yours to say ”..and not making the world a worse place”?
No. If you look at the original bet with Eliezer, he was betting that on those conditions, the AI would literally destroy the world. In other words, if both of us are still around, and I’m capable of claiming the money, I win the bet, even if the world is worse off.
Yup. If he lives to collect, he collects.
Assuming that there is, in fact, a correct way to specify the predictions. It’s possible that you weren’t actually disagreeing and that you both assign substantial probability to (world is made worse off but not destroyed | non-FAI is created) while still having a low probability for (non-FAI is created in the next decade).
Considering that the bet includes “not destroying the world”, the only fair way to do this type of bet (for money) is for you to give the other party $X now, and for them to give you $Y later if you turn out to be correct.
That’s exactly what happened; I gave Eliezer $10, and he will pay me $1000 when I win the bet.
I’ll put down money on the other side of this prediction provided that we can agree on an objective definition of “transhuman intelligence”.
My bet with Eliezer can be found at http://lesswrong.com/lw/wm/disjunctions_antipredictions_etc/.
I said there at the time, “As for what constitutes the AI, since we don’t have any measure of superhuman intelligence, it seems to me sufficient that it be clearly more intelligent than any human being.” Everyone’s agreement that it is clearly more intelligent would be the “objective” standard.
In any case, I am risk averse, so I don’t really want to bet on the next decade, which according to my prediction would give me a 90% chance of losing the bet. The bet with Eliezer was indefinite, since I already paid; I am simply counting on it happening within our lifetimes.
I like your side of the original bet because I think the probability that the first superintelligent AI will be only slightly smarter than humans, non-goal-driven, and non-self-improving, and therefore non-Singularity-inducing, is better than 1%. The reason I’m willing to bet against you on the above version is that I think 10% is way overconfident for a 10-year timeframe.
Would a sped-up upload count as super-intelligent in your opinion?
In an analysis that does not account for any health-care reform bill, the Department of Health and Human Services projected that health care expenditures would double from the 2009 level of $2.2 trillion (16.2% of 2009 GDP) to $4.4 trillion in 2018 (20.3% of projected 2018 GDP). This provides us a baseline from which to predict the cost-control effectiveness of health care reform.
I’m somewhat bullish on the potential of the pilot programs and the excise tax to lower med costs for a given level of health outcomes, although I’m not supremely confident in that. I also think there is a long tail of events or technologies that could unexpectedly increase med expenses (that would do so with or without health-care reform). Furthermore, the current bill will expand coverage for a substantial number of people, as a result of which total expenditures will definitely rise. All things together, here are my (very rough) intuitions:
I’d give 1:1 odds that health-care expenditure is less than or equal to $5 trillion in 2018.
I’d give 5:1 odds that it’s less than or equal to $4 trillion in 2018.
I’d give 5:1 odds that it’s greater than $6 trillion in 2018.
Conditioned on current health-care reform failing (i.e. no pilot programs or excise tax), I’d only give 2:1 odds that health-care expenditure is less than or equal to $4.4 trillion in 2018. (Long tails and overly rosy estimates.)
EDIT: I had this up for a few minutes with different numbers, before I remembered that the individual mandate and subsidies would raise med expenses significantly.
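For reference, a sketch converting the quoted odds to implied probabilities. I’m reading “5:1 odds that X” here as odds against X, which seems the only reading consistent with “≤ $4 trillion” being a subset of the even-odds “≤ $5 trillion” event:

```python
# Odds of a:b against an event imply a break-even probability of b / (a + b).
def prob_from_odds_against(a, b=1):
    return b / (a + b)

print(prob_from_odds_against(1))            # 0.5 for <= $5 trillion
print(round(prob_from_odds_against(5), 3))  # 0.167 for <= $4T (and for > $6T)
```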
$3.6 trillion in 2018 (17.7% of GDP), so we don’t even need to argue about adjusting for inflation to judge these. Thanks, Obamacare!
Are those figures inflation-adjusted?
In order:
http://predictionbook.com/predictions/1667
http://predictionbook.com/predictions/1668
http://predictionbook.com/predictions/1669
(As health reform passed, I omit any consideration of #4.)
I get into UC Berkeley − 70%
http://predictionbook.com/predictions/1719 but what date should the prediction terminate on?
About 3 months ago.
o.0
OK, did you get in or no?
Yes.
Congratulations to the both of us, then.
I expect that Brain-Computer Interfaces will make their way into consumer devices by the next decade, with disruptive consequences, once people become able to offload some auxiliary cognitive functions into these devices.
Call it 75% - I would be more than mildly surprised if it hadn’t happened by 2020.
For what I have in mind, what counts as BCI is the ability to interact with a smartphone-like device in an inconspicuous manner, without using your hands.
My reasoning is similar to Michael Vassar’s AR prediction, and based on the iPhone’s success. That doesn’t seem owed to any particular technological innovation; rather, Apple made things usable that were only previously feasible in the technical sense. A mobile device for searching the Web, finding out your GPS position and compass orientation, and communicating with others was technically feasible years ago. Making these features only slightly less awkward than previously has revealed a hidden demand for unsuspected usages, often combining old features in unexpected ways.
However, in many ways these interfaces are still primitive and awkward. “Sixth Sense” type interfaces are interesting, but still strike me as overly intrusive on others’ personal space.
It would make sense to me to be able, say, to subvocalize a command such as “Show me the way to metro station X”, then have my smartphone gently “tug” me in the right direction as I turn left and right, using a combination of compass and vibrations. This is only one scenario that strikes me as already easy to implement, requiring only some slightly greater integration of functionality.
I expect such things to be disruptive, because the more transparent the integration between our native cognitive abilities, and those provided by versatile external devices connected to the global network, the more we will effectively turn into “augmented humans”.
When we merely have to think of a computation to have it performed externally and receive the result (visually or otherwise), we will be effectively smarter than we are now with calculators (and already essentially able, some would say, to achieve the same results).
I am not predicting with 75% probability that such augmentation will be pervasive by 2020, only that by then some newfangled gadget will have started to reveal hidden consumer demand for this kind of augmentation.
ETA: I don’t mind this comment being downvoted, even as shorthand for “I disagree”, but I’d be genuinely curious to know what flaws you’re seeing in my thinking, or what facts you’re aware of that make my degree of confidence seem way off.
Ruling this prediction as wrong. (Only three years late, but who’s counting.)
By now this looks rather unlikely in the original time-frame, even though there are still encouraging hints from time to time.
I’m not thrilled about your vagueness about what technologies count as a BCI. Little electrodes? The gaming device that came out last year or so got a lot of hype, but the gamers I’ve talked to who have actually used it were all deeply unimpressed. Voice recognition? Already here in niches, but not really popular.
If you can’t think of what interfaces specifically*, then maybe you should phrase your prediction as a negative: ‘by 2020, >50% of the smart cellphone market will use a non-gestural non-keyboard based interface’ etc.
* and you really should be able to—just 9 years means that any possible tech has to have already been demonstrated in the lab and have a feasible route to commercialization; R&D isn’t that fast a process, and neither is being good & cheap enough to take over the global market to the point of ‘pervasive’
Decoding spoken words using local field potentials recorded from the cortical surface
Yep, electrodes, as in the gaming devices. A headset is the form factor I have in mind, so not necessarily electrodes if this is to be believed. I don’t want to commit to burdensome implementation details, but voice isn’t what I mean—it doesn’t count as “unobtrusive” to my way of thinking.
I envision something where I can just form the thought “nearest MacDonalds” (ETA: or somehow bring up a menu selecting that among even a restricted set) without it being conspicuous for an outside observer, and get some form of feedback from the device leading me in the right direction. Visual overlay would work, but so would a physical tug.
Three and a half years in, this.
Any updates to your original prediction?
Now this.
I think I’ve come round to Gwern’s point of view—this is a bit too vague. The news item I posted makes me feel like we’re still on track for it to happen, though I could be a few years off the mark. I might knock it down to 65% or so to account for uncertainty in timing.
Given the feasibility that currently exists for gadgets like you envision… and Apple’s uncanny ability to bring those ideas to market… I say 2015 is a 75% target for the iThought side-processor device. :)
By 2020, an Earth-like habitable extrasolar planet is detected. I would take a wager on this one but doubt anyone would give me even odds.
Will anyone give me even odds if the bet is by 2015?
I think I’d give better-than-even odds for either date, and would be shocked if no one else would. How are you defining “Earth-like” and “habitable”?
I think he just meant with liquid water, some type of atmosphere, and approximately Earth-sized. Given this, my guess is that they find one within the next three years. If he meant “habitable” for human beings without protection, i.e. an oxygen atmosphere etc., then it is extremely unlikely (less than a 2% chance) that they will find such a thing by 2020.
Is it possible to have liquid water without life? I remember reading that an oxygen atmosphere was quite impossible, but am not sure about liquid water.
There could be an oxygen atmosphere without life for a short period of a planet’s history (I’m not sure how long.) It wouldn’t be possible for it to remain permanently.
According to our evidence, Mars had liquid water for a very long period, but no one considers this to be proof that there was life there.
I went to check this—maybe liquid water is a short-term enough thing that its mere presence is still weak evidence for an active biosphere, but apparently one timeline puts liquid water as present in large quantities for >600 million years. Bleh.
Yes.
I’m not sure we have the technology to make that call even if such a planet does, in fact, lie within range of our telescopes.
We don’t. My prediction then is only almost certainly true if we define habitable as a planet in a sun’s habitable zone. However, I still think finding a habitable planet, per Unknowns’s definition, is likely to happen by 2020.
http://www.npr.org/templates/story/story.php?storyId=101493448
If Kepler does indeed find hundreds of planets in habitable zones, that should get the popular imagination going enough for the successor to Kepler to be very well funded. Kepler Mark II in the air by 2017?
At even odds I would take a loan to make the bet.
When Will the First Earth-like Planet Be Discovered?
http://arbesman.net/blog/2010/09/13/when-will-the-first-earth-like-planet-be-discovered/
http://predictionbook.com/predictions/1676
http://www.nasa.gov/ames/kepler/kepler-186f-the-first-earth-size-planet-in-the-habitable-zone/#.U1DouvldW_U
So by ‘habitable’ you meant simply in the zone?
http://news.ycombinator.com/item?id=1628822
That’s a good link (maybe half-forgotten rumors of this were why I guessed so high), but I hope you’re not expecting me to close the prediction as correct based on just online rumors. :)
:) Definitely not closed yet, but I figured I would put the link up just as a running update of the prediction.
I am 99% confident that AGI comparable to or better than a human, friendly or otherwise, will not be developed in the next ten years.
I am 75% confident that within ten years, the Bayesian paradigm of AGI will be yet another more or less useful spinoff of the otherwise failed attempt to build AGI.
Shane Legg gives a 10% probability of that here:
http://www.churchofvirus.org/bbs/attachments/agi-prediction.png
My estimate here is a bit bigger—maybe around 15%:
http://alife.co.uk/essays/how_long_before_superintelligence/graphics/pdf_no_xp.png
You seem to be about ten times more confident than us. Is that down to greater knowledge—or overconfidence?
You seem to be about ten times less confident than me. Is that down to greater knowledge—or underconfidence?
I’m not very confident—primarily because we are talking ten years out—and the future fairly rapidly turns into a fog of possibilities which makes it difficult to predict.
Which brings us back to why you seem so confident. What facts or observations do you find provide the most compelling evidence that intelligent machines are at least ten years off? Indeed, how do you know that the NSA doesn’t have such a machine chained up in its basement right now?
It hasn’t worked in sixty years of trying, and I see nothing in the current revival to suggest they have any ideas that are likely to do any better. To be specific, I mean people such as Marcus Hutter, Shane Legg, Steve Omohundro, Ben Goertzel, and so on—those are the names that come to me off the top of my head. And by their current ideas for AGI I mean Bayesian reasoning, algorithmic information theory, AIXI, Novamente, etc.
I don’t think any of these people are stupid or crazy (which is why I don’t mention Mentifex in the same breath as them), and I wouldn’t try to persuade any of them out of what they are doing unless I had something demonstrably better, but I just don’t believe that collection of ideas can be made to work. The fundamental thing that is lacking in AGI research, and always has been, is knowledge of how brains work. The basic ideas that people have tried can be classified as (1) crude imitation of the lowest-level anatomy (neural nets), (2) brute-forced mathematics (automated reasoning, logical or probabilistic), or (3) attempts to code up what it feels like to be a mind (the whole cognitive AI tradition).
My estimates are unaffected by hypothetical possibilities for which there is no evidence, and are protected against that lack of evidence.
Besides, the current state of the world is not suggestive of the presence of AIs in it.
ETA: But this is becoming a digression from the purpose of the thread.
Thanks for sharing. As previously mentioned, we share a generally negative impression of the chances of success in the next ten years.
However, it appears that I give more weight to the possibility that there are researchers within companies, within government organisations, or within other countries who are doing better than you suggest—or that there will be at some time over the next ten years. For example, Voss’s estimate (from a year ago) was “8 years”—see: http://www.vimeo.com/3461663
We also appear to differ on our estimates of how important knowledge of how brains work will be. I think there is a good chance that it will not be very important.
Ignorance about NSA projects might not affect our estimates, but perhaps it should affect our confidence in them. An NSA intelligent agent might well remain hidden—on national security grounds. After all, if China’s agent found out for sure that America had an agent too, who knows what might happen?
I would guess that the NSA is more interested in quantum computing than in AI.
They are the National Security Agency. Which of those areas presents the biggest potential threat to national security? With a machine intelligence, you could build all the quantum computers you would ever need.
This is my sense as well. I also think there is a substantial limit on what we’re likely to learn about the brain given that we can’t study brain functionality with large scope, neuron-level definition, in real time given obvious ethical constraints. Does anyone know of any technologies on the horizon that could change this in the next ten years?
http://lesswrong.com/lw/vx/failure_by_analogy/
From quote in that post:
There’s no reason to spread such myths about medieval history.
The main characteristics of the Early Middle Ages were low population densities, very low urbanization rates, very low literacy rates, and almost zero lay literacy rates. Being in a reference class of times and places with such characteristics, it would be a miracle if any significant progress happened during Early Middle Ages.
High and Late Middle Ages on the other hand had plenty of technological and intellectual progress.
I’m much more surprised that the dense, urbanized, and highly literate Roman Empire was so stagnant.
China also springs to mind. I listened to a documentary about the Chinese empire and distinctly remember how advanced yet stagnant it seemed. At the time my explanation was authoritarianism.
All that is fine.
But 1) I’m not sure anyone has a good grasp of what the properties we’re trying to duplicate are. I’m sure some people think they do and it is possible someone has stumbled on to the answer but I’m not sure there is enough evidence to justify any claims of this sort. How exactly would someone figure out what general intelligence is without ever seeing it in action? The interior experience of being intelligent? Socialization with other intelligences? An analogy to computers?
2) Let’s say we do have, or can come up with, a clear conception of what the AGI project is trying to accomplish without better neuroscience. It isn’t then obvious to me that the way to create intelligence will be easy to derive without more neuroscience. Sure, just from a conception of what flight is, it is possible to come up with solutions to the problem of heavier-than-air flight. But for the most part humans are not this smart. Despite the ridiculous attempts at flight with flapping wings, I suspect having birds to study—weigh, measure, and see in action—sped up the process significantly. The same goes for creating intelligence.
(Prediction: .9 probability you have considered both these objections and rejected them for good reason. And .6 you’ve published something that rebuts at least one of the above. :-)
The NSA does have some scary machines chained in their “Basement,” yet I doubt any of them approach AGI. All of them (that I am aware of—so, that would be 2) are geared toward some pretty straightforward real-time data mining, and I am told that the other important gizmos do pretty much the same thing (except with crypto).
I doubt that they have anything in the NSA (or other spooky agencies) that significantly outstrips many of the big names in Enterprise. After all, the Government does go to the same names to buy its supercomputers that everyone else does. It’s just the code that would differ.
So: you have a hotline to the NSA, and they tell you about all their secret technology?!? This is one of the most secretive organisations ever! If you genuinely think you know what they are doing, that is probably because they have you totally hoodwinked.
Hardly a hotline… A long, long time ago, when I was very young, I wound up working with the NSA for about six months. I was supposed to have finished school and gone to work for them full time… But, I flaked when I discovered that I could get laid pretty easily (women seemed much more important than an education at the time).
I still keep in touch, and I have found that an awful lot of their work is not hard to find out about. They may have me hoodwinked, as my job was hoodwinking others. However, I don’t usually spend my time with any of my former co-workers talking about stuff that they shouldn’t be talking about. Most of it is about stuff that is out in the open, yet that most people don’t care about, or don’t know about (usually because it’s dead boring to most people).
And, I am not aware that I have stumbled onto any secret technology. Just two machines that I found to be freakishly smart. One of them did stuff that Google can probably now do (image recognition), and I am pretty sure that the other used something very similar to Mathematica. I was really impressed by them, but then I also did not know that things like Mathematica existed at the time. At the time I saw them, I was told by my handler that they were “Nothing compared to the monsters in the garage.”
Edit: Anyone may feel free to think that I am a nut-job if they wish. At this point, I have little to no proof of anything at all about my life due to the loss of everything I ever owned when my wife ran off. So, you may take my comments with a grain of salt until I am better known.
Can you be more specific about what you mean by the Bayesian paradigm of AGI? Is it necessarily a subset of good-old-fashioned symbolic AI? In that case, it’s been dead for years. But if not, I can’t easily imagine how you’re going to enforce Bayes’ theorem; or what you’re going to enforce it on.
Here’s an example of what I had in mind by “the Bayesian paradigm”—see especially pp.12-13. Bayesian reasoning may be the one correct form of reasoning about probabilities, just as the first-order predicate calculus is the one correct form of reasoning about the true and the false, but that does not make of it a method to automatically solve problems.
I also had in mind something broader than just Bayesian reasoning, although that’s a major part: the coupling of that with a goal system based on utility functions and their maximisation (the major thrust of the paper I linked).
http://predictionbook.com/predictions/1670
I don’t know how one would judge this and so haven’t made a prediction for this one.
Thanks for putting that up. I hadn’t been aware of PredictionBook, so I’ve just made an account and posted a more precise prediction there myself.
Hopefully my comments and importation of predictions will lead to more PB awareness on LW.
Carry-on luggage on US airlines will be reduced to a single handbag that inspectors can search thoroughly, in 2010 or 2011.
http://predictionbook.com/predictions/1720
2010 is almost over, so the odds of my being right are now considerably less.
Well, we’re only 8⁄24 of the way to the end of 2011, so you could still be right. Ganbaru!
I would say better-than-even chances that sites like Intrade gain prestige in the next decade,
and that betting on predictions will become common (90% that in 2020 there will be a student at 75% or so of high schools who will take bets on future predictions on any subject; 40% that >5% of the US middle class will have made a bet about a future prediction)
naive guesses based largely on http://www.fivethirtyeight.com/2009/11/case-for-climate-futures-markets-ctd.html
I predict further that I will continue to post on LW at least once a month next year (90%) and in 2020 (50%)
Is there any comparable website that you were posting on in 2000 and continue to post on today? I agree that LW is awesome, but web communities have a short shelf life (and a tendency to be superseded as web technology improves).
Probably a good reason to adjust the estimate down. On the other hand I was 11 in 2000 so I wouldn’t have been on this kind of site anyway, and conditional on the prediction that news-betting becomes more prestigious rationality almost certainly will.
Point taken, with the real point being that I have no sense of how long a decade is, so I’ll adjust that down to a 20%
I have stayed in touch with a different web community for five years, although only barely at the level of once a month. So my odds for awesomeness overcoming shelf-lives may be higher than for most.
http://predictionbook.com/predictions/1710
Kind of vague, but I suppose it’s not too hard to do a search and note that the NYT only mentioned Intrade a few times in the 2000s and more in the 2010s.
http://predictionbook.com/predictions/1709
I have no idea how one would measure this one. I’m sure that at any high school you could find a student willing to wager with you on any damn topic you please.
Not including a prediction for the middle class. Already true if you count sports, as many prediction markets such as Betfair do.
http://predictionbook.com/predictions/1711
http://predictionbook.com/predictions/1712
Agree with orthonormal that this is seriously over-optimistic. The only site I even use today that I did in 2000 would be Slashdot, and I haven’t commented there in a dog’s age.
I probably meant for claim 3 to exclude sports.
Well, then you’re using a variant definition of prediction market, and before I can feel confident judging any prediction of yours, I need to know what your idiosyncratic interpretation of the phrase is.
I agree that I wasn’t making the most coherent claim, and since it’s been a long time I can’t guarantee fidelity of what I originally intended.
But my best guess, trying to phrase this as concretely as possible, is that I meant to predict that either
a) sports betting agencies would expand into non-sports venues and see significant business there
or b) newer betting agencies not created to serve sports would achieve similar success
I would be “disappointed” if “non-sports” meant something like player movement between teams and “excited” if it meant something like unemployment rates and vote shares in elections.
Hmm. You seem to be taking the position of a radical skeptic here. Would you agree? That position is almost always associated with sophistry, and it neatly explains everyone’s reaction to you, I believe. AFAIK, there’s really no answer to radical skepticism (that’s acceptable to the skeptics).
ETA: I wish he had had a chance to respond to this. Seems like it more directly addressed the troll’s issues than other comments. Oh well, whatever.
By the end of 2013: Either the Iranian regime is overthrown by popular revolution, or there is an overt airstrike against Iran by either the US or Israel, or Israel is attacked by an Iranian nuclear weapon (70%).
Essentially seconding mattnewport: the price of gold reaches $3000USD, or inflation of the US dollar exceeds 12% in one year (65%).
The current lull in the increase of the speed at which CPUs perform sequential operations comes to an end, yielding a consumer CPU that performs sequential integer arithmetic operations 4x as quickly as a modern 3GHz Xeon (80%).
Android-descended smartphones outnumber iPhone-descended smartphones (60%).
The number of IMAX theaters in the US triples (40%).
http://predictionbook.com/predictions/1699
http://predictionbook.com/predictions/1375
http://predictionbook.com/predictions/1700
http://predictionbook.com/predictions/1698
http://predictionbook.com/predictions/1701
When you say sequential integer operations, do you mean integer operations that really are sequential? In other words, the instructions can’t be performed in parallel because of data dependencies? If not, then this is already possible with a sufficiently wide superscalar processor or really big SIMD units.
But let’s assume you really mean sequential integer operations. The only pipeline stage in this example that can’t work on several instructions at once is the execute stage, so I’m assuming that’s where the bottleneck is here. This means that the speed is limited by the clock frequency. So, here are two ways to achieve your prediction:
Crank up the clock! Find a way to get it up to 12 GHz without burning up.
Make the execute stage capable of running much faster than the rest of the processor does. This is natural for asynchronous processors; in normal operation the integer functional units will be sitting idle most of the time waiting for input, and the bulk of the time and complexity will be in fetching the instructions, decoding them, scheduling them, and in memory access and I/O. But in your contrived scenario, the integer math units could just go hog wild and the rest of the processor would keep them fed. This can be done with current semiconductor technology, I’m pretty sure.
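For concreteness, here is a toy sketch (mine, with made-up operations, not anything from the comment above) of the distinction: a truly sequential chain where each step consumes the previous result, which no amount of superscalar width or SIMD can parallelize, versus independent operations that can run side by side.

```python
def dependent_chain(x, steps):
    """Each iteration reads the previous result -- a true data dependency,
    so throughput is limited by how fast one execute stage can cycle."""
    for _ in range(steps):
        x = (x * 3 + 1) & 0xFFFFFFFF  # mask keeps the value in 32-bit range
    return x

def independent_ops(xs):
    """By contrast, these operations share no data and could all be issued
    in parallel on a wide superscalar or SIMD machine."""
    return [(x * 3 + 1) & 0xFFFFFFFF for x in xs]

print(dependent_chain(1, 5))  # 1 -> 4 -> 13 -> 40 -> 121 -> 364
```

Only the dependent chain benefits from the "faster execute stage" idea; the independent list is already handled by today's wide processors.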
So, either way, kind of an ambitious prediction. I like it.
Have you not heard that they discovered a way to use graphene as a one-to-one replacement for copper in chip production? That alone will allow speeds of 12–15GHz.
I would put near-100% certainty on faster multicore chips running at many times current speeds being available by 2011–2012.
This seems to be still very far from application, a quick search on your claim turned up only this paper that isn’t cited by anybody yet, publicized in a few popular articles.
Let’s assume they put it into practice and start mass-producing processors with graphene interconnects with better-than-copper resistivity. We’ve got two things to worry about here: speed and power.
The speed of signal propagation along a wire depends on RC, the product of the resistance and the capacitance. Graphene lowers the resistance of a wire of a given size, but does nothing to lower the capacitance—that depends on the insulator surrounding the wire and the shape of the wire and its proximity to other wires. The speed gains from graphene look moderate, but significant.
The power dissipated by sending signals through wires will be most of the power of future processors, if current trends continue. Power is a barrier to clocking chips fast. We can overclock processors a lot, but you’ve got to worry about them burning up. Decreasing resistivity improves the power situation somewhat, but the bulk of the interconnect’s influence on power comes from its capacitance. Transistors have to charge and discharge the capacitance of the wires, and that takes power. So on power, graphene will help somewhat, but it’s not the slam-dunk that Valkyrie Ice is expecting.
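To put rough numbers on the "moderate, but significant" claim: here is a back-of-envelope sketch where every figure (wire length, cross-section, capacitance per length, and especially the hypothetical graphene resistivity) is an illustrative assumption of mine, not measured data. The point is only that delay scales with the R·C product, and graphene touches R but not C.

```python
def wire_delay(resistivity, length, cross_section, cap_per_length):
    """Order-of-magnitude RC delay of an on-chip wire, in seconds."""
    R = resistivity * length / cross_section  # ohms
    C = cap_per_length * length               # farads (set by geometry/insulator)
    return R * C

length = 100e-6        # 100 um interconnect (assumed)
area = (50e-9) ** 2    # 50 nm x 50 nm cross-section (assumed)
cap = 0.2e-15 / 1e-6   # ~0.2 fF per um (assumed)

copper = wire_delay(1.7e-8, length, area, cap)    # bulk copper resistivity
graphene = wire_delay(1.0e-8, length, area, cap)  # hypothetical lower resistivity

print(copper, graphene, copper / graphene)
```

Even granting graphene a generous resistivity win, the delay only improves by the resistance ratio (here ~1.7x), because the capacitance term is unchanged.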
tl;dr: Graphene interconnect sounds good, but not fantastic.
Thank you—I was wanting to write something along similar lines in response to Valkyrie Ice’s comment, but wouldn’t have ended up with something this compact.
I’ll add that clocking is just a piece of the puzzle when it comes to making computers that compute faster.
The second estimate in each paragraph is conditional on the first.
By 2020 some kind of CO2 emissions regulation (cap and trade) will be in place in the US (.85). But total CO2 emissions in the US for 2019 will be no less than 95% of total CO2 emissions for 2008 (.9).
Obama wins reelection (.7). The result will be widely attributed to an improving economy (in the media and in polls and whether or not the economy actually improves) (.85)
By 2020 open elections are held for the Iranian presidency (no significant factions excluded from participation) (.5). The president (or some other position selected through open elections) is the highest position in the Iranian state (.5)
“The president (or some other position selected through open elections) is the highest position in the Iranian state (.5)”
Qualify this. Formally, the highest position in the British state is unelected. In terms of political power, the highest position in the British state is elected.
In terms of political power.
http://predictionbook.com/predictions/1680
http://predictionbook.com/predictions/1681
http://predictionbook.com/predictions/452
http://predictionbook.com/predictions/1682
How do you plan to judge #2? It seems rather subjective. (I mean, the economy is one of the factors attributed to every president winning or losing.)
http://predictionbook.com/predictions/1683
http://predictionbook.com/predictions/1684
For the next decade: collaborative filtering.
Just one word: plastics.
Based on this article on collaborative filtering, we already have it. Every time I buy anything online, I am told what other products people buy who also bought what I bought. It is the central component of the StumbleUpon service.
So, what are you predicting?
I’m predicting that in 2020 you’ll look back at this blog comment and say, “Wow, he sure called that one.”
I think it’s more likely people will say “too vague a prediction”.
Too vague a prediction indeed, but also collaborative filtering seems to have become a cornerstone of modern online advertising / content recommendation services.
I look back and say “I wish he had been right!”
I don’t know. It makes sense; but I thought the same thing in 1999.
There is a lot of interest in using CF to sell people more mass-market things, eg Netflix; less interest in helping people find obscure things from the long tail that they might have a special interest in; still less interest in using CF for social networking.
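For what "collaborative filtering" concretely means in the Amazon-style case described above, here is a toy sketch of mine (hypothetical baskets, bare co-occurrence counting, none of the normalization a real recommender would use):

```python
from collections import Counter

# Hypothetical order histories: each set is one customer's basket.
purchases = [
    {"book", "lamp"},
    {"book", "lamp", "desk"},
    {"book", "pen"},
    {"lamp", "desk"},
]

def also_bought(item):
    """Count how often other items co-occur with `item` in the same basket --
    the 'people who bought X also bought Y' signal."""
    counts = Counter()
    for basket in purchases:
        if item in basket:
            counts.update(basket - {item})
    return counts.most_common()

print(also_bought("book"))  # lamp co-occurs most often with book
```

Serving the long tail or social matching would need more than this, e.g. down-weighting globally popular items so mass-market hits don't dominate every recommendation.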
My first prediction is that as is usually the case, political and random events will change the way people live far more over the next year than technology will. Given the current state of the financial system, I would place about even odds on politics having more impact than technology over the next decade, but with the caveat that over such a long time scale political and technological events will surely be interwoven.
There’s no separation to be had between politics and technology.
The biggest influence on technology is regulation which outlaws, restricts, or places huge financial barriers to entry (as with medical research); another non-trivial influence is politically controlled financing of R&D.
And arguably, the biggest influence on politics that isn’t itself political is technology (case in point: modern communications, computers, and the Internet spreading censored information, creating more popular awareness, and coordinating protests).
So I think political and technological events are inseparable over almost any timescale.
I agree that there is little to no separation, but I think a distinction can be made. Namely, there are two different words that mean different things. When predicting what is going to affect people you can probably find a way to split the techno-political mash usefully. This may be as simple as using one word over the other.
It seems pretty vague—do you have any ideas about how this should be measured?
That scarcely seems to be a testable prediction. Random, political, and technological events are tightly interwoven, first of all. Unless you plan to perform an experiment or do some kind of remarkably complex and research-dense correlational analysis, how do you expect to determine whether you were right or wrong?
For instance, if a government sponsored project produces a type of cheap, practical fusion, is that tech change or a political change? Are terrorist attacks random or political?
In any case, I would guess that if you did a survey, people would more often say that technological change was more important.
Secession: If you mean a state trying to leave the US in the next decade, 5%. If you mean a state actually being allowed to leave, I put it at 0%.
Insurrection in the next decade: I’m defining an insurrection as at least 1000 people in the same or closely allied organizations with military weapons taking violent action against the US government: 30%. They’ll lose. It’s certainly possible that my opinion on this is based on reading too much left wing material which is very nervous about the right. On the other hand, 1000 isn’t a lot of people.
All predictions are 10 years out unless otherwise noted.
The rest of the world: Another EU-style organization gets started: 30%. The advantages of having a large population are getting obvious, and it’s astonishing to me to see countries begging for a chance to give up some national sovereignty.
Fabbing: Automated custom shoes: 50%. Possibly wishful thinking—I have feet which aren’t quite in the easy-to-fit range.
On one hand, there’s a massive market. On the other, it’s probable that I wildly underestimate how hard it is to put a shoe together. And shoes are a way of signaling status, so a custom machine shoe for the general market can’t look much different from standard shoes.
Custom machine shoes will start out expensive, and be for the athletic market.
Another product: Sous vide cookers (poaching food in vacuum-sealed bags at precisely controlled temperatures: has many good effects and should be especially appealing to geeks) will be down to $200 within 5 years (80%) and half as common as microwaves within 10 years (50%).
Obama will be re-elected unless he is assassinated (5%), there is a major terrorist attack on US soil (I’m not betting on that one, too random, and how he responds will have an unpredictable effect, too), or the economy doesn’t improve (I think it will, but don’t have a percentage).
Just as a check on 0% for a state being allowed to secede, consider this.
What probability would you put on there being sufficient devastation on the eastern seaboard of the US in the next decade from (for example) bio or nuclear attacks or terrorism? If that happened, what would be the probability that the US would be disbanded as a going concern? I realize you would likely assign very small numbers to these possibilities, but possibly >0%. If you assign >0% to both, then you assign >0% to a state being allowed to secede. (Recapitulating an objection voiced to me by Anna Salamon when I made a claim of extremely small probability for some risk or another.)
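The structure of that objection, with numbers I have invented purely for illustration: chaining small probabilities multiplies them down, but never to exactly zero.

```python
# Hypothetical inputs -- not anyone's actual estimates.
p_devastation = 0.001               # major east-coast attack this decade
p_disband_given_devastation = 0.01  # US dissolves, given such an attack

# P(A and B) = P(A) * P(B | A): tiny, but strictly greater than zero.
p_secession_allowed = p_devastation * p_disband_given_devastation
print(p_secession_allowed)
```

So "0%" has to be shorthand for "negligible," since any nonzero estimates for the components force a nonzero product.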
There’s a lot of ruin in a nation. The main Axis nations of World War II—Germany, Italy, and Japan—provide some examples of nations that were really, really traumatized and damaged. Out of the three, only Germany split apart, and that only because of competing foreign occupiers. Even then it reunited as soon as it got the chance. I don’t think there’s enough hostility or just plain difference between most of the states west of the Mississippi to cause them to separate, especially under threat of external attack. If anything, I’d expect them to band together as tightly as possible.
I don’t think the US would go away even if the eastern seaboard was nothing but glassy craters and deadly microbes.
That being said, it’s conceivable that some technological or ideological change could weaken the central government to the point that states would be let go, though it’s hard to imagine something that drastic shaping up in as little as 9 years. I’m also not sure what change could happen which would break the federal government while leaving state governments intact.
Ok, though—in a decade, something very odd could happen. I don’t think a lot of people were predicting the dissolution of the USSR before it happened.
Meanwhile, sous vides don’t seem to be a lot cheaper or more popular, but I didn’t put as extreme a probability on that one.
Surely you mean “my estimate rounds to 0%”?
I meant 0%, but you probably have a point that I should present the chance as negligible rather than non-existent. Is there a limit, though? Does it make sense to say that there’s a non-zero chance that a state will propose secession and be allowed to leave by tomorrow morning?
Yep. It even makes sense to say that there’s a non-zero chance that a state seceded last month, and that we haven’t heard about it yet. The word ‘epsilon’ is useful in such cases; it means ‘nearly zero’ or ‘too close to zero to calculate’.
“Negligible” is a much better word, in my opinion, since epsilon is (conventionally) an arbitrarily small number, not a sufficiently small number. You could use “infinitesimal”, but nothing in reality is actually infinitesimally small (including probabilities), so again you’d be inaccurate. I always get frustrated when people misuse precise mathematical words that have lots of syllables in them. The syllables are there to discourage colloquial use! I don’t mind if you try to show off your knowledge, but for heaven’s sake don’t screw up and use that precise brainy term wrong!
You’re straddling a strange line here. You’re demanding a certain amount of strictness that is itself short of perfect strictness.
There’s no such thing as an “arbitrarily small number”. There are numbers chosen when any positive number might have been chosen. In particular, a given epsilon need not be “negligible”. Really, to conform to the strict mathematical usage, one shouldn’t say “epsilon” without first saying “For every”. Once you’re not demanding that, you’re not using the “precise mathematical words” in the precise mathematical way.
I’m not saying that you’re on some slippery slope where anything goes. But I wouldn’t say that AdeleneDawner is either.
Actually, I’m fine with people speaking vaguely, I just don’t want to see terminology misused.
“Through adding zeroes between the decimal point and the 7 in the string ‘.7’, the number we are representing can be made arbitrarily small.” Is this a misuse of the word “arbitrarily”?
The important thing about an epsilon in a mathematical proof is, conventionally, that it can be made arbitrarily small. This is a human interpretation I am adding on to the proof itself. If the important thing about a variable in a proof was that the variable could become arbitrarily large, my guess is that a variable other than epsilon would be used.
Your usage is fine, so long as it’s clear that “arbitrarily small” is a feature of the set from which you are choosing numbers, or of the process by which you are constructing numbers, and not of any particular number in that set. This is clear with the context that you give above. It wasn’t as clear to me when you wrote that “epsilon is (conventionally) an arbitrarily small number”.
’Kay.
I’m not the only one you should be ranting at, though—I picked it up here, not in a math class, and I suggested it because it’s in common use.
Yep, it is probably unrealistic to expect random folks to avoid picking up multisyllable terms in the way they pick up regular words.
Don’t forget “modulo”.
Suppose that Nancy meant 0% except for a few special cases that she didn’t think should be relevant. Then she could say, ‘0% modulo some special cases’.
I often use epsilon in the same informal way AdeleneDawner does, though I’m perfectly aware of the formal use. Still, I think the informal use of “modulo” is more defensible—it maps more closely to the mathematical meaning of “ignoring this particular class of ways of being different”
Could you explain this in greater detail? This way of using “modulo” bothers me significantly, and I think it’s because I either don’t know about one of the ways “modulo” is used in math, or I have an insufficiently deep understanding of the one way I do know that it’s used.
In modular arithmetic, adding or subtracting the base does not change the value. Thus, 12 modulo 9 is the same as 3 modulo 9. So, for example, “my iPhone is working great modulo the Wifi connection” implies that if you can subtract the base (“the Wifi connection”) you can transform a description of the current state of my iPhone into “working great”.
(For your amusement: modulo in the Jargon File. Epsilon is there too.)
Edit: Actually, in this case, you would have to add the base, because my Wifi isn’t working, but the statement remains the same.
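The arithmetic behind the analogy is easy to check directly: adding or subtracting the base doesn't change a value's residue.

```python
# 12 and 3 fall in the same residue class modulo 9.
assert 12 % 9 == 3 % 9 == 3
assert (3 + 9) % 9 == 3   # adding the base changes nothing
assert (12 - 9) % 9 == 3  # neither does subtracting it
print("12 = 3 (mod 9)")
```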
You can get a hacker sous vide setup for under $200 today. http://news.ycombinator.net/item?id=2058982
I think you could when I made the prediction—what I had in mind was a sous vide cooker that you didn’t need to put together.
http://predictionbook.com/predictions/1713 & http://predictionbook.com/predictions/1714
http://predictionbook.com/predictions/1715
An EU-style organization—you’ll have to be more specific than that. Every region has a bunch of multinational orgs like the UN. Africa has the African Union, Asia has ASEAN, SAARC, BIMSTEC, etc. Maybe you would prefer a prediction like ‘at least 10 nations in Asia/Africa/South America will create a new common currency and switch to it’?
http://predictionbook.com/predictions/1716 I agree that this one is wishful thinking on your part. :)
http://predictionbook.com/predictions/1717 & http://predictionbook.com/predictions/1718 I agree that it’s perfectly possible (surely right now) to sell a sous-vide cooker for $200; I question that there is demand enough, and really have no idea about the business environment. Cynicism tells me that there is no enormous revolution in American cuisine in the offing to the point where effectively half the middle-class has a sous-vide cooker, though. I mean come on.
http://predictionbook.com/predictions/452 for his re-election
Thanks, mostly.
I think it would have been more fair to make my predictions 1. “A state will not try to secede” and “A state will not succeed at seceding”.
Other than that, it’s interesting to see how uncertain I am that some of my predictions are the result of my own thinking rather than emotional effects from people I’ve been reading.
(3) What I had in mind for an EU-style organization was dropping restrictions on trade and travel. At this point, I’m not as optimistic, but that feels more like mood than new information. I don’t know whether dropping restrictions on trade requires a common currency.
(4) Computer-fabbed custom-fitted shoes are a lot easier than AI. If you don’t think that’s at all likely within 10 years, does this affect any predictions you might have for AI? Your answer is about there not being a market for them—I’d say that the market isn’t perceived. Either way, I don’t get the impression that that tech is ready to do it yet.
It might make more sense for a computer to measure the feet and make the pieces, but have human beings put the shoes together. :-/
I’m also assuming shoes would be mailed rather than being shoes on demand—shoes on demand would be another jump in technology.
Thinking about it a little more, the footprint in stores could be pretty small—just the measuring device. I’m not sure how much support from store staff it would be apt to need at the beginning.
This sort of development is also dependent on how much capital is available, and I’m not feeling optimistic about that.
(5) The conveyor belt for new aspects of food (perhaps unsurprisingly) seems to be more efficient for prepared food and ingredients than for cooking methods. I still haven’t had sous vide food myself, but everything I’ve heard about it makes it sound wonderful. I think there will be a sudden shift with sous vide food becoming available in mid-range restaurants followed by a lot of people wanting to cook it.
ETA: The website didn’t just format the numbers into pretty paragraphing, it “corrected” the numbers.
For 3, a monetary union isn’t necessary; look at the US & Mexico & Canada, thanks to NAFTA. Certainly helps, though. I don’t really see any areas which might do this sort of thing. Open borders and no trade barriers is a very Western 1st World sort of thing to do, and the obvious candidates like Japan don’t really have an incentive to do so. (Japan has no land borders, so having passport checks doesn’t really increase the cost of flying or boating to it.)
For 4: I think custom-fitting is already possible, and has been since the early laser scanners came out in the… ’80s? But like the sous-vide, I’m not confident in their uptake. (It’s kind of like jetpacks and flying cars and pneumatic postal systems. We have them; we just don’t use them.)
This is part of standard Markdown; you can number each item ‘1.’ if you want! If you want a literal number you can escape it with a backslash, or you can do like I did and insert a paragraph after the bullet (newline, and then indent the paragraph by 4–5 spaces).
Damn, on 3 I didn’t say what I meant. The genuinely big deal is freedom to relocate and work.
Do you have a source for computerized custom-fitting of shoes? The big deal isn’t just the fitting, though, it’s reasonably-priced manufacture.
Afaik, jet-packs can be made, but carrying enough fuel for significant travel isn’t feasible.
As for flying cars, it finally occurred to people that there were weather and pilot-safety issues.
I don’t see those sorts of considerations applying to sous vide or computerized custom shoes.
The futuristic prediction which seems to be not happening because people just don’t want it is video which shows your face while you’re talking on the phone.
Someone I know has a foot problem. Her orthopedist recommended having a scan done to produce inserts to adjust the shape of her regular shoes, and said if that didn’t work, then entirely custom shoes could be made. So computerized custom-fit shoes do exist, but they’re considered a medical item which makes them expensive.
That sounds to me as though the inserts are customized, but the custom shoes would be made by humans.
That one’s already happened. My new iPhone does video calls, and so does Skype on any computer with a webcam. That wasn’t driven by demand, though, it was more that the technology all became ubiquitous for other purposes and it was easy to stitch it together to provide videophone functionality, even if it isn’t actually used very much.
IIRC, I read it a long time ago in a mouldering paperback of Alvin Toffler’s The Third Wave. (Or was it Future Shock?) But even without having read about clothes in particular, I have read about 3D models of statues etc. being generated through rotating the object while shining a laser on it; thus obviously one can generate a human model (I think CGI already does this), and fit clothes on that model. I would be deeply shocked if no one has ever used laser modeling to fit garments of some kind.
Considerations like expense and minimal benefit don’t apply? Mm, well, as Marx said, nous verrons (“we shall see”). Figuring out whose perception of reality is clearer is one of the points of recording predictions.
You don’t have to be shocked. Here is one.
I think what user-specific clothing and shoes currently lack is sufficiently advanced robotics. If you are doing the obvious thing, cutting out bits of material and attaching them together, you have quite a few problems: you are having to manipulate non-standard-sized bits of flexible material. A production line deals with many identically sized and shaped bits of material, so you can change molds/tools depending on the size of the shoe.
The knitting machine above removes that consideration as it produces the finished garment in one piece.
I found this pdf on customized shoe production from 2001 (requires login) while trying to find some videos of shoe manufacturing to confirm my ideas. I don’t have time to look into it, but it seems relevant to the discussion.
The hard part of computerized custom shoes might be designing the shoes rather than measuring the foot. Also note that the shoe has to fit while you’re walking, though that seems like just adding difficulty rather than a whole new problem.
I should have been more precise about the difference I see between flying cars and sous vide cooking. Flying cars include infrastructure and group effects in a way that sous vide cookers do not.
No predictions about the state of the environment? Is every point of contention too close to call, then?
China is the 2nd biggest economy in 2020 (99%). Note I’m counting the EU as lots of countries, not as one big economy. Counting the EU together, China will be the 3rd biggest.
Pirate Parties will have been in government for a time in at least one country by 2020 (90%)
Pirate Parties will win >=10 seats in the European parliament in 2014 (75%), and <=30 seats (75%).
The Conservatives will win a majority at the next UK general election (60%), there will be no overall majority (37%), or some other outcome will occur (3%).
Do you have bets on Intrade or Betfair for those guesses? It’s probably better for you to bet directly than for me to do arbitrage on you :-) They have around 68% Conservative victory, 26% no overall majority, and around 6% Labour victory.
Betfair
Now, that’s unfair. You’ve already won that one, and any look at the numbers would’ve told you this was a like 99.999% prediction or something.
No, there is a reasonable (IMO >1%) chance China could overtake the USA or EU in the next ten years.
I’m a little confused what you’re predicting. China is already the 2nd biggest economy, my understanding was, unless the EU is counted as a single economy. So your 99% prediction is actually ‘China will not become the world’s largest economy and will remain #2/#3’?
Next 10 years:
Nativism discredited (80%)
Traditional economics discredited (80%)
Cognitivism/computationalism discredited (70%)
Generative linguistics discredited (60%)
To elaborate somewhat: By #1 I mean that in the fields of biology, psychology and neuroscience the idea that behaviours or ideas or patterns of thought can be “innate” will be marginalised and not accepted by mainstream researchers.
By #2 I mean that, not only will behavioural economics provide accounts of deviations from traditional economic models, but mainstream economists will accept that these models need to be discarded completely and replaced from the ground-up with psychologically-plausible models.
By #3 I mean the idea that the brain can be thought of as a computer and the “mind” as its algorithms will be marginalised. I give this lower odds than nativism being discredited only because the cognitivist tradition has managed to sustain itself through belligerence rather than evidence and is therefore likely to be more persistent and pernicious. Nativism, on the other hand, has persisted because of the difficulty of experimentally demonstrating that certain behaviours are learned rather than innate (as well as belligerence).
By #4 I mean that traditional linguistics, and especially generative grammar, will be marginalised. This one has long puzzled me since the generative grammarians based their ideas on intuition and explicitly deny a role for data or experiment (or the need to reconcile their beliefs with biology). The main problem has been the absence of a viable alternative research program. This is beginning to change.
If we could agree on a suitable judging mechanism, I would bet up to $10,000 against you on #1 and on #3 at those odds (or even at substantially different odds). I also disagree on the latter claim in #2, but that’s not as much of a slam dunk for me as the others.
Can you unpack what you mean by innate. I think babies would have a hard time surviving if sucking things wasn’t a behaviour that was with them from their genes.
And more generally, the distinction innate/learned is overly simplistic in a lot of contexts; rather, there are adaptations that determine the way organism develops depending on its environment. The standard reference I know of is
J. Tooby & L. Cosmides (1992). `The psychological foundations of culture’. In J. Barkow, L. Cosmides, & J. Tooby (eds.), The adapted mind: Evolutionary psychology and the generation of culture. Oxford University Press, New York.
A few thoughts:
It would be valuable to do an outside view sanity check: historically, how frequently have research programs of similar prestige been discredited?
There are all the standard problems with authority—lots of folks insist that they’re in the mainstream and that opposing views have been discredited. Clearly nativism &c. have been discredited in your mind; when do they get canonically discredited? Sometimes I almost think that everyone would be better off if everyone just directly talked about how the world really is rather than swiping at the integrity of each other’s research programs, but I’m probably just being naive.
Re 3, my domain knowledge is somewhat weak, so everyone ignore me if my very words are confused, but I’m not sure what would count as a refutation of the mind being an algorithm. Surely (surely?) most would agree that the brain is not literally a computer as we ordinarily think of computers, but I understand algorithm in the broadest sense to refer to some systematic mechanism for accomplishing a task. Thought isn’t ontologically fundamental; the brain systematically accomplishes something; why shouldn’t we speak of abstracting away an algorithm from that? Maybe I’ve just made computationalism an empty tautology, but I don’t … think so.
I don’t think the innate/learned dichotomy is fundamental; it’s both, everyone knows that it’s both, everyone knows that everyone knows that it’s both. Like that old analogy, a rectangle’s area is a product of length and width. What specific questions of fact are people confused about?
I think these research programs represent something without a clear historical precedent. Traditional economics and generative linguistics, for example, could be compared to pre-scientific disciplines that were overthrown by scientific disciplines. But both exhibit a high degree of formal and institutional sophistication. I don’t think pre-Copernican astronomy had the same level of sophistication. Economics also has data (although so did geocentric astronomy) whereas the generative tradition in linguistics considers data misleading and prefers intuitive judgement. What neither has is a systematic experimental research program or a desire to integrate with the natural sciences.
Cognitivism is essentially Cartesian philosophy with a computer analogy and experiments. In practice it just becomes experimental psychology with some extra jargon. Nativism, too, comes from Cartesian philosophy (Chomsky was quite explicit about this). While cognitivism has experiments it has an interpretation that isn’t founded in experiment (the type of computer the brain is supposed to be and the algorithms it could be said to run is not addressed) and an opposition to integration with the natural sciences (the so-called “autonomy of psychology” thesis).
These research programs are similar to pre-scientific research programs but have managed to persist in a world where you have to attempt to “look scientific” in order to secure research grants and they reflect this fact.
You point to many problems and I wouldn’t take any bets because of these. It would be too difficult to judge who had won. On the nature/nurture debate: Empiricism evolved into constructivism/interactionism (i.e., the developing organism interacting with the environment with genes driving development), which is the dominant view in biology, and it’s not obvious what, precisely, modern Nativists believe. But it is obvious that they still exist since naive nativist talk persists almost everywhere else. It’s similarly difficult to figure out what computationalists mean by their analogies and the degree to which they intend them to be analogies vs. literal propositions. This is probably why the natural sciences tend not to base research programs on analogies. What is clear is that they have a particular style of interpreting their results in terms of representations and sequential processing that is clearly at odds with biology and display no interest in addressing the issue.
First, this is the genetic fallacy. Secondly, I don’t take Chomsky’s authority seriously.
The experimental evidence that, say, Steven Pinker presents in How the Mind Works for innate mental traits and for the computational perspective are sound, and have nothing to do with Cartesian dualism.
The point is that the views have their origins in philosophy rather than experiment. We’re not dealing with a research program developed from a set of compelling experimental results but a research program that has inherited a set of assumptions from a non-empirical source. This is more obviously the case with computationalism, where advocates have shown almost no interest in establishing the foundational assumptions of their discipline experimentally, and some claim that to do so would be irrelevant. But it’s also true for nativism where almost no thought is given to how nativist mechanisms would be realised biologically.
I’m not entering any of these into PredictionBook because all 4 strike me as hopelessly argumentative and subjective. (Take #1 - what, you mean stigmatised even more than it already is as the province of racists/sexists/-ists?)
Regarding 3, there’s no way to find evidence against it (or for it, for that matter). You can’t look at a given system and measure its sentience. The closest to that anyone’s ever attempted is to try and test intelligence, but that assumes cognitivism/computationalism, among other things.
I agree with orthonormal, except that I don’t have $10,000 to bet.
Incorrect. You seem to have a concept of what rationality is that’s not very close to the reality; the reality doesn’t involve ignoring data, but rather giving it appropriate weight relative to the situation at hand. The high probability that you’re not actually interested in learning about or doing the things that we’re doing here is definitely relevant to any thread you make an appearance in.
I propose that commenting that you’re using the words ‘evidence’ and ‘legitimate’ in nonrational ways is also a reasonable response.
I doubt this. I suggest that an accurate definition of ‘troll’ would hinge on whether the suspected troll is trying to evoke emotions in eir target, and if so, which ones. This does make it hard to determine whether someone is actually a troll, or just socially clumsy, but we seem to have evidence of the former in your case: Your repeated comments about other posters’ emotional and mental states suggest that you’re thinking about those mental states much more than is normal, and an attempt to alter those states would be a likely reason for such interest.
If you’re actually interested in learning why we believe that this isn’t true, there are several relevant discussions that we could provide you with links to. I somehow doubt that you’re actually interested in that, though.
next year:
Germany’s (as of 2009) foreign minister Westerwelle will trip over a mistake in international diplomacy and the German CDU/FDP government will fall apart.
more pedestrians than ever get killed in motorcycle crashes in London, and motorcycles get banned
the congestion zone will become bigger
next ten years:
Chinese economy collapses and plunges the world economy into its biggest crisis ever.
China colonises Africa even further, and builds more factories. At the end of the decade, production has moved from China to Africa, leaving billions of Chinese unemployed; Africa gets richer. This marks a turning point.
I would take you on most of your predictions at even odds. I’d gladly bet at 50% odds that:
No collapse of German government in 2010
No ban on motorcycles in London in 2010
No Congestion zone enlargement in 2010
Chinese economy in 2020 is more prosperous than in 2009
Chinese industrial production in 2020 larger than in 2009
Non-agricultural employment in China in 2020 larger than in 2009
I was right on all of these:
No collapse of German government in 2010
No ban on motorcycles in London in 2010
No Congestion zone enlargement in 2010
I’m ridiculously confident about predictions for 2020 as well.
I give 75% probability that a RPSOP will be launched before 2020. (And that I will be downvoted for this prediction!)
What technology would a RPSOP require, and what exactly would a RPSOP do?
If you knew enough that you predicted your comment would be downvoted as it is, then you could’ve explained your reasons better in the first place.
At least we already have humans with prenatal genetic screening. This can qualify as a (slow) Self-Optimizing Process of the human kind. Better and better children are born.
A software/hardware analog could also be rapid.
Downvoted for lack of a clear definition of a RPSOP.
Edit: A definition that I can understand. :P
He answered elsewhere.
(Downvote seconded; please use standard terms, define terms, and make responsive replies.)
Presumably, it stands for Really Powerful Self-Optimizing Process, aka Really Powerful Optimization Process, aka superintelligence.
(Downvote seconded.)
There are very few predictions that you could append this to without earning a downvote.
What is an RPSOP? First page of Google doesn’t seem to know, and searching on LessWrong finds only this comment.
http://wiki.lesswrong.com/wiki/Really_powerful_optimization_process
“S” stands for “Self” here.
Care to bet?
(Parent was at −1.) Upvoted for correctly predicting being downvoted.
I predict that I will be upvoted for correctly predicting this.
User’s karma is below 0 but doesn’t show that way, for those of you wondering how someone could possibly be an LW reader since February and still not know better than to pull magic numbers out of his ass.
You have no way of knowing that he’s pulling them out of his ass. And even if he is, I think it’s appropriate for this thread. There are dozens of equally unjustified predictions below that you didn’t jump on. WTF?
I looked through this thread and didn’t see anything equally unjustified. The only prediction that came close was the one about a US state seceding, and that was by a user who was willing to make bets on their other predictions. This is just good-old-fashioned “Hey Rocky, watch me pull an amazing prediction out of my hat!”
I agree, but I do think that given your status in the community maybe it behooves you to be nicer if at all possible.
Strongly agree.
(I think what happened here was that Eliezer was particularly annoyed at this prediction, since it’s the sort of thing that gives his field a bad name.)
Down that road lies madness. I’m not Gandalf.
While the estimate does seem crazy, no-one’s actually offered Thomas a bet, and mattnewport didn’t offer to bet; Thomas has, thus far, done what mattnewport did, except he didn’t offer realistic bets.
If you’re going to do a bit of policing of LW, you ought to either address your comment to him and offer him advice to improve, or else block him: if his comments are unerringly useless, you shouldn’t be warning the flock about the wolf but removing him.
Upvoted for the reference, which I got.
A user’s karma is not a causal explanation for any ratio of Less Wrong readership to number-ass-pulling exhibited by that user.
I shouldn’t think there’s anything wrong (within this thread) with just stating your probability estimates, so long as you’re doing so in good faith. YMMV on whether the conditional clause holds in this case, obviously...
Every prediction of the future is a wild guess. It was just a wild guess back in 1931, that an A bomb will explode in a decade or two.
Need 100 more examples?
It’s just my estimation, if you have a better one, please fell free to express it!
I predict that no legitimate evidence for my individual mortality will be presented to me during 2010… or ever. (100%)
I’ll give you some great odds then. I will give you $5. In exchange, you will write a will giving me (or my heirs, should I be dead) your entire estate. Cryonic preservation counts as death. Deal? This is like a free $5 for you. If your current net worth is greater than $100,000, you are over 50 years of age, or you have high earning potential, we can talk about bumping that up to $25! It’s like I’m giving it away!
What’s up Jack, can’t you comprehend what you read? You have to present the evidence of my mortality to ME. It is irrelevant whether or not I ever die.
And you’re the one who is trying to tell me that Unknowns’ linked article means anything?
If you’re hoping to bait someone into slaying you so you can satisfy your deathwish without getting stiffed on life insurance payouts to your loved ones, you’re missing a few steps.
1) You haven’t provided your address! Read up on trivial inconveniences.
2) Post someplace with a larger population of violent criminals, or at least gun owners. We’re pretty harmless around here.
3) Choose a venue where people cannot vent their frustration with you via downvoting. That way, it may build up to an efficacious level!
If you’re being a troll: save yourself some trouble and go away. Eliezer’s been irritable lately and is apt to boot you.
If you’re actually, sincerely trying to have a fun exchange about whether you might be immortal (quantum or otherwise):
1) The little numbers on your comments are, in fact, important and should interest you. They indicate how you’re being received, on average. You may have corrupted public opinion of this account to the point where you can’t salvage it, but it can still provide valuable information, and nothing’s stopping you from trying again in a month or two when you’ve mulled things over. If this is the venue you’ve chosen to have your discussion on, surely you think we make suitable interlocutors—and hopefully, that will let you find informational value in numeric disapproval too.
2) When people quibble with you over what words mean, that requires your attention too! They’re trying to find a way to communicate with you. They’re not being mean. If they were being mean, they’d just interpret you in the stupidest way possible and then snicker at you, not try to share information about how words get used.
3) Pretty much nobody here will find themselves strangely compelled by protestations that a) your opponents have poor reading comprehension; b) it is unfair to expect you to do any background reading even when links are supplied; c) you are winning; d) we are engaged in groupthink; e) you are an unheard-of non-quantumly immortal creature (although I’d be pretty impressed by certain sorts of video evidence!). Saying it once might be too much to resist! I understand! But repeating yourself isn’t going to do anything that saying it once didn’t.
This should go in some kind of “Loud Passerby’s Guide to Less Wrong” wiki page so that we would be able to post a link next time… The only way to stop repetition is to distill the argument in a reusable form.
If I were a troll, would going away somehow be more beneficial to me than being booted? I mean, does this Eliezer actually physically kick suspected trolls? Or would the end result be the exact same? And if I were a troll, do you suppose saving myself some trouble would be my overriding concern?
Why should I care how I’m being received by anonymous faceless strangers, whose posts I may never even have read, and who may not even be taking part in the discussion? Are there Wrongie awards up for grabs? I believe the girl mentioned something about a… check? Please don’t tell me that I should alter my thinking and the expression of my ideas to suit popular opinion! Is that what you do?
So, public opinion is going to hold something I wrote on this thread against me forever, and use it against me on other threads, whether it agrees with me on those other threads or not? Have I stumbled into the Old Fishwives’ forum by mistake?
I began posting here about a minute after I arrived for the first time. I’m just beginning to learn about the mindsets of some of the participants here. I’m not overly impressed so far. I’m prepared though, to give it a chance. No point in going by first impressions. I’ll need a few more listenings before I decide if I like it or not.
I received a response that said (in its entirety) “Taboo legitimate”. The word ‘Taboo’ was a link to an article about something or other that appeared irrelevant. Do you suppose that poster was trying to find a way to communicate? When I questioned him, he apologized for his brevity and expanded on his post, and I withdrew the word ’legitimate”, as it was redundant anyway. Others chimed in (as you do here) telling me how I was the one at fault.
Yet, Jack, for one, appears to have poor reading comprehension.
I don’t protest that it’s unfair. I state that I’m not prepared to do it. If you don’t like it, either stop providing such reading material in lieu of originally-phrased arguments, or don’t engage me. Nobody is forcing you to respond to me.
I didn’t make this a competition, however, I am winning the debate. It’s immaterial that people don’t find themselves compelled by my stating such (in the face of the many votes that state otherwise, and yet fly in the face of the rather obvious missing evidence of my mortality). Is humility big here?
So, you’re saying that the group, as a body, denies it is engaged in Groupthink? Is there any room for discussion on that?
I didn’t say that. I said I could be that. Part of your job is to present me with evidence that I’m not. America was unheard-of… until it was heard-of. And it was right there. The people just couldn’t hear of it, at the beginning.
Right, enough fun. Let’s stick to the topic at hand from now on.
Looks like people are getting fatigued with downvoting you, so I’ll be deleting all your comments from now on.
I can’t say I’m happy that you got so many replies, either; but I suppose if the community were sufficiently annoyed with the repliers, they could downvote the replies.
Psychology note: I’d rather people had stopped replying too, but looking at any particular reply, I’m more likely to upvote it out of positive affect (“take that!”) than remember to think about incentives, and if I did remember, I’d still have a hard time downvoting a true, (locally) on-topic comment. Probably others are doing the same. Time to be more self-aware.
I found the question “What is wrong with this person?” quite interesting and some of the responses were insightful in this regard. We don’t get to encounter extreme irrationality very often and I think the experience of failing to communicate is a good one to have occasionally. Being reminded what bad epistemic hygiene looks like is a great reminder to keep washing up. I also think one or two of the replies include good material to put on a Less Wrong intro/about/faq page if we ever get around to doing it.
The problem is that once you start arguing with someone giving up without resolution is like ending sex before orgasm. So it went on much longer than it should have.
Do people get their downvotes back when a downvoted comment is deleted?
As of about 9 months ago, no.
Not that I know of. If one were going to implement a behavior like that, people would get downvotes back when a post went to −3 or under and became invisible by default. But coding resources are scarce.
While I applaud drumming out a troll, I don’t mind that he wasn’t ignored (at least for a while).
Meeting a crazy person/annoying person/young earth creationist/troll gives you the chance to practice Bayesian Judo. There have been times when I wished I’d dedicated more practice to countering ridiculous but verbose interlopers.
Edit: Looks like Jack said this earlier and better.
But it’s not Bayesian Judo: you can’t argue with a rock, a person whose intent is to ignore what you say.
Fair point. Perhaps it’s arguing with the keeper of the box?
I upvoted this (it was at −1) because despite the fact that I was one of the ones replying, I’m not interested in hearing any more of it either.
I will admit that I feel differently.
All of this commenter’s comments are contained in a single thread, of which adefinitemaybe is the parent. It will already be down-voted to the bottom of the Open Thread, so when these comments are no longer ‘Recent’, if someone is reading this far, it is because they find this thread interesting—or even hilarious, like I do.
I find the thread hilarious not only because the dialogue is clever, but also because I’m enjoying the feeling that adefinitemaybe is the mad, crazy, irreverent jester in the court who provides us the opportunity to laugh at ourselves.
I mean, this is great stuff:
It’s a problem if someone is belligerent all the time, and it’s really annoying if they post in more than one thread. (The last troll was really quite a troll because he would drop inane comments all over the place – down-voting those comments would do nothing to hide them because they were nested in otherwise good threads.)
Given that I happen to enjoy this character (without approving of the behavior) – thinking, say, of Han Solo or even Churchill—I would like to suggest the following umbrella solution for all troublesome behavior; trolls and belligerent personalities alike:
A person can’t comment unless their karma is above −10, and this should be announced in some apparent place. That way people with belligerence and intelligence can game their comments so that they don’t go below the threshold. Keep in mind that this really would force them to be a positive influence; because the community could always down-vote all their comments to kick them out.
Oh no. I’ve been quite convinced by this thread. It is clearly impossible to present you with anything you’ll recognize as evidence of your mortality.
I’m serious about the bet though. Or does your belief that there is no evidence that you are mortal not change your belief that you are indeed mortal?
Yes, it is clearly impossible. As I predicted. Although you could have at least tried. Does this mean I DO win? Or is this some bizarro debateland, where you still win?
I’m not interested in your bet. I might die. There is an equal weight of evidence (i.e., zero) for my immortality. You want to bet $5 against my entire estate on a coin toss. No thanks. Perhaps that’s why I didn’t predict that I wouldn’t die in 2010… or ever.
Would it be accurate to summarize your views as follows:
(1) There is zero evidence for the proposition that you are mortal
(2) There is zero evidence for the proposition that you are immortal
Therefore: You will act as though you are immortal
?
I at first assumed (when you had only the first two comments to your name) that you had been a lurker here for some time, had tried to make either a joke or a half-serious point, and might have been blindsided by the downvotes. In the intervening time, I have changed my opinion of you, and I think that Less Wrong is not suitable for you.
There are plenty of Internet sites composed of people whose purpose is simply to signal that they are the cleverest person there. Although I do not expect you to be aware of this fact, Less Wrong is not primarily such a place. This community is defined by, among other things, wanting to understand others’ objections before dismissing them, even should that require five minutes of reading and pondering. If you look back over your exchange, several of us have replied in good faith to clarify your assertion and expand upon the areas of disagreement. You have not shown the least interest in returning the favor.
Your presence here is wasting everyone’s time, including yours; you might be much happier, I think, engaging in this behavior on another forum where it is more the norm. Although you may not believe yourself to be a troll, you will almost certainly be regarded as one here.
Thank you. I don’t care what you think about my suitability here, and I suspect your motives for so advising me to be based in unjustified ill-will. Please restrict your subsequent responses to me to the topic in hand.
I made a prediction. If you wish to challenge that prediction, the onus is upon you to present me with evidence of my mortality. I will either accept that evidence or reject it. Readers may judge the rationality of each side, based upon our exchanges. Nothing else is relevant. Unless you require me to explain the prediction to you.
I feel my time here has been far from wasted. If you feel that commenting on my prediction or responding to my posts represents a waste of your time, perhaps you should consider just doing something else. Responding to a different post, for example. If you are being forced to respond to my posts, by someone who may be inside your house watching you, please indicate by blinking one eye.
What behavior? Making a prediction, then defending it? No, I think the New Year’s Prediction thread on this site is as good a place as any. In fact, it’s ideal.
I’m sorry, I don’t speak weasel. Did you say “Although you are apparently not a troll, I’m going to do my best to convince others that you are, and, perhaps, eventually have you banned. Basically because I feel somewhat threatened by your manner of thinking.”? If I’m a troll, who, until provoked (as here), sticks to arguing his side of a relevant debate in good faith, what does that make you and others, having made specific posts telling me how I’m not suitable and would be happier elsewhere?
Again, I’m not interested in what you think about me, or in receiving any advice you might have for me. If you don’t want to engage me, don’t. If nobody engages me, I’ll leave. Obviously.
Apart from that, I’m ready and willing to respond in good faith to anyone who makes an originally-phrased argument here against my prediction. I’m not interested in being directed to read the ideas of third parties. I can find my own way to the library.
I’m really curious why you didn’t take Jack’s $5 above.
Clearly, if you observed that you were 1,000,000,000 years old, this would support the theory that you are immortal. But you observe that you are not that old, but much younger. Therefore, by Bayes’ theorem, it becomes more probable that you are not immortal. Since you assign odds of 100% to not being presented with such evidence, you should be willing to wager any amount, against nothing, that you will not receive such evidence. You may therefore now send me all your money.
If I am immortal I am ageless. I may look a certain way, but that is irrelevant. Even if I was born, in the normal human way, I’d still be ageless, and I’d still be immortal. That I took human form for a few years is irrelevant. Unless you can show that no immortals were ever born, in the normal human way, and appeared to age, and even appeared to die (immortals can’t really die), your ‘evidence’ does not constitute evidence.
“100%” can’t denote odds. And, being an immortal, I’m not a betting man.
Even if it is possible that you always existed, you do not remember always existing. If you remembered always existing, this would increase the chance that you are immortal. Since you observe your lack of memory of always existing, your chances of being mortal increase.
As for this: "Unless you can show that no immortals were ever born..." etc., I do not need to show this unless I want to prove with 100% certainty that you are not immortal. However, I do not need to prove this: it is enough to give some evidence, however limited, that you are mortal, and I have done this.
No, you haven’t done that.
I may have memory of always existing. However, that would be irrelevant. If I told you about it, and you accepted my evidence, and presented it back to me as evidence, it could only represent evidence of my immortality. I require evidence, from you, of my mortality.
I may have no memory of always existing. However, that would be irrelevant also, given that human beings, apparently, (and, for all anybody knows, immortals) have no knowledge of the rules governing immortality. Perhaps immortals don’t have such memory. Plenty of human beings suffering amnesia have no memory of having lived in previous periods. That doesn’t constitute evidence that they didn’t live in those periods.
http://lesswrong.com/lw/ih/absence_of_evidence_is_evidence_of_absence/
Do you have any evidence of my mortality to present to me or don’t you? Please, don’t respond with any more links to other site pages in lieu of original, rational thought and coherent argument.
The article you were linked to explains exactly what you're getting wrong. I looked it up to give it to you before I saw that Unknowns already had. It is a waste of everyone's time to repeat the argument the article makes in our own words. It is extremely short. If you can't be bothered to read it then everyone is going to assume that you aren't arguing in good faith. They would be right to do so.
If we are all to just read the articles, what’s the point of having a discussion forum? If Unknowns wants to use the content of that article to make his argument, then he should do so, and make the argument in his own words. It is sheer laziness (and verging on the plagiaristic) to just point to articles and say “My argument is in there somewhere. Please respond as soon as you identify it. Of course, I’ll get all bent out of shape if you misinterpret what I meant to say, although I’m prepared to accept some associated praise for having ALSO thought that which the article’s author has taken the time to write down and publish.”
“If you can’t be bothered to read it then everyone is going to assume that you aren’t arguing in good faith.”
I’m taking the time to construct original arguments here. I’m also taking the time to read all original responses that people offer, and respond to those. I’m receiving some responses to those arguments in the form of links to articles penned by third parties. And you have the audacity to threaten me that people will assume my motives are unwholesome if I refuse to accept that sort of lazy response as legitimate (in the non-geek sense of the word), and to inform me that they would be right to do so? Suppose, in lieu of this response to you, I just directed you to The Collected Works of Friedrich Nietzsche? Would you read them and get back to me with your rebuttal? If the article is as short as you say, shouldn’t Unknowns have the common courtesy to paraphrase it here?
I made a prediction. So far it has come true. Nobody has yet presented me with any evidence of my mortality. If that pains any of you to the point of abusing the privilege of this site’s forward-thinking voting system, please, take a moment to give yourself a good slap.
The point in you reading the articles is that the inferential distance between you and the rest of the members of this community is so large that communication becomes unwieldy. Like it or not, the members of Less Wrong (like the members of most communities which engage in specialized discourse) chunk specific, technical concepts into single words. When you do not understand the precise meaning of the words as they are being used, there is a disconnect between you and the members of the community.
The specific problem here is in the use of the word “evidence.” By evidence, we mean (roughly) “any observation which updates the probability of a hypothesis being true.” By probability, we mean Bayesian probability. I’m not going to go through the probability calculation, but other commenters are correct: given the evidence that you are not really, really old, you should revise the probability assigned to your hypothesis of immortality down significantly.
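Since I said I wouldn't go through the probability calculation, here is the shape of it anyway, in the odds form of Bayes' theorem. The hypothesis, prior, and likelihoods are all invented placeholders, not measurements:

```python
# Odds form of Bayes' theorem: posterior_odds = prior_odds * likelihood_ratio.
# "Evidence" is any observation whose likelihood ratio differs from 1.
def update_odds(prior_odds, p_obs_given_h, p_obs_given_not_h):
    """Multiply the prior odds for hypothesis H by the likelihood ratio of an observation."""
    return prior_odds * (p_obs_given_h / p_obs_given_not_h)

# H = "this commenter is immortal"; observation = "looks like an ordinary human".
odds = 1.0  # even prior odds (an assumption, purely for illustration)
odds = update_odds(odds, p_obs_given_h=0.01, p_obs_given_not_h=0.99)
probability = odds / (1 + odds)  # works out to 0.01 with these invented likelihoods
```

The point is structural: any observation more expected under mortality than under immortality moves the odds, and "I have not yet observed my own death" is not such an observation in reverse, because it is expected under both hypotheses.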
If you are not going to do the requisite reading that would enable you to participate in this discussion community, it would probably be best for both you and everyone here if you just left now. If you do feel like participating, I highly recommend going through the sequences.
Anyway, not to worry. We can still be sure of taxes.
You use this word, “evidence”. I do not think it means what you think it means.
Your link directs to one definition of the word. Whatever interpretation of "evidence" you reasonably use in context (which qualification excludes yours), you won't be able to present me with any for my individual mortality... ever. Note: The challenge is not to provide evidence for my theory, but of my mortality.
“I am mortal” and “I am immortal” are definitely two theories, and each of them can be supported or opposed by many types of evidence. Providing evidence against “you are immortal” is providing evidence for your mortality. I pointed to some such evidence in my other comment. If you do not accept this type of evidence, what is your definition of “legitimate evidence”?
I don’t claim to be either mortal or immortal. I predict that no evidence of my mortality will be presented to me in 2010... or ever. My definition of legitimate evidence is evidence which would leave me without a reasonable doubt that I’m going to die... and stay dead... forever. In the absence of any such evidence being presented to me, I have no recourse but to consider myself to be immortal. You may too. I’m particularly fond of coconuts and virgins.
Really though, do you guys ever just say die, from the get go, and move on to the next, actually debatable, thing? It’s like I threw ball or som.… {Woof!} As I was saying, it’s like I threw a B-A-L-L or something.
So if I showed you that there was a 90% chance of that happening, you would still say there is no legitimate evidence, just because a 10% chance would be enough for a reasonable doubt?
If you show me that there’s a 90% chance of that happening, I’ll make you an archbishop. I say there’s no chance of that happening. So that, I’m afraid, leaves you awkwardly situated amongst the highly smiteable.
You have a 0% credence in every state of affairs, including ones that you haven’t thought of, which include a mechanism for you (the software) to note the death of your hardware? Or do you mean something else?
Taboo legitimate.
I think he means nobody will prove to him that he has been killed.
I’m new here. Is there a shorthand code in operation? Or are words rationed? Perhaps you’d like to expand your comment?
Zack M Davis’s point is explained by the article he has linked to.
The tl;dr version is that your use of a certain word (in this case “legitimate”) is not helping a productive conversation. Instead, explain exactly what you mean when you say “legitimate”, because the word can mean different things, so it’s not clear which meaning you’re using.
I’ve already retracted the word “legitimate” as being redundant (although I don’t see how you can have any fruitful discussion if you’re going to ask for the meanings of everyday words like “legitimate”. Don’t you find such a practice bogs things down?)
Now, can someone please offer a friendly explanation as to why ALL of my comments have attracted negative votes, even though people are still engaging me, and still failing to provide any evidence as to my mortality?
Even my original post, which simply outlines my 2010 prediction, has garnered 4 negative votes. Wasn’t I supposed to make a New Year’s prediction on this New Year’s Predictions thread? Why would a 2010 prediction get any negative votes, or any votes at all, until either it came true or 1/1/11?
You’re being downvoted because people think you’re either being irrational or trolling.
New Year’s prediction: adefinitemaybe will be banned from Less Wrong. Sixty-five percent.
I made a prediction. People challenged me on that...weakly. I responded to them, rationally and in good faith. People are voting me down because I issued a challenge they can’t meet; because I hit them with a conundrum that they can’t solve. I’ve dented pride. And, because I’m new, the humiliation they irrationally feel is doubled. How can I be trolling when all I’ve done is respond, amicably and on-topic, to posters’ comments? Isn’t it true that all the hostility is coming from their side?
“New Year’s prediction: adefinitemaybe will be banned from Less Wrong. Sixty-five percent.”
Why, what rule have I broken? Is there a rule about riding roughshod over wannabe thinkers’ intellectual shortcomings?
Mommy, the geeks won’t let me sit at their table! And, because I lack the “karma” that only the geeks can issue, I can’t do anything about it.
New Year Prediction: The Less Wrong community becomes so insular and inbred that its members discover that more and more of their thoughts begin to be spawned retarded. Have you people had a look at yourselves recently?
Rationale behind my prediction:
I don’t dislike you (I’ve upvoted some of your comments, downvoted others, and left some alone entirely), but people who are being consistently downvoted have been told to leave in the past. You match that profile the best of anyone I’ve seen on this site—better than someone who Eliezer recently asked to leave. Eliezer was himself downvoted when he announced that, so I’m not sure whether this rule is still in effect, which is why I estimated sixty-five percent instead of ninety.
I’ve been signed up here for about three hours—in total length of membership, not accumulated posting time. There has never been a time when any of my posts had a higher vote score than zero. It’s possible that that is the result of a net negative vote on each, but I’m inclined to think that I haven’t received one positive vote from anyone. Perhaps you were confusing me with someone else.
If I’m asked to leave by an authority, I will leave immediately without complaint. However, I can’t envision having any kind of enjoyable posting future here anyway. I don’t think people here are interested in exploring new territory, as much as belonging to a pretend thinkers club.
It is the result of a net negative or zero vote on each. Independent of any action by other members, I know I’ve upvoted three of your posts—“I’ve already retracted the word “legitimate” as being redundant...” “I may have memory of always existing...” and “Anyway, not to worry. We can still be sure of taxes.” I am not sure why you would doubt me on this.
Did you read any of the articles here or on Overcoming Bias before signing up?
Right, sorry. You say (presumably jokingly) that there’s no “legitimate” evidence for your mortality, but surely the fact that you’re human and humans have been known to die eventually is probabilistic evidence that you are mortal. I was trying to hint at this by indicating that there were hidden assumptions in the word legitimate, but on reflection I might have been misusing/overloading the taboo terminology. Do downvote the grandparent.
Perhaps I should have dispensed with the word “legitimate”. In retrospect it was redundant. If evidence is not legitimate, it is not evidence.
Again, I don’t know if I am human, in the generally accepted sense. I don’t know that I’m going to die. Even if I am a normal human being, however, I can’t accept that what has purportedly happened to any other human being must happen to me. Especially, given the fact that I don’t know what actually happened to the vast majority of people that were ever born. Far more people have “disappeared” out of my life (after having briefly entered it) than have apparently died (to my satisfaction, evidence-wise). So, for me, individually, the evidence would suggest that most people disappear (go on living elsewhere in the world—out of my ken), rather than die.
Where did anyone get the idea that the preponderance of evidence shows, to the satisfaction of any individual, that most human beings die? Isn’t that just hearsay, based on very small evidence samples?
Is it possible that only human beings who maintain close relationships with other human beings die? Could it be that many loners are immortal? Is there any global agency that is matching deaths to births, and investigating all anomalies?
Click the ‘Taboo’ link in the grandparent comment of this one.
Did you click on the link?
Are you banking on subjective quantum immortality?
I’m not “banking on” anything. So far, I am immortal. As I am immortal, I am not subject to the limitations that mortal humans have been seen to have been subject to until now.
Are you a human being, adefinitemaybe? It seems that all humans in the past have died, and all humans currently alive appear to be following the same pattern that leads to eventual death. How are you different from other humans who are known to be mortal?
Your suggestion that you are immortal is basically the same as saying, “cars are known to break down under certain conditions, and my car is just like the others, but this specific car hasn’t broken down yet so I’ll assume that it will never break down.”
Am I a human being (if it’s not a personal question)? I don’t know. What am I, omnipotent? Perhaps I took human form. Anyway, even if I know something you don’t, the prediction is that I will never be presented with evidence of my mortality… obviously, by any of you 6.8 billion mayflies.
Again, whether I’m actually immortal or not is irrelevant.
I shall assume that you are human (which I think is virtually certain) and speaking in good faith (which I shall assume for the sake of the conversation). You say “I don’t know if I am human, in the generally accepted sense”, but I do not believe you.
These being so, evidence that you will die and not live again, and that you did not exist before you were conceived, lies in such observations as these:
The tendency of every human body to stop working within a century and then disintegrate. Not merely the observation that people die, which is as old as there have been people, but the extensive knowledge of how and why they die.
The absence of any reliable evidence of survival of the mind in any form thereafter.
The absence of any reliable evidence of existence of that mind before conception.
The absence of any reliable evidence of a mind existing independently of the physical body; the existence of much reliable evidence to the effect that the mind is a physical process of the brain.
Further argument against the idea of any sort of intangible mental entity separate from material things, can be found here.
Of course, many have argued otherwise. Not merely books, but whole libraries could be collected arguing for the existence of souls independent of the body and their immortality. But even if the matter were seriously contendable, that would not alter the existence of the evidence I have given, merely put up other evidence against it.
So there is the evidence that you asked for. I am of course only summarising things here. But what else is possible? If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem, what can I do but advise them to study mathematics for a few years?
That being said, however, you might indeed be immortal! There is one slender chance: advances in medicine. We live at the first time in history when we can begin to see the chance of remedying the fragility of the body. Just beginning, and as yet only a chance. Had you been born two centuries ago, you would certainly be dead today, drowned in the river of time. The challenge, should you choose to accept it, is to stay alive long enough to benefit from medical advances that will enable you to swim upstream, and perhaps even overtake the current. That is the only route to immortality there is, and the best reason there has ever been to take care of your body as well as you possibly can, to make it last until you can catch that boat.
Edited to add: two slender chances! The other is cryonics. Live as well as you can for as long as you can, and if radical life extension still isn’t available, get your head frozen. And don’t get Alzheimer’s.
You are not. You began with a bare demand for evidence of your mortality. (Why? Why that question, and why here?) When you didn’t like the answers, you demanded more loudly, then threw a tantrum. You even gave yourself a Signal from Fred here:
You intended that ironically, but it exactly describes your situation: a child who has wandered into a conversation among adults and understands nothing. You do not even understand that there is something for you to understand. But the remedy is easy: read every post linked to here. It’s about the size of a book. Post nothing until you have finished. If you understand what you read there—not agree with, but understand—then there will be enough of a common background to have a useful conversation.
Why wouldn’t you just take it as read that I’m speaking in good faith? You’ve used a lot of words in attempting to paint me as a country bumpkin, not fit to tie your intellectual sandals. That you preface all that with a comment about having to overtly assume my good faith makes me think you’re not that sure about the bumpkin thing.
You can’t just assume I’m human. If that were valid, we could all just assume whatever we wanted here, and claim we had won our arguments.
Apart from your beliefs being entirely irrelevant, how is it possible for you to form an opinion about what I claim not to know, that is not entirely founded on your emotion? Since the greatest philosophers have struggled over the ages with the question “Am I?”, I don’t see how “Am I human?” will likely ever have a cut and dried answer.
“The tendency of every human body to stop working within a century and then disintegrate. Not merely the observation that people die, which is as old as there have been people, but the extensive knowledge of how and why they die.”
How many human bodies have you personally witnessed stop working and begin disintegrating, within 100 years? I don’t grant that that is a tendency at all. We only have information about those who have died. If we are to examine the chances of my immortality, we must look for those who haven’t died. Do you have any relevant input with respect to people who haven’t died? If not, does the fact that you, personally, don’t, constitute evidence of anything? If not, does the same apply to all other individuals? If so, may we say that there is no reliable evidence that humans all die?
Do you have any reliable evidence for the existence of the mind at ANY time? If so, can you present it to me? That you purportedly think will not convince me. Isn’t it true that the existence of mind can only ever be hearsay (and, no, I’m not singling you out here)?
Oh! please. What an entirely stupid thing to write. It’s fundamental that evidence can’t be produced for the non-existence of a thing. What are you going to say, “It isn’t there, none of us present can experience it, so it can’t exist”?
Again, you have no reliable evidence for the existence of mind (in the form it is widely thought to exist, i.e., one each, inside our bodies somewhere, etc.). For all you know, you’re plugged into the Matrix.
Wow, is that what got your backs up? The idea that I might be trying to prove the existence of God? Is that what all the witch-hunting is about? My other prediction is future formation of an Atheist Inquisition, as Atheism gradually takes the classic form of a religion.
I resent your framing the debate (which you, contradictorily, have felt the need to participate in at length) as not seriously contendable. You’ve given no such evidence. Everything you’ve said relies on the definite prior existence of things as yet not proven to exist, and the definite non-existence of certain other things. Your arguments then, are entirely untenable.
“If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem...” Do you ever even think a little bit before you type something? Or do you just copy and paste from your Bumper Book of Insults for Use by the Pompous?
That’s not being argued. The challenge is for you to present me with evidence of my mortality. So far, you’ve listed a lot of irrelevancies about what you personally believe about “human beings”. All your evidence stands or falls on a) human beings all having died in the past (which, of course, we KNOW they haven’t, or we wouldn’t be having this conversation), b) all human beings being conventional, and c) my being a conventional human being too. For it to carry any weight at all (and I suspect it wouldn’t ever), you’d first have to prove that there is only one kind of human being, that those human beings all die (and have all died, say, within 200 years of being born), and that I am a human being (and not just in one physical shape and form either, but entirely).
Certainly? Does that word have a special meaning here I don’t know about? Can you list any other certainties? If you can make the list long enough, we can dispense with any further musing here. For my part, I know of nothing that is certain. Of course, I’m only a humble child amongst knowledgeable adults here.
Are you the Omnipotent One they said we should seek out? You could be in the running for Atheist Pope one day.
I didn’t. I began by making a prediction on a prediction thread. I didn’t ask for responses. I was then challenged on that prediction and have since defended it. Can we say in light of that, that your “Why that question, and why here?” question is silly (or is it something someone in a synod of atheists is bound to ask)? When I didn’t “like” the answers, I showed how they were inadequate. My responses never got “louder”. That you think they did, probably has more to do with faulty wiring in your brain, and perhaps your own tendency to try to shout people down. Have your speakers checked. And I didn’t throw a tantrum. I merely questioned the rationale of corrupting the voting system by using it to put down supposed “heresy”. To censor. To maintain group integrity. To encourage Groupthink.
I understand that you are a pompous windbag who can’t form a coherent thought, due to his brain being fogged by a fantastic hubris. Was that “3. The absence of any reliable evidence of existence of that mind before conception.” part of the adult conversation? What about “If someone who knows no mathematics at all starts babbling to me about the 4-colour theorem...”? I may be a child, but those statements seem stupid to me.
My statement re “the geeks table” had more to do with the sad reality that complete censorship and ideas control is more assured here than at any Medieval synod.
Oh well, that settles it. You win. You now move on to the quarter-final round against the guy who thinks he’s Napoleon at Bedlam Hospital. It should be a cracker!
Groupthink gone wild. “Will you confess your heresy, and bow down and kiss the ring? If you do, all this torture will stop.” I don’t have to read and regurgitate other people’s ideas. My brain makes its own ideas. Ideas, apparently, that encourage you to respond in long diatribes. Do you mean you’d write MORE if I first read the articles before posting again? Please feel free not to respond at all.
Based on your observation that your direct evidence for human mortality is limited, you conclude that you will never receive evidence, under some definition, that you are mortal. Do you have any evidence (under the same definition) that I am mortal, or indeed that anybody else is? If yes, could you please explain the difference in conclusions from identical evidence?
No, I have no evidence for your mortality. Although it’s possible that I could someday have such evidence (based on the generally-accepted definition of mortality), I could never be in a position to present YOU with any.
My underlying interest in this theme lies in the direction of why we blindly accept our own mortality on such little individually-beheld evidence. Is it possible that, as with increasing average human lifespan, dying has more to do with belief about dying than with any physiological limitations? Accidents and murder, etc., apart, could we believe ourselves to live to 200 years, if we could shake off the ingrained belief in the inevitability of death? Could avoidance of news reports involving death help? Are we really to believe that gains in average human lifespan are solely due to improvements in factors external to the body? If so, why is it that many remote, relatively under-developed areas produce so many centenarians? Does belief that it’s easily achievable to live to, say, 90 years have nothing to do with individually achieving such longevity? And does “genetics” have more to do with monkey see, monkey do than we imagine?
Note: In 1900, global average lifespan was 31 years. By mid century, it was 48 years. In 2005, it was 65.6 years. http://www.who.int/global_health_histories/seminars/presentation07.pdf
You do have some evidence that you are similar to other people. Consequently, if you had evidence for the mortality of someone else, this would be evidence for your own mortality. You have admitted that it is possible to have evidence for the mortality of someone else, and therefore it is possible for you to have evidence of your own mortality.
May we define what ‘you’ is?
For example, if ‘you’ is a username here on LessWrong (‘adefinitemaybe’), then you could be “mortal” because your account can be deleted.
Or if you identify with the atoms you are composed of, then the issue of your mortality is again different...
Later edit: OK, you’re ‘apparently human’. Please don’t respond to this message, as I plan to delete it since it’s apparently noise.
Which, to all intents and purposes, means that I’m immortal. Please form an orderly queue to worship, leave offerings, etc.
I would really rather bet against you. Let’s select a suitable arbitrator and translate that probability into some (finite) odds.
Say, hypothetically, that every living relative of yours out to fourth cousin is captured and brought before you. They are then, every man, woman, and child, beaten to death with a rubber chicken. The assailant then begins to beat you with the aforementioned toy and you exhibit similar symptoms of physical decay to your previously bludgeoned kin. No unbiased arbitrator would judge that no legitimate evidence for your individual mortality has been presented to you. Short of non-Occamian priors, the evidence is clear.
(First, there is nothing to bet on. Your mission, s.y.c.t.a.i., is to provide me with evidence of my individual mortality. Whether I actually die at some point or not is irrelevant.)
So, if all your relatives were already dead, and your heart stopped beating for one reason or another, there would be little point in attempting to revive you? Could it be that all my dead relatives were mortal, while I am not? Even if I bleed when pricked by some defective design element of a Chinese-made rubber chicken? And how could I know for sure that these mere mortals were actually relatives of mine. I mean, tsk, they’re a bit ephemeral, aren’t they?
I require evidence of my mortality, not my propensity to bruise and bleed when hit. I predict that I’m never going to be presented with such evidence.
Meanwhile, I’ve noticed that the voting appears a little biased toward people who are losing the debate. What’s that all about? Groupthink?
The bet would (quite obviously) be on whether you are provided evidence of your individual mortality.
No, you have it backwards. Chewbacca was born on Kashyyyk but lives on Endor.
Orthonormal was kind enough to provide you with an explanation of what evidence means.
You already have overwhelming evidence of your mortality. Providing more by beating you with a rubber chicken until you were bloody and bruised would just be icing.
Votes would have hovered around 0 if you had let it go when it turned out your joke didn’t quite work. Meanwhile, I had best not reply further lest I be found to violate the “don’t feed the trolls” injunction.
“Votes would have hovered around 0 if you had let it go when it turned out your joke didn’t quite work. Meanwhile, I had best not reply further lest I be found to violate the “don’t feed the trolls” injunction.”
Wow, that’s pretty harsh. Can you provide any evidence to back up that accusation as to my intent? How do I know you’re not just a sore loser?
Meanwhile, I’m posting in good faith, and I think I’m holding my ground pretty well. The “there’s no real evidence that humans always die” thing that occurred to me (see Zack’s comments on this thread segment) strikes me as very discussable.
It sounds like you are posting in good faith. Just go easy on the “but I’m winning! lolz, groupthink!” stuff, that tends to be a tipping point.
I do recommend you look a bit closer at what people are telling you about ‘evidence’, it’s important. I have been involved with communities in which finding clever ways to say “That isn’t evidence. Where is your evidence?” in response to any given piece of evidence is rewarded with status, and the stronger the evidence ignored the more respect is granted. This, for the most part, isn’t one of them. If you continue to speak nonsense and fail to comprehend those who are engaged with you, you’ll just be voted down to oblivion.
No, you accused me of being a troll. Are you now stating you believe me to be a troll posting in good faith?
“If you continue to speak nonsense and fail to comprehend those who are engaged with you you’ll just be voted down to oblivion.”
What a perfectly rational argument. I’m speaking nonsense, therefore, you are right and I’m wrong, and I must change. Brilliant… if a trifle lazy. Okay, tell me what to say so that I get voted most popular boy. What’s considered acceptable rational thought around here?
I must say that I loved your ‘Just go easy on the ”...groupthink!” stuff, that tends to be a tipping point .’
And that from a “rational thinker”!