Good that someone pointed this out! I think this is correct and an important point. Voting is to a large extent about expressing loyalty to king and land (or system and government, for those of you who do not live in constitutional monarchies). It is one of the processes that build trust in society and thus enable efficient coordination. Looking just at who will win the election is too narrow a perspective to properly understand the effect of voting.
Does anyone know what these two pictures are and where they come from? Nemesis is making what I think is a not very accurate critique of LessWrong on the Swedish Skeptics internet forum. It is in Swedish, and sorry, I do not have time to translate it into English. I do not know for sure, but I suspect that Nemesis did not produce the pictures himself but found them somewhere. If anyone knows, I would be glad to hear. (If someone happens to recognise them already, fine, but I am not asking anyone to spend significant time on it.)
“what can we do to prevent some small group of humans (the SIAI, a secret conspiracy of billionaires, a secret conspiracy of Google employees, whoever) from steering a first-mover scenario in a direction that’s beneficial to themselves and perhaps their blood relatives, but harmful to the rest of humanity?”
Actually, if they managed to do that, then they have managed to build an FAI. A large (perhaps the largest?) risk that some (such as SIAI, I think, but I am not an expert) point to is that they think they are building an FAI (or perhaps an AI too weak to be really dangerous) but that they are mistaken in that assumption. In reality they have been building a uFAI that takes over the world, and humanity as a whole is doomed, including the small minority of humanity that the AI was possibly supposed to be friendly to.
There seem to be three different problems here. First, to analyse how dangerous AIs in general are. Second, if they are dangerous, how can one make an FAI, that is, an AGI that is at least beneficial to some? And third, if an FAI can be built, to whom should it be friendly? As I interpret your post, you are discussing the third question and the dangers related to it, while hypothetically assuming that the small group building the AGI has managed to solve the second question? If so, you are not really discussing why someone would build a uFAI half-way on purpose, but why someone would build an FAI that is unfriendly to most humans?
Yes. The orbital angular momenta span the same space as the linear momenta, so in principle they cannot add anything to MIMO and similar techniques. (Practical issues can of course in some cases make one or the other basis more effective under various circumstances.)
Landmark Education is probably yet another of those non-serious self-improvement course providers. I do not know too much about them myself, but e.g. the Swedish Skeptics, who are often quite reliable, are among those who have accused Landmark Education of using non-serious methods. E.g., from "Landmark lämnar Sverige" ("Landmark leaves Sweden"): Landmark Education, an American company that offers courses in personal development, is abandoning its operations in Sweden, reported Dagens Nyheter 2004-06-08. The reason is a substantial decline in interest. It is believed to be related to several critical reports in various media, including on TV4. Critics argue that the courses resemble ecstatic revival meetings and mainly aim at recruiting new participants, or at getting participants to perform otherwise unpaid work for Landmark. Several cases of mental breakdown have occurred in persons who have taken Landmark's courses. (Translated from Swedish.)
I will show up.
In what chapter is the outbreak of World War I treated?
I read The Origins of Major War by D. C. Copeland some time ago. He argues that WW1 broke out because Germany feared that Russia, due to its large size and its then rapid industrialisation and economic growth, would become a strategic threat in the future. Therefore Germany had better try a war to weaken Russia while Germany's relative military strength was at its peak, which the German government estimated to be in 1914. So, according to Copeland, the German attack on France (which they hoped would be a quick blitzkrieg as in 1871, although that failed) was primarily to clear their back for their main objective of attacking Russia.
I can recommend A Guide to Introductory Physics Teaching by Arnold B. Arons (or maybe Teaching Introductory Physics, which seems in practice to be a second edition of the former, although I have not read it). I read it more than ten years ago and don't remember much more than that I found it a good book. It is centred on high-school level physics but also has some content on related subjects, e.g. on how to show students how to think about why we believe what we believe we understand.
That it should be possible to algorithmize science seems clear from the fact that the human brain can do science, and the human brain should be possible to describe algorithmically. If not at a higher level, then at least, in principle, by quantum electrodynamics, which is the (known and in principle computable) dynamics of the electrons and nuclei that are the building blocks of the brain. (If it is to be done in practice it would have to be done at a higher level, but as a proof of principle that argument should be enough.)
I guess, however, that what is actually meant is whether the scientific method itself could be formalised (algorithmized), so that science could be "mechanized" in a more direct way than building human-level AIs and then letting them learn and do science by the somewhat informal process used today by human scientists. That seems plausible. But it has still to be done, and it seems rather difficult. The philosophers of science are working on understanding the scientific process better and better, but they seem still to have a long way to go before an actually working algorithmic description is achieved. See also the discussion below on the recent article by Gelman and Shalizi criticizing Bayesianism.
EDIT “done at a lower level” changed to “done at a higher level”
I wrote a backlink to here from OB. I am not yet expert enough to do an evaluation of this. I do think, however, that it is an important and interesting question that mjgeddes asks. As an active (although low-level) rationalist I think it is important to try, at least to some extent, to follow what expert philosophers of science actually find out about how we can obtain reasonably reliable knowledge. The dominating theory of how science proceeds seems to be the hypothetico-deductive model, somewhat informally described. No formalised model of the scientific process seems so far to have been able to answer the serious criticism raised in the philosophy of science community. "Bayesianism" seems to be a serious candidate for such a formalised model, but it seems it still has to be developed further if it is to answer all serious criticism. The recent article by Gelman and Shalizi is of course just the latest in a tradition of Bayesian critique. A classic article is Glymour's "Why I am Not a Bayesian" (also in the reference list of Gelman and Shalizi). That is from 1980, so probably a lot has happened since then. I myself am not up to date with most of this development, but it seems to be an important topic to discuss here on Less Wrong, which seems to be quite Bayesianistically oriented.
Very interesting. If you find time, could you elaborate on these? I am particularly interested in hearing more on the criticism of the skeptics movement.
Eliezer Yudkowsky and Massimo Pigliucci just recently had a dialogue on Bloggingheads.tv. The title is The Great Singularity Debate.
After Yudkowsky at the beginning gives three different definitions of "the singularity", they discuss strong artificial intelligence and consciousness. Pigliucci is the one who quite quickly takes the discussion from intelligence to consciousness. Just before that they discuss whether simulated intelligence is actually intelligence. Yudkowsky made an argument (something like): if the AI can solve problems over a sufficiently broad range of areas and give answers, then that is what we mean by intelligence, so if it manages to do that then it has intelligence. I.e., it is then not "just simulating having intelligence" but is actually intelligent. Pigliucci, however, seems to want to distinguish between those and say "well, it may then just be simulating intelligence, but maybe it is not actually having it". (Too difficult for me to summarize very well; you have to look for yourself if you want it more accurately.)
There it seemed to me (but I am certainly not an expert in the field) that Yudkowsky’s definition looked reasonable. It would have been interesting to have that point elaborated in more detail though.
Pigliucci's point seemed to be something like this: for the only intelligence that we know of so far (humans, and to a lesser extent other higher animals), intelligence comes together with consciousness. And about consciousness we know less, maybe only that the human biological brain somehow manages to have it, and therefore we of course do not know whether or not e.g. a computer simulating the brain on a different substrate will also be conscious. Yudkowsky seemed to think this very likely, while Pigliucci seemed to think it very unlikely. But what I lacked in that discussion is: what do we know (or reasonably conjecture) about the connection between intelligence and consciousness? Of course Pigliucci is right that for the only intelligence we know of so far (the human brain), intelligence and consciousness come together. But to me (who does not know much about this subject matter) that seems not a strong argument for discussing them so closely together when it comes to artificial intelligence. Maybe someone here on Less Wrong knows more about the connection (or not) between intelligence and consciousness? For a naive non-expert like me, intelligence seems (rather) easy to test for: just test how well the thing solves general problems? Whereas to test whether something has consciousness, I would guess that a working theory of consciousness would have to be developed before a test could be designed?
This was the second recent BHTV dialogue where Pigliucci discussed singularity/transhumanism related questions. The previous one I mentioned here. As mentioned there, it seems to have started with a blog post of Pigliucci's where he criticized transhumanism. I think it is interesting that Pigliucci continues his interest in the topic. I personally see it as a very positive establishing of contact between the "traditional rationalist/skeptic/(cis-)humanist" community and the "LessWrong-style rationalist/transhumanist" community. Massimo Pigliucci very much gave the impression of enjoying the discussion with Eliezer Yudkowsky! I am also pleased to have noticed that recently Pigliucci's blog has now and then linked to LessWrong/Eliezer Yudkowsky (mostly by Julia Galef, if I remember correctly; too lazy to locate the exact links right now). I would very much like to see this continue (e.g. Yudkowsky discussing with people like Paul Kurtz, Michael Shermer, Richard Dawkins, Sean Carroll, Steven Weinberg, Victor Stenger, realizing of course that they are probably too busy for it to happen).
Previous BHTV dialogues with Eliezer Yudkowsky have been noticed here on LessWrong, but not this one (I hope it is not just that I have missed that post). Therefore I posted this here; I did not find a perfect place for it, and this was the least bad I noticed. Although my post here is only partly about "Is Eliezer alive and well" (he surely looked so on BHTV), I hope it is not considered too much off-topic.
An interesting dialogue at BHTV about transhumanism between cishumanist Massimo Pigliucci and transhumanist Mike Treder. Pigliucci is, among other things, blogging at Rationally Speaking. This BHTV dialogue is partly a follow-up to Pigliucci's earlier blog post "the problems with transhumanism". As I (tonyf, July 16, 2009 8:29 PM) commented then, despite the title of his blog post, what formed the basis for Pigliucci's then rather sweeping criticism was more of a (I think) misleading generalisation from an article by some Munkittrick than an actual study of the "transhumanist" community. The present BHTV dialogue was in a rather different tone, and it seemed Pigliucci and Treder understood each other rather well. (As of now I do not see any mention of the dialogue on Rationally Speaking; it would be interesting to see if he will make any further comment.)
I do not have time to comment on the dialogue in detail. But I will say that both Pigliucci and Treder failed to distinguish between consciousness and intelligence. Pigliucci pointed out very clearly that the concept of "mind uploading" presupposes the "computational hypothesis of consciousness" to be true, but (at least from a materialistic point of view) it is not at all clear why it should be true. From that, however, he tacitly drew the conclusion (so it seemed to me, at least after a single viewing of the dialogue) that [general] intelligence also depends on that assumption, which I cannot see why it should. Is not the connection (or lack of it) between consciousness and intelligence a so-far open question?
This is a standard semiclassical motivation for why gravitons most probably exist (I think from Steven Weinberg's Gravitation and Cosmology, but I have long since lost the book so I am not sure): In the limit of weak gravitation, GR looks similar to the Maxwell equations. In particular, there should exist gravitational waves. (They have not yet been detected experimentally, but if GR is at least approximately correct they should exist.) This means that you could in principle build a gravitational-wave microscope. Say you want to measure the position of a test particle using this microscope. Now, if gravitational waves were actually classical, you could use arbitrarily feeble waves and thus get an arbitrarily small recoil on the test particle, and thus measure the position and momentum of the test particle with a lower uncertainty of position times momentum (along a given direction) than is allowed by the Heisenberg uncertainty relation. But if gravitational waves are quantized in gravitons of energy equal to h times the oscillation frequency, the Heisenberg uncertainty relation will be satisfied (Heisenberg's original semiclassical derivation goes through for any wave quantised like this).
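For concreteness, here is a minimal sketch of the standard Heisenberg-microscope arithmetic as I understand it (my own rendering, not a quote from Weinberg), using the usual notation: λ for the wavelength of the probing wave, θ for the aperture half-angle of the microscope, and h for Planck's constant:

\[
\Delta x \;\sim\; \frac{\lambda}{\sin\theta},
\qquad
\Delta p \;\sim\; \frac{h}{\lambda}\,\sin\theta
\quad\Longrightarrow\quad
\Delta x\,\Delta p \;\sim\; h .
\]

With a classical wave the recoil could be made arbitrarily small at fixed λ (and hence fixed Δx) simply by turning the intensity down; if the wave instead comes in quanta of energy hν and momentum h/λ, at least one quantum must scatter into the aperture, and the product cannot be pushed below roughly h.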
I guess that education systems differ quite a lot between different countries, so where you do your physics studies could make a significant difference. The OP is probably somewhat USA-centric. In Sweden there are two major undergraduate programs in physics: the general university physics program and the engineering physics program. The latter is an engineering program and you are awarded an engineering degree; the content, however, is mostly that of a physics program. For those going into an academic physics career it is as good a starting ground as the general university physics program. For those going into an engineering career, it is in some respects as good as or better than the more traditional engineering programs, and in some respects worse. The disadvantage is that you learn less specific engineering; the advantage is that you learn more basic physics and mathematics, and general modelling skills. In the USA the tradition seems to be more of traditional pure university physics programs. How about other places?
Physics Ph.D.s are to a large extent recruited from the engineering physics program. A large fraction (I do not have an actual reference here, somewhat anecdotal evidence) of physics Ph.D.s later go to an engineering career, and that is made easier by the fact that those coming from an engineering physics background actually have a formal M.Sc. engineering degree.
The reasons some continue to a Ph.D. vary, of course, but for many the dominating reason is: for fun. They like learning about the real world; this is an important reason (I think) that many are attracted to physics, to understand more about the actual real world that we are living in at a fundamental level. But not just the very fundamental level; an aspect of understanding the real world by actual modelling (mathematical modelling that is systematically tested against real empirical data) is also important, be it at a more fundamental or a more phenomenological level (depending on what field of physics you go into). The second reason is that they like problem solving, being able to work for a few years on a non-trivial problem that is still realistically feasible thanks to supervision by a professor. Some have already from the start an idea for a career after the Ph.D., for some in academia, while others self-identify as engineers and see the Ph.D. just as temporary fun before they take up their engineering career. Others have a more carefree attitude and do not want to take on future worries such as career choices before that day actually arrives. Of course many, by choice or necessity (or unexpected opportunity in a few lucky cases), change their career plans later, even if they had one to begin with. In particular, competition for jobs in academia is fierce, and many who wished for that end up in engineering anyway. But that is one of the ideas behind the system: there are no sufficiently reliable methods to predict who will be good enough for an academic research career, so many more Ph.D.s than are needed for that have to be produced. Anyway, since most can get reasonably good jobs as e.g. engineers, the cost is not that high, so provided they have enough fun during their time as graduate students it is worth it.
The value of a physics Ph.D. for engineering is to a large extent the stronger skills in problem solving and modelling generally. The value, as an engineer, of the additional actual physics that you know as a Ph.D. compared to an M.Sc. is probably only rarely significant. Several other small but significant skills are also acquired; e.g., Ph.D.s often have significantly (well, somewhat...) better English writing skills from having to write articles for actual publication. The value of engineering physics compared to a more standard engineering program is similar, though the difference is smaller, and here the greater physics knowledge is slightly more often of actual practical use (though I think that more often the stronger training in applied mathematics is the more important advantage of engineering physics over a traditional engineering program; but those are not independent, since the strong training in applied mathematics comes from doing actual problem solving in physics).
Whether it is good or not to go into physics depends on your preferences and interests, and your skill profile. My guess, from personal experience of myself and acquaintances: apart from personal interest, if you have the talent to acquire general modelling skills reasonably well, then going into engineering physics is probably better than going into traditional engineering. Many who go there do, but a significant number don't completely "get it" despite being taught it and formally passing the course exams. For those, a more traditional engineering program would have been better. As for continuing to a Ph.D., most students should do this only if they are among the best in their class as M.Sc.s, and if physics is what they really enjoy in life. For most Ph.D.s there is a rather large, but not very large, cost of doing a Ph.D.: you delay your engineering career by approximately 5/2 years, based on a simple toy model where a Ph.D. takes 5 years and a year as a physics Ph.D. student is worth half a year of engineering experience career-wise, as spelled out below. (The lower salary during the time as a Ph.D. student is no real cost, because it is well compensated by the lower expenses of the Ph.D.-student lifestyle.) For me and many of my acquaintances that was more than well worth the cost, while for others of course it would not be.
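Spelled out, the toy-model arithmetic (writing T for the Ph.D. duration and α for the assumed exchange rate between Ph.D. time and engineering experience; both symbols are just labels I introduce for the numbers already given above) is:

\[
\text{career delay} \;=\; T - \alpha T \;=\; (1-\alpha)\,T \;=\; \left(1-\tfrac{1}{2}\right)\cdot 5\ \text{years} \;=\; 2.5\ \text{years}.
\]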