A video of the whole talk is available here.
Ahh, thank you.
Did you mean Saint Boole?
And whence the blasphemy?
If someone believes they have a really good argument against cryonics, even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. Sounds to me like it would be quite worth their time.
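Spelling out the arithmetic (the $500 prize amount is my inference from the numbers given, not stated explicitly here): 0.10 × $500 = $50 of expected winnings, against roughly an hour of effort.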
Quite possible. I didn’t intend for that sentence to come across in a hostile way.
Since in Swedish we usually talk about the 1800s and the 1900s instead of the 19th and 20th centuries, I thought something could have been lost in translation somewhere between the original sources, the book by Kelly, and gwern’s comment. The comment is itself ambiguous as to whether it is intended as (set aside an island for growing big trees for making wooden warships) (in the 1900s) or as (set aside an island for growing big trees for (making wooden warships in the 1900s)). (I assumed the former.)
If we assume a scenario without AGI and without a Hansonian upload economy, it seems quite likely that there are large, currently unanticipated obstacles to both AGI and uploading. Computing power seems to be just about sufficient right now (if we look at supercomputers), so it probably isn’t the problem. So it will probably be a conceptual limitation for AGI, and a scanning or conceptual limitation for uploads.
A conceptual limitation for uploads seems unlikely, because we’re just taking a system, cutting it up into smaller pieces, and solving differential equations on a computer. Lots of small problems to solve, but no major conceptual ones. We could run into problems related to measuring quantum systems when doing the scanning (I believe Scott Aaronson wrote something about this suspicion lately). Note that this also puts a bound on the level of nanotechnology we could have achieved: if we had neuron-sized scanning robots, we would be able to scan a brain and start the Hansonian scenario. Note that this does not preclude slightly larger-scale manufacturing technologies, which would probably come from successive miniaturisations of 3D printers.
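To make the "cutting it up and solving differential equations" point concrete, here is a minimal sketch (my own illustration, with generic textbook parameters, not anything from the comment): a single leaky integrate-and-fire neuron stepped forward with Euler's method. An emulation is, at bottom, vastly many coupled pieces of exactly this kind of computation.

```python
# Sketch: forward-Euler integration of one leaky integrate-and-fire neuron.
# All constants are generic textbook values, not tied to any scanning method.
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Integrate dV/dt = (v_rest - V + R_m * I) / tau, spiking at v_thresh."""
    v = v_rest
    spike_times = []
    trace = np.empty(len(i_input))
    for t, i_t in enumerate(i_input):
        v += dt * (v_rest - v + r_m * i_t) / tau   # one Euler step
        if v >= v_thresh:                          # threshold crossing: spike
            spike_times.append(t * dt)
            v = v_reset                            # reset after the spike
        trace[t] = v
    return trace, spike_times

# 0.5 s of simulated time with a constant 2 nA input current.
trace, spikes = simulate_lif(np.full(5000, 2e-9))
print(f"{len(spikes)} spikes in 0.5 s of simulated time")
```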
Conceptual difficulties in creating AGI are more or less expected by everyone around here, but in case AGI is delayed by over a century, we should get quite worried about other existential risks on our way there. Major contenders are global conflict and terrorism, especially involving nuclear, nanotechnological or biological weapons. Even if nanotechnology does not reach the level described in sci-fi, the bound given above still allows for sufficient development to make advanced weapons a question of blueprints and materials. Low-probability, huge-impact risks from global warming are also worth mentioning, if only to note that there are a lot of other people working on them.
What does this tell us about analysing long-term risks like the slithy toves? Well, I don’t know anything about slithy toves, but let’s look at the eugenics stuff discussed earlier and consider how it would influence the probability of major global conflicts. The question is not whether it would increase the risk of global conflict, but by how much. On the other hand, if AI safety is already taken care of, it becomes a priority to develop AGI as soon as humanly possible, and then it would be really good if "humanly possible" were a sigma or so better than today. Still, it wouldn’t be great, since most of the risks we would be facing at that point would be quite small for each year (as it seems today; we could of course get other information on our way there). It’s really quite hard to say what the proper balance between more intelligent people and more time available would be at that point. We could say that if we’ve already had a century to solve the problem, more time can’t be that useful; on the other hand, we could say that if we still haven’t solved the problem in a century, there are loads of sequential steps to get right and we need all the time we can buy.
tldr: No AGI & No Uploads ⇒ most X-risk from different types of conflict ⇒ eugenics or any kind of superhumans increases X-risk due to risk of war between enhanced and old-school humans
a Scandinavian country which set aside an island for growing big trees for making wooden warships in the 1900s, which was completely wrong since by that point, warships had switched to metal, and so the island became a nature preserve;
This was probably Sweden planting lots of oaks in the early 19th century. 34,000 oaks were planted on Djurgården for shipbuilding in 1830. As it takes over a hundred years for oak to mature, they weren’t used, and that bit of the island is now a nature preserve. The funny thing is that when the parliament was deciding this issue, some of the members apparently already doubted whether oak would remain a good material for building ships for that long.
Also observe that 1900s ≠ 19th century, so they weren’t that silly.
Had some trouble finding English references for this, but this (p 4) gives some history, and numbers are available on the Swedish Wikipedia.
and the dark arts that I use to maintain productivity.
Yes! Please tell us more about these!
Two points of relevance that I see are:
1. If we care about the nature of morphisms of computations only because some computations are people, the question is fundamentally what our concept of "people" refers to, and whether it can refer to anything at all.
2. If we view "isomorphic" as a kind of extension of our naïve view of "equals", we can ask what the appropriate generalisation is when we discover that "equals" does not correspond to reality and we need a new ontology, as in the linked paper.
van Dalen’s Logic and Structure has a chapter on second order logic, but it’s only 10 pages long.
Shapiro’s Foundations without Foundationalism has as its main purpose to argue in favour of SOL. I’ve only read the first two chapters, which give philosophical arguments for SOL; they were quite good, but a bit too chatty for my tastes. Chapters 3 to 5 are where the actual logic lives, and I can’t say much about them.
Which edition did you read? The image in the post is of the fifth edition, and some people (eg Peter Smith in his Teach Yourself Logic (§2.7 p24)) claim that the earlier editions by just Boolos and Jeffrey are better.
Cutland’s Computability and Mendelson’s Introduction to Mathematical Logic between them look like they cover everything in this one, and they are both on MIRI’s reading list. What is the advantage of adding Computability and Logic to them? (ie is it easier to start out with, does it cover some ground that both of them miss, or is it just good to have alternatives?)
The questions on Smoking and Nicotine distinctly lack a middle question “Do you use some kind of smokeless tobacco?” (eg I don’t smoke but use snuff almost daily).
Cantor, who did the first work on infinite cardinals and ordinals, seemed to have a somewhat mystic point of view at times. He thought his ideas about transfinite numbers were communicated to him by God, whom he also identified with the absolute infinite (the "cardinality" of the class of all cardinals, which is too big to itself be a cardinal). This was during the 19th century, so quite recently.
I’d say that much mysticism about foundational issues, like what numbers really are or what these possible infinities actually mean, has been abandoned by mathematicians in favour of actually doing real mathematics. We also have quite good formal foundations in terms of ZF and formal logic nowadays, so discussions like that do not help in the process of doing mathematics (unlike, say, discussions about the nature of real numbers before we had them formalised in terms of Cauchy sequences or Dedekind cuts).
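For concreteness, a compressed statement of the Cauchy-sequence construction alluded to above (standard textbook material, my phrasing):

$$\mathbb{R} \;:=\; \{(a_n) \in \mathbb{Q}^{\mathbb{N}} \mid (a_n)\ \text{Cauchy}\}\,/\sim, \qquad (a_n)\sim(b_n) \iff a_n - b_n \to 0,$$

with arithmetic defined termwise on representatives.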
Coffee purchases seem to be done by near-mode thinking (at least for me), while having children is usually quite planned.
Personally I like giving myself quite a bit of leniency when it comes to impulsive purchases in order to direct my cognitive energy to long-term issues with higher returns. Compare and contrast to the idea of premature optimization in computer science.
Understanding the OS to be able to optimize better sounds somewhat useful to a self-improving AI.
Understanding the OS to be able to reason properly about probabilities of hardware/software failure sounds very important to a self-improving AI that does reflection properly. (obviously it needs to understand hardware as well, but you can’t understand all the steps between AI and hardware if you don’t understand the OS)
Private bittorrent trackers come to mind. Though over there, “good enough” is not measured by quality of conversation, but by your ability to keep up a decent ratio.
I’ve read it but didn’t consider the possibility of a twist like that here as well.
My largest problem with the Dark Lord == Death theory is that it doesn’t really square with Quirrelmort being another super-rationalist and with Eliezer’s First Law of Fanfiction (you can’t make Frodo a Jedi unless you give Sauron the Death Star). Either Quirrelmort is a henchman or a personification of Death, which is unlikely considering that he is afraid of dying and that the Dementor tries to frighten him in the Humanism arc; or Quirrelmort is not the Sauron of this story but will help Harry defeat the main bad guy, Death. This could be a really cool ending, but I doubt it would fit in the remaining arcs.
Dementors symbolise death. Dementors can destroy humans (by their Kiss), and Harry can destroy Dementors (by the True Patronus). That, if anything, marks him as Death’s equal. If it doesn’t, then the Dementors obeying him can be understood as marking him as Death’s equal.
Hodges claims that Turing at least had some interest in telepathy and prophecies:
Alan Turing: The Enigma (Chapter 7)