Open Thread, Jun. 8 - Jun. 14, 2015
If it’s worth saying, but not worth its own post (even in Discussion), then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should be posted in Discussion, and not Main.
4. Open Threads should start on Monday, and end on Sunday.
Judging from the recent decline of LW, it seems that the initial success of LW wasn’t due to rationality, but rather due to Eliezer’s great writing. If we want LW to become a fun place again, we should probably focus on writing skills instead of rationality skills. Not everyone can be as good as Eliezer or Yvain, but there’s probably a lot of low hanging fruit. For example, we pretty much know what kind of fiction would appeal to an LWish audience (HPMOR, Worm, Homestuck...) and writing more of it seems like an easier task than writing fiction with mass-market appeal.
Does anyone else feel that it might be a promising direction for the community? Is there a more structured way to learn writing skills?
I have noticed that many people here want LW resurrection for the sake of LW resurrection.
But why do you want it in the first place?
Do you care about rationality? Then research rationality and write about it, here or anywhere else. Do you enjoy the community of LWers? Then participate in meetups, discuss random things in OTs, have nice conversations, etc. Do you want to write more rationalist fiction? Do it. And so on.
After all, if you think that Eliezer’s writing constitutes most of LW’s value, and Eliezer doesn’t write here anymore, maybe the wise decision is to let it decay.
Beware the lost purposes.
Emotionally—for the feeling that something new and great is happening here, and I can see it growing.
Reflecting on this: I should not optimize for my emotions (wireheading), but the emotions are important and should reflect reality. If great things are not happening, I want to know that, and I want to fix that. But if great things are happening, then I would like a mechanism that aligns my emotions with this fact.
Okay, what exactly are the “great things” I am thinking about here? What was the referent of this emotion when Eliezer was writing the Sequences?
When Eliezer was writing the Sequences, merely the fact that “there will exist a blog about rationality; without Straw Vulcanism, without Deep Wisdom” seemed like a huge improvement to the world, because it seemed that once such a blog existed, rational people would be able to meet there and conspire to optimize the universe. Did this happen? Well, we have MIRI and CFAR, meetups in various countries (I really appreciate not having to travel across the planet just to meet people with similar values). Do they have impact other than providing people a nice place to chat? I hope so.
Maybe the lowest-hanging fruit was already picked. If someone tried to write Sequences 2.0, what would it be about? Cognitive biases that Eliezer skipped? Or the same ones, perhaps more nicely written, with better examples? Both would be nice things to have, but their awesomeness would probably be smaller than going from zero to Sequences 1.0. (Although, if Sequences 2.0 were written so well that they became a bestseller, and thousands of students outside of existing rationalist communities read them, then I would rate that as more awesome. So the possibility is there. It just requires very specialized skills.) Or maybe explaining some mathematical or programming concepts in a more accessible way. I mean those concepts that you can use in thinking about probability or about how the human brain works.
Internet vs real life—things happening in the real world are usually more awesome than things happening merely online. For example, a rationalist meetup is usually better than reading an open thread on LW. The problem is visibility. The basic rule of bureaucracy—if it isn’t documented, it didn’t happen—is important here, too. When given a choice between writing another article and doing something in the real world, please choose the latter (unless the article is really exceptionally good). But then, please also write an article about it, so that your fellow rationalists who were not able to participate personally can share the experience. It may inspire them to do something similar.
By the way, if you are unhappy about the “decline” of LW because it will make a worse impression on new people you would like to introduce to LW culture—point them towards the book instead.
Adding: if you would like to see a rationalist community growing, research and write about creating and organizing communities. (That is advice for myself, for when I have more free time.)
Something I feel Yudkowsky doesn’t really talk about enough in the Sequences is how to be rational in a group, as part of a group, and as a group. There is some material in there and HPMOR also offers some stuff, but there’s very little that is as formalized as the ideas around “Politics is the Mindkiller/Spiders/Hard Mode” or “the Typical Mind Fallacy.”
Something Yudkowsky also mentions is that what he writes about rationality is his path. Some things generalize (most people have the same cognitive biases, but in different amounts). From reading the final parts of the Sequences and the final moments of HPMOR I get the vibe that Yudkowsky really wants people to develop their own path. Alicorn did this and Yvain also did/does it to some extent (and I’m reading the early non-Sequence posts and I think that MBlume also did this a bit), but it’s something that could be written more about. Now, I agree that this is hard, the lowest fruit probably is already picked and it’s not something everyone can do. But I find it hard to believe that there are just 3 or 4 people who can actually do this. The bonobo rationalists on tumblr are, in their own, weird way, trying to find a good way to exist in the world in relation to other people. Some of this is formalized, but most of it exists in conversations on tumblr (which is an incredibly annoying medium, both to read and to share). Other people/places from the Map probably do stuff like that as well. I take this as evidence that there is still fruit low enough to pick without needing a ladder.
I’ve been working on a series of posts centered around this—social rationality, if you will. So far, the best source for such materials remains Yvain’s writings on the topic on his blog; he really nails the art of having sane discussions. He popularised some ways of framing debate tactics such as motte-and-bailey, steelmanning, bravery debates and so on, which entered the SSC jargon.
I’m interested in expanding on that theme with topics such as emphasis fights (“yes, but”-ing) or arguing in bad faith, as examples of failure modes in collective truth-seeking, but in the end it all hinges on an ideally shared perception of morality, or of standards to hold oneself to. My approach relies heavily on motives and on my personal conception of morality, which is why it’s difficult to teach it without looking like I preach it. (At least Eliezer didn’t look too concerned about this one, though, but not everyone has the fortune to be him.) Besides, it’s a very complex and murky field, one best learned through experience and examples.
Why do you prefer offline conversations to online?
Off the top of my head, I can name 3 advantages of online communication, which are quite important to LessWrong:
You don’t have to go anywhere. Since the LW community is distributed all over the world, this is really important; when you go to meetups, you can communicate only with people who happen to be in the same place as you, whereas when you communicate online, you can communicate with everyone.
You have more time to think before replying, if you need it. For example, you can support your arguments with relevant research papers or data.
As you have noticed, online articles and discussions remain available on the site. You have proposed to write articles after offline events, but a) not everything will be covered by them and b) it requires additional effort.
Well, enjoy offline events if you like; but the claim that people should always prefer offline activities over online activities is highly questionable, IMO.
They satisfy me emotionally on a level online conversations don’t. Something in my brain generates a feeling of “a tribe” more intensely.
An offline conversation has a potential to instigate other offline activities. (As an example of what really happened: going together to a gym and having a lecture on “rational” exercising.)
But I agree with what you wrote; online activities also have their advantages. It just seems to me we have too much online, too little offline (at least those who don’t live in the Bay Area).
Offline conversations are higher bandwidth. And not just because they are lower latency.
Is this a thing? Has it been measured, however imperfectly, and found to be the case?
I think we need both rationality and improved writing. This is a crowd that isn’t going to put up with entertaining writing that doesn’t have significant amounts to say about rationality.
Maybe a good question is “what is the most fun (interpreted very widely) we can have with rationality?” I’m not just talking about jokes and entrancing fiction and smiting the unworthy (though those are good things), but looking for emotional intensity, which can be about cracking a problem open as much as anything else.
That’s not necessarily the case. Low hanging fruit seems like a plausible alternative, as well as the success of meet-up groups or other real-life rationality things replacing online interactions.
I’m about to start being paid for a job, and I was looking at investment advice from LW. I found this thread from a while back and it seemed good, but it’s also 4 years old. Can anyone confirm if the first bullet is still accurate? (get VTSMX or VFINX on vanguard, it doesn’t matter too much which one.)
Yes. The minimum is still $3k, too.
If you want to take one more step of complexity (and assuming you have at least $6000 to invest) you can split your money between VTSMX and VGTSX as Unnamed mentioned. In doing so you would be diversified across the global economy, instead of just across the US economy. You would want 20% to 50% of your funds that are in stocks to be in international stocks.
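For anyone who wants the split written out, here is a minimal sketch of the arithmetic. The $10,000 total and the 30% international share are made-up numbers chosen only so that both funds clear the $3k minimum; they are not a recommendation.

```python
# Hypothetical example: $10,000 of stock money, 30% of it international
# (anywhere in the suggested 20-50% range works).
total_stock_money = 10_000          # dollars going into stock index funds
international_fraction = 0.30       # share of the stock money held internationally

vgtsx_amount = total_stock_money * international_fraction   # international index fund
vtsmx_amount = total_stock_money - vgtsx_amount             # US total-market index fund

print(f"VTSMX (US):            ${vtsmx_amount:,.0f}")   # $7,000
print(f"VGTSX (international): ${vgtsx_amount:,.0f}")   # $3,000
```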
Vanguard Target Date funds (e.g., VFIFX) are also a good option if you want something you never have to manage, and they have a minimum investment of $1000. They allow you to invest in a pre-determined allocation of domestic and international stocks and bonds, and keep you balanced at a target allocation that gets more conservative as you get closer to retirement age.
You should also strongly consider investing in a Roth IRA if your income is not over the limit for contributions (and if it is, there are ways around that). Contributions to a Roth IRA can be withdrawn at any time, though there are restrictions on accessing the investment returns. Your employer’s 401(k) plan is another good option for long-term investments.
The Bogleheads wiki and forum are excellent resources for learning about low-cost long-term investing.
But I agree with everyone else: if you want to do the simplest thing and stop thinking about it, invest in VTSMX.
My money is still in VTSMX.
(Actually, half of it is in VTSMX and half is in VGTSX, which is the non-US index fund. But putting it all into VTSMX is fine too.)
It’s way better than trying to outguess the market, and way way better than doing nothing.
What we call “transhumanism” in 2015, people in a century or two will call “health care.”
I think that some of what we call transhumanism will be folded into healthcare.
My bet is that the baseline of what’s considered adequate health will go up, but there will also be a separate category for exploration of what’s possible.
Some of it, perhaps. E.g., mind uploading surely counts as “transhumanism” but I’d be surprised to see it ever classified under “health care”.
Mind uploading might qualify if it becomes, say, an emergency medical procedure.
It seems to me more like an alternative to medical procedures. I suppose that if we developed the ability to make new brains and load minds into them, then upload+download would come to be considered a medical procedure—but that seems rather unlikely ever to happen.
I think we need to abolish the concept of healthcare. Everything is healthcare, so it is a null term. There is nothing that has no health effects e.g. due to stress.
Everything is math, but that doesn’t mean that the word “biology” isn’t useful. Even if healthcare isn’t a perfect word or even a perfect concept, it helps us in everyday conversations and discussions about the way the world works and should work.
The question is whether that’s true or whether the word creates more confusion than it helps. In particular the context of health care is that it’s about making sick people normal again.
Preventing illness also falls under the umbrella of health care, at least where I live.
And even if it didn’t, it’s still clear what (most) people mean with the word even if the word doesn’t mean what you want it to mean.
“Preventive medicine” gets you doctors who do a lot of mammograms and then operate on people who don’t really need an operation.
If the word had no meaning there wouldn’t be much gained by abolishing it.
… and that is precisely my problem. We would need education and institutions, from childhood on, about how to be positively healthy, both in body and mind, which includes things like how to be happy. Instead… there are hospitals that work a lot like car mechanic garages, fixing up sick people to be average, but doing little for people who already have average levels of health.
Now, in the recent decades this has been changing, there is more focus on actually being all-over healthy and not just not sick, but this is my point, that after a certain point it would be better to reinterpret the whole thing.
Having a baby (in Austria), we can see the change very positively: we are getting a kind of combined package of infant healthcare, developmental care, education, and parenting help. It is getting more interlinked, which is good and is the goal. For example, our child was a bit slow in learning motor skills, so now every week a lady drops by and leaves some developmental toys. This is not really healthcare in the traditional sick → diagnosis → treatment → release sense; it is closer to physical education perhaps, but it kind of covers both. The point is, it is a proper holistic approach: they are not just trying to make a sick person average, they are trying to help parents grow a baby toward the best possible developmental outcomes.
And this is how it should be, but it is not really the classic sickness-oriented healthcare anymore. We should call it humancare or something like that.
While the old hospital resembles a car mechanic garage, moving cars from broken state to fixed state, this new humancare paradigm is more like having a car center where you can get repairs, or extras and upgrades, or driving lessons: an overall optimization of your car-owning experience. Does this make sense?
I have reservations about postulating “happiness” as some kind of metaphysical goal for humans. “Happiness” seems to have come about as an evolutionary spandrel, since unhappy humans can breed and keep the species in business just fine.
The implied teleology in Buddhist “enlightenment” also bothers me. Why would humans have the capacity for this experience? Again, it sounds like another spandrel.
Of course it is an evolutionary spandrel, but we don’t necessarily need to choose the same goals as the evolution machine’s.
But I think the important part is that there is a very fine line of unhappiness within which humans can still work effectively on goals. Even a minor, unnoticed, undiagnosed, and generally invisible depression can easily make people significantly less motivated to work on anything. If you choose any goal, say, colonizing the galaxy, you probably need at the very least almost-happy people to work effectively on it. You need the kind of people who can joke and share a laugh while working, and feel content after a day’s work. If you have people who are constantly in a fuck-everything Office Space type of mood, they will not really achieve much.
Why should the evolutionary history of an experience affect our pursuit of it as a goal? Is != ought.
this was an unhelpful comment, removed and replaced by this comment
Describing the source of human values is not the same as answering if those values SHOULD be our values. You’ve just pushed the is ought problem to a different place.
Was your answer trying to reply to my original question? (Why should the evolutionary history of an experience affect our pursuit of it as a goal?) If so, can you clarify how it answers that?
I’m responding to the statement about evolutionary history and is != ought.
I’m trivialising the question and suggesting that ‘ought’ should be reduced to ‘is’. I hypothesize this has therapeutic effects, since it approximates acceptance and commitment, an established therapeutic process.
For example, we have an explanation of how love works, and you want to love in other kinds of ways, but I, for one, get satisfaction from satisficing my a priori biological goals.
See where I’m coming from?
edit 1: here’s a wiki article to help with the ethical naturalist explanation
If nothing else, we want to distinguish things that (we expect) have positive health outcomes from things that (we expect) have negative health outcomes.
Having sex is generally considered to be associated with positive health outcomes. The person who’s having sex solely because it’s a promising health care intervention is still doing everything wrong.
I don’t think we want to do that. It’s more rational to expect both positive and negative health outcomes from most strong interventions.
Now that my review of Plato’s Camera has about 17 PDF pages of real content, does anyone want to proof-read/advance-read it to help avoid babbling?
I’d be happy to take a look, with the following caveats: (1) I haven’t read Plato’s Camera, (2) I am not a professional philosopher, and (3) I don’t guarantee to respond quickly (though I might—it depends on workload, procrastination level, etc.).
Sent to the email address listed on your profile here.
Received. I’ll take a look. The caveats above haven’t changed :-).
I recently stumbled upon the Wikipedia entry on finitism (there is even ultrafinitism). However, the article on ultrafinitism mentions that no satisfactory development in this field exists at present. I’m wondering in which way the limitation to finite mathematical objects (say, a set of natural numbers with a certain largest number n) would limit ‘everyday’ mathematics. What kind of mathematics would we still be able to do (cryptography, analysis, linear algebra…)?
Is such a long answer suitable in OT? If not, where should I move it?
tl;dr Naive ultrafinitism is based on real observations, but its proposals are a bit absurd. Modern ultrafinitism has close ties with computation. Paradoxically, taking ultrafinitism seriously has led to non-trivial developments in classical (usual) mathematics. Finally: ultrafinitism would probably be able to interpret all of classical mathematics in some way, but the details would be rather messy.
1 Naive ultrafinitism
1.1. There are many different ways of representing (writing down) mathematical objects.
The naive ultrafinitist chooses a representation, calls it explicit, and says that a number is “truly” written down only when its explicit representation is known. The prototypical choice of explicit representation is the tallying system, where 6 is written as ||||||. This choice is not arbitrary either: the foundations of mathematics (e. g. Peano arithmetic) use these tally marks by necessity.
However, the integers are a special^1 case, and in the general case, the naive ultrafinitist insistence on fixing a representation starts looking a bit absurd. Take Linear Algebra: should you choose an explicit basis of R^3 that you use indiscriminately for every problem, or should you use a basis (sometimes an arbitrary one) that is most appropriate for the problem at hand?
1.2. Not all representations are equally good for all purposes.
For example, enumerating the prime factors of 2*3*5 is way easier than doing the same for ||||||||||||||||||||||||||||||, even though both represent the same number.
1.3. Converting between representations is difficult, and in some cases outright impossible.
Lenstra earned $14,527 by converting the number known as RSA-100 from “positional” to “list of prime factors” representation.
Converting 3^^^3 from up-arrow representation to binary positional representation is not possible for obvious reasons.
As usual, up-arrow notation is overkill. Just writing the decimal number 100000000000000000000000000000000000000000000000000000000000000000000000000000000 would take more tally marks than the number of atoms in the observable universe. Nonetheless, we can deduce a lot of things about this number: it is even, and it’s larger than RSA-100. What’s more, I can manually convert it to “list of prime factors” representation: 2^80 * 5^80.
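A small Python sketch of the size comparison above; nothing deep here, it just makes the lengths of the three representations explicit:

```python
# n is the number from the paragraph above: 1 followed by 80 zeros.
n = 10**80

positional_length = len(str(n))   # characters in decimal positional notation
tally_length = n                  # one tally mark per unit: ~10**80 marks
factored_form = "2^80 * 5^80"     # the "list of prime factors" representation

print(positional_length)          # 81
print(tally_length > 10**79)      # True: far more marks than atoms in the universe
assert n == 2**80 * 5**80         # the factored representation really is the same number
```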
2 Constructivism
The constructivists were the first to insist that algorithmic matters be taken seriously. Constructivism separates concepts that are not computably equivalent. Proofs with algorithmic content are distinguished from proofs without such content, and algorithmically inequivalent objects are separated.
For example, there is no algorithm for converting Dedekind cuts to equivalence classes of rational Cauchy sequences. Therefore, the concept of real number falls apart: constructively speaking, the set of Cauchy-real numbers is very different from the set of Dedekind-real numbers.
This is a tendency in non-classical mathematics: concepts that we think are the same (and are equivalent classically) fall apart into many subtly different concepts.
Constructivism separates concepts that are not computably equivalent. Computability is a qualitative notion, and even most constructivists stop here (or even backtrack, to regain some classicality, as in the foundational program known as Homotopy Type Theory).
3. Modern ultra/finitism
The same way constructivism distinguished qualitatively different but classically equivalent objects, one could start distinguishing things that are constructively equivalent, but quantitatively different.
One path leads to the explicit approach to representation-awareness. For example, LNST^4 explicitly distinguishes between the set of binary natural numbers B and the set of tally natural numbers N. Since these sets have quantitatively different properties, it is not possible to define a bijection between B and N inside LNST.
Another path leads to ultrafinitism.
The most important thinker in modern ultra/finitism was probably Edward Nelson. He observed that the “set of effectively representable numbers” is not downward-closed: even though we have a very short notation for 3^^^3, there are lots of numbers between 0 and 3^^^3 that have no such short representation. In fact, by elementary considerations, the overwhelming majority of them cannot ever have a short representation.
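The “elementary considerations” are just a counting argument. Here is a minimal sketch; the alphabet size and the cutoff for “short” are arbitrary parameters I made up for illustration:

```python
# Over a 128-symbol alphabet there are at most sum_{k <= L} 128**k descriptions
# of length <= L, so at most that many numbers can have a description that short.
alphabet_size = 128
max_short_length = 1000            # an extremely generous notion of "short"

short_descriptions = sum(alphabet_size**k for k in range(max_short_length + 1))

# Even a baby tower like 2**(2**20) (vastly smaller than 3^^^3) already dwarfs
# the count of short descriptions, so almost every number below it has no
# description of fewer than 1000 symbols.
baby_tower = 2**(2**20)
print(short_descriptions < baby_tower)                 # True
print(short_descriptions * 10**300000 < baby_tower)    # True: the representable fraction is negligible
```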
What’s more, if our system of notation allows for expressing big enough numbers, then the “set of effectively representable numbers” is not even inductive because of the Berry paradox. In a sense, the growth of ‘bad enough’ functions can only be expressed in terms of themselves. Nelson’s hope was to prove the inconsistency of arithmetic itself using a similar trick. His attempt was unsuccessful: Terry Tao pointed out why Nelson’s approach could not work.
However, Nelson found a way to relate inexpressibly huge numbers to non-standard models of arithmetic^2.
This correspondence turned out to be very powerful, leading to many paradoxical developments, including a finitistic^3 extension of Set Theory, a radically elementary treatment of Probability Theory, and new ways of formalising the Infinitesimal Calculus.
4. Answering your question
All of it; modulo translating the classical results to the subtler, ultra/finitistic language. This holds even for the silliest versions of ultrafinitism. Imagine a naive ultrafinitist mathematician, who declares that the largest number is m. She can’t state the proposition R(n,2^(m)), but she can still state its translation R(log_2 n,m), which is just as good.
Translating is very difficult even in the qualitative case, as seen in this introductory video about constructive mathematics. Some theorems hold for Dedekind-reals, others for Cauchy-reals, etc. Similarly, in LNST, some theorems hold only for “binary naturals”, others only for “tally naturals”. It would be even harder for true ultrafinitism: the set of representable numbers is not downward-closed.
This was a very high-level overview. Feel free to ask for more details (or clarification).
^1 The integers are absolute. Unfortunately, it is not entirely clear what this means.
^2 coincidentally, the latter notion prompted my very first contribution to LW
^3 in this so-called Internal Set Theory, all the usual mathematical constructions are still possible, but every set of standard numbers is finite.
^4 Light Naive Set Theory. Based on Linear Logic. Consistent with unrestricted comprehension.
Anywhere is better than nowhere.
I think this is sufficiently good to go directly to Main article, but generally the safe option is to publish a Discussion article (which in case of success can be later moved to Main).
I would really like seeing more articles like this on LW—articles written by people who deeply understand what they write about. (Preferably with more examples, because this was difficult to follow without clicking the hyperlinks. But that may be just my personal preference.)
So, here are the options:
leave it here; (the easiest)
repost as an article; (still very easy)
rewrite as a more detailed article or series of articles (difficult)
Can’t the set of effectively representable numbers be inductive if we decide that “the smallest number not effectively representable” does not effectively represent a number?
“The smallest positive integer not definable in under twelve words” isn’t an effective representation of a number any more than “The number I’m thinking of” or “Potato potato potato potato potato” are.
Sure, that’s exactly what we have to do, on pain of inconsistency. We have to disallow representation schemas powerful enough to internalise the Berry paradox, so that “the smallest number not definable in less than 11 words” is not a valid representation. Cf. the various set theories, where we disallow comprehension schemas strong enough to internalise Russell’s paradox, so that “the set of all sets that don’t contain themselves” is not a valid comprehension.
Nelson thought that, similarly to how we reject “the smallest number not effectively representable” as an invalid representation, we should also reject e.g. “3^^^3” as an invalid representation; not because of the Berry paradox, but because of a different one, one that he ultimately could not establish.
Nelson introduced a family of standardness predicates, each one relative to a hyper-operation notation (addition, multiplication, exponentiation, the ^^-up-arrow operation, the ^^^-up-arrow notation and so on). Since standardness is not a notion internal to arithmetic, induction is not allowed on these predicates (i. e. ‘0’ is standard, and if ‘x’ is standard then so is ‘x+1’, but you cannot use induction to conclude that therefore everything is standard).
He was able to prove that the standardness of n and m implies the standardness of n+m, and that of n×m. However, the corresponding result for exponentiation is provably false and the obstruction is non-associativity. What’s more, even if we can prove that 2^^d is standard, this does not mean that the same holds for 2^^(d+1).
At this point, Nelson attempted to prove that an explicit recursion of super-exponential length does not terminate, thereby establishing that arithmetic is inconsistent, and vindicating ultrafinitism as the only remaining option. His attempted proof was faulty, with no obvious fix. Nelson continued looking for a valid proof until his death last September.
One common method of resolving this is to cash out “representable” numbers in terms of outputs of halting Turing machines, so that paradoxes of Berry’s sort require solving the halting problem and are therefore not themselves representations.
… Unless there exists something other than a Turing machine that can solve the halting problem.
Which of course leads us to things like “The set of Turing machines that do not halt and cannot be proven to not halt by any Turing machine”.
See the Church-Turing thesis for more on this topic.
I think I was on a slightly different topic:
Some Turing machines have been proven to not halt; some have not. There must exist at least one Turing machine which no Turing machine can ever prove does not halt. (It is trivial to prove that a Turing machine halts if it does.)
Since there are a countably infinite number of Turing machines, there can be at most a countably infinite number of characteristics such that every non-halting Turing machine, and only non-halting Turing machines, has one or more of those characteristics. If we suppose that each of these characteristics can be checked by a single Turing machine that halts when it proves that the target does not halt, then we have a contradiction (since we can build an oracle that diagonalizes against a countably infinite number of machines, each testing one property).
Therefore there exists some characteristic of a Turing machine which is sufficient for that machine to be non-halting, such that it cannot be proven that said characteristic is sufficient.
I wonder what a program that doesn’t halt but cannot be proven not to halt looks like? What does the output look like after 2BB(n) steps? It must have a HALT instruction somewhere reachable, or else it would be trivial to prove that it never halts; but likewise that instruction must never actually execute, which means its triggering condition must never occur; yet the condition must look reachable, or it would be trivial to prove that it never halts...
I think the typical example is if you do a search for a proof of inconsistency in Peano arithmetic. You don’t expect to find any inconsistencies, but you can’t prove that you won’t.
More like trying to find the Godel statement of the universe; it provably exists, and provably cannot be positively identified.
Thank you for this insightful and comprehensive reply!
I have a follow-up question: Would ultrafinitist arithmetic still be incomplete due to Gödel’s incompleteness theorem?
I believe that an ultrafinitist arithmetic would still be incomplete. By that I mean that classical mathematics could prove that a sufficiently powerful ultrafinitist arithmetic is necessarily incomplete. The exact definition of “sufficiently powerful”, and more importantly, the exact definition of “ultrafinitistic” would require attention. I’m not aware of any such result or on-going investigation.
The possibility of an ultrafinitist proof of Gödel’s theorem is a different question. For some definitions of “ultrafinitistic”, even the well-known proofs of Gödel’s theorem qualify. Mayhap^1 someone will succeed where Nelson failed, and prove that “powerful systems of arithmetic are inconsistent”. However, compared to that, Gödel’s 1st incompleteness theorem, which merely states that “powerful systems of arithmetic are either incomplete or inconsistent”, would seem rather… benign.
^1 very unlikely, but not cosmically unlikely
What is LNST?
Edit: Nevermind, saw the footnote.
Bishop built real analysis constructively, right? Jaynes’s probability theory is built up from finite sets as well.
On commitment devices: I think this article: http://blog.beeminder.com/akrasia/ is essentially correct. However, I am not at all convinced Beeminder is the best approach for self-binding. Texting a number to a robot, or having another number deducted from my bank account, is far too impersonal for me. It surely has its uses; I just wish we had many different kinds of commitment devices to choose from.
In the ancestral environment it was all about physical needs and social needs, and these are still the strongest motivators. For example, someone who wants to get fit might as well join the armed forces. The punishments used there target people’s physical and social needs, and wanting to have the respect of other soldiers motivates too.
Wiki says honor is probably a commitment device.
Maybe that could be a good idea somehow? When wanting to do X, surround ourselves with people who respect people who do X and disrespect people who don’t do X? That kind of social need seems to work better for me...
What if someone made an app, perhaps as a Facebook plugin or something, where people with the same goals are put into groups of 12, and they constantly tell each other how they are progressing?
I’m trying to translate some material from LessWrong for a friend (interested in various subjects covered here, but who can’t read English…), and I’m struggling to find the best translation for “evidence”. I have many candidates, but every one of them is a little bit off relative to the connotations of “evidence”. Since it’s such a central term in all the writings here, I figured it couldn’t hurt to spend a little time finding a really good translation, rather than a just-okayish one.
English readers:
Could you find a few different sentences that would cover all the (slightly different) usages of “evidence”? The objective is: if my translation fits well in all those sentences, there’s a good chance it will fit well in anything I may want to translate. For example, from the wiki: “Evidence for a given theory is the observation of an event that is more likely to occur if the theory is true than if it is false”; “Generalization from fictional evidence”; “Conservation of expected evidence”. I expect that finding a translation that covers those three usages equally well will basically cover any usage, but can you think of a fourth usage that may prove problematic even for a term that fits the other three?
What would be the least bad synonym of “evidence”: clue, proof, observation, or sign? (Those are basically my best candidates, translated back into English.) I dislike all of them, but they’re the best I’ve found. (Substitute them for “evidence” in the test sentences above and you will understand my problem. “Clue for a given theory…” is somewhat good, but “conservation of expected clue” less so…)
French readers, if any:
My candidates are: « preuve », « indice », « signe », « observation ». Any other suggestions? Which one seems best to you?
Thanks for your cooperation.
(and don’t get me started on “entangled with”, I think I will lose much hair trying to find an acceptable translation for that one. French sucks.)
What did Laplace call it? He invented a lot of this stuff, and presumably wrote in French.
Great suggestion, I’ll look into that.
In the Bayesian framework “evidence” basically means “relevant information”—data which will (or could) affect the probabilities you’re considering.
Can’t help you with French, but I would rather go more generic (“information”), than more specific with wrong connotations (“clue”, “proof”, “sign”). Actually, “proof” is explicitly wrong.
Ah, but “preuve” and “proof” do not retain the same meaning, even though they are as words direct translations.
In Italian there’s the same problem: “evidenza” doesn’t quite cover it, and “prova” has a better connotation, in my opinion.
OK, but even if preuve/prova do not carry the same meaning of “solved, we known this” as “proof” in English, wouldn’t they still have the strong connotation of an argument in favour of a theory?
The interesting thing about the Bayesian evidence is that it can support your hypothesis, but it can also make it less likely.
Bonjour! Translation can be frustrating, but it’s almost never because one of the languages sucks. From my experience, there is probably an equal number of concepts that are hard to translate the other way around.
Here are my attempts:
“Evidence for a given theory is the observation of an event that is more likely to occur if the theory is true than if it is false”
Une donnée en faveur d’une théorie consiste en l’observation d’un évènement plus probable si la théorie est vraie que si la théorie est fausse.
or
Une indication en faveur d’une théorie consiste en l’observation d’un évènement plus probable si la théorie est vraie que si la théorie est fausse.
or
Un élément de preuve en faveur d’une théorie consiste en l’observation d’un évènement plus probable si la théorie est vraie que si la théorie est fausse.
“Generalization from fictional evidence”
Généralisation depuis des données fictionelles.
Aren’t there existing French Bayesian textbooks? What words do they use?
Of the words you mentioned, phrases involving “preuve” probably get closest, such as “ensemble de preuves” for “body of evidence”. But I would also look into using the word “faits” (facts) in some situations, and “constat de faits”.
Here are some links to definitions at the Word Reference translation dictionary site:
http://www.wordreference.com/enfr/facts http://www.wordreference.com/fren/constat%20de%20faits http://www.wordreference.com/enfr/evidence
Of those four, I like “clue” the most. As Lumifer says, the word “proof” in English arguably connotes evidence supporting something; “sign” might have a similar problem; and “observation” feels a bit too vague to me, since an observation may be irrelevant to a hypothesis and hence not evidence at all.
The singular “clue” doesn’t read well to me in the phrase “conservation of expected clue”, but I think pluralizing it may help (“conservation of expected clues”). It might be feasible to invent a new word meaning something like “clueness”, which might align better with the technical meaning of “evidence”.
That said, if I examine how “preuve” is actually translated from French to English in official documents, the French “preuve” does sometimes seem to mean “evidence” in pretty much the English sense. So maybe “preuve” doesn’t have the potential connotation (of evidence in support of a hypothesis) that Lumifer worries about.
Perhaps “intriqué avec”?
(It occurred to me that French quantum physicists must have had to deal with the phrase “entangled with” for a long time, so one could simply borrow whatever French translation those physicists use.
I went to English Wikipedia’s “Quantum entanglement” entry to look at the sidebar’s list of alternative languages. It links to the French entry “Intrication quantique”, though that title isn’t the answer, because “intrication” is a noun, not an adjectival phrase. However, the entry’s second sentence mentions (in bold, helpfully) “état intriqué”, which certainly looks like “entangled state”, and when I Google the phrase “entangled with” along with “intrication” & “quantique”, I see snippets of French like “intriqués avec” and “états quantiques intriqués”. Googling “intriqué avec” confirms that the phrase is used in French discussions of quantum mechanics in contexts where it seems to mean “entangled with”.)
Yes, “intrication” is the standard translation of “entanglement” in QM. But nobody else uses it, and therefore I fear there is an obvious failure mode where someone Googles it and start shouting “WTF is that?”
Anything wrong with l’évidence?
Yes, that means « obvious »/« self-evident »
Maybe you’re thinking of évident the adjective, not évidence the noun.
“évidence” the noun is just a shorthand for “obvious thing” (most typical usage is « C’est l’évidence même » = “It’s obvious”. « Ce n’est pas la peine d’asséner de telles évidences » = “Such obvious things are not worth stating”).
So, it doesn’t look like “obvious” & “self-evident” work as idiomatic English translations of “évidence”, but I think hwold’s correct to indicate that “évidence” doesn’t mean “evidence”.
It feels a bit odd to offer that judgement because I’m not French and I suspect hwold is (or is at least a native Francophone), so they probably know this stuff better than I. But I can use dictionaries as well as anyone can: Wiktionary says “évidence” is a noun meaning “obviousness” or “clearness”, and Linguee translates it as “obviousness” or “blatancy”. Coming at it from the other angle, my old French dictionary suggests “preuve(s)”, “témoignage” & “signes” as possible translations of “evidence”, depending on the context, but not “évidence”. (The same dictionary doesn’t even offer a translation of “évidence”, consistent with it having a more obscure meaning than simply “evidence”.)
Full disclosure here regarding personal issues. I’m looking for advice on how to resolve them to the point where they no longer affect my life majorly. I don’t expect an issue this ingrained into my psyche to ever be gotten rid of entirely. I’m sure there are other places more directly related to the subject that I could request this advice, but LWers have usually seemed to have something useful to add to things.
Recently (toward the end of 2013), I slowed, and then stopped, taking Zoloft for what was purported to be emotional instability; I had been on it from when I was about 7 until then, when I was 21. I do not regret doing this in the slightest, as, quite frankly, while on it I was extremely flatlined emotionally and had hardly grown at all in that regard for years. Everything was quite dull.
I have, since then, had to resort to various techniques to calm myself, as getting off of Zoloft also revealed me to be rather anxious, and to have latent abandonment issues resulting in clinginess toward my close friends. It is the latter part that I need help with, as most literature I’ve found has been rather worthless in terms of truly actionable advice: it suggests broad things to be done with little in the way of intermediate steps, or speaks only to the effects, consequences, and actions to be taken when in a romantic relationship (which I am not in).
Regarding how it feels when I have an episode (for the purpose of relating to it for other people with perhaps-similar issues), I want to curl up in the corner, I get panicky, and it feels like lightning’s shooting through me as a cold, heavy lump forms in my belly.
Thanks for any help you can offer.
Are you taking any supplements? For normal levels of anxiety, a lot of people like l-theanine or suntheanine. For example my mother, who would be so anxious talking to people that her voice would tremble, can do it without trembling when taking such a pill an hour before. Magnesium combined with vitamin b6 is also said to have calming effects. BUT these are for normal levels of anxiety, not disorder levels. Still they may help a bit while you are looking for a real solution.
For me: I am so strongly affected by l-theanine that I wonder how it is legal. Combining it with caffeine (which you obviously shouldn’t do if you have abnormal levels of anxiety), I feel very much like I’m running in high gear, similar to what people report about illegal central nervous system stimulants, e.g. ephedrine.
Not presently taking any supplements, no. I’ll take those under consideration, though I’m kind of hesitant toward mind-altering drugs. Was a bit burned by previous SSRIs, and would rather get better under my own power. Still, it’s an option, and those are always good!
Your problem might be exacerbated by a deficiency in some nutrient.
What would I ask for the next time I visit my GP to determine that? Would a CBC suffice, or would I need to ask for some more specific blood test?
I doubt a CBC is sufficient since there are lots of things to test for. I’m far from an expert on this, but my rough guess is that the two most important are your magnesium level and potential vitamin B methylation problems, although again this is way out of my area of expertise. You could ask your doctor to test everything possible that might be contributing to your problem, then after he tells you the tests, say “are you sure there isn’t anything extra you could add?”
If I understand you correctly, you have some kind of hyperactive “abandonment detector” system which, if triggered, throws you into an escalating loop of alarm signals. Does that sound about right?
A therapist should have a long look at this to make sure you’re not overlooking something. For actually changing that, you might have to be in a stable relationship that lets your System 1 learn an alternate response pattern, which can supplant the other although the one you have might need years to atrophy.
Lots of people with worse issues than yours are in working relationships. It’s just a matter of being with someone who can handle you when it’s bad and is willing to discuss the matter as much as you need. Someone who can get way closer to you than we can.
Sorry, this was a useless post, so now it’s gone
Correct. My impression is that this might be an area where psychodynamic therapy might actually be better than CBT, but I don’t have research to back that up.
Not quite what I was getting at. That may be true; I’m scared of psychodynamic therapy. I suspect chaos magick might be useful in therapy, may I add, being inspired by your name. But I haven’t seen any evidence for that, and you don’t experiment too much on people who want help, I say, when you can avoid it.
I’m referring more to the fact that the best evidence base for BPD is to use Dialectical behavioural therapy, which is arguably a form of CBT but you won’t get it from most CBT therapists (you won’t get good CBT therapy from most CBT therapists either probably...).
Moreover, BPD is one of the hardest things to diagnose.
That does sound approximately accurate, yes. To be honest, from what I’ve read, it’s close to a panic attack, though not quite as debilitating. I’m still able to put up some facade when in mixed company.
I don’t think that I’ll be able to afford a therapist. The closest I’ll be able to get is sites like Blahtherapy and 7 Cups of Tea, which are mostly in-training psychologists and therapists doing pro bono work for experience from what I’ve read. Not the best option, but it’s what I’ve got.
Yeah, I understand that. Under the assumption that you’re talking strictly platonic relationships, I’ve got people to help out with that, but there are few patient enough to help me out with this as much as I’d need, and those that do are concerned—rightfully so—at the dependency that would develop.
You’re fairly new here, so maybe you haven’t read through the material at http://slatestarcodex.com yet. Do that: You get tons of good insights, and some of the information there (including the discussion sections) might apply to your situation, such as http://slatestarcodex.com/2014/05/13/getting-a-therapist/ , http://slatestarcodex.com/2014/07/07/ssris-much-more-than-you-wanted-to-know/ and http://slatestarcodex.com/2014/06/16/things-that-sometimes-help-if-youre-depressed/ .
I don’t mean strictly platonic relationships, I mean an intense, deeply loving relationship where both people involved make themselves deeply vulnerable to one another. Where some degree of dependency is okay because it isn’t unilateral. These relationships can sometimes heal the people in them more deeply than therapy can.
You should look into neurofeedback, which many people have used as a substitute for depression- or anxiety-reducing drugs. I would suggest you only do it under the care of a professional, and not attempt doing it yourself, as you could make your condition worse. In the U.S. at least, neurofeedback is usually not covered by insurance, but it’s also not regulated, which makes it relatively inexpensive. I did mine under a nurse. You might notice improvements after the first session, but will probably need at least 20 sessions to see any permanent changes. Some people don’t respond at all to neurofeedback.
Which type of neurofeedback did you do?
I’m not sure of the name.
Looks like you still need to see a pro, only a therapist, not a psychiatrist.
http://marc.ucla.edu/body.cfm?id=22
I’d recommend using these guided meditations. (Way easier than unguided meditation IMO)
You can use them immediately upon onset of the episode. Use the longer ones. It’s a better way to use time compared to curling up in a corner.
I’ll repeat the same advice I got when I approached my doctor friends about anxiety.
Look for a clinical psychologist (language issue: I’m Canadian, not positive it is identical in the states) that has experience/specializes in anxiety issues. This was recommended after I’d done a physical—basic blood work and such—but it doesn’t sound like you have anything that would be related to physical issues.
The general feel I got was that it takes a bit of work to tailor a solution to your specific situation/triggers, and that doing it yourself is hard, with your own biases (which are likely to be severe in the area of your anxiety) getting in the way of any kind of self-diagnosis or self-help. Your anxiety attack, if that is what it is, sounds severe enough to seek help for.
Let’s have fun offering what-have-you questions for Fermi estimates! If we are here for fun, at least partially, let’s save some in quantifiable form. Here’s mine:
Would a standard piece of soap (neutral pH, lens-like shape) sink to the bottom of the Mariana Trench, or would it dissolve on the way down?
The soap keeps more or less constant density as it sinks, but the water is denser the deeper you go. And the density of soap is really close to that of water, so I expect that there is some depth at which the soap has the same density as the water, and when it gets to that level it stays there. And eventually it dissolves or gets eaten.
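Here’s a rough sanity check of that neutral-buoyancy idea in Python. All the numbers are assumptions I made up for the sketch (soap at ~1040 kg/m^3, surface seawater at ~1025 kg/m^3, compressibility ~4.6e-10 per Pa), and the pressure model is deliberately crude:

```python
# Crude Fermi model: water density rises with pressure via compressibility.
g = 9.8                      # m/s^2
rho_surface = 1025.0         # kg/m^3, seawater near the surface (assumed)
compressibility = 4.6e-10    # 1/Pa, rough value for water (assumed)
rho_soap = 1040.0            # kg/m^3, guess for a non-floating soap bar

def water_density(depth_m):
    """Approximate seawater density at a given depth, ignoring temperature."""
    pressure = rho_surface * g * depth_m          # hydrostatic pressure, roughly
    return rho_surface * (1 + compressibility * pressure)

for depth in (0, 2_000, 5_000, 11_000):          # 11 km is roughly the trench bottom
    print(depth, "m:", round(water_density(depth), 1), "kg/m^3")

# With these made-up numbers, water reaches ~1040 kg/m^3 around 3 km down, so a
# bar slightly denser than surface water could indeed hover far above the bottom.
```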
Dissolved, 60% probability. (I guess the quantity of soap does not diminish as easily through friction with water as through friction with skin, but still, that’s a boatload of water.) Sink, 30% probability. And here and now I announce to the world that I don’t have good enough intuitions for the Law of Archimedes, so 10% it would float. (And eventually dissolve. Maybe.)
Here’s mine: Without looking it up, what percentage of Chinese adults are employed in manufacturing? Is it more or less than 30%? Data that would be accurate for any year between 2006-2015 is a good answer.
Some soap floats, most soap bars sink.
Although I didn’t find any international standard, it appears most soap bars weigh 4 oz, which is Yankee for 113.4 grams. However, my research was stopped by this graphic of Pacific oceanic currents. The downward trajectory of that soap bar isn’t going to be vertical at all; an additional question could be: where in the Pacific should you drop your soap bar for the currents to take it to the Mariana Trench?
Now I want a fic in which Hermione wins a goat in the MH problem, on the grounds that Ron and Harry shouldn’t be trusted with a car, and they do need an antidote...
How much data for an uploaded mind?
What are your confidence levels that any resolution of brain-scans will be enough to create an emulated human mind? Or, put another way, how much RAM do you think an emulated mind would require to be run?
Partially relatedly, do you know of any more useful trend-lines on how fast brain-scanning technology is improving over the decades than http://www.singularity.com/charts/page159.html and https://imgur.com/cJWmOd1 ?
What’s your success criterion? Do you mean a human mind that the unuploaded copy will accept as a successful upload? Or that the relatives will accept? Or that some panel of expert judges will accept? In the latter cases, will it have to be unanimous?
Some people with particularly detailed Facebook timelines can conceivably be emulated well enough to fool the very gullible without any uploading taking place at all. Very senile people would also be easy to emulate. Babies would be easier than people with complex memories. Very rational people would be easier than those with idiosyncratic patterns of reasoning. People who do work that is hard to characterize (like architecture) would be easier to emulate than those who do work we find easy to characterize (like fiction writing). And so on.
I imagine, on the one hand, a brain scan and emulation system that convinces a couple of aging relatives that granny is now in the computer. And on the other hand, a system that allows a team of expert scientists to keep working together after the demise of one of them. Where on this spectrum is what you mean?
Because I wouldn’t be surprised if the former took a million times less memory and computational power than the latter.
I bet some architects are doing fairly routine work. I’m sure that some writers do work which is hard to characterize. Stephen King does gore by the yard, but every now and then he writes a story which is different from his usual, and that’s the part which would be hard for an em to get right.
Considering Scott Alexander, I don’t think it’s adequate to contrast very rational people against people with idiosyncratic patterns of reasoning.
More generally, there are computer programs which compose music by using patterns from major composers, and the result is “sounds like what Bach would write on an off day”. The hard challenge is to get the next “Jesu, Joy of Man’s Desiring”.
I think we are committing some fallacy here. The same fallacy that leads people to judge things simple once they are understood.
The real complexity of real life lies in interdependencies that are hard to capture and make precise. That doesn’t mean that anything humans do can’t be made precise and understood in principle. It just means that the things we understand seem simple.
How about, “Able to be employed at the same jobs as the original, and (if run at realtime speeds) able to perform as well as the original on any task not involving physical labour”, with ‘same jobs’ including anything from light office work to original academic research?
(I’m hoping to learn something in this thread which I can apply to economics once ems exist, and the above seems to closely correspond with the economic impact of the existence of an em.)
Ahem X-D
When very small shell scripts can be used to replace the researchers who are coming up with ways to keep computer improvements resembling Moore’s Law coming, then that’ll be a bit more relevant. :)
“Computer” used to be a job for humans. As Marc Andreessen pointed out a long time ago, software is eating the world—note the present tense, no need to wait for AIs or uploads.
And some jobs are more amenable to replacement than others. (Eg, “Manna”, by Marshall Brain, at http://marshallbrain.com/manna1.htm .) It seems safe to say that being able to create ems would be a significant step in replacing jobs currently held by biological humans; but there are all sorts of details involved which change the economic equations, such as how much RAM an em requires, and when the first em comes online. I’m afraid that your statement doesn’t seem to offer anything that I can use to improve my current estimates on any of these matters.
True. Sorry for the derailing onto a side track :-)
No worries; thread drift happens.
Now, is there any chance I can get you to offer any answers to my questions in my original comment..? ;)
I am still not going to be particularly useful :-/
With respect to uploads/ems my position is hardcore Knightian uncertainty: not only do I not have any estimates of the timing, I will disbelieve any estimates other people produce as well.
In fact, I don’t know if one can generate an upload by brain scanning at all. I certainly don’t think it’s an inevitability only delayed by the need to develop appropriate tech.
Well, if you haven’t bothered to form a genuine theory of how the brain works that compresses out the biological noise… I’d guess something along the lines of the last estimate I heard: multiple petabytes.
How many is ‘multiple’? A dozen? A hundred?
Where did you hear this estimate from?
As a fermi estimate, the human brain has on the order of 10^11 neurons, each of which has on the order of 10^4 synapses. If we’re able to compress the information about each synapse—its location, chemical environment, connections, action potentials, etc. - into a kilobyte (10^3 bytes) (wild guess), this gives us 10^18 bytes for a human brain. Or, about 1 exabyte (1000 petabytes).
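The estimate above is easy to write out as a script; the per-synapse byte count is the wild-guess parameter, everything else is just multiplication:

```python
# Fermi estimate of raw storage for an emulated brain (order-of-magnitude only).
neurons = 10**11              # ~number of neurons in a human brain
synapses_per_neuron = 10**4   # ~synapses per neuron
bytes_per_synapse = 10**3     # wild guess: location, chemistry, connections, state

total_bytes = neurons * synapses_per_neuron * bytes_per_synapse
print(total_bytes)                         # 10**18 bytes
print(total_bytes / 10**18, "exabytes")    # 1.0 exabyte, i.e. about 1000 petabytes
```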
It’s not clear that it’s possible to nondestructively scan a human brain to the necessary precision.
Is that remark intended to invalidate DataPacRat’s question somehow? (It seems to me a reasonable question even if it turns out that emulating specific human brains is infeasible for some entirely different reason.)
I haven’t argued that emulating specific human brains is unfeasible, just that it likely requires destructive scanning.
All the less reason why that suggestion is a reasonable response to DataPacRat’s question, surely?
I’m not worried about ‘nondestructive’ scanning; I’m curious when LWers believe /any/ form of em can arrive. (I simply haven’t been able to find any numbers on destructive scanning resolution, so the nondestructive scanning numbers are the most relevant ones I could include in my comment.) If a brain has to be vitrified, or chemically fixated, or undergo some other irreversible process, and then microtomed, but the result is data that would allow the creation of an em—then that would be included in my question.
Did I use Bayes’ formula correctly here?
Prior: 1⁄20
12⁄20 chance that test A returns correctly +
16⁄20 chance that test B returns correctly +
12.5/20 chance that test C returns correctly +
Odds of correct diagnosis?
I got 1⁄2
Let’s assume that every test has the same probability of returning the correct result, regardless of what it is (e.g., if + is correct, then Pr[A returns +] = 12⁄20, and if − is correct, then Pr[A returns +] = 8⁄20).
The key statistic for each test is the ratio Pr[X is positive|disease] : Pr[X is positive|healthy]. This ratio is 3:2 for test A, 4:1 for test B, and 5:3 for test C. If we assume independence, we can multiply these together, getting a ratio of 10:1.
If your prior is Pr[disease]=1/20, then Pr[disease] : Pr[healthy] = 1:19, so your posterior odds are 10:19. This means that Pr[disease|+++] = 10⁄29, just over 1⁄3.
You may have obtained 1⁄2 by a double confusion between odds and probabilities. If your prior had been Pr[disease]=1/21, then we’d have prior odds of 1:20 and posterior odds of 1:2 (which is a probability of 1⁄3, not of 1⁄2).
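If it helps to see the odds-form calculation spelled out, here is a minimal sketch (same numbers as above, and assuming the three tests are independent):

```typescript
// Odds-form Bayes: posterior odds = prior odds × product of likelihood ratios,
// where LR of test X = Pr[X returns + | disease] / Pr[X returns + | healthy].
const priorOdds = 1 / 19; // Pr[disease] = 1/20, i.e. odds of 1:19

const likelihoodRatios = [
  (12 / 20) / (8 / 20),    // test A: 3:2
  (16 / 20) / (4 / 20),    // test B: 4:1
  (12.5 / 20) / (7.5 / 20) // test C: 5:3
];

const posteriorOdds = likelihoodRatios.reduce((odds, lr) => odds * lr, priorOdds); // 10:19
const posteriorProb = posteriorOdds / (1 + posteriorOdds);                         // 10/29 ≈ 0.345
console.log(posteriorOdds, posteriorProb);
```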
Kindly, indeed.
Thank you. I believe I’ve got it down now.
Prior: 1/101
Test: correct positive 95%,
false positive 20%.
1 of the 101 has the disease, with a 95% probability of receiving a positive reading: 1 × .95 = .95.
And 100 don’t have the disease, each with a 20% probability of a positive reading: 100 × .2 = 20.
.95 + 20 = 20.95
.95 / 20.95 ≈ .045, i.e. a 4.5% chance that someone receiving a positive reading has the disease.
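The same counting, written out as a small sketch (the figures are the ones above; nothing new is assumed):

```typescript
// Frequency version of Bayes: of 101 people, how many positives come from the one sick person?
const sick = 1;
const healthy = 100;
const truePositiveRate = 0.95;  // Pr[+ | disease]
const falsePositiveRate = 0.20; // Pr[+ | healthy]

const truePositives = sick * truePositiveRate;      // 0.95
const falsePositives = healthy * falsePositiveRate; // 20
console.log(truePositives / (truePositives + falsePositives)); // ≈ 0.045, i.e. ~4.5%
```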
Thank you again :)
I am assuming you want the posterior probability of disease given three positive tests. You’re going to need more information - unless you provide either the specificities or the likelihood ratios, the question cannot be answered.
I may have used weak phrasing.
Each test returns positive. The frequency, out of 20 goes, with which each does so correctly is indicated by the “12,” “16,” and “12.5,” respectively.
So really, it’s the odds of actually having the disease, given the three positive test results, I guess. Would it be 1⁄2 under those circumstances?
Thank you for your assistance.
You also need to consider the odds of a false positive for each test.
Link: Complexity-Induced Mental Illness
I think he is right, and that’s an actual insight. The typical mind fallacy suggests you wouldn’t notice this if it doesn’t apply to you, and especially on LW I’d guess that most people can deal with a lot of complexity. Can you?
[pollid:1005]
Total sidenote: Scott Adams’s blog has a cute function when copying text: after copying (^C) a whole paragraph, it automatically appends a reference URL for the post to the clipboard contents. I couldn’t quickly find out how it’s done, but it’s a nice feature for a blog to have.
I think this is becoming more common; the Marginal Revolution blog has been doing it for some time.
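For anyone curious about the mechanism: the usual way this is done (a guess at the technique; I haven’t looked at either blog’s actual code) is a small script that intercepts the browser’s copy event and appends the page URL, roughly like this:

```typescript
// Append a reference URL whenever the reader copies text from the page.
// A generic sketch of the technique, not the actual code either blog uses.
document.addEventListener('copy', (event: ClipboardEvent) => {
  const selection = window.getSelection()?.toString() ?? '';
  if (selection.length < 40) return; // leave short copies (a single word, say) alone

  const attribution = `\n\nRead more at: ${document.location.href}`;
  event.clipboardData?.setData('text/plain', selection + attribution);
  event.preventDefault(); // use the modified text instead of the default copy
});
```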
From a utilitarian perspective what Stannis did on last night’s Game of Thrones was completely justified, correct? Yet Slate is calling him “This Week’s Worst Person in Westeros.” Please no spoilers from people who have read past this point in the books. It seems like a key theme of Game of Thrones is that you should be a consequentialist utilitarian.
Unless I missed an extremely large piece of evidence regarding the red-lord-of-light-lady’s trustworthiness, I don’t think we can say any assessment is “completely justified.” My impression of Stannis’ new advisor is of someone with a nice-looking bag of tricks who likes to take credit when things go right and counsels faith and patience when things go wrong.
She could also be the real thing. She could also be the real thing but have her own selfish motives. She could be westerosi-satan tempting Stannis in preparation to suck him into eternal damnation.
While condemning Stannis solely on the “ick” factor of his actions is inadequate, so is calculating the utility of those actions starting from “the red lady is telling the truth.”
Stannis has proof that the red lady has magical powers from when she ghearq uvz vagb n funqbj perngher fb ur pbhyq xvyy uvf oebgure. http://www.rot13.com/index.php
Magical powers is not the same as powers divinely granted by a being that has your best interests at heart and whose servants have no agenda of their own. And, going genre savvy for a moment, the incident you refer to is pretty strong evidence that Mellie’s powers tend to the less-luminous side.
She may very well have magical powers, but the assumption that she is using them solely for his benefit and not misleading or manipulating him to her own ends is primarily what I take issue with.
I think it is Ice and Fire for a reason. Stopping the Others will take fire, either from dragons or from the fire god; both are predicted by different prophecies. That gives a 50% chance Melisandre is for real. Actually, my bet would be more on the fire god: the dragons of this universe are not magic, just animals who happen to use fire to cook meals and hatch eggs. Something as big a deal as the Others probably requires input from a god to deal with. This makes it really likely she is for real.
There’s certainly a magical aspect to the dragons… there was an egg passed down for centuries that didn’t hatch until the “dragon-blooded” descendant was willing to sacrifice herself in a fire—which she then survived, emerging with hatched dragons.
The event that occurred in the show hasn’t even happened yet in the books. So there are no additional spoilers for a reader to give.
Welp.
How does that compare to the utility of suing for peace and coordinating with the Boltons to defend the Wall?
Stannis assigns a very high utility to sitting on the Iron Throne, so he may believe it justified. However, that’s a sign of his own obstinacy and unbending will rather than a dispassionate evaluation of the situation. Roose Bolton pointed out in the previous episode just how untenable Stannis’ military situation is.
The Boltons are untrustworthy, so it would be hard to reach an agreement with them; but even if what you write is the global maximum, what Stannis did was still better than attacking the Boltons without doing it.
From a deontological perspective, what he did was terrible. From a utilitarian perspective, what he did was done to enable a civil war that will kill thousands of people and drag the continent into chaos it really can’t afford right now.
The thousands that would die in a civil war are trivial compared to what the white walkers could do, and Stannis winning the civil war is the best hope at stopping the white walkers.
He seems to think that. I think they have a much better chance of stopping the white walkers if Stannis allies with the Lannisters.
At some point I wrote a reply to a newcomer to LW that basically said to leave LessWrong and study science, math, and philosophy, or something like that. It was emotionally charged and undeserved; it reflected my emotional state at the time and was unreasoned. I can’t find it anymore, and I feel ashamed of that ill-considered advice. Since I can’t find the reply to retract it: if anyone happens to see it, is convinced by it, and then checks out my post history, maybe they’ll see this and reconsider. If nothing else, writing this clears my conscience.
What do we really understand about the perception of time speeding up as we get older? Every time I have seen it brought up, one of two explanations is given. The first is that time seems to speed up because we have fewer novel experiences, which in turn leads to fewer new memories being created; supposedly our feeling of time passing depends on how many new memories we form in a given time frame, so we feel time speeding up.
The other explanation I have seen is that time speeds up because each new year is a smaller percentage of your life up to that point. For example, it is easier to distinguish a 2kg weight and 4kg weight than a 50kg weight and a 52kg weight. So the argument goes that a similar thing holds for our perception of time passing.
These arguments both feel sketchy to me. Is there a more rigorous investigation into this question?
I think the second explanation is correct, especially since your life up to the present doesn’t have a definite beginning point in your memory. Even if there is a first thing that you remember, you also know that that was not really the beginning of your life. So your life as you remember it is basically indefinite, but it is still objectively a finite quantity of time. And since you don’t have any particular objective measure of time, the only way you can measure a month or a year passing now is to compare it with your past experience of time. This gives you a fairly precise measure of how much time should appear to speed up. For example, the time from age 10 to age 20 should pass about as quickly as the time from age 20 to age 40. In my experience this seems about right.
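One way to make that concrete (a standard logarithmic-compression assumption, not something the comment above commits to) is to say a year at age t feels like 1/t of a unit, so an interval from t1 to t2 feels like ln(t2/t1):

```typescript
// Logarithmic model of subjective time: felt length of [t1, t2] = ln(t2 / t1).
// (An illustrative assumption, not a claim about how the brain actually works.)
const feltDuration = (t1: number, t2: number) => Math.log(t2 / t1);

console.log(feltDuration(10, 20)); // ≈ 0.693 (= ln 2)
console.log(feltDuration(20, 40)); // ≈ 0.693; the two decades feel equally long
```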
Why would Friendly AI need to be fully artificial? Just upload a Verified Nice Person and throw a lot of hardware at him/her?
Throwing hardware at a person doesn’t keep his personality stable.
Cf. the old adage: Power corrupts. Thinking 1000x faster than everyone around you is one variety of power.
You mean something along the lines of: what is a moment of self-doubt when running on a slow brain becomes an immediate tailspin into suicidal depression when running on the Blue Gene?
It’s a lot more complicated than just switching from slow to fast.
I don’t think a human running at even 1000x speed would be able to take over the world. Also, from the upload’s perspective, a century passes every ~5 weeks; how likely are the upload’s values and personality to remain stable for one sidereal year?
Value transparency, cross domain translation, and stability.
I hope I’m posting this correctly. I swear that I did my best to research how to use open threads here but to no avail. This is a poem I posted a few days ago in discussion, and I am attempting to have it talked about in open thread where it “belongs.”
I’ve been considering poetry that I write of this nature to be of a Reason/Cyberpunk/Transhuman sort of genre. Feedback would be appreciated.
I forever wish to change from who I am today,
Yet as I am today, I do not wish to cease.
Who am I in this moment?
I am nothing to myself without the passage of time
If I had no fear of death,
Would I have a wish to live?
I can deny cynicism.
Can I verify optimism?
Must euphoria define my goals?
Every euphoric drive has served to continue my existence.
From the beginning mechanisms of life, I have emerged
Passed through millions/billions of small keyholes of existence
A package of information, which served to create me
Developed me to fit my environment.
Existing just to continue to exist.
An axiom of my function
Euphoria drives me
Skepticism contradicts me
I cannot withhold judgement on the purpose of existing.
To enjoy the show is to accept this euphoria as my chosen purpose in the end.
Can I want without pleasure?
Can my wants be reasoned?
Why do I want to enjoy the show,
Yet not to be consumed or confined to an eternity of bliss?
Is dignity and pride different from euphoric drives?
Are they the strategies and philosophies of my existence?
Can I be more obsessed with finding the perfect design for myself,
Than with finding bliss? Are they functionally different?
Yup, the open thread is a reasonable place to post poetry.
I have a bit of feedback but it may not be very useful: I don’t see what this gains from being presented as poetry. If you removed all the line breaks and the capital letters at the starts of lines, would its impact be much different? If you replaced each line with a paraphrase, would much change? (Would it be … functionally different?)
Perhaps my view of poetry is terribly conventional and old-fashioned: I think it’s usually distinguished from non-poetry by some combination of (1) concern for sound as much as for sense, (2) repetition (of sounds, of ideas, of patterns of stress, etc.), (3) compression (via allusion, ambiguity, exquisite control of nuances), and (4) constraint (on syllable counts, appearance on the page, rhyme scheme, etc.). If something doesn’t have much of any of these, then for me the experience of reading it is different enough from that of reading more “central” examples of poetry that I don’t see why it should go in the same pigeonhole.
(For the avoidance of doubt, “poetic” is not remotely the same thing as “good”. Writing can be very, very good with rather little of #1, none of #2, scarcely any of #3, and none of #4. And writing can be indubitably poetry but also utterly terrible.)
This is a good place to post your poem.
Thank you for the read.
You’ve brought to my mind an interesting realization. “Rationalist” art forms don’t really have their own place, as of yet. (That I’ve noticed.)
And it seems interesting that I’ve lurked as long as I have without coming across any attempt to grow from the Savannah-Poet mindset, as put forth in Eliezer’s writings on Reductionism.
Perhaps the reception is due to running against the grain, concerning language use in poetry? The content strikes me as something a budding philosopher might wonder at, although I would refer the subject of the poem to Eliezer’s writings The Gift We Give Tomorrow, and his Fun Theory Sequence. Read aloud it did not grate my ears.
I might have overreached here, but I hope I have been useful.
EDIT: Please do continue with your artistic ambitions. They are part of our charge, something to protect, even if their place in the new arts is small.
I’ve been reading the discussion between Holden et al. on the utility of charities aimed at directly decreasing existential risk, but the discussion seems to have ended prematurely. It (basically) started with this post, then went to this post. Holden made a comment addressing the post, but I think it didn’t fully address the post, and I don’t think Holden’s comment was fully addressed either. Is there any place that continues the discussion?
Why is the date or year of publication usually missing from PDF versions of research publications?
Is this a convention, perhaps specific to certain fields? I find it frustrating at times and am curious as to the reason behind it.
Can you make an example? Usually the ones I find on Arxiv have it.
Actually I think it is me not seeing them. Some do have the date at the top header, like http://arxiv.org/abs/1401.5577
But most don’t, not in the footer or at the end of the paper either.
I realise now I was looking in the wrong spot—papers like this https://intelligence.org/files/TowardIdealizedDecisionTheory.pdf have the date in the bottom left of the first page. Checking other PDFs shows the same thing, so I assume that is one of the standards?