Knowledge is initially local. Induction works just fine without a global framework. People learn what works for them, and do that. Once the whole globe becomes interconnected, we each have more data to work with, but most of it is still irrelevant to any particular person’s purposes. We cannot physically hold even a small fraction of the world’s knowledge in our heads, nor would we have any reason to.
Differences cannot be “settled” by words, only revealed and negotiated around. We have different knowledge because we have different predispositions, and different experiences that we have learned from, and have created different models as a result. We can create new experiences by talking about our experiences, but we cannot truly impart our experiences as we have had them by doing so.
It’s our differences that make humanity more than just 8 billion human clones, and give it its distinct shape. Each difference, each experience, adds to humanity. What would humanity be if everyone agreed on everything, had all the same experiences? An ant colony, with no queen? The equivalent of one human brain, mass-produced in the billions?
Most of us wish humanity took a different, more pleasing shape, with fewer sharp edges and less harsh colors. We try to mold it to our own tastes, remove conflict and suffering and ignorance and disability. We wouldn’t be human if we didn’t try. But no human being could ever succeed at it. Only humanity possibly could, warts and all.
How is confidence different from the belief you have in your own competence? Your self-reported confidence and competence should always be the same.
Is there something I’m missing, some way that confidence is distinct from belief in competence?
What is the mechanism, exactly? How do things unfold differently in high school vs. college with the laptop if someone attempts to steal it?
Do you have any examples in mind?
If an altruist falls on hard times, they can ask other altruists for help, and those altruists can decide to divert their charitable donations if they consider it worth more to help the altruist. If the altruists are donating to the same charities, it is very likely that restoring the in-need altruist’s ability to donate will more than pay for the donations diverted.
If charitable donations cannot be faked, and an altruist’s report of hard times preventing their charity can be trusted, then this will work to provide a financial buffer based purely on mutual interest.
Only if most altruists in the network fall on hard times does this fail, as there aren’t enough remaining charitable donations to redistribute. A global network of diversely employed altruists would minimize this risk.
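A toy calculation may make the break-even intuition concrete. This is a minimal sketch with invented numbers (the monthly donation, the support needed, and the horizon of restored giving are all assumptions), not a claim about real giving patterns:

```python
# Toy model of the mutual-aid intuition above: diverting donations to an
# altruist in need "pays for itself" if it restores their future giving.
# All figures below are illustrative assumptions.

monthly_donation = 200          # assumed normal monthly donation per altruist
support_needed = 1_000          # assumed one-off support that restores their income
months_of_restored_giving = 24  # assumed horizon over which they resume donating

diverted_from_charity = support_needed
restored_donations = monthly_donation * months_of_restored_giving

net_gain_to_charity = restored_donations - diverted_from_charity
print(f"Diverted now:        ${diverted_from_charity}")   # $1000
print(f"Restored later:      ${restored_donations}")      # $4800
print(f"Net gain to charity: ${net_gain_to_charity}")     # $3800
```

With these numbers the charities come out ahead overall; the scheme only fails on these terms if the support needed exceeds the donations the recipient would plausibly resume, or if too many members need help at once, as noted above.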
Cases where an altruist is permanently knocked out of income (and therefore donation) would lack mutual interest. There would need to be a formal agreement to divert some charity for life to help them out, and this would most likely be separate from the prior network of mutual aid, and count as insurance.
I appreciate the benefits of the karma system as a whole (sorting, hiding, and recommending comments based on perceived quality, as voted on by users and weighted by their own karma), but what are the benefits of specifically having the exact karma of comments be visible to anyone who reads them?
Some people in this thread have mentioned that they like that karma chugs along in the background: would it be even better if it were completely in the background, and stopped being an “Internet points” sort of thing like on all other social media? We are not immune to the effects of such things on rational thinking.
Sometimes in a discussion in comments, one party will be getting low karma on their posts, and the other high karma, and once you notice that you’ll be subject to increased bias when reading the comments. Unless we’re explicitly trying to bias ourselves towards posts others have upvoted, this seems to be operating against rationality.

Comments seem far more useful in helping writers make good posts. The “score” aspect of karma adds distracting social signaling, beyond what is necessary to keep posts prioritized properly. If I got X karma instead of Y karma for a post, it would tell me nothing about what I got right or wrong, and therefore wouldn’t help me make better posts in the future. It would only make me compare myself to everyone else and let my biases construct reasoning for the different scores.
A sort of “Popular Comment” badge could still automatically be applied to high-karma comments, if indicating that is considered valuable, but I’m not sure that it would be.
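To make the suggestion concrete, here is a minimal sketch of how a site could keep karma-weighted sorting and a “Popular Comment” badge without ever displaying raw totals. The weighting formula and badge threshold here are invented for illustration and are not how LessWrong actually computes karma:

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    text: str
    votes: list = field(default_factory=list)  # (direction, voter_karma) pairs

    def score(self) -> float:
        # Hypothetical weighting: each vote counts in proportion to the
        # square root of the voter's own karma. The real formula may differ.
        return sum(direction * (1 + voter_karma) ** 0.5
                   for direction, voter_karma in self.votes)

POPULAR_THRESHOLD = 10.0  # invented cutoff for the badge

def render(comments: list) -> None:
    # Sort by score (keeping the prioritization benefits), but display only
    # an optional badge, never the number itself.
    for c in sorted(comments, key=Comment.score, reverse=True):
        badge = " [Popular Comment]" if c.score() >= POPULAR_THRESHOLD else ""
        print(f"{c.text}{badge}")

render([
    Comment("Nitpick about footnote 3", votes=[(+1, 5)]),
    Comment("Thorough counterargument", votes=[(+1, 500), (+1, 120), (-1, 10)]),
])
```

The sorting and badge logic read the score internally, but the reader never sees the total.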
TL;DR: Hiding the explicit karma totals of comments would keep all the benefits of karma for the health of the site, reduce cognitive load on readers and writers, and reduce the impact of groupthink, with no apparent downsides. Are there any benefits to seeing such totals that I’ve overlooked?
“by running a simulation of you and seeing what that simulation did.”
A simulation of your choice “upon seeing a bomb in the Left box under this scenario”? In that case, the choice to always take the Right box “upon seeing a bomb in the Left box under this scenario” is correct, and what any of the decision theories would recommend. Being in such a situation does necessitate the failure of the predictor, which means you are in a very improbable world, but that is not relevant to your decision in the world you happen to be in (simulated or not).

Or: A simulation of your choice in some different scenario (e.g. not seeing the contents of the boxes)? In that simulation, you would choose some box, but regardless of what that decision would happen to be, you are free to pick the Right box in this scenario, because it is a different scenario. Perhaps you picked Left in the alternative scenario, perhaps the predictor failed; neither is relevant here.
Why would any decision theory ever choose “Left” in this scenario?
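For what it’s worth, the comparison can be put numerically. The payoffs below are assumptions standing in for the usual presentation of the problem (an enormous negative utility for taking the box with the bomb, a small fee for Right), so this is only a sketch of the conditional reasoning above, not anyone’s official formulation:

```python
# Compare actions *conditional on having already seen the bomb in Left*.
# Utilities are illustrative assumptions, not part of the quoted scenario.
U_BURN = -1_000_000   # assumed utility of taking Left when it contains a bomb
U_RIGHT_FEE = -100    # assumed small cost of taking Right

def conditional_value(action: str, bomb_in_left: bool = True) -> int:
    """Utility of an action given what has already been observed."""
    if action == "Left":
        return U_BURN if bomb_in_left else 0
    return U_RIGHT_FEE

# You have seen the bomb, so the predictor has (however improbably) already
# failed; nothing about your choice now changes that observation.
print(conditional_value("Left"))   # -1000000
print(conditional_value("Right"))  # -100
```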
Such a system doesn’t prescribe which action to take from that set, but in order for it to contain supererogatory actions, it has to say that some are more “morally virtuous” than others, even in that narrowed set. These are not prescriptive moral claims, though. Even if you follow this moral system, the statement “X is more morally virtuous but not prescribed” coming from it is not relevant to you. The system might as well say “X is more fribble”. You won’t care either way, unless the moral system also prescribes X, in which case X isn’t supererogatory.
If I am not obliged to do something, then why ought I do it, exactly? If it’s morally optimal, then how could I justify not doing it?
Supererogatory morality has never made sense to me previously. Obviously, either doing the thing is optimally moral, in which case you ought to do it, or it isn’t, in which case you should instead do the optimally moral thing. Surely you are morally blameworthy for explicitly choosing not to do good regardless. You cannot simply buy a video game instead of mosquito nets because the latter is “optional”, right?
I read about slack recently. I nodded and made affirmative noises in my head, excited to have learned a new concept that surely had use in the pursuit of rationality. Obviously we cannot be at 100% at all times, for all these good reasons and in all these good cases! I then clicked off and found another cool concept on LessWrong.
I then randomly stumbled upon an article that offhandedly made a supererogatory moral claim. Something clicked in my brain and I thought “That’s just slack applied to morality, isn’t it?”. Enthralled by the insight, I decided this was as good an opportunity as any to make my first Shortform. I had failed to think deeply enough about slack to actually integrate it into my beliefs. This was something to work on in the future to up my rationalist game, but I also get to pat myself on the back for realizing it.
Isn’t my acceptance of slack still in direct conflict with my current non-acceptance of supererogatory morality? And wasn’t I just about to conclude without actually reconciling the two positions?

Oh. Looks like I still have some actual work ahead of me, and some more learning to do.
The only difference between this and current methods of painless and quick suicide is how “easy” it is for such an intention and understanding to turn into an actual case of non-existence.
Building the rooms everywhere and recommending their use to anyone with such an intention (“providing” them) makes suicide maximally “easy” in this sense. On a surface level, this increases freedom, and allows people to better achieve their current goals.
But what causes such grounded intentions? Does providing such rooms make such conclusions easier to come to? If someone says they are analyzing the consequences and might intend to kill themselves soon, what do we do? Currently, as a society, we force people to stay alive, tell them how important their life is, how their family would suffer, that suicide is a sin, and so on, and we do this to everyone who is part of society.

None of these classic generic arguments will make sense anymore. As soon as you acknowledge that some people ought to push the button, that anyone might need to consider such a thing at any time, you have to explain specifically why this particular person shouldn’t right now, if you want to reduce their suicidal intentions. The fact that someone considering suicide happens to think of their family as a counter-reason is because of the universal societal meme, not because of its status as a rational reason (which it may very well happen to be).
We can designate certain groups (e.g. the terminally ill) as special, and restrict the rooms to them, creating new memes for everyone else to use based on their health, but the old memes remain broken, and the new ones may not be as strong.
I suspect that the main impact of providing the rooms will be socially encouraging suicide, regardless of what else we try to do, even if we tell ourselves we are only providing a choice for those who want it.
I tested Otter.ai for free on the first forty minutes of one podcast (Education and Charity with Uri Bram), and listening at 2x speed allowed me to make a decent transcript at 1x speed overall, with a few pauses for correction. The main time sinks were separating the speakers and correcting proper nouns, both of which seem to be features of the paid $8.33/month version of the program (which, if used fully, would come to about $0.001/minute). If those two time sinks are in fact totally fixed by the paid version, I could easily imagine creating a decent, accurate transcript in half the run time of the podcast. Someone who can type faster than me could possibly cut the time down even more.
If there is sufficient real demand for particular transcripts, or for all of them, I would be willing to do this transcription myself at no cost (though if I’m going to do a lot of them, some kind of payment for my work would be the most convincing evidence of that demand; I don’t want to waste my effort on something people merely say they would like).
The most likely scenario for human-AGI contact is some group of humans creating an AGI themselves, in which case all we need to do is confirm its general intelligence to verify that it is an AGI. If we have no information about a general intelligence’s origins, or its implementation details, I doubt we could ever empirically determine that it is artificial (and therefore an AGI). We could empirically determine that a general intelligence knows the correct answer to every question we ask (great knowledge), can do anything we ask it to (great power), and does do everything we want it to do (great benevolence), but it could easily have constraints on its knowledge and abilities that we as humans cannot test.
I will grant you this: just as sufficiently advanced technology would be indistinguishable from magic, a sufficiently advanced AGI would be indistinguishable from a god. However, “There exists some entity that is omnipotent, omniscient, and omnibenevolent” is not well-defined enough to be truth-apt, since there are no empirical consequences to its being true rather than false.
Rationalists may conceive of an AGI with great power, knowledge, and benevolence, and even believe that such a thing could exist in the future, but they do not currently believe it exists, nor that it would be maximal in any of those traits. If it has those traits to some degree, such a fact would need to be determined empirically based on the apparent actions of this AGI, and only then believed.
Such a being might come to be worshipped by rationalists, as they convert to AGI-theism. However, AGI-atheism is the obviously correct answer for the time being, for the same reason monotheistic-atheism is.
Your system may not worry about average life satisfaction, but it does seem to worry about expected life satisfaction, as far as I can tell. How can you define expected life satisfaction in a universe with infinitely-many agents of varying life-satisfaction? Specifically, given a description of such a universe (in whatever form you’d like, as long as it is general enough to capture any universe we may wish to consider), how would you go about actually doing the computation?
Alternatively, how do you think that computing “expected life satisfaction” can avoid the acknowledged problems of computing “average life satisfaction”, in general terms?
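To illustrate why I ask: one standard problem with averaging over infinitely many agents is that the answer can depend entirely on the order in which the agents are enumerated. Here is a minimal sketch, using an invented population with infinitely many agents at satisfaction 1 and infinitely many at satisfaction 0; the enumeration scheme is purely illustrative:

```python
from itertools import islice

def enumerate_population(ones_per_zero: int):
    """Yield the satisfactions of the same infinite population
    (infinitely many 1s and infinitely many 0s) in a chosen order."""
    while True:
        for _ in range(ones_per_zero):
            yield 1
        yield 0

def running_average(stream, n: int) -> float:
    """Average satisfaction over the first n enumerated agents."""
    values = list(islice(stream, n))
    return sum(values) / len(values)

# Same agents, different enumeration order, different limiting "average":
print(running_average(enumerate_population(1), 1_000_000))  # ~0.5
print(running_average(enumerate_population(2), 1_000_000))  # ~0.667
```

Unless an ordering (or a measure over agents) is specified, the running average simply fails to pick out a unique value, which is the kind of obstacle I would expect any computation of “expected life satisfaction” to have to address.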
In the graphs, is “confidence” referring to “confidence in my ability to improve”, then? And so we are graphing competence vs. ability to improve competence?
Otherwise, if I’m trying to place myself on one of these graphs, I’m simply unable to do anything but follow the dotted line. There is no “felt sense of confidence” that I can identify in myself that doesn’t originate in “I am competent at this”.