Got it. Thank you!
Skeptityke
What happens if it only considers the action if it both failed to find “PA+A()=x” inconsistent and found a proof that PA+A()=x proves U()=x? Do an inconsistency check first and only consider/compare the action if the inconsistency check fails.
I had an idea, and was wondering what its fatal flaw was. For UDT, what happens if, instead of proving theorems of the form “A()=x → U()=x”, it proves theorems of the form “PA + A()=x ⊢ U()=x”?
At a first glance, this seems to remove the problem of spurious counterfactuals implying any utility value, but there’s probably something big I’m missing.
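The rule described in the comments above can be sketched in a few lines. This is only a toy model: both helper functions are hypothetical stand-ins for bounded proof search, not real theorem provers.

```python
# Toy sketch of the proposed rule. The two helpers are hypothetical:
#   finds_inconsistency(x): did a bounded search find "PA + A()=x" inconsistent?
#   proves_utility(x): a u such that PA + A()=x proves U()=u, or None if no proof found.

def choose_action(actions, finds_inconsistency, proves_utility):
    best_action, best_utility = None, float("-inf")
    for x in actions:
        if finds_inconsistency(x):
            continue  # skip actions whose assumption was found inconsistent
        u = proves_utility(x)
        if u is not None and u > best_utility:
            best_action, best_utility = x, u
    return best_action

# Toy world: assuming A()="b" is found inconsistent, so even a spurious
# "proof" that it yields utility 10 never gets compared against "a".
print(choose_action(
    ["a", "b"],
    finds_inconsistency=lambda x: x == "b",
    proves_utility=lambda x: {"a": 5, "b": 10}[x],
))  # prints a
```

The point of the inconsistency check is visible in the toy run: the spurious high-utility branch is filtered out before its proof is ever consulted.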
Completed.
This is actually isomorphic to the absent-minded driver problem. If you precommit to going straight, there is a 50⁄50 chance of being at either one of the two indistinguishable points on the road. If you precommit to turning left, there is a nearly 100% chance of being at the first point on the road (since you wouldn’t continue on to the second point with that strategy). It seems like probability can be determined only after a strategy has been locked into place.
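For concreteness, here is the counting argument as a one-liner (this just encodes the reasoning above, not a full analysis of the problem): the driver always reaches the first point, and reaches the second point only with probability p of continuing straight, so conditional on being at some intersection, P(first point) = 1 / (1 + p).

```python
# Probability of being at the first indistinguishable point, as a function
# of the precommitted probability p of driving straight at each intersection.
def p_at_first_point(p_continue):
    return 1.0 / (1.0 + p_continue)

print(p_at_first_point(1.0))  # precommit to going straight: 0.5
print(p_at_first_point(0.0))  # precommit to turning left:   1.0
```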
Question for AI people in the crowd: to apply Bayes’ Theorem, you need both a prior and a conditional likelihood. I can see how to estimate the prior of something, but for real-life cases, how could accurate estimates of P(A|X) be obtained?
Also, we talk about world-models a lot here, but what exactly IS a world-model?
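On the first question, one common practical answer is to estimate P(A|X) from co-occurrence counts in observed data, with a little smoothing so unseen combinations don’t get probability zero. A minimal sketch (the dataset here is invented for illustration):

```python
# Laplace-smoothed frequency estimate of a conditional probability P(A=a | X=x)
# from a list of (a, x) observations. alpha is the smoothing pseudocount and
# n_outcomes the number of possible values of A.
def conditional_prob(pairs, a, x, alpha=1.0, n_outcomes=2):
    n_x = sum(1 for ai, xi in pairs if xi == x)
    n_ax = sum(1 for ai, xi in pairs if ai == a and xi == x)
    return (n_ax + alpha) / (n_x + alpha * n_outcomes)

data = [(1, "rain"), (1, "rain"), (0, "rain"), (1, "sun"), (0, "sun"), (0, "sun")]
print(conditional_prob(data, a=1, x="rain"))  # (2 + 1) / (3 + 2) = 0.6
```

With enough data the smoothed estimate converges to the raw frequency; the hard real-life part is getting data that actually samples the conditional you care about.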
I’d call it a net positive. Consider the axis running from “accept all interviews, and wind up in some spectacularly abysmal pieces of journalism” to “only allow journalism that you’ve viewed and edited” (the quantity-vs-quality tradeoff). I suspect the best place on it is the one where writers who already know what they’re going to say are filtered out, and where writers who make an actual effort to understand and summarize your position (even if somewhat incompetently) are engaged.
I don’t think the saying “any publicity is good publicity” is true, but “shoddy publicity pointing in the right direction” might be.
I wonder how feasible it is to figure out journalist quality by reading past articles… Maybe ask people who have been interviewed by the person in the past how it went?
I think there’s an important distinction to be made between the different levels of earning to give. Really, there’s a spectrum between “donate 5 percent of income” at one end and “devote your existence to resolving the issue” at the other. For humans trying to do the best they can, scaling up too fast can in fact lead to severe burnout. So caring for yourself and having a good, low-stress life is a good idea because it guards against burnout. It is better to donate a thousand dollars a month to resolve an issue than three thousand with an 80% chance of burnout. Slowly build up to higher points on the spectrum that don’t give up quality of life.
Remember, the goal is to do that which works, not to win a “I’m way more hardcore about charity than you!” contest. If that which works leads to sacrifice and you can handle it without burnout risk, then sacrifice. If self-sacrifice doesn’t work for solving the issue, then don’t do it. And yes, aligning oneself with the people working on it and supplying them with resources is pretty much exactly what is required in many cases. Earning to give comes from the fact that the “supplying them with resources” step works much better with more resources, and working at high paying jobs is a good way to get resources.
And finally, about not understanding why someone would completely change their lifestyle to help as many people as possible: lifestyle changes tend to look really intimidating from the outside, not from the inside. In college, as an example, going “I’m taking >20 credits” makes people mightily impressed and worried about your inevitable lack of a social life, but once you actually start doing it, it doesn’t feel extraordinary or hard from the inside. Dropping annual expenses from 60k to 15k is another thing that sounds intimidating, but from the inside, it isn’t that difficult, and quality of life doesn’t significantly change.
So that’s one part of it, that it doesn’t take as much of a sacrifice as you think. The second part of it is that if there is anything at all that you value more than the thing you would spend the money on instead, moving the money to the more highly valued thing is inevitable if you don’t compartmentalize. I value ten lives more highly than purchasing a shiny new car, and I suspect that most people would agree with this. It’s just a matter of acting on preexisting values and desires.
The reason to make lots of money to give it away is elaborated on here, in the paragraph about the lawyer who wants to clean up the beach.
Summary version: More charities are funding-limited than volunteer-limited, and if you are making a sufficient amount of money, working one extra hour and donating the proceeds from that hour gets more done, saves more people, than using that hour to volunteer. The important part is to actually save people.
Saving people is far more important than giving consistently (if the best way to save people is to give each month, I want to give each month; if the best way to save people is to donate large chunks infrequently, I want to donate large chunks infrequently), saving people is far more important than having a good attitude towards giving (if having a good attitude towards giving makes me donate more, I want to have a good attitude towards giving; if having a selfish attitude towards giving makes me donate more, I want to have a selfish attitude), and saving people is far more important than spiritually developing in the process (I trust you can complete the pattern). I’m not saying these things are bad, it’s just that they are subgoals of the thing you are trying to accomplish, which is doing the most good. Making a great deal of money to give it away, and making sure you don’t backslide into selfishness, are things to do to ensure that the most people can be saved. Regular giving is secondary in importance.
The goal is not fitting conventional patterns of giving; the goal is to help as many people as possible. To try to get a high score in the LIVES IMPROVED statistics column of the game of life. If something helps in this quest, do it; if it doesn’t help, stop doing it.
Just taking a wild shot at this one, but I suspect that the mistake is between C and D. In C, you start with an even distribution over all the people in the experiment, and then condition on surviving. In D, your uncertainty gets allocated among the people who have survived the experiment. Once you know the rules, in C, the filter is in your future, and in D, the filter is in your past.
Just sent 40 bucks your way. Though I am a college student, I decided that I wanted to begin a donation habit so future me is less likely to go “All discretionary income will be used on me personally”. Thus, this.
When I last looked at the bar, it had 99 donors and ~80k dollars donated, and now it has 104 donors and ~190k dollars donated. From this, I can deduce that somebody donated a whole hell of a lot of money.
Positive reinforcement for the donor! Group approval for the benefactor! High fives and internet hugs all around!
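The arithmetic behind the deduction, for anyone checking: five new donors account for roughly $110k between them.

```python
# Donation-bar arithmetic: the change in donors and dollars between the two
# observations, and the implied average per new donor.
new_donors = 104 - 99
new_dollars = 190_000 - 80_000
print(new_dollars / new_donors)  # 22000.0 average per new donor
```

An average of $22k per new donor means at least one of those five donations was very large (the figures are the approximate ones quoted above, so this is a rough bound).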
Physics puzzle: Being exposed to cold air while the wind is blowing causes more heat loss/feels colder than simply being exposed to still cold air.
So, if the ambient air temperature is above body temperature, and ignoring the effects of evaporation, would a high wind cause more heat gain/feel warmer than still hot air?
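One way to reason about it is Newton’s law of cooling: convective heat flux into the body goes as q = h · (T_air − T_skin), and wind raises the transfer coefficient h without changing the sign of the temperature difference. So above skin temperature, wind should indeed increase heat gain. A sketch (the h values are rough illustrative figures for still vs. windy air, not measurements):

```python
# Convective heat flux into the body per Newton's law of cooling.
# Positive result means the body gains heat.
def heat_flux(h, t_air, t_skin=35.0):
    """Heat flux in W/m^2; h is the convective transfer coefficient."""
    return h * (t_air - t_skin)

print(heat_flux(h=5.0, t_air=45.0))   # still hot air:  50.0 W/m^2 gained
print(heat_flux(h=50.0, t_air=45.0))  # windy hot air: 500.0 W/m^2 gained
```

By the same model, below skin temperature the wind multiplies heat loss instead, which is the familiar wind-chill case.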
6450 stellar sent. For some reason, it took several days to receive the stellar, and I did not receive the 1000 free stellar; instead, I got 6500.
Further recommendations. Twice as many this time.
(remember, feedback on which songs were good and which ones sucked, possibly by PM, helps tailor recommendations to what you like.)
Setsugetsuka (Yukari) Another Yukari song since you seem to like those.
Lonesome Cat (Miku) Rock song about a cat. Funny.
Tori No Uta (IA) Cover of a song originally by the voice provider. Probably the best example of vocal tuning I’ve yet come across.
Cloud Rider (IA) Quite energetic, and one of the more prominent IA songs.
Smile Again (Miku, Gumi) I’m pretty sure this song needs to be classified as some sort of mood-altering drug.
And now for the electronic/dance/trance songs. They get more calming as you approach the bottom.
Twilight Star (IA) The first IA song I ever found. Still pretty nice.
Solosail Remix (Miku Dark Append) In a similar vein as Lost Memories and Idiolect. Feel free to let your impressions of those songs direct your interest in this one.
Unlimited Blue (Luka) It’s always nice to see a non-Miku trance song. I mean, there are a lot of really good ones with Hatsune Miku, but the percentage is too high.
Starport (Yukari) Very calming and perfect at the end of a long day.
Planetarium (IA) The single most sleep-inducing Vocaloid song I know of. In a good way.
Oh, btw, for Yumemidori, this is the official upload of the creator which won’t be taken down.
This seems highly exploitable.
Anyone here want to try to use these bogus numbers to get a publisher to market their own fanfiction?
I disagree strongly, but here is a prototype of one anyways.
There are top-down and bottom-up approaches to logical probabilities. Top-down approaches typically involve distributions selected to fit certain properties and, while elegant and mathematically tractable, are often quite uncomputable. Bottom-up approaches take an agent with some information and ask what it should do to assign probabilities or find out more, leading to a more “hacky” probability distribution, but one that tends to be easier to compute. Interestingly enough, given limited computing resources, these two sorts of distributions have striking similarities: both involve a starting probability distribution modified by iterated consistency checks.
Did I get it mostly right?
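A toy illustration of the “starting distribution modified by iterated consistency checks” picture: begin with a uniform prior over truth assignments to two sentences, then on each pass zero out assignments that violate a newly-checked constraint and renormalize. The constraints below are invented examples, not any particular logical-probability proposal.

```python
# Uniform prior over worlds (truth assignments to sentences A, B), updated
# by successive consistency checks that discard violating worlds.
from itertools import product

def apply_check(dist, constraint):
    kept = {w: p for w, p in dist.items() if constraint(w)}
    total = sum(kept.values())
    return {w: p / total for w, p in kept.items()}

dist = {w: 0.25 for w in product([False, True], repeat=2)}  # worlds (A, B)
dist = apply_check(dist, lambda w: (not w[0]) or w[1])      # check: A -> B
dist = apply_check(dist, lambda w: w[0] or w[1])            # check: A or B

print(dist)  # mass left only on (False, True) and (True, True), 0.5 each
```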
ADBOC.
Yes, we need to shift emphasis from “boo politics” to “politics is a much more difficult topic to discuss rationally than others”.
But “hard mode” doesn’t have nearly the emotional kick needed to counteract the omnipresent Dunning-Kruger effect in politics. Running with the video game metaphor, I’m thinking something more along the lines of the feeling of great apprehension induced before playing I Wanna Be The Guy, Kaizo Mario, or the Zero Mercy Minecraft maps. But all the phrases used to refer to that particular cluster of challenges are either inapplicable to politics or carry the connotation of “foolish mortal, how dare you think you can challenge such an obviously impossible task, bwahaha” (like “Nightmare Mode”).
Is there a compact phrase which has the connotations of “Whoo boy, I’m probably getting in way over my head with this thing”?
Due to the preposterous number of Vocaloid songs out there, “best” in practice often means “personal favorites of the limited subset the person you are talking to has heard of”. Vocaloid seems to follow Sturgeon’s Law, as does everything else with low barriers to entry (like fanfiction), but fortunately, it doesn’t take much time to check whether a given song is good, so hunting for hidden gems is a fairly fruitful activity as far as Vocaloid songs go. A useful site for this task is VocaDB.
Endorsing Gwern’s response below, here are five that I’d say are fairly decent.
Not is Destination The producer, Aerial Flow (that’s his channel, by the way; poke around on it), is one of the best trance producers in the community, and there’s quite a bit more of his work that I wanted to link but didn’t.
Lost Memories A fairly nice Dark Append song that is tuned higher than the one you linked, but that I still suspect you may like. Not quite sure what to class it as.
Everlasting Love First song I listened to where I went “I like the vocal tuning on this one.” Piano ballad.
Yumemidori Yes, it isn’t Miku. Yes, it isn’t electronic, it’s guitar and drums. I’m still pretty enamored by it.
Idiolect Another popular techno song with the Dark Append.
“But the general result is that one can start with an AI with utility/probability estimate pair (u,P) and map it to an AI with pair (u’,P) which behaves similarly to (u,P’)”
Is this at all related to the Loudness metric mentioned in this paper? https://intelligence.org/files/LoudnessPriors.pdf It seems like the two are related… (in terms of probability and utility blending together into a generalized “importance” or “loudness” parameter)