Ouch! I donated $135 (and asked my employer to match as well) on Nov 2, India time. I had been on a brief vacation and just returned. Now I’ve re-read the post and found it is too late for the fundraiser. Anyway, please take this as positive reinforcement for what it is worth. You’re doing a good job. Take the money as part of the fundraiser or as an off-fundraiser donation, whichever is appropriate.
This basically boils down to the root of the impulse to remove a Chesterton’s fence, doesn’t it?
Those who believe that these impulses come from genuinely good sources (e.g. learned university professors) like to take down those fences. Those who believe that these impulses come from bad sources (e.g. status jockeying, holiness signalling) would like to keep them.
The reactionary impulse comes from the basic idea that the practice of repeatedly taking down Chesterton’s fences will inevitably auto-cannibalise: the system or meta-system being used to defend all the previous demolitions will itself fall prey to one such wave. The humans left after that catastrophe will be little better than animals, in some cases maybe even worse, lacking the ability and skills to survive.
Donated $100 to SENS. Hopefully, my company matches it. Take that, aging, the killer of all!
I’m not a physicist, but aren’t this and the linked Quanta article on Prof. England’s work bad news, Great Filter-wise?
If this implies self-assembly is much more common in the universe, then that makes it worse for the later proposed filters (i.e. makes them higher probability).
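To make the worry concrete, here is a toy sketch of the usual Great Filter bookkeeping (my own framing and made-up numbers; N_STARS, the step probabilities, and the helper implied_late_filter are illustrative assumptions, not anything from the article): the empty sky pins the product of all the step probabilities to something tiny, so raising the early "self-assembly" factor pushes the bound on the later filters down.

```python
# Toy sketch of the Great Filter argument (illustrative numbers only).
# A civilization becomes visible only if it passes every step; step
# probabilities multiply, and the observed absence of visible civilizations
# caps the product at something tiny.

N_STARS = 1e11                 # rough star count in the galaxy (assumption)
P_TOTAL_MAX = 1.0 / N_STARS    # no visible civilizations => product below ~this

def implied_late_filter(p_self_assembly, p_other_early_steps):
    """Upper bound on the probability of surviving the late filters,
    given the early-step probabilities and an empty sky.
    Values above 1 mean the observation imposes no constraint."""
    return P_TOTAL_MAX / (p_self_assembly * p_other_early_steps)

# If self-assembly (abiogenesis) is rare, the late filter needn't be harsh:
print(implied_late_filter(p_self_assembly=1e-10, p_other_early_steps=1e-2))  # ~10: no constraint

# If self-assembly is common, the late filter must be brutal:
print(implied_late_filter(p_self_assembly=1e-1, p_other_early_steps=1e-2))   # ~1e-8
```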
I donated $300 which I think my employer is expected to match. So $600 to AI value alignment here!
I feel for you. I agree with salvatier’s point in the linked page. Why don’t you try to talk to FHI directly? They should be able to get some funding your way.
Letting market prices reign everywhere, but providing a universal basic income is the usual economic solution.
Guys, everyone on reddit/HPMOR seems to be talking about a spreadsheet with all solutions listed. Could anyone please post the link as a reply to this comment? Pretty please with sugar on top :)
A booster for getting AI values right is the two-sidedness of the process: existential risk and existential benefit.
To illustrate: you solve poverty, you still have to face climate change; you solve climate change, you still have to face biopathogens; you solve biopathogens, you still have to face nanotech; you solve nanotech, you still have to face SI. You solve SI correctly, and the rest are all done. For people who use the cui bono argument, I think this answer is usually the best one to give.
Is anyone aware of the explanation for why technetium is radioactive while molybdenum and ruthenium, the two elements astride it in the periodic table, are perfectly stable? Searching on Google for why certain elements are radioactive gives results which are descriptive, as in X is radioactive, Y is radioactive, Z is what happens when radioactive decay occurs, etc. None seem to go into the theories which have been proposed to explain why something is radioactive.
Forum for Exploratory Research in General AI
I think this is a very important contribution. The only internal downside of this might be that the simulation of the overseer within the AI would be sentient. But if defined correctly, most of these simulations would not really be leading bad lives. The external downside is being overtaken by other goal-oriented AIs.
The thing is, I think in any design, it is impossible to tear away purpose from a lot of the subsequent design decisions. I need to think about this a little deeper.
How do they propose to move the black holes? Nothing can touch a black hole, right?
Donated $300 to SENS foundation just now. My company matches donations, so hopefully a large cheque is going there. Fightaging is having a matching challenge for SENS, so even more moolah goes to anti-aging research. Hip Hip Hurray!
Weird fictional theoretical scenario. Comments solicited.
In the future, mankind has become super successful. We have overcome our base instincts and have basically got our shit together. We are no longer in thrall to Azathoth (Evolution) or Mammon (Capitalism).
We meet an alien race who are way more powerful than us; they show us their values and see ours. We seek to cooperate on the prisoner’s dilemma, but they defect. With our dying gasps, one of us asks them, “We thought you were rational. WHY?...”
They reply, “We follow a version of your meta-golden rule: treat your inferiors as you would like to be treated by your superiors. In your treatment of the superintelligences that were alive amongst you, the ones you call Azathoth and Mammon, we see that you really crushed them. I mean, you smashed them to the ground and then ran a road roller over them, twice. I am pretty certain you cooperated with us only because you were afraid. We do to you what you did to them.”
What do we do if we can anticipate this scenario? Is it too absurd? Is the idea of extending our “empathy” to the impersonal forces that govern our lives too much? What if the aliens simply don’t see it that way?
So, is my understanding correct that your FAI is going to consider only your group/cluster’s values?
Yes, that too.
Poland used a version of that when arguing with the European Union about its share in some commission, I don’t remember which. It mentioned how much Poland’s population might have been had they not been under attack from two fronts, the Nazis and the Communists.
Not doing so might leave your AI vulnerable to a slower/milder version of this. Basically, if you enter a strictly egalitarian weighting, you are providing vindication to those who thoughtlessly brought children into the world and disincentivizing, in a timeless, acausal sense, those who are acting sensibly today and restricting reproduction to children they can bring up properly.
I’m not very certain of this answer, but it is my best attempt at the question.
I went from straight Libertarianism to Georgism to my current position of advocacy for competitive government. I believe in the right to exit and hope to work towards a world where exit gets easier and easier for larger numbers of people. My current anti-democratic position is informed by amateur study of public choice theory and incentives. My formalist position is probably due to an engineering background and a liking for things to be clear.
When the fundamental question arises of what keeps a genuine decision maker, a judge or a bureaucrat in government (of a polity way beyond the Dunbar number), honest, the three strands of neo-reaction appear as three possible answers: either the person believes in a higher power (religious traditionalism), or they feel that the people they are making decisions for are an extended family (ethnic nationalism), or they personally profit from it (techno-commercialism). Or a mix of the three, which is more probable.
There are discussions in NRx about whether religious traditionalism should even be given a place here, since it is mostly traditional reaction, but that is deviating from the main point. Each of these strands holds something sacred: a theocracy holds the deity supreme, an ethno-state holds the race supreme, a catallarchy holds profit supreme. And I think you really can’t have a long-term governing structure which doesn’t hold something really sacred. There has to be a cultural hegemony within which diversities that do not threaten it can flourish. Even Switzerland, the land of three nations democratically bound together, has a national military draft which ties its men in brotherhood.
A part of me is still populist, I think, holding out for algorithmic governance to be perfected so that we need not rely on human judgement, which could be biased. But time and time again, human-judgement-based organizations have soundly defeated procedure-based organizations. Apple is way more valuable than Toyota; the latter is considered the pinnacle of process-based firms, while the former was famously run, until recently, by a mercurial dictator. So human judgement has to be respected, which means clear sovereignty of the humans in question, which means something like the neo-cameralism of Moldbug, until the day of FAI.
The link is broken, I think. Also, didn’t Alex Tabarrok do one better by creating the dominant assurance contract?