Knowledge Seeker https://lorenzopieri.com/
lorepieri
Yes voter, if you can read this: why? It would be great to get an explanation (anon).
Damn, we did not last even 24hrs…
Thanks for the alternative poll. One would think that with rules 2 and 5 out of the way it should be harder to say Yes.
How confident are you that someone is going to press it? If it’s pressed: what’s the frequency of someone pressing it? What can we learn from it? Do any of rules 2-5 play a crucial role in the decision to press it?
(we are still alive so far!)
[Question] [Thought Experiment] Given a button to terminate all humanity, would you press it?
This is a pretty counter-intuitive point indeed, but up to a certain threshold this seems to me the approach that minimises risk, by avoiding large capability jumps and improving the “immune system” of society.
Thanks for the insightful comment. Ultimately the different attitude comes down to the perceived existential risk posed by the technology, and the risks of acting to accelerate AI vs not acting.
And yes I was expecting not to find much agreement here, but that’s what makes it interesting :)
A somewhat similar statistical reasoning can be used to argue that the abundance of optional complexity (things could have been similar but simpler) is evidence against the simulation hypothesis.
See https://philpapers.org/rec/PIETSA-6 (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization)
This is based on the general principle that computational resources are finite for any civilisation (assuming infinities are not physical) and are therefore minimised when possible by the simulators. In particular one can use the Simplicity Assumption: if we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation.
It is hard to argue that a similar general principle can be found for something being “mundane”, since the definition of mundane seems dependent on the simulators’ point of view. Can you perhaps modify this reasoning to make it more general?
s/acc: Safe Accelerationism Manifesto
Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field.
Even if we assume this, it does not follow that we should try to recreate the subjective conditions that led to (perceived) “success”. The environment is always changing (tech, knowledge base, tools), so many learnings will not apply. Moreover, biographies tend to create a narrative after the fact, emphasizing the message the writer wants to convey.
I prefer the strategy of mastering the basics from previous works and then figuring out for yourself how to innovate and improve the state of the art.
True :)
(apart from your reply!)
What fact that you know is true but most people aren’t ready to accept it?
Using the Universal Distribution in the context of the simulation argument makes a lot of sense if we think that the base reality has no intelligent simulators, as it fits with our expectation that a randomly generated simulator is very likely to be concise. But for human-generated (or any agent-simulator-generated) simulations, a more natural prior is how easy the simulation is to run (the Simplicity Assumption), since agent-simulators face concrete tradeoffs in using computational resources, while they have no pressing tradeoffs on the length of the program.
See here for more info on the latter assumption.
This is also known as Simplicity Assumption: “If we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation.”
In a nutshell, the amount of computation needed to perform simulations matters (if resources are somewhat finite in base reality, which is fair to imagine), and over the long term simple simulations will come to dominate the space of sims.
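As a rough illustrative sketch (not from the original comments, with made-up complexity values in arbitrary units of compute), the Simplicity Assumption can be read as a prior over simulations whose weights are inversely proportional to their computational complexity:

```python
# Hypothetical candidate simulations of the same civilization,
# each with an assumed computational complexity (arbitrary units).
complexities = {
    "minimal": 1.0,
    "coarse": 4.0,
    "detailed": 20.0,
    "atom-level": 400.0,
}

# Simplicity Assumption as a prior: selection probability is
# inversely proportional to computational complexity.
weights = {name: 1.0 / c for name, c in complexities.items()}
total = sum(weights.values())
prior = {name: w / total for name, w in weights.items()}

# The simplest simulation dominates the probability mass,
# which is why a very complex world counts as evidence
# against being in a simulation under this assumption.
print(prior)
```

Under these toy numbers the “minimal” simulation carries most of the probability mass, while the “atom-level” one is vanishingly unlikely to be the selected sim.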
See here for more info.
Regarding (D), it has been elaborated more in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).
I would suggest removing “I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community. ” and presenting your argument without speaking for the whole community.
A pragmatic metric for Artificial General Intelligence
Very interesting division, thanks for your comment.
Paraphrasing what you said: in the informational domain we are very close to post-scarcity already (minimal effort to distribute high-level education and news globally), while in the material and human-attention domains we likely still need advancements in robotics and AI to scale.
You mean the edit functionality of Gitlab?
Thanks for the gitbook tip, I will look into it.
Yes, the code is open source: https://gitlab.com/postscarcity/map
Hi Clement, I do not have much to add to the previous critiques, I also think that what needs to be simulated is just a consistent enough simulation, so the concept of CI doesn’t seem to rule it out.
You may be interested in a related approach ruling out the sim argument based on computational requirements: simple simulations should be more likely than complex ones, but we are pretty complex. See “The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization” (https://philarchive.org/rec/PIETSA-6)
Cheers!