Knowledge Seeker https://lorenzopieri.com/
lorepieri
Yes voter, if you can read this: why? It would be great to get an explanation (anon).
Damn, we did not last even 24hrs…
Thanks for the alternative poll. One would think that with rules 2 and 5 out of the way it should be harder to say Yes.
How confident are you that someone is going to press it? If it is pressed: what’s the frequency of someone pressing it? What can we learn from it? Do any of rules 2-5 play a crucial role in the decision to press it?
(we are still alive so far!)
This is a pretty counter-intuitive point indeed, but up to a certain threshold this seems to me to be the approach that minimises risk, by avoiding large capability jumps and improving the “immune system” of society.
Thanks for the insightful comment. Ultimately the difference in attitude comes down to the perceived existential risk posed by the technology, and the risks of acting to accelerate AI vs. not acting.
And yes I was expecting not to find much agreement here, but that’s what makes it interesting :)
A somewhat similar statistical reasoning can be used to argue that the abundance of optional complexity (things could have been similar but simpler) is evidence against the simulation hypothesis.
See https://philpapers.org/rec/PIETSA-6 (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization)
This is based on the general principle that computational resources are finite for any civilisation (assuming infinities are not physical) and are therefore minimised when possible by the simulators. In particular, one can use the simplicity assumption: if we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation.
It is hard to argue that a similar general principle can be found for something being “mundane”, since the definition of mundane seems to depend on the simulators’ point of view. Can you perhaps modify this reasoning to make it more general?
Let’s start with one of those insights that are as obvious as they are easy to forget: if you want to master something, you should study the highest achievements of your field.
Even if we assume this, it does not follow that we should try to recreate the subjective conditions that led to (perceived) “success”. The environment is always changing (tech, knowledge base, tools), so many of the lessons will not apply. Moreover, biographies tend to create a narrative after the fact, emphasizing the message the writer wants to convey.
I prefer the strategy of mastering the basics from previous works and then figuring out for yourself how to innovate and improve the state of the art.
True :)
(apart from your reply!)
Using the Universal Distribution in the context of the simulation argument makes a lot of sense if we think that base reality has no intelligent simulators, as it fits with our expectation that a randomly generated simulator is very likely to be concise. But for human-generated (or any agent-simulator-generated) simulations, a more natural prior is how easy the simulation is to run (Simplicity Assumption), since agent-simulators face concrete tradeoffs in using computational resources, while they face no pressing tradeoffs on the length of the program.
See here for more info on the latter assumption.
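To make the contrast concrete, here is a toy sketch (all names and numbers are invented for illustration) of how the two priors can rank the same pair of simulations in opposite orders: the Universal Distribution weights by program length, while the Simplicity Assumption weights by the cost of running the simulation.

```python
# Two hypothetical simulations: one written by a verbose program that is
# cheap to run, one written by a short program that is expensive to run.
sims = [
    {"name": "verbose_but_cheap", "program_length": 120, "compute_cost": 1e6},
    {"name": "short_but_costly",  "program_length": 100, "compute_cost": 1e12},
]

def normalise(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# Universal Distribution: weight ~ 2^(-description length of the program)
ud = normalise({s["name"]: 2.0 ** -s["program_length"] for s in sims})

# Simplicity Assumption: weight ~ 1 / computational cost of running the sim
sa = normalise({s["name"]: 1.0 / s["compute_cost"] for s in sims})
```

Under the Universal Distribution the shorter program dominates, while under the Simplicity Assumption the cheaper-to-run simulation dominates, which is the crux of the disagreement between the two priors.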
This is also known as Simplicity Assumption: “If we randomly select the simulation of a civilization in the space of all possible simulations of that civilization that have ever been run, the likelihood of picking a given simulation is inversely correlated to the computational complexity of the simulation.”
In a nutshell, the amount of computation needed to perform simulations matters (if resources are somewhat finite in base reality, which is fair to imagine), and over the long term simple simulations will dominate the space of sims.
See here for more info.
Regarding (D), it is elaborated further in this paper (The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization).
I would suggest removing “I dont think you are calibrated properly about the ideas that are most commonly shared in the LW community. ” and presenting your argument without speaking for the whole community.
Very interesting division, thanks for your comment.
Paraphrasing what you said: in the informational domain we are very close to post-scarcity already (minimal effort to distribute high-level education and news globally), while in the material and human-attention domains we likely still need advancements in robotics and AI to scale.
You mean the edit functionality of Gitlab?
Thanks for the gitbook tip, I will look into it.
Yes, the code is open source: https://gitlab.com/postscarcity/map
Interesting paradox.
As others have commented, I see multiple flaws:
We believe we seem to know that there is a reality that exists. But I doubt we can truly conceive of reality; we have only a vague understanding of it. Moreover, we have no experience of “not existing”, so it’s hard to argue that we have a strong grasp of the claim that there is a reality that exists.
The biggest issue is here, imho (and it is a very common misunderstanding): math is just a tool which we use to describe our universe; it is not our universe (unless you take an approach like the mathematical universe hypothesis). The fact that it works well is selection bias: we use the math that works well to describe our universe and discard the rest (see e.g. the negative solutions to the equations of motion in Newtonian mechanics). Math by itself is infinite; we just use a small subset of it to describe our universe. Also, we take inspiration from our universe to build math.
Not conclusive, but still worth doing in my view due to the relative ease. Create the spreadsheet, make it public, and let’s see how it goes.
I would add the actual year in which you think it will happen.
Yeah, what I meant is that the slides of the Full Stack Deep Learning course provide a decent outline of all the significant architectures worth learning.
I would personally not go to such a low level of abstraction (e.g. implementing NNs in a new language) unless you really feel your understanding is shaky. Try building an actual side project, e.g. an object classifier for cars, and problems will arise naturally.
I fear that measuring modifications is like measuring a moving target. I suspect it will be very hard to account for all the modifications, and many AIs may blend into each other under large modifications. Also, it’s not clear how hard some modifications will be without actually carrying them out.
Why not fix a target instead, and measure the inputs (e.g. FLOPs, memory, time) needed to achieve it?
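As a sketch of what I mean (with `solve_task` a hypothetical placeholder for the system under evaluation), Python’s standard library already lets you measure two of these inputs, wall-clock time and peak memory, for reaching a fixed target:

```python
import time
import tracemalloc

def solve_task():
    # hypothetical stand-in for "achieve the fixed goal"
    return sum(i * i for i in range(100_000))

tracemalloc.start()                 # begin tracking memory allocations
t0 = time.perf_counter()
result = solve_task()
elapsed = time.perf_counter() - t0  # wall-clock time spent on the task
_, peak_bytes = tracemalloc.get_traced_memory()
tracemalloc.stop()

print(f"time: {elapsed:.4f}s, peak memory: {peak_bytes / 1024:.1f} KiB")
```

Counting FLOPs would need hardware counters or framework-level accounting; this sketch only covers time and memory, but the principle of fixing the target and metering the inputs is the same.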
I’m working on this topic too, I will PM you.
Also, feel free to reach out if the topic is of interest.
Hi Clement, I do not have much to add to the previous critiques, I also think that what needs to be simulated is just a consistent enough simulation, so the concept of CI doesn’t seem to rule it out.
You may be interested in a related approach ruling out the sim argument based on computational requirements: simple simulations should be more likely than complex ones, but we are pretty complex. See “The Simplicity Assumption and Some Implications of the Simulation Argument for our Civilization” (https://philarchive.org/rec/PIETSA-6)
Cheers!