That assumes the scenario is iterated; I’m saying it’d precommit to do so even in a one-off scenario. The rest of your argument was my point: that the same reasoning goes for anger.
Wow, people are still finding this occasionally. It fills me with Determination.
Um, no. The specific sequence of muscle contractions is the action, and the thing they try to achieve is beautiful patterns of motion with certain kinds of rhythm and elegance, and/or (typically) the perception of such in an observer.
This thing is still alive?! :D I really should get working on that updated version sometime.
Didn’t think of it like that, but sort of I guess.
It has near maximal computational capacity, but that capacity isn’t being “used” for anything in particular that is easy to determine.
This is actually a very powerful criterion, in terms of the number of false positives and negatives. Sadly, the false positives it DOES have still far outnumber the genuine positives, and include all the WORST outcomes (i.e. virtual hells) as well.
Well, that’s quite obvious. Just imagine the blackmailer is a really stupid human with a big gun, who’d fall for blackmail in a variety of awful ways, has a bad case of typical mind fallacy, and, if anything goes other than as they expected, gets angry and just shoots them before thinking through the consequences.
Another trick it could use is relying on chatbots most of the time, but swapping them out for real people only for the moments you are actually talking about deep stuff. Maybe you have deep emotional conversations with your family a few hours a week. Maybe once per year you have a 10-hour intense discussion with Eliezer. That’s not a lot out of 24 hours per day; the vast majority of the computing power is still going into simulating your brain.
Edit: another one; the chatbots might have some glaring failure modes if you say the wrong thing, unable to handle edge cases, but whenever you encounter one the sim is restored from a backup 10 minutes earlier and the specific bug is manually patched. If this went on for long enough the chatbots would become real people, and also bloat and slow down, but that hasn’t happened yet. Or maybe the patches that don’t come up for long enough get commented out.
Hmm, maybe I need to reveal my epistemology another step towards the bottom. Two things seem relevant here.
First, I think you SHOULD take your best model literally if you live in a human brain, since due to its architecture it can never get completely stuck requiring infinite evidence, but it does have limited computation, and doubt can both confuse it and damage motivation. The few downsides there are can be fixed with injunctions and heuristics.
Secondly, you seem to be going with fuzzy intuitions or direct sensory experience as the most fundamental. At my core is instead that I care about stuff, and that my output might determine that stuff. The FIRST thing that happens is conditioning on my decisions mattering, and then I start updating on the input stream of a particular instance/implementation of myself. My working definition of “real” is “stuff I might care about”.
My point wasn’t that the physical systems can be modeled BY math, but that they themselves model math. Further, that if the math wasn’t True, then it wouldn’t be able to model the physical systems.
With the math systems as well you seem to be coming from the opposite direction. Set theory is a formal system, arithmetic can model it using Gödel numbering, and you can’t prevent that or have it give different results without breaking arithmetic entirely. Likewise, set theory can model arithmetic. It’s a package deal. Lambda calculus and register machines are also members of that list of mutual modeling. I think even basic geometry can be made sort of Turing complete somehow. Any implementation of any of them must by necessity model all of them, exactly as they are.
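(If it helps make “arithmetic can model it using Gödel numbering” concrete, here’s a toy sketch of the standard prime-power encoding, in Python; the symbol codes are made up for illustration, and real Gödel numbering of a theory like set theory encodes whole formulas and proofs this way.)

```python
def nth_primes(k):
    """Return the first k primes by trial division (fine for tiny k)."""
    primes = []
    candidate = 2
    while len(primes) < k:
        if all(candidate % p for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes


def godel_number(symbol_codes):
    """Pack a sequence of positive-integer symbol codes into one natural
    number as 2**c1 * 3**c2 * 5**c3 * ...  Unique factorization makes the
    encoding reversible, which is what lets arithmetic "talk about" formulas."""
    n = 1
    for p, code in zip(nth_primes(len(symbol_codes)), symbol_codes):
        n *= p ** code
    return n


# Made-up symbol codes for the string "0 = 0" (purely illustrative):
print(godel_number([6, 5, 6]))  # 2**6 * 3**5 * 5**6 = 243000000
```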
You can model an agent that doesn’t need the concepts, but it must be a very simple agent with very simple goals in a very simple environment. Too simple to be recognizable as agentlike by humans.
I don’t mean just sticky models. The concepts I’m talking about are things like “probability”, “truth”, “goal”, “if-then”, “persistent objects”, etc. Believing in the truth of a theory which says “true” is not a thing theories can be is obviously silly. Believing that there are no such things as decision-making, and that you’re a fraction of a second old and will cease to be within another fraction of a second, might be philosophically more defensible, but conditioning on it not being true can never have bad consequences while it has a chance of having good ones.
I was talking about physical systems, not physical laws. Computers, living cells, atoms, the fluid dynamics of the air… “Applied successfully in many cases”, where “many” is “billions of times every second”.
Then ZFC is not one of those core ones, just one of the peripheral ones. I’m talking about ones like set theory as a whole, or arithmetic, or Turing machines.
It’s pre-alpha, and I basically haven’t worked on it in all the months since posting this, but OK. http://jsbin.com/adipaj/307
The cause of my believing math is not “it’s true in every possible case”, because I can’t directly observe that. Nor is it “it has been applied successfully in many cases so far”.
Basically it’s “maths says it’s true”, where maths is an interlocking system of many subsystems. MANY of these have been applied successfully in many cases so far. Many of them render considering them not true pointless, in the sense that all my reasoning and senses are invalid if they don’t hold, so I might as well give up and save computing time by conditioning on them being true. Some of them are implicit in every single frame of my input stream. Many of them are used by my cognition, and if I consistently didn’t condition on them being true I’d have been unable to read your post or write this reply. Many of them are directly implemented in physical systems around me, which would cease to function if they failed to hold in even one of the billions and billions of uses. Most importantly, many of them claim that several of the others must always be true or they themselves are not, and while Gödelian stuff means this can’t QUITE form a perfect loop in the strongest sense, the fact remains that if any of them fell ALL the others would follow like a house of cards; you can’t have one of them without ALL the others.
You might try to imagine a universe without math. And there are some pieces of math that might be isolated and in some sense work without the others. But there is a HUGE core of things that can’t work without each other, nor without all those outlying pieces, at all, even slightly. So your universe couldn’t have geometry, computation, discrete objects that can be moved between “piles”, anything resembling fluid dynamics, etc. Not much of a universe, nor much sensical imaginability, AND it would by necessity be possible to simulate it in a universe that does have all the maths, so in some sense it still wouldn’t be “breaking” the laws.
Being able to eat while parkouring to your next destination and using a laptop at the same time might. And choosing optimally nutritious food. Even if you did eat with a fork, you wouldn’t bring the fork in a parabola; you’d jerk it a centimeter up to fling it towards your mouth, then bring it back down to do the same to the next bite while the previous one is still in transit.
Hmm, idea; how well would this work: you have a machine that drops the reward with a certain low probability every second, but you have to put it back rather than eat it if you weren’t doing the task?
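(A quick toy sketch of how such a machine might behave, just to make the mechanics concrete; the drop probability, on-task fraction, and session length below are arbitrary assumptions, not calibrated to anything.)

```python
import random


def session(minutes, p_drop_per_second=0.001, on_task_fraction=0.8, seed=0):
    """Toy model of the proposed machine: each second it drops a reward with a
    small probability; rewards that land while you're off-task get put back
    instead of eaten.  All parameter values are made-up assumptions."""
    rng = random.Random(seed)
    kept = returned = 0
    for _ in range(minutes * 60):
        if rng.random() < p_drop_per_second:
            if rng.random() < on_task_fraction:  # were you on task at that moment?
                kept += 1
            else:
                returned += 1
    return kept, returned


print(session(minutes=120))  # rough sense of what a 2-hour session yields
```

With those made-up numbers you’d keep only a handful of rewards per session, which is the sort of sparse, unpredictable schedule the idea seems to be going for.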
Wish I could upvote this 1000 times. This will probably do far more for this site than 1000 articles of mere content. Certainly, it will for my enjoyment and understanding.
You probably do have a memory, it’s just false. Human brains do that.
What actually happens is you should be consequentialist at even-numbered meta-levels and virtue-based at the odd-numbered ones… or was it the other way around? :p
The obvious thing to do here is either:
a) Make a list/plan on paper, abstractly, of what you WOULD do if you had terminal goals, using your existing virtues to motivate this act, and then have “Do what the list tells me to” as a loyalty-like high-priority virtue. If you have another rationalist you really trust, and who has a very strong honesty commitment, you can even outsource the making of this list.
b) Assemble virtues that sum up to the same behaviors in practice; truth-seeking, goodness, and “If something is worth doing, it’s worth doing optimally” are a good trio, and will have the end result of effective altruism while still running on the native system.
You are, in this very post, questioning and saying that your utility function is PROBABLY this, and that you don’t think there’s uncertainty about it… That is, you display uncertainty about your utility function. Checkmate.
Also, “infinity = infinity” is not the case. Infinity is not a number, and the problem goes away if you use limits. Otherwise, yes, I probably even have unbounded but very slow-growing factors for a bunch of things like that.
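(To make the “use limits” point concrete with one standard worked example: even when two quantities both grow without bound, the limit of their ratio can be perfectly well-defined, so one can dominate the other without ever comparing “infinity to infinity”.)

```latex
\lim_{n\to\infty}\frac{\sum_{i=1}^{n} i}{\sum_{i=1}^{n} i^{2}}
  = \lim_{n\to\infty}\frac{n(n+1)/2}{n(n+1)(2n+1)/6}
  = \lim_{n\to\infty}\frac{3}{2n+1}
  = 0
```

Both the numerator and the denominator diverge, but the ratio still converges, which is the sense in which a slow-growing unbounded factor can be negligible next to a faster-growing one.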
The solution here might be that it does mainly tell you they have constructed a coherent story in their mind, but that having constructed a coherent story in their mind is still useful evidence for it being true, depending on what else you know about the person, and thus worth telling. If the tone of the book were different, it might say: