I am also in NYC and happy to participate. My lichess rating is around 2200 rapid and 2300 blitz.
Thank you, Larks! Salute. FYI, I am at least one person who has informally committed (see below) to taking up this mantle. When would the next one typically be due?
https://twitter.com/robertzzk/status/1564830647344136192?s=20&t=efkN2WLf5Sbure_zSdyWUw
Inspecting code against a harm-detection predicate seems recursive. What if the code or execution necessary to perform that inspection properly is itself harmful? An AGI is almost certainly a distributed system with no meaningful notion of global state, so I doubt this can be handwaved away.
For example, many distributed database vendors, like Snowflake, do not offer a pre-execution query planner. Planning can only be performed just-in-time as the query runs, or retroactively after it has completed, because the exact structure may depend on the co-location of data and computation, which is not apparent until the data referenced by the query is examined. Moreover, getting an accurate dry-run query plan may be as expensive as executing the query itself.
By analogy, for certain kinds of complex inspection procedures you envision, executing the inspection itself thoroughly enough to reflect the true execution risk may be as complex as the execution it inspects, and may pose just as great a risk of being harmful according to the system's values.
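To make the regress concrete, here is a toy sketch in Python; the function names, the string stand-in for a plan, and the depth cutoff are purely hypothetical illustrations, not a proposal for how such an inspector would actually work:

```python
def is_harmful(plan: str, depth: int = 0, max_depth: int = 3) -> bool:
    """Pretend harm check: vetting `plan` means running inspection code,
    which is itself a plan that, by the same rule, must be vetted first."""
    if depth == max_depth:
        # We stop only because of an arbitrary cutoff, not a principled base case.
        return False
    inspection_plan = f"inspect({plan!r})"
    # The inspection is only trustworthy if inspecting the inspection is safe, and so on.
    return is_harmful(inspection_plan, depth + 1, max_depth)

print(is_harmful("agent_action_plan"))  # False, but only because of the depth cap
```

The point is just that any termination of the regress has to be imposed from outside the predicate itself.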
I am interested as well. Please share the docs in question with my LW username at gmail dot com if that is a possibility. Thank you!
This was my thought exactly. Construct a robust satellite with the following properties.
Let a “physical computer” be defined as a processor powered by classical mechanics, e.g., through pulleys rather than transistors, so that it is robust to gamma rays, solar flares, EMP attacks, and the like.
On the outside of the satellite, construct an onion layer of low-energy light-matter-interacting material, such as alternating coats of crystalline silicon / CMOS with thin protective layers of steel, nanocarbon, or another hard material. When the device is constructed, ensure there are linings of Boolean physical input and output channels connecting the surface to the interior (like the proteins coating a membrane in a cell, except that the membrane will be solid rather than liquid), for example, through a jackhammer or moving-rod mechanism. These channels will be activated by a buildup of material on the outside of the artifact, effectively giving a time counter with arbitrarily long time steps depending on how we set up the outer layer. Any erosion of the outside of the satellite (from space debris or collisions) will simply expose new layers of the “charging onion”.
Inside the satellite, place a 3D printer constructed as a physical computer, together with a large supply of source material. For example, it might print in a metal or hard polymer, possibly with a supply of “boxes” in which to place the printed output. These will be the micro-comets launched as periodic payloads according to the timing device constructed on the surface. The 3D printer will fire according to an “input” event defined by the physical Boolean input, and may be replicated multiple times within the hull in isolated compartments with separate sources of material, to increase reliability and provide failover in case of local failures of the surface layer.
The output of the 3D printer will be a replica of the micro-comet containing the message payload, funneled and ejected into an output chute where gravity will take over and handle the rest (this may require a bit of momentum and directional aiming to kick off correctly, but some use of magnets here is probably sufficient). Alternatively, simply pre-construct the micro-comets and hope they stay intact, to be emitted at regular intervals like a gumball machine that fires once a century.
Finally, we compute a minimal set of orbits and trajectories over the continents and land areas likely to be most populated and ensure a micro-comet is ejected regularly, say every 25-50 years. It is now easy to complete the argument by fiddling with the parameters and making some “Drake equation”-like assumptions about success rates, to say that any civilization with X% coverage of the landmass intersecting the orbits of the comets will have > 25% likelihood of discovering a micro-comet payload.
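As a rough illustration of that back-of-the-envelope argument, here is a minimal sketch in Python; every parameter (horizon, ejection interval, landmass coverage, recovery rate) is an invented placeholder rather than an estimate:

```python
def discovery_probability(
    years: float = 10_000,           # horizon over which the satellite keeps firing
    ejection_interval: float = 50,   # one micro-comet every N years
    landmass_coverage: float = 0.30, # fraction of the ground track that is searched land
    recovery_rate: float = 0.05,     # chance a comet landing on searched land is found intact
) -> float:
    """P(at least one payload discovered) = 1 - (1 - p)^n over n independent ejections."""
    n_ejections = int(years / ejection_interval)
    p_single = landmass_coverage * recovery_rate
    return 1 - (1 - p_single) ** n_ejections

print(f"P(discovery) ~ {discovery_probability():.0%}")  # ~95% with these made-up numbers
```

Fiddling with the coverage and recovery assumptions is exactly the “Drake equation”-like step.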
The only real problem with this approach is guaranteeing your satellites are not removed in the future, in the event that future descendants of our civilization disagree with this method. I don’t see a solution to this other than solving the value reflection problem; building a defense mechanism into the satellites, which is certain to fail, since you start getting close to the basic AI drive of self-preservation and will in any case be outsmarted by any future iteration of our civilization; or making the satellites small or undetectable enough that finding and removing them is economically more trouble than it is worth.
To not support EA? I am confused. Doesn’t the drowning child thought experiment lend credence to supporting EA?
Isn’t this an example of a reflection problem? We induce this change in a system, in this case an evaluation metric, and now we must predict not only the next iteration but the stable equilibria of this system.
Did you remove the vilification of proving arcane theorems in algebraic number theory because the LessWrong audience is more likely to fall within this demographic? (I used to be very excited about proving arcane theorems in algebraic number theory, and fully agree with you.)
Incidentally, for a community whose most important goal is solving a math problem, why is there no MathJax or other built-in LaTeX support?
“The thing that eventually leapt out when comparing the two behaviours is that behaviour 2 is far more informative about what the restriction was, than behaviour 1 was.”
It sounds to me like the agent overfit to the restriction R. I wonder if you can draw some parallels to the Vapnik-style classical problem of empirical risk minimization, where you are not merely fitting your behavior to the training set, but instead achieving the optimal trade-off between generalization ability and adherence to R.
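(For reference, one standard form of that trade-off is the Vapnik-Chervonenkis generalization bound: with probability at least 1 - δ over a sample of size n, every hypothesis h in a class of VC dimension d satisfies

```latex
R(h) \le R_{\mathrm{emp}}(h) + \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}
```

where I am loosely reading “adherence to R” as the empirical-risk term; that gloss is mine, not the original framing.)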
In your example, an agent that inferred the boundaries of our restriction could generate a family of restrictions R_i by slightly modifying its postulates. For example, if it knows you usually check in at midnight, it should consider the counterfactual scenarios of you usually checking in at 11:59, 11:58, etc., and come up with the union of (R_i = play quietly only around time i), i.e., play quietly the whole time, since this achieves maximum generalization.
Unfortunately, things are complicated by the fact that you said “I’ll be checking up on you!” instead of “I’ll be checking up on you at midnight!” The agent needs to go one step further than the machine teaching problem: first determine how many counterfactual training points it should generate to infer your intention (the R_i’s above), and then infer it.
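Here is a toy sketch of that union-of-counterfactuals construction in Python; the 60-minute quiet window, the 30-minute spacing of counterfactual check-ins, and all names are arbitrary choices made for illustration, not a claim about how the agent should pick them:

```python
def quiet_window(check_in_minute: int, radius: int = 60) -> set[int]:
    """Restriction R_i: play quietly within `radius` minutes of check-in time i."""
    return {(check_in_minute + d) % (24 * 60) for d in range(-radius, radius + 1)}

observed_check_in = 0  # midnight, in minutes past midnight
# Counterfactual check-in times the agent also takes seriously (11:30pm, 11:00pm, ...):
counterfactual_check_ins = [(observed_check_in - k) % (24 * 60)
                            for k in range(0, 24 * 60, 30)]

# The union of all R_i covers the whole day, i.e., "play quietly the whole time".
union_of_restrictions = set().union(*(quiet_window(t) for t in counterfactual_check_ins))
assert len(union_of_restrictions) == 24 * 60
```

The open question in the paragraph above is precisely how the agent should decide that a 30-minute spacing (or any spacing) gives enough counterfactual coverage.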
A high-level question is whether human CEV, if it can be modeled as a region within some natural high-dimensional real-valued space (e.g., R^n for large n, where each dimension is a utility function?), admits minimal or near-minimal curvature as a Riemannian manifold, assuming we could populate the space with the maximum available set of training data mined from all human literature.
A positive answer to the above question would be philosophically satisfying, as it would imply a potential AI would not have to carve out corner cases and thus take on the appearance of overfitting to the restrictions.
EDIT: Framed in this way, could we use cross-validation on the above-mentioned training set to test our CEV region?
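Purely to illustrate the mechanics (not a claim that CEV is actually learnable this way), here is a sketch assuming the region could be approximated by a one-class boundary estimator over points embedded in R^n; the estimator choice, dimensionality, and synthetic data are all stand-ins:

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))  # stand-in for value judgments embedded in R^10

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    region = OneClassSVM(nu=0.1).fit(X[train_idx])
    # Fraction of held-out points the learned region still contains:
    scores.append((region.predict(X[test_idx]) == 1).mean())

print(f"mean held-out coverage: {np.mean(scores):.2f}")
```

A region that only covers its own folds would be the analogue of the corner-case overfitting worried about above.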
However, UFFire does not uncontrollably and exponentially reproduce or improve its own functioning. Certainly a conflagration on a planet covered entirely by dry forest would become an unmitigable problem rather quickly.
In fact, in such a scenario, we should dedicate a huge amount of resources to preventing it and never use fire until we have proved it will not turn “unfriendly”.
I downvoted this comment because it is a clever ploy for karma that rests on exploiting LessWrongers’ sometimes unnecessary enthusiasm for increasingly abstract and self-referential forms of reasoning, but otherwise adds nothing to the conversation.
Twist: By “this comment” I actually mean my comment, thereby making this a paraprosdokian.
I am an active R contributor on GitHub and Stack Overflow, and I would be willing to coordinate. Send me an email: rkrzyz at gmail
So you are saying that explaining something is equivalent to constructing a map that bridges an inferential distance, whereas explaining something away is refactoring thought-space to remove unnecessary gerrymandering?
It feels good knowing you changed your mind in response to my rebuttal.
I disagree with your preconceptions about the “anti” prefix. For example, an anti-hero is certainly a hero. I think it is reasonable to consider “anti” a contextually overloaded semantic negator whose scope does not have to be the naive interpretation: anti-X can refer to “the opposite of X” or “the opposite or lack of a trait highly correlated with X”, with the exact choice clear from context.
I got a frequent LessWrong contributor a programming internship this summer.
It is as if you’re buying / shorting an index fund on opinions.
Strong AI could fail if there are limits to computational integrity in sufficiently complex systems, similar to how heat dissipation and quantum-mechanical effects limit transistor sizes. For example, perhaps we rarely see these limits in humans because their frequency is one in a thousand human-thought-years, and when they do manifest they are misdiagnosed as mental illness.
This bounty went to: Victor Levoso.