Open thread, Jul. 03 - Jul. 09, 2017
If it’s worth saying, but not worth its own post, then it goes here.
Notes for future OT posters:
1. Please add the ‘open_thread’ tag.
2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)
3. Open Threads should start on Monday, and end on Sunday.
4. Unflag the two options “Notify me of new top level comments on this article” and “
Postdoctoral position acquired. May be doing some work off a NASA astrobiology grant, eventually.
I think we had a debate about the exact definition of blackmail, so here is an interesting legal opinion:
Blackmail is surprisingly hard to define
I am working on a software tool that allows programmers to automatically extract FSM-like sequence diagrams from their programs (if they use the convention required by the tool).
Here is a diagram expressing the Merge Sort algorithm
Here is the underlying source code.
I believe this kind of tool could be very useful for code documentation purposes. Suggestions or improvements welcome.
Most code documentation happens in text files. Maybe it’s worth drawing the diagram in ASCII or Unicode characters?
You might be interested in Conal Elliott’s work on Compiling to Categories, which enables automatic diagram extraction (among a bunch of other things) for Haskell.
Realistic AI risk scenario similar to The Matrix: ad tech eats the world and keeps humans around for clicks. Clickbots won’t do, because clickbot detection evolves as part of ad tech.
I’ve decided to work on a book while I also work on the computer architecture. It pulls together a bunch of threads of thinking I’ve had around the subject of autonomy. Below is the TL;DR. If lots of people are interested I can try and blogify it. If not many people are, I might seek your opinions on drafts.
We are entering an age where questions of autonomy become paramount. We have created computers with a certain amount of autonomy and are exploring how to give more autonomy to them. We simultaneously think that autonomous computers are overhyped and that autonomous computers (AI) could one day take over the earth.
The disconnect in views is due to a choice made early in computing’s history that requires a programmer or administrator to look after a computer by directly installing programs and stopping and removing bad programs. The people who are worried about AI are worried that the computers will become more autonomous and no longer need an administrator. People embedded in computing cannot see how this would happen as computers, as they stand, still require someone to control the administrative function and we are not moving towards administrative autonomy.
Can we build computer systems that are administratively autonomous? Administration can be seen as a resource allocation problem, with an explicit administrator serving the same role as a dictator in a command economy. An alternative computer architecture is presented that relies on a market-based allocation of resources to programs based on human feedback. This architecture, if realized, would allow programs to experiment with new programs in the machine and would lead to a more efficient, adaptive computer that didn’t need an explicit administrator. Instead it would be trained by a human.
However making computers more autonomous can either lead to more autonomy for each of us by helping us or it could lead to computers being completely autonomous and us at their mercy. Ensuring the correct level of autonomy in the relationship between computers and people should be a top priority.
The question of more autonomy for humans is also a tricky one. On the one hand it would allow us to explore the stars and safeguard us from corrupt powers. On the other hand more autonomy for humans might lead to more wars and existential risks, due to the increase in destructive powers of individuals and decrease in interdependence.
Autonomy is currently ill defined. It is not an all or nothing affair. During this discussion what we mean by autonomy will be broken down, so that we can have a better way of discussing it and charting our path to the future.
Enjoy this problem!
Your Huffman codes with essential indifference are binary trees (each node has 0 or 2 children) up to isomorphism.
Let f(n) be the number of trees with n leaves.
Here are the first 25 such counts:
[1,1,1,2,3,6,11,23,46,98,207,451,983,2179,4850,10905, 24631,56011,127912,293547,676157,1563372,3626149,8436379,19680277]
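These counts look like the Wedderburn–Etherington numbers (OEIS A001190). Assuming that identification is right, here is a minimal memoized Haskell sketch (my own, not the original poster’s code) that reproduces the list:

```haskell
-- f n counts binary trees with n leaves, every internal node having
-- exactly two children, up to mirror-image isomorphism.
-- Memoized via a lazy list: fs !! n is f n.
fs :: [Integer]
fs = 0 : 1 : map go [2 ..]
  where
    -- Split the n leaves between the two subtrees of the root.
    go n
      | odd n     = sum [ fs !! i * fs !! (n - i) | i <- [1 .. n `div` 2] ]
      | otherwise = let h = n `div` 2
                        c = fs !! h
                    in  -- Equal halves form an unordered pair: c*(c+1)/2 choices.
                        c * (c + 1) `div` 2
                          + sum [ fs !! i * fs !! (n - i) | i <- [1 .. h - 1] ]

f :: Int -> Integer
f = (fs !!)
```

For example, map f [1..16] gives 1, 1, 1, 2, 3, 6, 11, 23, 46, 98, 207, 451, 983, 2179, 4850, 10905, matching the list above.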
It’s something. But what are the codes? An algorithm to create them would suffice. A faster one is better, of course.
The same control flow generates them. In Haskell:
(Beware, I had to use U+2800 to almost align the code block in spite of LW’s software eating whitespace. Source here)
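Since the block above was mangled in transit, here is a hedged reconstruction (my own sketch, not the original source) of one way to generate the trees themselves; it picks one canonical representative per isomorphism class by never letting the left subtree have more leaves than the right, breaking ties with the derived Ord:

```haskell
data Tree = Leaf | Node Tree Tree
  deriving (Eq, Ord, Show)

-- All trees with n leaves, one per mirror-image isomorphism class.
trees :: Int -> [Tree]
trees 1 = [Leaf]
trees n = [ Node l r
          | i <- [1 .. n `div` 2]        -- leaves in the left subtree
          , l <- trees i
          , r <- trees (n - i)
          , i < n - i || l <= r ]        -- canonical order on equal splits
```

Reading each root-to-leaf path as a bit string (left = 0, right = 1) turns a tree into a prefix-free code, and length (trees n) agrees with the counts above, e.g. length (trees 7) == 11.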
Edit: See also the OEIS, where you enter an integer sequence and it tells you where people have seen it.
Very well, congratulations again!
Perhaps a nonrecursive function would be faster.
Not really, the sequence grows quickly enough to outstrip the recursive overhead. To calculate the overhead, replace the * in f(i)*f(2n+1-i) with a +. Memoizing is of course trivial anyway, using memoFix.

I created a 1dollarscan subscription for “100 sets” (each set is 100 pages, so I paid $99 for the ability to scan up to 100 sets × 100 pages/set = 10,000 pages), but I’m not going to use all of the sets, so if you have dead-tree books that you’d like to destroy/convert to PDFs, PM me. My subscription ends on July 15th, and you’d have to mail in the books so that they arrive before then.
Where did the term on the top of page three of this paper after “a team’s chance of winning increases by” come from?
https://www.fhi.ox.ac.uk/wp-content/uploads/Racing-to-the-precipice-a-model-of-artificial-intelligence-development.pdf
Two random questions.
1) What is the chance of AGI first happening in Russia? Are they laggards in AI compared to the US and China?
2) is there a connection between fuzzy logic and the logical uncertainty of interest to MIRI or not really?
The kind of money that projects like DeepMind or OpenAI cost seem to be within the budget of a Russian billionaire who strongly cares about the issue.
But there seem to be many countries that are stronger than Russia: https://futurism.com/china-has-overtaken-the-u-s-in-ai-research/
On 2, I’d say not really: fuzzy logic is a logic which has a continuum of truth values. Logical uncertainty works by imposing, on classical logic, a probability assignment that is as “nice” as possible.
1) low chance 2) no connection
This might be better saved for a ‘dumb questions’ thread, but whatever.
So...I’ve had a similar experience a couple of times. You go to the till, make a purchase, something gets messed up and you need to void out. The cashier has to call a manager.
This one time I had a cashier who couldn’t find her manager, so she put the transaction through, then put a refund through. Neither of these required a manager.
Why is it that you need a manager code to void a transaction, while the cashier is trusted to handle sales and refunds on her own?
Voiding a transaction deletes it (I’m pretty sure), which removes the information trail. The other way records the transactions, so if they end up being criminal, the cashier in question is caught.
That sounds right, thanks.
Do we have any non-science-fiction link on the global risk that a narrow-AI virus could infect robotic hardware, like self-driving cars or home robots, and make it attack humans?
The story so far:
The thermodynamic arrow of time says that we tend to end up in macrostates (states of knowledge) that contain many microstates, which is completely compatible with time-symmetric evolution of microstates. Basically physics is like a random walk, which is time-symmetric but you tend to end up in bigger countries. (Bigger countries correspond to macrostates near equilibrium, because there are more ways to arrange two molecules with velocity 10 than one with velocity 0 and another with velocity 20. The difference is exponential in the number of molecules, so the second law of thermodynamics is an iron law indeed.)
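The “exponential in the number of molecules” point can be made concrete with a toy model (my own illustration, assuming idealized molecules with just two energy states):

```haskell
-- With n two-state molecules, the macrostate "exactly k excited"
-- contains (n choose k) microstates.
choose :: Integer -> Integer -> Integer
choose n k = product [n - k + 1 .. n] `div` product [1 .. k]

-- For n = 100, the balanced macrostate contains about 10^29 microstates
-- (choose 100 50), while the fully ordered one (choose 100 0) contains
-- exactly 1, and the gap widens exponentially as n grows.
```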
The usual problem with that story is Loschmidt’s paradox: if we have a glass of hot water with some ice cubes floating in it, the most probable future of that system is a glass of uniformly warm water, but then so is its most probable past, according to the exact same Bayesian reasoning. Putting that to the extreme, you should conclude that every person you see was a decomposing (recomposing?) corpse a minute ago. That seems weird!
The usual resolution to that paradox is the Past Hypothesis: for predicting the most probable past of a system, we need to condition not just on the present, but also on a very low-entropy distant past. For example, a uniform distribution of matter in the early universe would do the job, because it would be very far from gravitational equilibrium. See this write-up by Huw Price for a simple explanation.
The trouble is that the Past Hypothesis isn’t completely satisfying. Leaving aside the question of how we can infer the distant past except by looking at the present, in the overall soup of all past and future states it’s still much more likely that any particular low entropy state (like ours) came from a higher entropy one, by pure dumb chance, if only because the future universe will be in equilibrium for a long time, enough for many fluctuations to arise. So you must assume that you’re the smallest possible fluctuation compatible with your experience, which is known as a Boltzmann brain. Basically your whole vision will turn into TV static in the next second. That’s even worse than recomposing corpses!
So what do we make of this? I’ve toyed with the idea that K-complexity might determine which laws of physics we’re likely to see. If you have a bunch of bits describing a world that looks lawful like ours, without recomposing corpses or vision turning into static, then the most likely (K-simplest) future evolution of these bits will follow the same laws, whatever they are. That still leaves the question of figuring out the laws, but at least gives a hint why we aren’t Boltzmann brains, and also why the early universe was simple. That sounds promising! On the other hand, K-complexity feels like a shiny new hammer that can lead to all sorts of paradoxes as well, so we should use it carefully.
What do you think?