In which case, best I can do is 10 lines
MakeIntVar A
Inc A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
A=A*A
Inc A
A=A+A
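A quick Python sanity check of the listing above, assuming MakeIntVar initialises A to 0 (the original doesn't say; this is a hypothetical interpreter, not the actual language):

```python
# Minimal interpreter sketch for the instruction set above.
# Assumption: MakeIntVar A starts A at 0.
def run(program):
    A = 0
    for line in program:
        if line == "Inc A":
            A += 1
        elif line == "A=A+A":
            A += A
        elif line == "A=A*A":
            A *= A
    return A

program = ["Inc A", "Inc A", "A=A+A", "A=A*A",
           "Inc A", "A=A+A", "A=A*A", "Inc A", "A=A+A"]
print(run(program))  # -> 2314
```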
Well, that does complicate things quite a bit. I threw those lines out of my algorithm generator and the frequency of valid programs generated dropped by ~4 orders of magnitude.
Preliminary solution based on random search
MakeIntVar A
Inc A
Shl A, 5
Inc A
Inc A
A=A*A
Inc A
Shl A, 1
I’ve hit on a bunch of similar solutions, but 2 * (1 + 34^2) seems to be the common thread.
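Tracing that solution step by step (again assuming A starts at 0, and that Shl A, k means A <<= k):

```python
# Step-by-step trace of the random-search solution above.
A = 0
A += 1    # Inc A     -> 1
A <<= 5   # Shl A, 5  -> 32
A += 1    # Inc A     -> 33
A += 1    # Inc A     -> 34
A *= A    # A=A*A     -> 1156
A += 1    # Inc A     -> 1157
A <<= 1   # Shl A, 1  -> 2314
assert A == 2 * (1 + 34**2) == 2314
```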
Define “shortest”. Least lines? Smallest file size? Least (characters * nats/char)?
My mental model of what could possibly drive someone to EA is too poor to answer this with any degree of accuracy. Speaking for myself, I see no reason why such information should have any influence on future human actions.
I’d argue that this is not the case, since the vast majority of people who don’t expect to be “clerks” still end up in similar positions.
Is there any reason to think that % in prison “should” be more equal?
Since we’re talking about optimizing for “equality” between two fundamentally unequal things, why not?
Are you saying having the same amount of men and women in prison would be detrimental to the enforcement of gender equality? How does that follow?
Having actually lived under a regime that purported to “change human behaviour to be more in line with reality”, my prior for such an attempt being made in good faith to begin with is accordingly low.
Attempts to change society invariably result in selection pressure for effectiveness outmatching those for honesty and benevolence. In a couple of generations, the only people left in charge are the kind of people you definitely wouldn’t want in charge, unless you’re the kind of person nobody wants in charge in the first place.
I’m thinking about locating specific centers of our brains, reducing certain activities which undoubtedly make us less aligned with reality, and increasing the activations of others.
This is the kind of thinking that, given a few years of unchecked power and primate group competition, leads to mass programs of rearranging people’s brain centres with 15th century technology.
Why don’t you spend some time instead thinking about how your forced rationality programme is going to avoid the pitfall all others so far fell into, megalomania and genocide? And why are you so sure your beliefs are the final and correct ones to force on everyone through brain manipulation? If we had the technology to enforce beliefs a few centuries ago, would you consider it a moral good to freeze the progress of human thought at that point? Because that’s essentially what you’re proposing from the point of view of all potential futures where you fail.
Despair and dedicate your remaining lifespan to maximal hedonism.
NRx is systematized hatred.
Am NRx, this assertion is false.
Even if it kills all humans, it will itself be one human that survives.
Unless it self-modifies to the point where you’re stretching any meaningful definition of “human”.
Even if his values evolve, it will be a natural evolution of human values.
Again, for sufficiently broad definitions of “natural evolution”.
As most human beings don’t like to be alone, he would create new friends, i.e. human simulations. So even the worst cases are not as bad as a paperclip maximiser.
If we’re to believe Hanson, the first (and possibly only) wave of human em templates will be the most introverted workaholics we can find.
Two things:
1. All other points have a negative x coordinate, and the x range passed to the tessellation algorithm is [-124, −71]. You probably forgot the minus sign for that point’s x coordinate.
2. As mentioned above, the algorithm fails to converge because the weights are poorly scaled. For a better graphical representation, you will want to scale them to between one half and one times the nearest-point distance, but just to make it run, increase the division constant.
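Something like the following would do the scaling (scale_weights is a hypothetical helper, not part of pyvoro; it maps the weights into the [d/2, d] band per point, where d is that point’s nearest-neighbour distance):

```python
import numpy as np

def scale_weights(points, weights):
    # Pairwise distances; mask out the zero self-distances on the diagonal.
    diffs = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dist, np.inf)
    nearest = dist.min(axis=1)  # each point's nearest-neighbour distance
    # Normalise weights to [0, 1], then map into [nearest/2, nearest].
    span = weights.max() - weights.min()
    unit = (weights - weights.min()) / span if span else np.full(len(weights), 0.5)
    return nearest / 2 + unit * (nearest / 2)
```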
The range is specified by the box argument to the compute_2d_voronoi function, in the form [[min_x, max_x], [min_y, max_y]]. Points and weights can be specified as 2d and 1d arrays, e.g., as np.array([[x1,y1], [x2, y2], [x3, y3], ..., [xn, yn]]) and np.array([w1, w2, w3, ..., wn]). Here’s an example that takes specified points, and also allows you to plot point radii for debugging purposes: http://pastebin.com/h2fDLXRD
You can use the pyvoro library to compute weighted 2d Voronoi diagrams, and the matplotlib library to display them. Here’s a minimal working example with randomly generated data:
http://pastebin.com/wNaYAPvN
edit: It seems this library uses the radical Voronoi tessellation algorithm, where “weights” represent point radii. This means that if you specify a point radius greater than the distance to the closest point, the tessellation will not function correctly, and as a corollary, if a point’s radius is smaller than half of the minimal distance to a neighbour, the specified weight will not affect the tessellation. Therefore, you need a secondary algorithm that takes the point weights and mutual distances into account to produce the desired result here.
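To illustrate the radical rule without pyvoro: cell membership in a radical (power) diagram can be brute-forced on a grid as argmin over the power distance |x − p|² − r². One consequence is that giving every point the same radius shifts all power distances by the same constant, so the diagram is identical to the unweighted one (a pure-numpy sketch, not pyvoro’s actual implementation):

```python
import numpy as np

def power_cells(grid, points, radii):
    # Power (radical) distance from each grid sample to each point.
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    return np.argmin(d2 - radii[None, :] ** 2, axis=1)

xs, ys = np.meshgrid(np.linspace(0, 3, 40), np.linspace(0, 3, 40))
grid = np.c_[xs.ravel(), ys.ravel()]
points = np.array([[1.0, 1.0], [2.0, 2.0], [1.0, 2.5]])

plain = power_cells(grid, points, np.zeros(3))
shifted = power_cells(grid, points, np.full(3, 0.3))  # same radius everywhere
assert (plain == shifted).all()  # equal radii leave the diagram unchanged
```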
A perfect example of a fully general counter-argument!
humanity not extinct or suffering → FAI black box → humanity still not extinct or suffering
Donate most of your disposable income to MIRI.
In some sense it is voodoo (not very interpretable)
There is research in that direction, particularly on convolutional networks for visual object recognition. It is possible to interpret what a neural net is looking for.
*linear algebra computational graph engine with automatic gradient calculation
I really wonder how this will fit into the established deep learning software ecosystem: it has clear advantages over any single one of the large players (Theano, Torch, Caffe), but lacks the established community of any of them. As a researcher in the field, it’s really frustrating that there is no standardisation and you essentially have to know a ton of software frameworks to effectively keep up with research; I highly doubt Google entering the fray will change this.
Define “optimal”. Optimizing for the utility function of min(my effort), I could misuse more company resources to run random search on.
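For what it’s worth, the search space here is small enough that exhaustive breadth-first enumeration beats random search. A sketch under the same assumptions as above (A starts at 0; the available ops are Inc, doubling, squaring, and Shl by 1 to 5):

```python
# Per-depth reachable-value sets over the assumed instruction set.
def reachable(target, max_depth, cap=10**7):
    ops = [lambda a: a + 1, lambda a: a + a, lambda a: a * a]
    ops += [lambda a, k=k: a << k for k in range(1, 6)]
    layer = {0}
    for depth in range(1, max_depth + 1):
        # All values reachable in exactly `depth` instructions, pruned at cap.
        layer = {op(a) for a in layer for op in ops if op(a) <= cap}
        if target in layer:
            return depth
    return None

print(reachable(2314, 8))  # -> 7, matching the 7-instruction body found above
```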