This site isn’t too active—maybe email someone from CFAR directly?
Man, this interviewer sure likes to ask dense questions. Bostrom sort of responded to them, but things would have gone a lot smoother if LARB guy (okay, Andy Fitch) had limited himself to one or two questions at a time. Still, it’s kind of shocking the extent to which Andy “got it,” given that he doesn’t seem to be specially selected—instead he’s a regular LARB contributor and professor in an MFA program.
Hm, the format is interesting. The end product is, ideally, a tree of arguments, with each argument having an attached relevance rating from the audience. I like that they didn’t try to use the pro and con arguments to influence the rating of the parent argument, because that would be too reflective of audience composition.
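If I were sketching the data model I’m picturing (my own guess at the shape, not their actual implementation), it would be something like:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Argument:
    # One node in the tree: its text, an audience-supplied relevance rating,
    # and pro/con children whose ratings don't feed back into the parent's.
    text: str
    relevance: float                       # e.g. mean audience rating
    pros: List["Argument"] = field(default_factory=list)
    cons: List["Argument"] = field(default_factory=list)
```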
Infinity minus one isn’t smaller than infinity, so that kind of counting isn’t useful here.
The thing being added or subtracted is not the mere number of hypotheses, but a measure of the likelihood of those hypotheses. We might suppose an infinitude of mutually exclusive theories of the world, but most of them are extremely unlikely: for any degree of unlikeliness, there are infinitely many theories even less likely than that! A randomly-chosen theory is so unlikely to be true that when you add up the likelihoods of every single theory, the total is still finite.
This is why it works to divide our hypotheses between something likely and everything else: “everything else” contains infinitely many possibilities, but only a finite amount of likelihood.
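As a toy illustration (mine, not part of the original argument): order the theories from most to least likely and suppose the k-th theory gets a prior of 2^-k. There are infinitely many theories, yet their priors sum to 1:

```python
# Toy example: infinitely many hypotheses, finite total probability.
# Give the k-th hypothesis a prior of 2**-k; the partial sums approach 1.
partial_sums = [sum(2**-k for k in range(1, n + 1)) for n in (5, 10, 20)]
print(partial_sums)  # [0.96875, 0.9990234375, 0.9999990463256836]
```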
I think this neglects the idea of “physical law,” which says that theories can be good when they capture the dynamics and building-blocks of the world simply, even if they are quite ignorant about the complex initial conditions of the world.
Can’t this be modelled as uncertainty over functional equivalence (or over input-output maps)?
Hm, that’s an interesting point. Is what we care about just the brute input-output map? If we’re faced with a black-box predictor, then yes, all that matters is the correlation even if we don’t know the method. But I don’t think any sort of representation of computations as input-output maps actually helps account for how we should learn about or predict this correlation—we learn and predict the predictor in a way that seems like updating a distribution over computations. Nor does it seem to help in the case of trying to understand to what extent two agents are logically dependent on one another. So I think the computational representation is going to be more fruitful.
Interesting that resnets still seem state of the art. I was expecting them to have been replaced by something more heterogeneous by now. But I might be overrating the usefulness of discrete composition because it’s easy to understand.
Plausibly? LW2 seems to be doing okay, which is gonna siphon off posts and comments.
The dust probably is just dust—scattering of blue light more than red is the same reason the sky is blue and the sun looks red at sunset (Rayleigh scattering / Mie scattering). It comes from scattering off of particles smaller than a few times the wavelength of the light—so if visible light is being scattered less than UV, we know that lots of the particles are of size smaller than ~2 µm. This is about the size of a small bacterium, so dust with interesting structure isn’t totally out of the question, but still… it’s probably just dust.
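For the color dependence, a quick back-of-envelope (my numbers; the standard Rayleigh 1/λ⁴ scaling, which only applies when the particles are much smaller than the wavelength):

```python
# Rayleigh scattering intensity goes like 1/wavelength**4 (valid for particles
# much smaller than the wavelength), so shorter wavelengths scatter more.
blue_nm, red_nm = 450, 650
ratio = (red_nm / blue_nm) ** 4
print(f"blue scattered ~{ratio:.1f}x more strongly than red")  # ~4.4x
```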
I think people get too hung up on computers as being mechanistic. People usually think of symbol manipulation in terms of easy-to-imagine language-like models, but then try to generalize their intuitions to computation in general, which can be unimaginably complicated. It’s perfectly possible to simulate a human on an ordinary classical computer (to arbitrary precision). Would that simulation of a human be conscious, if they matched the behavior of a flesh-and-blood human almost perfectly, and could talk to you over a text channel, outputting things like “well, I sure feel conscious”?
The reason LWers are so confident that this simulation is conscious is that we think of concepts like “consciousness,” to the extent that they exist, as having something to do with the cause of us talking and thinking about consciousness. It’s just like how the concept of “apples” exists because apples exist, and when I correctly think I see an apple, it’s because there’s an apple. Talking about “consciousness” is presumed to be a consequence of our experience with consciousness. And the things we have experience with that we can label “consciousness” are introspective phenomena, physically realized as patterns of neurons firing, that have exact analogues in the simulation. Demanding that one has to be made of flesh to be conscious is not merely chauvinism, it’s a misunderstanding of what we have access to when we encounter consciousness.
Neat paper about the difficulties of specifying satisfactory values for a strong AI. h/t Kaj Sotala.
The design of social choice AI faces three sets of decisions: standing, concerning whose ethics views are included; measurement, concerning how their views are identified; and aggregation, concerning how individual views are combined to a single view that will guide AI behavior. [...] Each set of decisions poses difficult ethical dilemmas with major consequences for AI behavior, with some decision options yielding pathological or even catastrophic results.
I think it’s slightly lacking in sophistication about aggregation of numerical preferences, and in how revealed preferences indicate that we don’t actually have incommensurable or infinitely-strong preferences, but is overall pretty great.
On the subject of the problem, I don’t think we should program in values that are ad-hoc on the object level (what values to use—trying to program this by hand is destined for failure), or even the meta level (whose values to use). But I do think it’s okay to use an ad-hoc process to try to learn the answers to the meta-level questions. After all, what’s the worst that could happen? (irony). Of course, the ability to do this assumes the solution of other, probably more difficult philosophical/AI problems, like how to refer to people’s values in the first place.
Yeah, whenever you see a modifier like “just” or “merely” in a philosophical argument, that word is probably doing a lot of undeserved work.
I don’t, and maybe you’ve already been contacted, but you could try contacting him on social sites like this one (user paulfchristiano) and Medium, etc. Typical internet stalking skillset.
Ah, you mean to ask if the brain is special in a way that evades our ability to construct an analogy of the Chinese room argument for it? E.g. “our neurons don’t individually understand English, and my behavior is just the product of a bunch of neurons following the simple laws of chemistry, therefore there is nothing in my body that understands English.”
I think such an argument is a totally valid imitation. It doesn’t necessarily bear on the Chinese room itself, which is a more artificial case, but it certainly applies to AI in general.
You say impressions, but I’m assuming this is just the “things I want changed” thread :)
Vote button visibility and responsiveness are a big one for me. Ideally, voting should take one click, the button should be disabled while it messages the server, and then it should change color much more clearly.
On mobile, the layout works nicely, but load / render times are too long (how much javascript is necessary to serve text? Apparently, lots) and the text formatting buttons take up far too much space.
First-time, non-logged-in viewers should probably not see the green messaging blob in the corner, particularly on mobile.
I agree that some kind of demarcation between comments, and between comments and “write a new comment”, would be nice. Doesn’t have to be 2009 internet boxes, it can be 2017 internet fading horizontal lines or something.
Well, it really is defined that way. Before doing math, it’s important to understand that entropy is a way of quantifying our ignorance about something, so it makes sense that you’re most ignorant when (for discrete options) you can’t pick out one option as more probable than another.
Okay, on to using the definition of entropy as the sum over event-space of -P log(P) for all the events. E.g. if you only had one possible event, with probability 1, your entropy would be -1·log(1) = 0. Suppose you had two events with different probabilities. If you change the probability assignment so that their probabilities get closer together, entropy goes up. This is because the function -P log(P) is concave downwards between 0 and 1: the curve always lies above the straight line (any weighted average) connecting two of its points, so nudging two unequal probabilities toward each other always increases the total. So if you want to maximize entropy, you move all the probabilities as close together as they can go, i.e. make them equal.
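To make that concrete, a quick numerical check (my own, using natural log, so the two-outcome maximum is log 2 ≈ 0.693):

```python
import math

def entropy(probs):
    # Shannon entropy: sum over events of -P * log(P), in nats.
    return -sum(p * math.log(p) for p in probs if p > 0)

print(entropy([1.0]))        # 0.0  (one certain event: no ignorance)
print(entropy([0.9, 0.1]))   # ~0.33
print(entropy([0.7, 0.3]))   # ~0.61
print(entropy([0.5, 0.5]))   # ~0.69 = log(2), the maximum for two events
```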
Moderation is basically the only way, I think. You could try to use fancy pagerank-anchored-by-trusted-users ratings, or make votes costly to the user in some way, but I think moderation is the necessary fallback.
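To spell out what I mean by pagerank-anchored-by-trusted-users (a rough sketch with made-up names and numbers, not a worked-out proposal): run something like personalized PageRank over the upvote graph, with the random restarts confined to a hand-picked trusted set, so trust flows outward from people the moderators already vouch for.

```python
def trust_scores(upvotes, trusted, damping=0.85, iterations=50):
    """upvotes: dict mapping each voter to the users they upvoted.
    trusted: seed users whose judgment we take on faith.
    Simplified personalized PageRank (ignores dangling-node mass)."""
    users = set(upvotes) | {u for targets in upvotes.values() for u in targets}
    # Start with all trust concentrated on the trusted seeds.
    scores = {u: (1.0 / len(trusted) if u in trusted else 0.0) for u in users}
    for _ in range(iterations):
        # A (1 - damping) share of trust restarts at the seeds each round...
        new = {u: ((1 - damping) / len(trusted) if u in trusted else 0.0)
               for u in users}
        # ...and the rest flows along upvotes, split evenly among targets.
        for voter, targets in upvotes.items():
            if targets:
                share = damping * scores[voter] / len(targets)
                for target in targets:
                    new[target] += share
        scores = new
    return scores

# Made-up example: trust flows from alice to the people she (indirectly) upvotes;
# the spammer pair gets nothing because no trusted path reaches it.
votes = {"alice": ["bob", "carol"], "bob": ["carol"], "spammer": ["spammer2"]}
print(trust_scores(votes, trusted={"alice"}))
```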
Goodhart’s law is real, but people still try to use metrics. Quality may speak for itself, but it can be too costly to listen to the quality of every single thing anyone says.
The only thing I don’t like about the “2017 feel” is that it sometimes feels like you’re just adrift in the text, with no landmarks. Sometimes you just want guides to the eye, and landmarks to keep track of how far you’ve read!
I also agree that HPMOR might need to go somewhere other than the front page. From a strategic perspective, I somehow want to get the benefits of HPMOR existing (publicity, new people finding the community) without the drawbacks (it being too convenient to judge our ideas by association).
Ditto.