This seems more like a problem of phone addiction than a problem with the movie. Newer movies aren’t improved by being cut off from a palette that includes calm, slow, contemplative, vibe-setting scenes.
Buying something more valuable with something less valuable should never feel like a terrible deal. If it does, something is wrong.
It’s completely normal to feel terrible about being forced to choose only one of two things you value very highly. Human emotions don’t map onto utility comparisons in the way you’re suggesting.
Any agent that makes decisions has an implicit decision theory; it just might not be a very good one. I don’t think anyone ever said advanced decision theory was required for AGI, only for robust alignment.
The second reason that I don’t trust the neighbor method is that people just… aren’t good at knowing who a majority of their neighbors are voting for.
This seems like a point in favor of the neighbor method, not against it. You would want people to find “who are my neighbors voting for?” too difficult to readily answer and so mentally replace it with the simpler question “who am I voting for?” thus giving them a plausibly deniable way to admit to voting for Trump.
Can anyone lay out a semi-plausible scenario where humanity survives but isn’t dominated by an AI or posthuman god-king? I can’t really picture it. I always thought that’s what we were going for since it’s better than being dead.
I would guess most of them just want their screen readers to work, but a badly written law assigns the responsibility for fixing it to the wrong party, probably due to excessive faith in Coase’s theorem.
I would guess it’s because the Americans with Disabilities Act provides a private right of action against businesses whose websites are not accessible to people with disabilities, but doesn’t say anything about screen reader software bugs.
Why is it assumed that there’s a dichotomy between expressing strength or creative genius and helping others? It seems like the truly excellent would have no problem doing both, and if the only way you can express your vitality is by keeping others in poverty, that actually seems kind of sad and pathetic and not very excellent.
Note that the continuity you feel is strictly backwards-looking; we have no way to call up the you of a year ago to confirm that he still agrees that he’s continuous with the you of now. In fact, he is dead, having been destructively transformed into the you of now. So what makes one destructive transformation different from another, as long as the resulting being continues believing he is you?
From what I understand, they are using a forked version of Nitter which uses fully registered accounts rather than temporary anonymous access tokens, and sourcing those accounts from various shady websites that sell them in bulk.
Based on this comment I guess by “existing” you mean phenomenal consciousness and by “awareness” you mean behavior? I think the set of brainlike things that have the same phenomenal consciousness as me is a subset of the brainlike things that have the same behavior as me.
There seems to generally be a ton of arbitrary path-dependent stuff everywhere in biology that evolution hasn’t yet optimized away, and I don’t see a reason to expect the brain’s implementation of consciousness to be an exception.
If it’s immediate enough that all the copies end up indistinguishable, with the same memories of the copying process, then uniform, otherwise not uniform.
I think the standard argument that quantum states are not relevant to cognitive processes is “The importance of quantum decoherence in brain processes.” This is enough to convince me that going through a classical teleporter or copying machine would preserve my identity, and in the case of a copying machine I would experience an equal subjective probability of coming out as the original or the copy. It also seems to strongly imply that mind uploading into some kind of classical artificial machine is possible, since it’s unlikely that all or even most of the classical properties of the brain are essential. I agree that there’s an open question about whether mind emulation on any arbitrary substrate (like, for instance, software running on CMOS computer chips) preserves identity even if it shows the same behavior as the original.
You missed what I think would be by far the largest category, regulatory capture: jobs where the law specifically requires a human to do a particular task, even if it’s just putting a stamp of approval on an AI’s work. There are already a lot of these, but it seems like it would be a good idea to create even more, and add rate limits to existing ones.
A big difference is that, assuming you’re talking about futures in which AI hasn’t caused catastrophic outcomes, no one will be forcibly mandated to do anything.
Why do you believe this? It seems to me that in the unlikely event that the AI doesn’t exterminate humanity, it’s much more likely to be aligned with the expressed values of whoever has their hands on the controls at the moment of no return than with an overriding commitment to universal individual choice.
None of these seem like actual scissor statements, just taking a side in well-known controversies using somewhat obnoxious language. This seems to be a general property of RLHF-trained models: they are more interested in playing up an easily recognizable stereotype somehow related to the question, one that will trigger cognitively lazy users to click the thumbs-up due to the mere-exposure effect, than in actually doing what was asked for.
The mammogram problem is different because you’re only trying to determine whether a specific woman has cancer, not whether cancer exists at all as a phenomenon. If Bob was abducted by aliens, it implies that alien abduction is real, but the converse isn’t true. You either need to do two separate Bayesian updates (what’s the probability that Bob was abducted given his experience, and then what’s the probability of aliens given the new probability that Bob was abducted), or you need a joint distribution covering all possibilities (Bob not abducted, aliens not real; Bob not abducted, aliens real; Bob abducted, aliens real; the fourth combination, Bob abducted but aliens not real, gets probability zero).
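To make the joint-distribution version concrete, here’s a minimal sketch in Python. All the numbers are made-up placeholders chosen only to illustrate the mechanics of the update, not claims about the actual plausibility of abduction:

```python
# Hypothetical priors and likelihoods -- purely illustrative numbers.
p_aliens = 0.01                # prior: aliens abduct people at all
p_abduct_given_aliens = 0.10   # Bob specifically was abducted, if aliens are real
p_report_given_abduct = 0.9    # Bob has this experience if actually abducted
p_report_given_not = 0.001     # Bob has this experience anyway (dream, hallucination)

# Joint prior over (Bob abducted?, aliens real?).
# The (abducted, no aliens) cell is impossible, so it's simply omitted (zero).
joint_prior = {
    ("abducted", "aliens"):   p_aliens * p_abduct_given_aliens,
    ("not", "aliens"):        p_aliens * (1 - p_abduct_given_aliens),
    ("not", "no aliens"):     1 - p_aliens,
}

# Condition on Bob's reported experience: multiply each cell by the
# likelihood of the report in that world, then renormalize.
likelihood = {
    ("abducted", "aliens"):   p_report_given_abduct,
    ("not", "aliens"):        p_report_given_not,
    ("not", "no aliens"):     p_report_given_not,
}
unnormalized = {k: joint_prior[k] * likelihood[k] for k in joint_prior}
z = sum(unnormalized.values())
posterior = {k: v / z for k, v in unnormalized.items()}

p_abducted = posterior[("abducted", "aliens")]
p_aliens_post = p_abducted + posterior[("not", "aliens")]
print(f"P(Bob abducted | report) = {p_abducted:.4f}")
print(f"P(aliens real | report)  = {p_aliens_post:.4f}")
```

The point of the joint form is that both questions get answered in a single normalization step; with these toy numbers the report moves P(aliens) from 1% to roughly 48%, while P(Bob abducted) lands a bit lower, exactly the asymmetry the two separate updates would otherwise have to track by hand.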
Okay, but you’re not comparing like with like. Terminator 2 is an action movie, and I agree that action movies have gotten better since the 1960s. But in terms of sci-fi concepts introduced per second, I would suspect 2001 has more. Some movies from the 1990s that are more straight sci-fi would be Gattaca or Contact, but I don’t think many people would consider these categorically better than 2001.