Oh ye of little faith about how fast technology is about to change. (I think it’s already pretty easy to do almost-subvocalized messages. I guess this conversation is sort of predicated on it being pre-uploads and maybe pre-ubiquitous neuralink-ish things)
Subvocal mikes have been theoretically possible (and even demo’d) for decades, and highly desired, yet still not feasible for public consumer use, which to me is strong evidence that it’s a Hard Problem. Neuralink or less-invasive brain interfaces even more so.
There are a lot of AI and tech bets I won’t take—pure software can change REALLY fast. However, I’d be interested to operationalize this disagreement about hardware/wetware interfaces and timelines. I’d probably lay 3:1 against either voice-interface-usable-on-a-crowded-train or non-touch input and non-visual output via a brain link becoming common (say, 1% of smartphone users) by the end of 2027, or 1:1 against by the end of 2029.
Of the two, I give most weight to my losing this bet via subvocal interfaces that LLMs can be trained to interpret, with only a little bit of training/effort on the part of the user. That’ll be cool, but it’s still very physical and I predict it won’t work quickly.
I think one has to admit that smartphones with limited-attention-space are the revealed modal preference of consumers. It’s not at all clear that this is an inadequate equilibrium to shift, so much as a thing that many consumers actively want.
I doubt it’ll ever be mostly a voice interface—there is no current solution for using voice in public without bothering others. Audio is also MUCH lower bandwidth than visual displays. It will very likely be hybrid/multi-modal, with different sets of modalities for different users/contexts.
I do suspect that it won’t be long before LLM-intermediated “browsing” becomes common, where a lot of information-centric websites see more MCP traffic than HTML (render-able) requests. There’ll be a horrible mix of “thin apps” which are just a captive LLM search/summarize/render engine, and “AI browsers” which try to do this generically for many sources. Eventually, some standards will evolve about semantic encoding for best use in these things, and for visual hints to make it easier to display usefully.
To the curmudgeons among us, this will feel like reinventing HTML and CSS, badly. I hope we’ll be wrong, and it does actually lead to personalized/customized views and usage of many current semi-static site designs.
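Purely as a toy sketch of what I mean by a “thin app” (mine, and hypothetical—none of the names or APIs below reflect any real product): the whole “browse” step collapses to fetch, summarize, render, with the model call stubbed out.

```python
# Hypothetical "thin app" sketch: a captive LLM search/summarize/render loop.
# summarize_with_llm is a stand-in for whatever model call such an app would make.
import urllib.request


def fetch_source(url: str) -> str:
    """Fetch raw content from an information-centric site (HTML today, maybe an MCP-style endpoint later)."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")


def summarize_with_llm(content: str, prefs: dict) -> str:
    """Placeholder for the LLM step: search/summarize/re-render for this particular user."""
    # A real implementation would prompt a model with the content plus the user's
    # preferences (length, reading level, topics) and return rendered text.
    return f"[{prefs.get('style', 'default')} summary of {len(content)} characters]"


def browse(url: str, prefs: dict) -> str:
    """The user never sees the site's own layout, only the personalized view."""
    return summarize_with_llm(fetch_source(url), prefs)


if __name__ == "__main__":
    print(browse("https://example.com", {"style": "terse"}))
```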
I’m not sure this changes the underlying disagreement. The “desire to be predicted to one-box” is universal. Two-boxers deny the causal link from the prediction to the action. One-boxers deny their own free will, and accept that the prediction must match the behavior.
Talk about “desire” or “intent” or “disposition” is only an attempt to elucidate that the full question is whether the prediction can actually be correct.
In other words, in the results table below, the whole debate is whether the X cells are reachable in the in-game universe (which may or may not map to our universe in terms of causality):

| Your action | Omega predicts One-box | Omega predicts Two-box |
| --- | --- | --- |
| One-box | $1M | X |
| Two-box | X | $1000 |
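To make the stakes concrete, here’s a quick expected-value comparison (my own illustration, assuming the X cells are reachable at all, and treating p as the chance that Omega’s prediction matches your action):

$$
\begin{aligned}
EV(\text{one-box}) &= p \cdot \$1{,}000{,}000 + (1-p) \cdot \$0 \\
EV(\text{two-box}) &= p \cdot \$1{,}000 + (1-p) \cdot \$1{,}001{,}000
\end{aligned}
$$

One-boxing comes out ahead whenever p > 0.5005 or so; if the X cells are truly unreachable (p = 1), the comparison collapses to $1M versus $1000.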
I think this is good advice. As some feedback, I’d focus on the fact that usability testing is a pull operation, not a push—it takes a lot of effort to guide customers/reviewers toward helpful dimensions. The lessons don’t necessarily apply to online forums or other communication channels.
I also think that, around here, some ideas seem to take root despite significant challenges in early comments—this isn’t a matter of not getting feedback, but of not listening to feedback.
Regardless, this is good advice to people who are looking to strengthen their ideas.
That’s interesting. I’d argue that the concept of “realm” is itself a modeling choice, and non-real, but let’s leave that aside.
So, do those who claim that morals are a “real” thing, similar to mathematics, ALSO claim that esthetics and the others are just as real? And for those other domains, including morality, what are the equivalent fundamental assumptions (like the various definitions of equality to choose from in math)?
Thanks. Can you describe a bit more about #2 “Abstract objects are real”? I don’t see how this could be believed. There are elements of reality that correspond pretty well to abstract objects, but never (AFAIK) precisely—there are always variants or finer-grained measurements that don’t match the abstraction.
“All models are wrong, some models are useful” seems so completely valid for every abstraction I can think of that I really think I’m missing something basic that someone could claim otherwise.
I think the normal people have it basically right, and the people who aren’t normal are scared of ghosts.
I think both sides are missing a VERY important element. No society or decision process is purely driven by rules. There are all sorts of incentives and societal consequences to legal-but-unpleasant behaviors. Human judgement fills in most of the gaps.
You only make more rules when there’s some reason that natural coordination (aka: bullying) isn’t doing the job. And the non-normal people are right to point out that this is underspecified and probably fragile in many edge-cases. And the normals are right that it’s mostly worked so far.
[Question] Moral realism—basic Q
“Woe is us! We may never know the true morality even if we knew all physical facts!”
I pretty strongly suspect that there is no such thing as “true morality”. There’s nothing to know, even if we know all the physical facts. We can know how individual segments of space-time (we call them “people”) process and model themselves and each other, but we already know a bit about that, and there’s no indication that it’s any deeper or more “true” than any other evolved cognition.
Genocide is morally impermissible in my mind, and in the minds of many modern humans. That’s an individual and group preference about the world. Preferences are physical facts in some sense, in that they’re embodied in the physics of brains. But they’re neither fundamental nor universal in the way we normally think of physics.
I think that may be my confusion about this post—you’re exploring CONDITIONAL on moral realism, rather than trying to show that moral realism is correct. Thanks for the discussion, and helping me understand.
Unless I deeply misunderstand, this just moves the question out one level to “which axioms apply to moral calculus?”. For physical systems, we can propose axioms and test them against observations. When we find a set we cannot disprove, we call it good. What’s the equivalent process for moral observation?
I’m agnostic about p-zombies: I can’t detect qualia in other humans, though I give them the benefit of the doubt due to physical similarity with what I think is my experience-organ. I have no clue if y’all are zombies or not. I’m double-skeptical of p-evildoers, as I can’t even identify in myself what is talked about by “evil”, let alone knowing how others experience it.
This is a useful exploration, and it could do with a small summary of other options. “If you don’t understand, and can’t be confident in your steelman, you should …”
In interactive scenarios (when discussing something in good faith among individuals or a small group where status among outsiders is not hugely at stake), “active listening” is a great way to gain the understanding, and steelmanning someone’s ideas BACK TO THEM is a great way for them to correct you, where strawmanning just makes them defensive. Depending on the person, simple questions may be more effective than either.
In public scenarios, questioning and asking specifically about points you don’t understand is often better than offering steelman suggestions for them to accept or correct.

In advocacy scenarios, especially broadcast-like ones where there’s very little followup, strawmanning is annoyingly effective. There really are a lot of readers who have a soldier mindset rather than a growth mindset.
Suppose we are working in an axiomatic system rich enough to express physics and physical facts. Can this system include moral facts as well? Perhaps moral statements such as “homicide is never morally permissible” can be translated into the axiomatic system or an extension of it.
I see you’ve addressed my reflexive objection RE Godel. But I’m not sure you’ve really shown that the term “moral facts” actually has any meaning. If you’ve got a system that perfectly encodes all physical facts (both positive and negative—false statements are impossible or known to be false), then IF it can encode moral facts, moral facts are physical facts. In which case, what is the word “moral” doing?
The question then remains of why they would presume ignorance and not willful risk-taking in this particular case, which is what I tried to address here.
Oh, willful risk-taking ALSO gets a pass, or at least less-harsh judgement. The distinction is between “this is someone’s intentional outcome” for genocide, and “this is an unfortunate side-effect” for x-risk.
I think there’s another element to this: moral judgement. Genocide is seen as an active choice. Somebody (or some group) is perpetrating this assault, which is horrible and evil. Many views of extinction don’t have a moral agent as the proximal cause—it’s an accident, or a fragile ecosystem that tips over via distributed little pieces, or something else that may be horrible but isn’t evil.
Ignorant destruction is far more tolerated than intentional destruction, even if the scales are such that the former is the more harmful. It shouldn’t matter to those who die, but it does matter to the semi-evolved house apes who are pontificating about it at arm’s length.
Note that “Canada” is not a simple singular viewpoint. There is probably no “best” approach that perfectly satisfies all parties. Even more so, “Trump” isn’t a coherent agent that can be modeled simply.
Some basics that I’d use to suggest a path:

Appeasement doesn’t work. Trying to find a compromise or agreement on how much is enough is likely to be pushed further against you. Even more so with this particular counterparty, who isn’t known for fair dealing.
Tariffs generally hurt the country that collects them more than the countries whose exports are taxed. It’s US citizens who are harmed by Canadian goods becoming far more expensive.
Not entirely, though. The simple result of tariffs is less trade, which is bad for both sides.
Long-term, a dependency relationship is going to feel abusive. Asymmetry hurts. Canada should seek a more balanced, non-US-centric trade network.
Most of this points to: don’t negotiate or retaliate, just announce that Canada continues to seek free trade, and sees the value to Canadian citizens of trade with all nations. Trump is within his rights to punish US citizens with high duties, but Canadians don’t see the point and intend to maintain low duties and easy trade.
Really, it’s different kinds of fear, and different tolerances for different anticipated pain. Entrepreneurs tend to have a fear of mediocrity rather than a fear of failure. I really disagree with the implied weights in your asymmetries:
Consider the asymmetry: You can ask out 100 people, apply to 1,000 jobs, or launch 50 failed startups without any lasting harm, but each attempt carries the possibility of life-changing rewards. Yet most people do none of these things, paralyzed by phantom risks.
Not universal at all. For some, getting rejected by 2 is crippling. I could barely apply to 20 jobs over 6 months when I got laid off a few years ago (I’m a very senior IC, and applying is not “send a resume”, it’s “learn about the company, find a referral or 2nd-degree contacts, get lunch with senior executives, sell myself”). I’ve launched only 3 startups, one of which did “eh, OK” and the others drained me, and I’m well aware I never want to do any of that ever again.
If you say, “get tough, so it doesn’t hurt as much to fail”, I kind of agree, but also that’s way easier said than done. I fully disagree that it’s only about fear, and fully disagree that this advice applies to a very large percentage of even the fairly well-educated and capable membership of LessWrong.
Superintelligence that both lets humans survive (or revives cryonauts) and doesn’t enable indefinite lifespans is a very contrived package.
I don’t disagree, but I think we might not agree on the reason. Superintelligence that lets humanity survive (with enough power/value to last for more than a few thousand years, whether or not individuals extend beyond 150 or so years) is pretty contrived.
There’s just no reason to keep significant amounts of biological sub-intelligence around.
I don’t think I agree with #3 (and I’d frame #2 as “localities of space-time gain the ability to sense and model things”, but I’m not sure if that’s important to our miscommunication). I think each of the observers happens to exist, and observes what it can independently of the others. Each of them experiences “you-ness”, and none are privileged over the others, as far as any 3rd observer can tell.
So I think I’d say:

1. Universe exists.
2. Some parts of the universe have the ability to observe, model, and experience their corner of space-time.
3. It turns out you are one of those.
I don’t think active verbs are justified here—not necessarily “created”, “placed”, or “assigned”.
I don’t know for sure whether there is a god’s eye view or “outside” observation point, but I suspect not, or at least I suspect that I can never get access to it or any effects of it, and can’t think of what evidence I could find one way or the other.
I think it goes to our main point of agreement: there is ambiguity in what question is being asked. For Sleeping Beauty, the ambiguity is WHAT future experience, and for WHOM, she is calculating a probability of. I was curious whether you can answer that for your universe question: whose future experience will be used to resolve the truth of the matter, and thus which probability was appropriate to use for the prediction?
They’re not intended to be conservative; they’re an attempt to operationalize my current beliefs. Offering 3:1 means I give a very significant probability (up to 25%) to the other side. That’s pretty huge for such a large change in software-interaction modality.
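(For anyone unused to odds notation, that 25% is just the usual conversion: laying a:1 against an event implies giving it probability at most 1/(a+1).)

$$
3{:}1 \text{ against} \;\Rightarrow\; p \le \tfrac{1}{3+1} = 25\%, \qquad 1{:}1 \;\Rightarrow\; p \le \tfrac{1}{1+1} = 50\%.
$$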
Agreed that being usable enough that 1% of users prefer it for at least some of their daily use is the hard part. Once it’s well-known and good enough for the early adopters, then making it the standard/default is just a matter of time—the technology can be predicted to win when it gets there.
I don’t honestly know how much Raemon’s (or your) beliefs differ from mine, in terms of timeline and likelihood. I didn’t intend to fully contradict anything he said, just to acknowledge that I think the most likely major change is still pretty iffy.