I’d call that an empirical problem that has philosophical consequences :)
And it’s still not worth a lot of debate about far-mode possibilities, but it MAY be worth exploring what we actually know and what we can test in the near-term. They’ve fully(*) emulated some brains—https://openworm.org/ is fascinating in how far it’s come very recently. They’re nowhere near to emulating a brain big enough to try to compare WRT complex behaviors from which consciousness can be inferred.
* “fully” is not actually claimed nor tested. Only the currently-measurable neural weights and interactions are emulated. More subtle physical properties may well turn out to be important, but we can’t tell yet if that’s so.
But if someone finds the correct answer to a philosophical question, then they can… try to write essays about it explaining the answer? Which maybe will be slightly more effective than essays arguing for any number of different positions because the answer is true?
I think this is a crux. To the extent that it’s a purely philosophical problem (a modeling choice, contingent mostly on opinions and consensus about “useful” rather than “true”), posts like this one make no sense. To the extent that it’s expressed as propositions that can be tested (even if not now, it could be described how it will resolve), it’s NOT purely philosophical.
This post appears to be about an empirical question: can a human brain be simulated with sufficient fidelity to be indistinguishable from a biological brain? It’s not clear whether OP is talking about an arbitrary new person, or if they include the upload problem as part of the unlikelihood. It’s also not clear why anyone cares about this specific aspect of it, so maybe your comments are appropriate.
This comes down to a HUGE unknown—what features of reality need to be replicated in another medium in order to result in sufficiently-close results?
I don’t know the answer, and I’m pretty sure nobody else does either. We have a non-existence proof: it hasn’t happened yet. That’s not much evidence that it’s impossible. The fact that there’s no actual progress toward it IS some evidence, but it’s not overwhelming.
Personally, I don’t see much reason to pursue it in the short-term. But I don’t feel a very strong need to convince others.
I mean “mass and energy are conserved”—there’s no way to gain weight except if losses are smaller than gains. This is a basic truth, and an unassailable motte about how physics works. It’s completely irrelevant to the bailey of weight loss and calculating calories.
Not sure this is a new frontier, exactly—it was part of high-school biology classes decades ago. Still, very worth reminding people and bringing up when someone over-focuses on the bailey of “legible, calculated CICO” as opposed to the motte of “absorbed and actual CICO”.
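To make the gap concrete, here’s a toy sketch (Python; every number in it, including the ~7,700 kcal-per-kg rule of thumb, is an illustrative assumption, not anything measured) of the difference between the legible label-math version and the absorbed-and-actual version:

```python
# Toy energy-balance sketch. All values are made up for illustration;
# the ~7,700 kcal per kg figure is a rough rule of thumb, not a constant.
calories_eaten      = 2500   # what "legible, calculated CICO" counts from labels
absorption_fraction = 0.92   # not everything eaten is actually absorbed
calories_absorbed   = calories_eaten * absorption_fraction
calories_expended   = 2400   # BMR + activity + thermic effect, also hard to measure

daily_balance_kcal = calories_absorbed - calories_expended
print(f"net balance: {daily_balance_kcal:+.0f} kcal/day")
print(f"very rough weight change: {daily_balance_kcal / 7700:+.3f} kg/day")
```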
I’d enjoy some acknowledgement that there IS an interplay between cognitive beliefs (based on intelligent modeling of the universe and other people) and intuitive experienced emotions. “not a monocausal result of how smart or stupid they are” does not imply total lack of correlation or impact. Nor does it imply that cognitive ability to choose a framing or model is not effective in changing one’s aliefs and preferences.
I’m fully onboard with countering the bullying and soldier-mindset debate techniques that smart people use against less-smart (or equally-smart but differently-educated) people. I don’t buy that everyone is entitled to express and follow any preferences, including anti-social or harmful-to-others beliefs. Some things are just wrong in modern social contexts.
I appreciate the discussion, but I’m disappointed by the lack of rigor in proposals, and somewhat expect failure for the entire endeavor of quantifying empathy (which is the underlying drive for discussing consciousness in these contexts, as far as I’m concerned).
Of course, we do not measure computers by mass, but by speed, number of processors, and information integration. But if you simply do not have enough computing capacity, your neural network is small and information processing is limited.
It’s worth going one step further here—how DO we measure computers, and how might that apply to consciousness? Computer benchmarking is a pretty complex topic: the objective trivial measures (FLOPS, IOPS, data throughput, etc.) are well-known not to tell the important details, and specific usage benchmarks are required to really evaluate a computing system. Number of transistors is a marketing datum, not a measure of value for any given purpose.
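As a toy illustration of that point (Python, purely for the shape of the argument; the workloads and sizes are arbitrary), two micro-benchmarks that stress different things will rank machines differently, which is why no single headline number settles it:

```python
# Two tiny, arbitrary workloads: one arithmetic-heavy, one allocation/
# data-movement heavy. Their relative timings differ across machines, which
# is the point: a single headline number doesn't characterize a system.
import timeit

def compute_bound(n=200_000):
    s = 0.0
    for i in range(1, n):
        s += 1.0 / i          # stands in for a FLOPS-like workload
    return s

def memory_bound(n=200_000):
    data = list(range(n))     # stands in for a throughput-like workload
    return sum(data[::2]) + sum(data[1::2])

for name, fn in [("compute_bound", compute_bound), ("memory_bound", memory_bound)]:
    print(f"{name}: {timeit.timeit(fn, number=20):.3f}s for 20 runs")
```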
Until we get closer to actual measurements of cognition and emotion, we’re unlikely to get any agreement on relative importance of different entities’ experiences.
I suspect that almost all work that can be done remotely can be done even more cheaply the more remote you make it (not outside-the-city, but outside-the-continent). I also suspect that it’s not all that long before many or most mid-level fully-remotable jobs become irrelevant entirely. Partially-remotable jobs (WFH 80% or less of the time, where the in-office human connections are (seen as) an important part of the job) don’t actually let people live somewhere truly cheap.
I think you’re also missing many of the motivations for preferring a suburban area near (but not in the core of) a big city—schools and general sortation (having most neighbors in similar socioeconomic situation).
I wonder if you’re objecting to identifying this group as cult-like, or to implying that all cults are bad and should be opposed. Personally, I find a LOT of human behavior, especially in groups, to be partly cult-like in its overfocus on group-identification and othering of outsiders, and often in the outsized influence of one or a few leaders. I don’t think ALL of them are bad, but enough are to be a bit suspicious without counter-evidence.
I tend to use nlogn (N things, times logN overhead) as my initial complexity estimate for coordinating among “things”. It’ll, of course, vary widely with specifics, but it’s surprising how often it’s reasonably useful for thinking about it.
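For what it’s worth, here’s the heuristic made concrete (Python; the group sizes are arbitrary and the constant factor is unknown, so only the ratios mean anything):

```python
# n * log2(n) as a rough relative coordination-cost estimate.
import math

def coordination_cost(n: int) -> float:
    return n * math.log2(n) if n > 1 else 1.0

for n in [2, 5, 10, 50, 100]:
    print(f"{n:>4} things -> relative cost {coordination_cost(n):8.1f}")
```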
Wish I could upvote and disagree. Evolution is a mechanism without a target. It’s the result of selection processes, not the cause of those choices.
There have been a number of debates (which I can’t easily search on, which is sad) about whether speech is an action (intended to bring about a consequence) or a truth-communication or truth-seeking (both imperfect, of course) mechanism. It’s both, at different times to different degrees, and often not explicit about what the goals are.
The practical outcome seems spot-on. With some people you can have the meta-conversation about what they want from an interaction, with most you can’t, and you have to make your best guess, which you can refine or change based on their reactions.
Out of curiosity, when chatting with an LLM, do you wonder what its purpose is in the responses it gives? I’m pretty sure it’s “predict a plausible next token”, but I don’t know how I’ll know to change my belief.
Gah! I missed my chance to give one of my favorite Carl Sagan quotes, a recipe for Apple Pie, which demonstrates the universality and depth of this problem:
If you wish to make an apple pie from scratch you must first invent the universe.
Note that the argument about whether MWI changes anything is very different from the argument about what matters and why. I think it doesn’t change anything, independently of which in-universe things matter and how much.
Separately, I tend to think “mattering is local”. I don’t argue as strongly for this, because it’s (recursively) a more personal intuition, less supported by type-2 thinking.
I think all the same arguments that it doesn’t change decisions also apply to why it doesn’t change virtue evaluations. It still all adds up to normality. It’s still unimaginably big. Our actions as well as our beliefs and evaluations are irrelevant at most scales of measurement.
I think this is the right way to think of most anti-inductive (planner-adversarial or competitive exploitation) situations. Where there are multiple dimensions of asymmetric capabilities, any change is likely to shift the equilibrium, but not necessarily by as much as the shift in the component.
That said, tipping points are real, and sometimes a component shift can have a BIGGER effect, because it shifts the search to a new local minimum. In most cases, this is not actually entirely due to that component change, but the discovery and reconfiguration is triggered by it. The rise of mass shootings in the US is an example—there are a lot of causes, but the shift happened quite quickly.
Offense-defense is further confused as an example, because there are at least two different equilibria involved when you say “The offense-defense balance is a concept that compares how easy it is to protect vs conquer or destroy resources.”
Conquer control vs retain control is a different thing than destroy vs preserve. Frank Herbert claimed (via fiction) that “The people who can destroy a thing, they control it.” but it’s actually true in very few cases. The equilibrium of who gets what share of the value from something can shift very separately from the equilibrium of how much total value that thing provides.
Hmm. I think there are two dimensions to the advice (what is a reasonable distribution of timelines to have, vs what should I actually do). It’s perfectly fine to have some humility about one while still giving opinions on the other. “If you believe Y, then it’s reasonable to do X” can be a useful piece of advice. I’d normally mention that I don’t believe Y, but for a lot of conversations, we’ve already had that conversation, and it’s not helpful to repeat it.
note: this was 7 years ago and I’ve refined my understanding of CDT and the Newcomb problem since.
My current understanding of CDT is that it does effectively assign a confidence of 1 to the decision not being causally upstream of Omega’s action, and that is the whole of the problem. It’s “solved” by just moving Omega’s action downstream (by cheating and doing a rapid switch). It’s … illustrated? … by the transparent version, where a CDT agent just sees the second box as empty before it even realizes it’s decided. It’s also “solved” by acausal decision theories, because they move the decision earlier in time to get the jump on Omega.
For non-rigorous DTs (like human intuition, and what I personally would want to do), there’s a lot of evidence in the setup that Omega is going to turn out to be correct, and one-boxing is an easy call. If the setup is somewhat different (say, neither Omega nor anyone else makes any claims about predictions, just says “sometimes both boxes have money, sometimes only one”), then it’s a pretty straightforward EV calculation based on kind-of-informal probability assignments. But it does require not using strict CDT, which rejects the idea that the choice has backward-causality.
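To spell out that EV calculation (Python; the $1,000 / $1,000,000 payoffs are the conventional Newcomb amounts and the probabilities are invented for illustration, not part of the original setup):

```python
# Expected value of one-boxing vs two-boxing under informal probability
# assignments. Payoffs are the usual illustrative Newcomb amounts.
def ev_one_box(p_full: float) -> float:
    return p_full * 1_000_000

def ev_two_box(p_full: float) -> float:
    return 1_000 + p_full * 1_000_000

# Strict CDT: your choice can't influence p_full, so two-boxing dominates
# for any fixed probability that the big box is full.
print(ev_two_box(0.5), ev_one_box(0.5))   # 501000.0 vs 500000.0

# If the evidence says the prediction tracks your choice, one-boxing wins
# easily (roughly 990,000 vs 11,000 with these made-up probabilities).
p_full_if_one_box, p_full_if_two_box = 0.99, 0.01
print(ev_one_box(p_full_if_one_box), ev_two_box(p_full_if_two_box))
```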
Thanks for this—it’s important to keep in mind that a LOT of systems are easier to sustain or expand than to begin. Perhaps most systems face this.
In a lot of domains, this is known as the “bootstrap” problem, based on the concept of “lift yourself up by your bootstraps”, which doesn’t actually work well as a metaphor. See the Wikipedia article on Bootstrapping.
In CS, for instance, compilers are pieces of software that turn source code into machine code. Since they’re software, they need a compiler to build them. GCC (and some other from-scratch compilers, but many other compilers just depend on GCC) includes a “bootstrap C compiler”, which is some hand-coded (actually nowadays it’s not, it’s compiled as well) executable code which can compile a minimal “stage 2” compiler, which then compiles the main compiler, and then the main compiler is used to build itself again, with all optimizations available.
In fact, you’ve probably heard the term “booting up” or “rebooting” your computer. This is a shortening of the word “bootstrap”, and refers to powering on without any software, loading a small amount of code from ROM or Flash (or other mostly-static store), and using that code to load further stages of the operating system.
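Here’s a toy sketch of the staged-bootstrap idea (Python; the “compilers” are placeholder functions invented for illustration, loosely shaped like GCC’s stage1/stage2/stage3 build, not real compiler internals):

```python
# Toy bootstrap: a minimal hand-built stage-1 tool builds stage 2, which then
# rebuilds the compiler as stage 3; if the two builds match, the tool is
# self-hosting. Real bootstraps do an analogous compare step.
COMPILER_SOURCE = "source code of the compiler itself"

def hand_built_stage1(source: str) -> str:
    # Minimal, hand-maintained compiler: just enough to build the next stage.
    return "binary(" + source + ")"

def run_compiler(binary: str):
    # Pretend running a compiled binary gives us a full compiler. In this toy,
    # its behavior matches stage 1's by construction, which is what the
    # compare step verifies.
    def compile_(source: str) -> str:
        return "binary(" + source + ")"
    return compile_

stage2_binary = hand_built_stage1(COMPILER_SOURCE)  # stage 1 builds stage 2
stage2 = run_compiler(stage2_binary)
stage3_binary = stage2(COMPILER_SOURCE)             # stage 2 rebuilds the compiler
assert stage2_binary == stage3_binary               # bootstrap fixed point
print("self-hosting: stage 2 and stage 3 outputs match")
```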
Hmm, still not following, or maybe not agreeing. I think that if “the reasoning used to solve the problem is philosophical”, then a “correct solution” is not available. “Useful”, “consensus”, or “applicable in current societal context” might be better evaluations of philosophical reasoning.