I’m an independent researcher currently working on a sequence of posts about consciousness. You can send me anonymous feedback here: https://www.admonymous.co/rafaelharth. If it’s about a post, you can add [q] or [nq] at the end to indicate whether or not I may quote it in the comment section.
Rafael Harth
Gotcha. I’m happy to offer 600 of my reputation points vs. 200 of yours on your description of 2026-2028 not panning out. (In general if it becomes obvious[1] that we’re racing toward ASI in the next few years, then people should probably not take me seriously anymore.)
[1] Well, so obvious that I agree, anyway; apparently it’s already obvious to some people.
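For what it’s worth, spelling out the implied odds on my side of the stakes above (under the usual reading that I’m risking 600 points to win 200): the offer only breaks even for me if

$$P(\text{2026–2028 does not pan out}) \;\ge\; \frac{600}{600+200} \;=\; 0.75,$$

i.e., roughly 75% confidence is the floor implied by those stakes.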
I feel like a bet is fundamentally unfair here because in the cases where I’m wrong, there’s a high chance that I’ll be dead anyway and won’t have to pay. The combination of long timelines but high P(doom|AGI soon) means I’m not really risking my reputation/money in the way I’m supposed to with a bet. Are you optimistic about alignment, or does this asymmetry not bother you for other reasons? (And I don’t have the money to make a big bet regardless.)
Just regular o1; I have the $20/month subscription, not the $200/month one.
You could call them logic puzzles. I do think most smart people on LW would get 10⁄10 without too many problems, if they had enough time, although I’ve never tested this.
About two years ago I made a set of 10 problems that imo measure progress toward AGI and decided I’d freak out if/when LLMs solve them. LLMs are still at 1/10, nothing has changed in the past year, and I doubt o3 will do better. (But I’m not making them public.)
Will write a reply to this comment when I can test it.
Because if you don’t like it you can always kill yourself and be in the same spot as the non-survival case anyway.
Not to get too morbid here, but I don’t think this is a good argument. People tend not to commit suicide even if they have strongly net-negative lives.
My probably contrarian take is that I don’t think improvement on a benchmark of math problems is particularly scary or relevant. It’s not nothing—I’d prefer if it didn’t improve at all—but it only makes me slightly more worried.
The Stanford Encyclopedia thing is a language game. Trying to make deductions in natural language about unrelated statements is not the kind of thing that can tell you what time is, one way or another. It can only tell you something about how we use language.
But also, why do we need an argument against presentism? Presentism seems a priori quite implausible; it seems a lot simpler for the universe to be an unchanging 4d block than a 3d block that “changes over time”, which introduces a new ontological primitive that can’t be formalized. I’ve never seen a mathematical object that changes over time; I’ve only seen mathematical objects that have internal axes.
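To make the ‘internal axes’ point concrete (this is just one way I’d cash it out, not anything specific from the literature): a field on spacetime is a single, static mathematical object whose time-dependence is just one of its arguments,

$$\phi : \mathbb{R}^{3} \times \mathbb{R} \to \mathbb{R}, \qquad (\vec{x}, t) \mapsto \phi(\vec{x}, t).$$

Nothing about $\phi$ itself ever changes; “change over time” is just variation along the $t$ axis, the same way the field can vary along a spatial axis.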
This all seems correct. The one thing I might add is that IME the usual effect of stating, however politely, that someone may not be 100% acting in good faith is to turn the conversation into much more of a conflict than it already was, which is why pretending as if it’s an object-level disagreement is almost always the correct strategy. But I agree that actually believing the other person is acting in good faith is usually quite silly.
(I also think the term is horrendous; IIRC I’ve never used either “good faith” or “bad faith” in conversation.)
((This post also contributes to this nagging sense that I sometimes have that Zack is the ~only person on this platform who is actually doing rationality in a completely straightforward way, as intended, and everyone else is playing some kind of social game in which other considerations restrict the move set and rationality is only used to navigate within the subset of still-permissible moves. I’m not in the business of fighting this battle, but in another timeline maybe I would be.))
Yeah, e.g., any convergent series.
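A minimal worked instance (the standard geometric series; whether this matches the specific constants being discussed upthread is a separate question):

$$\sum_{n=1}^{\infty} \frac{1}{2^{n}} \;=\; \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots \;=\; 1.$$

Infinitely many nonzero terms, finite value.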
This is assuming that no expression converging to the constants exists? Which I think is an open question. (Of course, it would only be finite if there are such expressions for all constants. But even so, I think it’s an open question.)
As someone who expects LLMs to be a dead end, I nonetheless think this post makes a valid point and does so using reasonable, easy-to-understand arguments. I voted +1.
As I already commented, I think the numbers here are such that the post should be considered quite important even though I agree that it fails at establishing that fish can suffer (and perhaps lacks comparison to fish in the wild). If there was another post with a more nuanced stance on this point, I’d vote for that one instead, but there isn’t. I think fish wellbeing should be part of the conversation more than it is right now.
It’s also very unpleasant to think or write about these things, so I’m more willing to overlook flaws than I’d be by default.
Shape can most certainly be emulated by a digital computer. The theory in the paper you linked would make a brain simulation easier, not harder, and the authors would agree with that.
Would you bet on this claim? We could probably email James Pang to resolve a bet. (Edit: I put about 30% on Pang saying that it makes simulation easier, but not necessarily 70% on him saying it makes simulation harder, so I’d primarily be interested in a bet if “no idea” also counts as a win for me.)
It is not proposing that we need to think about something other than neuronal axons and dendrites passing information, but rather about how to think about population dynamics.
Really? Isn’t the shape of the brain something other than axons and dendrites?
The model used in the paper doesn’t take any information about neurons into account; it’s just based on a mesh of the geometry of the particular brain region.
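To make “just a mesh of the geometry” concrete, here is a minimal sketch of the kind of geometry-only model I have in mind: low-frequency eigenmodes of a graph Laplacian built from a surface mesh. The toy mesh, the unweighted Laplacian, and the mode count are my own simplifications for illustration, not the paper’s actual pipeline.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import laplacian

def mesh_eigenmodes(n_vertices, edges, n_modes=4):
    """Low-frequency eigenmodes of a mesh graph: smooth spatial patterns fixed
    purely by the mesh's connectivity/geometry, with no neuron-level information
    anywhere in the model."""
    rows, cols = zip(*edges)
    adj = csr_matrix((np.ones(len(edges)), (rows, cols)), shape=(n_vertices, n_vertices))
    adj = adj + adj.T                                 # symmetrize the adjacency matrix
    lap = laplacian(adj)                              # graph Laplacian of the mesh
    eigvals, eigvecs = np.linalg.eigh(lap.toarray())  # dense solve; fine for a toy mesh
    return eigvals[:n_modes], eigvecs[:, :n_modes]

# Toy "mesh": a ring of 20 vertices standing in for a cortical surface patch.
ring_edges = [(i, (i + 1) % 20) for i in range(20)]
vals, modes = mesh_eigenmodes(20, ring_edges)
```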
So this is the opposite of proposing that a more detailed model of brain function is necessary; it’s proposing a coarser-grained approximation.
And they’re not addressing what it would take to perfectly understand or reproduce brain dynamics, just a way to approximately understand them.
The results (at least the flagship result) are about a coarse approximation, but the claim that anatomy restricts function still seems to me to contradict the neuron doctrine.
Admittedly the neuron doctrine isn’t well-defined, and there are interpretations where there’s no contradiction. But shape in particular is a property that can’t be emulated by digital computers, so it’s a contradiction as far as the OP goes (if in fact the paper is onto something).
I mean, we have formalized simplicity metrics (Solomonoff Induction, minimal description length) for a reason, and that reason is that we don’t need to rely on vague intuitions to determine whether a given theory (like wave function collapse) is plausible.
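A toy illustration of a formalized simplicity metric doing the work that intuition would otherwise do: an MDL-style score (bits to state the parameters plus bits to encode the residuals) that penalizes a needlessly complex model. The bit costs below are crude stand-ins I’m making up for illustration, not a rigorous derivation, and actual Solomonoff induction is uncomputable anyway.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 50)
y = 2.0 * x + 0.1 * rng.standard_normal(50)  # data generated by a simple (linear) law

def mdl_score(degree, bits_per_param=32):
    """Crude MDL-style score: bits to describe the model plus bits to encode residuals."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    model_bits = bits_per_param * (degree + 1)
    # Gaussian-code cost of the residuals, up to additive constants.
    data_bits = 0.5 * len(x) * np.log2(max(residuals.var(), 1e-12))
    return model_bits + data_bits

for d in (1, 4, 9):
    print(f"degree {d}: score {mdl_score(d):.1f}")  # lowest score wins; the linear fit does
```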
No reputable neuroscientist argued against it to any strong degree, just for additional supportive methods of information transmission.
I don’t think this is correct. This paper argues explicitly against the neuron doctrine (enough so that they’ve put it into the first two sentences of the abstract), is published in a prestigious journal, has a far-above-average citation count, and, as far as I can see, is written by several authors who are considered perfectly fine/serious academics. Not any huge names, but I think enough to clear the “reputable” bar.
I don’t think this is very strong evidence, since I think you can find people with real degrees supporting all sorts of contradicting views. So I don’t think it really presents an issue for your position, just for how you’ve phrased it here.
Two thoughts here:
- I feel like the actual crux between you and OP is with the claim in post #2 that the brain operates outside the neuron doctrine to a significant extent. This seems to be what your back and forth is heading toward; OP is fine with pseudo-randomness as long as it doesn’t play a nontrivial computational function in the brain, so the actual important question is not anything about pseudo-randomness but just whether such computational functions exist. (But maybe I’m missing something; also, I kind of feel like this is what most people’s objection to the sequence ‘should’ be, so I might have tunnel vision here.)
- (Mostly unrelated to the debate, just trying to improve my theory of mind; sorry in advance if this question is annoying.) I don’t get what you mean when you say stuff like “would be conscious (to the extent that I am), and it would be my consciousness (to a similar extent that I am),” since afaik you don’t actually believe that there is a fact of the matter as to the answers to these questions. Some possibilities for what I think you could mean:
  - I don’t actually think these questions are coherent, but I’m pretending as if I did for the sake of argument.
  - I’m just using consciousness/identity as fuzzy categories here because I assume that the realist conclusions must align with the intuitive judgments (i.e., if it seems like the fuzzy category ‘consciousness’ applies similarly to both the brain and the simulation, then probably the realist will be forced to say that their consciousness is also the same).
  - Actually, there is a question worth debating here even if consciousness is just a fuzzy category, because ???
  - Actually, I’m genuinely entertaining the realist view now.
  - Actually, I reject the strict realist/anti-realist distinction because ???
I think causal closure of the kind that matters here just means that the abstract description (in this case, of the brain as performing an algorithm/computation) captures all relevant features of the physical description, not that it has no dependence on inputs. It should probably be renamed something like “abstraction adequacy” (I’m making this up right now; I don’t have a term on hand for this property). Abstraction (in)adequacy is relevant for CF, I believe (I think it’s straightforward why?). Randomness probably doesn’t matter since you can include it in the abstract description.
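A minimal sketch of what I mean by “abstraction adequacy”, with every function here a placeholder I’m inventing for illustration: the abstract description is adequate when abstracting and then running the abstract dynamics agrees with running the physical dynamics and then abstracting.

```python
def is_adequate(sample_states, physical_step, abstract_step, abstraction):
    """The abstraction is adequate (on these samples) if it commutes with the
    dynamics: physically stepping then abstracting gives the same result as
    abstracting then stepping the abstract description."""
    return all(
        abstraction(physical_step(s)) == abstract_step(abstraction(s))
        for s in sample_states
    )

# Inputs/randomness can be folded in by letting both step functions take a
# (state, input) pair; the commutation condition itself is unchanged.
```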
Not that one; I would not be shocked if this market resolves Yes. I don’t have an alternative operationalization on hand; it would have to be about AI doing serious intellectual work on real problems without any human input. (My model permits AI to be very useful in assisting humans.)