Well, for (1) I don’t see how what’s written in the post matches your 2-20 std estimate. You said yourself: “But it’s not clear that there should be much qualitative increase in philosophical problem-solving ability.”
Like, higher communication bandwidth would be nice, but it’s not like more than 30 people can do significantly useful alignment research, and even within those who can there’s a huge heavy tail IMO.
It would help if you could write more concretely. E.g., do you imagine a smart person effectively getting sth like a bigger brain by recruiting areas from some other person (though then it’d presumably require a decent amount of artificial connections again?)? Or do you imagine many people turning into sth like a hivemind (and how more precisely might the hivemind operate and why would they be able to be much smarter together than individually)? Such details would be helpful.
For (2) I just want to ask for clarification: does your 2% estimate in the table include mitigating the value drift problems you mentioned? (That would then seem reasonable to me. But one might also read the table as “2% that it works at all, and even then there would probably be significant value drift”.) Like, with a few billion dollars we could manufacture enough electron microscopes to get a human connectome, and I’d unfortunately expect that it’s not too hard to guess some of the important learning rules and simulate a bunch until the connectome seems like a plausible equilibrium given the firing and learning rules, and that then it can sorta run and bootstrap even if there’s significant divergence from the original human.
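Sth like this toy loop is what I have in mind (purely illustrative; the rate dynamics, the `learning_rule` callback, and all parameters are made-up stand-ins, not a real emulation pipeline):

```python
import numpy as np

def connectome_is_plausible_equilibrium(W_measured, learning_rule,
                                        n_steps=10_000, tol=0.05, seed=0):
    """Check whether the scanned connectome looks like a rough fixed point
    of some guessed firing dynamics plus learning rule.

    W_measured    -- (N, N) synaptic weight matrix from the scan (hypothetical)
    learning_rule -- guessed plasticity: (W, pre_rates, post_rates) -> dW
    """
    W = W_measured.copy()
    rng = np.random.default_rng(seed)
    for _ in range(n_steps):
        # Drive the network with some input and read out firing rates.
        # (A real simulation would need spiking dynamics, neuromodulation,
        # slow processes, etc., not a toy rate model.)
        x = rng.normal(size=W.shape[0])
        rates = np.tanh(W @ x)
        W += learning_rule(W, x, rates)
    # If simulating the guessed rules pushes the weights far away from the
    # scanned connectome, the guessed rules (or the scan) are probably off.
    drift = np.linalg.norm(W - W_measured) / np.linalg.norm(W_measured)
    return drift < tol
```

The hope being: iterate over candidate learning rules until one makes the scanned connectome roughly stable, then let the emulation run and bootstrap from there.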
why would they be able to be much smarter together than individually
Ok some examples:
Multiple attention heads. (There’s a concrete sketch of this after these examples.)
One person solves a problem that induces genuine creative thinking; the other person watches this, and learns how genuine creative thinking works. Not very feasible with current setup, maybe feasible with low-cost hardware access.
One person works on a difficult, high-context question; the other person remembers the stack trace, notices and remembers paths [noticed, but not taken, and then forgotten], debugs (including subtle shifts), etc. Not very feasible currently without a bunch of distracting exposition. See TAP.
More direct (hence faster, deeper) implicit knowledge/skill sharing.
But a lot of the point is that there are thoughtforms I’m not aware of, which would be created by networked people. The general idea is as I stated: you’ve genuinely moved somewhat away from several siloed human minds, toward something more integrated.
If one person could think with two brains, they’d be much smarter. Two people connected is not the same thing, but could get some of the benefits. The advantages of an electric interface over spoken language are higher bandwidth, lower latency, less cost (producing and decoding spoken words), and potentially more extrospective access (direct neural access to inexplicit neural events).
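To unpack the “multiple attention heads” point: in a transformer, several heads attend over the same input in parallel and their outputs get merged. A minimal sketch (shapes and names purely illustrative, and the mapping to networked brains is only a loose analogy):

```python
import numpy as np

def multi_head_attention(x, Wq, Wk, Wv, Wo):
    """Minimal multi-head self-attention. Each head attends over the same
    sequence in parallel; the heads' outputs are then merged into one
    representation.

    x          -- (T, d) sequence of T token vectors
    Wq, Wk, Wv -- lists of per-head projections, each (d, d_head)
    Wo         -- (n_heads * d_head, d) projection that merges the heads
    """
    heads = []
    for q_proj, k_proj, v_proj in zip(Wq, Wk, Wv):
        Q, K, V = x @ q_proj, x @ k_proj, x @ v_proj
        scores = Q @ K.T / np.sqrt(K.shape[1])            # (T, T)
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)     # row-wise softmax
        heads.append(weights @ V)                         # this head's view
    return np.concatenate(heads, axis=1) @ Wo             # merge the heads
```

The loose analogy: each networked person is one “head” attending to the same problem, and the merge step is whatever integration the interface supports.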
(1):
Do you think that one person with 2 or more brains would be 2-20 SDs?
Such details would be helpful.
I have no idea, that’s why the range is so high.
(2):
The .02 is, as the table says, “as described”; so it should plausibly be a realistic emulation of the human brain. That would include getting slower dynamics right-ish, but wouldn’t exclude getting value drift anyway.
Do you think that one person with 2 or more brains would be 2-20 SDs?
If I had another copy of my brain, I’d guess that might give me like +1 std, or possibly +2 std, but it’s very hard to predict.
If a +6 std person got another brain from a +5 std person, I’d guess the effect would be much smaller, maybe yielding +6.4 std or possibly +6.8 std overall.
But idk, the counterfactual seems hard to predict because I cannot imagine it that concretely. Could be totally wrong.
it’s not too hard to guess some of the important learning rules
Maybe. Why do you think this?
This was maybe not that well expressed. I mostly don’t know but it doesn’t seem all that unlikely it could work. (I might read your timelines post within a week or so, and maybe then I’ll have a better model of your model, to better locate cruxes, idk.)
I mostly don’t know but it doesn’t seem all that unlikely it could work.
My main evidence is:
It’s much easier to see the coarse electrical activity than the 5-second / 5-minute / 5-hour processes. For the former, you just measure voltage or whatever; for the latter, you have to do some complicated bio stuff (transcriptomics or other *omics).
I’ve asked something like 8-ish people associated with brain emulation stuff about slow processes, and they never have an answer (either they hadn’t thought about it, or they’re confused and think it won’t matter, which I just think they’re wrong about, or they’re like “yeah totally, but we’ve already got plenty of problems just understanding the fast electrical stuff”).
We have very little understanding of how the algorithms actually do their magic, so we’re relying on just copying all the details well enough that we get the whole thing to work.
I mean, you can look at neurons in vitro and see how they adapt to different stimuli.
Idk, I’d weakly guess that the neuron-level learning rules are relatively simple, and that they give rise to more complex learning rules for e.g. cortical minicolumns and eventually cortical columns or sth. We might then be able to infer from the connectome what kind of function cortical columns implement, which could give us a strong hint about what kind of cortical-column-level learning rules might select for the kind of algorithms abstractly implemented there, and we could trace the rules back to lower levels given the connectome. Tbc, I don’t think it’ll look exactly like that; I’m just saying sth roughly like that, where maybe it’s actually some common circuit loops instead of cortical columns which are the interesting thing, or whatever.
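For concreteness, by “relatively simple” I mean sth on the order of a Hebbian update; a toy sketch (rule, rates, and constants all made up for illustration, and real plasticity adds at least spike timing and neuromodulation on top):

```python
import numpy as np

def hebbian_update(W, pre_rates, post_rates, lr=1e-3, decay=1e-4):
    """Toy Hebbian rule: neurons that fire together wire together, plus
    weight decay so weights don't grow without bound. The open question is
    what rules of roughly this simplicity add up to at the level of
    minicolumns / columns / common circuit loops."""
    dW = lr * np.outer(post_rates, pre_rates) - decay * W
    return W + dW
```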