I don’t understand what’s being claimed here, and feel the urge to get off the boat at this point without knowing more. Most stuff we care about isn’t about 3-second reactions, but about >5 minute reactions. Those require thinking, and maybe require non-electrical changes—synaptic plasticity, as you mention. If they do require non-electrical changes, then this reasoning doesn’t go through, right? If we make a thing that simulates the electrical circuitry but doesn’t simulate synaptic plasticity, we’d expect to get… I don’t know, maybe a thing that can perform tasks that are already “compiled into low-level code”, so to speak, but not tasks that require thinking? Is the claim that thinking doesn’t require such changes, or that some thinking doesn’t require such changes, and that subset of thinking is enough for greatly decreasing X-risk?
Seconded! I too am confused and skeptical about this part.
Humans can do lots of cool things without editing the synapses in their brain. Like if I say: “imagine an upside-down purple tree, and now answer the following questions about it…”. You’ve never thought about upside-down purple trees in your entire life, and yet your brain can give an immediate snap answer to those questions, by flexibly combining ingredients that are already stored in it.
…And that’s roughly how I think about GPT-4’s capabilities. GPT-4 can also do those kinds of cool things. Indeed, in my opinion, GPT-4 can do those kinds of things comparably well to a human. And GPT-4 already exists and is safe. So that’s not what we need.
By contrast, when I think about what humans can do that GPT-4 can’t do, I think of things that unfold over the course of minutes and hours and days and weeks, and centrally involve permanently editing brain synapses. (See also: “AGI is about not knowing how to do something, and then being able to figure it out.”)
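To make that distinction concrete in ML terms (a loose analogy, not a claim made in the dialogue): answering with frozen weights corresponds to the “snap answer” regime, while updating weights corresponds to synaptic plasticity. A minimal sketch, assuming PyTorch and a toy linear model that is purely illustrative:

```python
import torch

# Toy analogy only: a model with frozen weights stands in for "electrical
# activity without plasticity"; a weight-update loop stands in for
# synaptic plasticity. Nothing here is meant as a model of actual brains.

model = torch.nn.Linear(4, 1)   # fixed "circuitry"
x = torch.randn(8, 4)
y = torch.randn(8, 1)

# "Snap answers": inference with frozen weights. The model can respond to
# inputs it has never seen by recombining what is already stored in it.
with torch.no_grad():
    snap_answers = model(x)

# "Figuring things out over weeks": repeated weight updates, the rough
# analogue of plasticity. An Em without plasticity would lack this loop.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(x), y)
    loss.backward()
    optimizer.step()
```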
Hm, here’s a test case:
GPT-4 can’t solve IMO problems. Now take an IMO gold medalist about to walk into their exam, and upload them at that state into an Em without synaptic plasticity. Would the resulting upload still be able to solve the exam at a similar level to the full human?
I don’t have a strong belief, but my intuition is that they would. I recall once chatting with @Neel Nanda about how he solved problems (as he is in fact an IMO gold medalist), and recall him describing something that to me sounded like “introspecting really hard and having the answers just suddenly ‘appear’...” (though hopefully he can correct that butchered impression)
Do you think such a student Em would or would not perform similarly well in the exam?
I don’t have a strong opinion one way or the other on the Em here.
In terms of what I wrote above (“when I think about what humans can do that GPT-4 can’t do, I think of things that unfold over the course of minutes and hours and days and weeks, and centrally involve permanently editing brain synapses … being able to figure things out”), I would say that the human-unique “figuring things out” process happened substantially during the weeks and months of study and practice in which they got really good at solving IMO problems, before the human ever stepped into the exam room. And hmm, probably some “figuring things out” also happens in the exam room, but I’m not sure how much, and at least possibly so little that they could get a decent score without forming new long-term memories and then building on them.
I don’t think Ems are good for much if they can’t figure things out and get good at new domains—domains that they didn’t know about before uploading—over the course of weeks and months, the way humans can. Like, you could have an army of such mediocre Ems monitor the internet, or whatever, but GPT-4 can do that too. If there’s an Em Ed Witten without the ability to grow intellectually, and build new knowledge on top of new knowledge, then this Em would still be much much better at string theory than GPT-4 is…but so what? It wouldn’t be able to write groundbreaking new string theory papers the way real Ed Witten can.
I have said many times that uploads created by any process I know of so far would probably be unable to learn or form memories. (I think it didn’t come up in this particular dialogue, but in the unanswered questions section Jacob mentions having heard me say it in the past.)
Eliezer has also said that this inability makes such uploads useless in terms of decreasing x-risk. I don’t have a strong inside view on this question one way or the other. I do think if Factored Cognition is true then “that subset of thinking is enough,” but I have a lot of uncertainty about whether Factored Cognition is true.
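For readers unfamiliar with the term: Factored Cognition is roughly the hypothesis that hard intellectual tasks can be decomposed into small subtasks, each solvable by a bounded, memoryless step, so that no single worker needs to learn or form long-term memories. A minimal sketch of that structure, where every helper function is a hypothetical placeholder rather than a real system:

```python
# Sketch of the Factored Cognition structure: each call sees only its own
# subtask and carries no memory between calls. Whether real cognitive work
# factors this way is exactly the open question. All helpers below are
# trivial placeholders, not a real API.

def decompose(task: str) -> list[str]:
    # Placeholder: split a task into smaller pieces.
    return [f"{task} / part {i}" for i in range(2)]

def solve_directly(task: str) -> str:
    # Placeholder for one short, memoryless unit of work.
    return f"answer({task})"

def combine(task: str, sub_answers: list[str]) -> str:
    # Placeholder: aggregate subanswers, again without persistent memory.
    return f"{task}: " + "; ".join(sub_answers)

def factored_solve(task: str, depth: int = 2) -> str:
    if depth == 0:
        return solve_directly(task)
    return combine(task, [factored_solve(t, depth - 1) for t in decompose(task)])

print(factored_solve("prove the theorem"))
```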
Anyway, even if that subset of thinking is enough, and even if we could simulate all the true mechanisms of plasticity, then I still don’t think this saves the world, personally, which is part of why I am not in fact pursuing uploading these days.
That’s a very interesting point: “synaptic plasticity” is probably a critical difference. At least, recent results with LLMs suggest as much.
The fact that the author doesn’t consider, or even mention, it also suggests that much more work and thought needs to be put into this.