The brain is likely a computer. Sufficiently advanced neuroscience will likely resolve most of our confusion about what our values are, what makes us happy, and so on. You want reward in the present moment without incurring regret (negative reward) in future moments due to actions taken in the present moment. Maximising total reward integrated across time is a mathematically well-defined problem once you mathematically define an environment and a reward function over brain states.
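To make "well-defined" concrete, here is a minimal sketch (assuming discrete time steps and a discount factor $\gamma$ to keep the sum finite; the notation is mine, not from any particular source):

$$\max_{\pi}\; \mathbb{E}\left[\sum_{t=0}^{\infty} \gamma^{t}\, r(s_t)\right], \qquad s_{t+1} \sim P(\cdot \mid s_t, a_t), \quad a_t \sim \pi(\cdot \mid s_t)$$

where $s_t$ is the brain-plus-environment state at time $t$, $P$ is the environment dynamics, $\pi$ is the policy being optimised, and $r$ is the reward as a function of brain state.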
Thanks for the links. I might go through them when I find time.
Even if the papers prove that there are similarities, I don't see how this proves anything about evolution versus within-lifetime learning.
> But there's decent evidence that there's not much more initialization than that, and that that huge fraction of the brain has to slowly pick up knowledge within the human lifetime before it starts being useful, e.g. https://pmc.ncbi.nlm.nih.gov/articles/PMC9957955/
This seems like your strongest argument. I will have to study more to understand this.
> our DNA has on the order of a megabyte to spend on the brain
That’s it? Really? That is new information for me.
Tbh your arguments might end up being persuasive to me. So thank you for writing them.
The problem is that building a background in neuroscience, to the point where I'm confident I'm not being fooled, will take time. And I'm interested in neuroscience, but not interested enough to study it just for AI safety reasons. If you have a post that covers this argument well (around initialisation not storing a lot of information), that would be nice. (But not necessary of course, that's up to you.)
Yes, your paraphrase is not bad. I think we can assume things outside of Earth don't need to be simulated; it would be surprising to me if events outside of Earth made the difference between evolution producing Homo sapiens versus some other, less intelligent species. (Maybe apart from a few basic things, like the temperature of the Earth shifting slowly.) For the most part the Earth is causally isolated from the rest of the universe.
Which parts of the Earth we can safely omit from the simulation is a harder question, as there are more causal interactions going on. I can make some guesses about parts of the Earth's environment that the simulation could ignore, but they'll be guesses only.

> An idea I've seen floating around here is that natural selection built our brain randomly with a reward function that valued producing offspring, so there is a lot of architecture that is irrelevant to intelligence
Yes, gradient descent is likely a faster search algorithm, but IMO you're still using it to search the big search space that evolution searched through, not the smaller one a human brain searches through after being born.
> See e.g. papers finding that you can use a linear function to translate some concepts between brain scans and internal layers of an LLM, or the extremely close correspondence between ConvNet features and neurons in the visual cortex.
I would love links to these if you have time.
But also, let's say it's true that there is similarity in the internal structure of the end results: the adult human brain and the trained LLM. The adult human brain was produced by evolution plus learning after birth. The trained LLM was produced by gradient descent. This does not tell me that evolution doesn't matter and that learning after birth is what matters.
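To make sure I'm even parsing the linear-mapping claim in the quote above correctly, here's a rough sketch of what I picture "use a linear function to translate" to mean in practice. This is a toy illustration with randomly generated placeholder data and made-up dimensions, not anything from the actual papers:

```python
# Toy sketch: can a linear map translate LLM hidden activations into
# brain-response features for the same stimuli? All data below is
# random placeholder data, purely for illustration.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_stimuli, llm_dim, brain_dim = 500, 768, 200
llm_acts = rng.normal(size=(n_stimuli, llm_dim))          # LLM hidden states per stimulus
brain_resp = (llm_acts @ rng.normal(size=(llm_dim, brain_dim))) * 0.1 \
             + rng.normal(size=(n_stimuli, brain_dim))    # simulated brain features

X_train, X_test, y_train, y_test = train_test_split(
    llm_acts, brain_resp, test_size=0.2, random_state=0)

# Fit a single ridge-regularised linear map from LLM space to brain space.
model = Ridge(alpha=10.0).fit(X_train, y_train)
pred = model.predict(X_test)

# Score: mean correlation between predicted and held-out brain features.
corrs = [np.corrcoef(pred[:, i], y_test[:, i])[0, 1] for i in range(brain_dim)]
print(f"mean held-out correlation: {np.mean(corrs):.3f}")
```

If I understand right, the interesting empirical result would be that a simple linear map like this gets high held-out correlation on real stimulus-matched brain and LLM data, and that is what gets cited as evidence of similar internal structure.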
> But most of the human brain (the neocortex) already learns its 'weights' from experience over a human lifetime, in a way that's not all that different from self-supervised learning if you squint.

The difference is that the weights are not initialised with random values at birth (or at the embryo stage, to be more precise).
> They only apply in a weaker sense where you are aware you're working with analogy, and should hopefully be tracking some more detailed model behind the scenes.
What do you mean by weaker sense? I say irrelevant and you say weaker sense, so we're not yet in agreement, then. How much predictive power does this analogy have, in your personal view?
I think it depends on some factors actually.
For instance, if we don't get AGI by 2030 but lots of people still believe it could happen by 2040, we as a species might be better equipped to form good beliefs about it, figure out who to defer to, etc.
I already think this has happened, btw. AI beliefs in 2024 are saner on average than beliefs in, say, 2010, IMO.
P.S. I'm not talking about what you personally should do with your time and energy; maybe there are other projects that appeal to you more. But I think it is worthwhile for someone to be doing the thing I'm asking for. It won't take much effort.
If all the positions were collated in one place, it would also be easy to get some statistics on them a few years from now.
+1
On LessWrong, everyone and their mother has an opinion on AI timelines. People just stating their views without any arguments doesn't add a lot of value to the conversation. It would be good if there was a single (monthly? quarterly?) thread that collated all the opinions stated without supporting arguments, and if, outside of this thread, only posts with some argumentation were allowed.
P.S. Here’s my post
P.P.S. Sorry for the wrong link; it's fixed now.
Sorry if this is rude, but your comment doesn't engage with any of the arguments in the post or make arguments in favour of your own position. If you're just stating your view without proof, then sure, that works.
I'm selling $1000 of Tier 5 OpenAI credits at a discount. DM me if interested.
You can video call me and all my friends to reduce the probability that I end up scamming you. Or, vice versa, I can video call your friends. We can do the transaction in tranches if we still can't establish trust.