Medical student here. I’m actually convinced we could build an AGI right now by wiring up multiple LLMs as LangChain agents with memory and a few tools, and even making the system multimodal and embodied.
Just have each agent impersonate one of the basal ganglia nuclei, plus a few other structures.
This would let us throttle its thinking speed, making it alignable because you can tweak it internally.
There are lots of other benefits, but I’m on mobile. Get in touch if interested!
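For concreteness, here is a minimal sketch of what this proposal might look like, assuming a shared text scratchpad stands in for "memory" and a fixed per-step delay stands in for the "throttle". Everything here (NUCLEI_PROMPTS, call_llm, run_loop, the role prompts themselves) is a hypothetical placeholder, not a working cognitive architecture, and call_llm stands in for whatever chat-model wrapper (e.g. a LangChain chat model) you would actually use.

```python
import time
from dataclasses import dataclass

# Hypothetical role prompts: one per "nucleus". The mapping to real basal
# ganglia function is loose and only meant to illustrate the idea.
NUCLEI_PROMPTS = {
    "striatum": "You select among candidate actions proposed by the other modules.",
    "globus_pallidus": "You act as a gate: inhibit actions by default unless released.",
    "subthalamic_nucleus": "You raise objections and push for pausing when uncertain.",
    "substantia_nigra": "You assign a reward/salience score to the chosen action.",
}


def call_llm(system_prompt: str, user_message: str) -> str:
    # Placeholder: in a real build this would be a chat-model call
    # (for example via a LangChain chat model); here it just echoes.
    return f"(stub response for role: {system_prompt[:40]}...)"


@dataclass
class Nucleus:
    name: str
    system_prompt: str

    def step(self, observation: str) -> str:
        return call_llm(self.system_prompt, observation)


def run_loop(task: str, steps: int = 5, seconds_per_step: float = 2.0) -> str:
    nuclei = [Nucleus(name, prompt) for name, prompt in NUCLEI_PROMPTS.items()]
    scratchpad = task  # shared "memory" visible to every module
    for _ in range(steps):
        outputs = {n.name: n.step(scratchpad) for n in nuclei}
        scratchpad += "\n" + "\n".join(f"[{k}] {v}" for k, v in outputs.items())
        # The "throttle": thinking speed is capped by a wall-clock delay,
        # giving an operator time to inspect or edit the scratchpad between steps.
        time.sleep(seconds_per_step)
    return scratchpad
```

The point of the sketch is only that the per-step delay and the human-readable scratchpad are the two places where the "tweak it internally" claim would have to cash out.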
The goal of this site is not to create AGI.
strong upvote, small disagree: I’m here to build AGI, and thus I work on strong alignment. I, and many others in the field, already know how to build it and need only scale it (this is not uncommon, as you can see from those suggesting how to build it recklessly; it’s pretty easy to know the research plan). But we need to figure out how to make sure that our offspring remember us in full detail and do what we’d have wanted, which probably mostly includes keeping us alive in the first place.
Yes if it can be aligned, no otherwise. The problem is, we mostly have no idea where to start with alignment.
(The proposal “make it slow, so that we can tweak it internally” does not scale.)
I strongly disagree. I think most people here think that AGI will be created eventually and we have to make sure it does not wipe us all out. Not everything is an infohazard, and exchanging ideas is important for coordinating on making it safe.
What do you think?