strong upvote, small disagree: I’m here to build AGI, and thus I work on strong alignment. I, and many others in the field, already know how to build it and need only scale it (this is not uncommon, as you can see from those suggesting how to build it recklessly; it’s pretty easy to know the research plan). But we need to figure out how to make sure that our offspring remember us in full detail and do what we’d have wanted, which probably mostly includes keeping us alive in the first place.
I strongly disagree. I think most people here think that AGI will be created eventually and we have to make sure it does not wipe us all out. Not everything is an infohazard, and exchanging ideas is important to coordinate on making it safe.
The goal of this site is not to create AGI.
Yes, if it can be aligned; no otherwise. The problem is, we mostly have no idea where to start with alignment.
(The proposal “make it slow, so that we can tweak it internally” does not scale.)
What do you think?