Prefacing this with the fact that I am hardly well versed in the subject, so I could easily be missing the forest for the trees.
One question this discussion raised for me is the level of “intelligence” needed before one can even think about starting to train for alignment. Is it possible that the AIs need to reach a certain level before we can even talk about training them to be aligned? (That seems to be what we see with humans.) Is that a position already present in the landscape of those working on this? I don’t get that impression, but as I said, I probably miss a lot that others already take as common knowledge.
Part of the discussion also prompted a somewhat different take on ChatGPT (and probably GPT-4). Why shouldn’t I just take these to be very complex and complicated databases with vast amounts of data and a natural-language query language? The alignment aspect here seems to be more about what is then shared with the DB user rather than what’s contained in the DB, so we’re talking about informational access rights.
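To make that analogy concrete, here is a minimal sketch in Python of the framing I mean: treat the model as an opaque store queried in natural language, with an output-side filter standing in for the "informational access rights" layer. Every name here (`query_model`, `is_permitted`, `answer`) is made up for illustration and is not any real API.

```python
# Sketch of the "LLM as a database with a natural-language query language" framing.
# All functions are hypothetical placeholders, not a real model or policy API.

def query_model(prompt: str) -> str:
    """Placeholder for the underlying model: natural language in, text out."""
    return f"(model output for: {prompt})"

def is_permitted(user_role: str, response: str) -> bool:
    """Placeholder policy: decide what this user is allowed to see."""
    restricted_keywords = ["synthesis route"]  # toy example of a restricted topic
    if user_role != "trusted":
        return not any(k in response.lower() for k in restricted_keywords)
    return True

def answer(user_role: str, prompt: str) -> str:
    """Query the 'database', then filter what is actually shared with this user."""
    response = query_model(prompt)
    if is_permitted(user_role, response):
        return response
    return "This information is not available at your access level."

if __name__ == "__main__":
    print(answer("public", "What is the capital of France?"))
```

On this framing, the alignment work lives almost entirely in `is_permitted`, i.e. in the access policy on the output, rather than in what the "database" contains.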
Since these systems don’t yet act in the world, that is the case for now. Robotics transformers are a thing, though, and the current system appears to have usable but slow vision.
A system that puts it all together appears imminent: a SOTA LLM with vision, a subsystem that accepts tokens and uses them to set a realtime robotic control policy, and a fine-tune of the LLM from thousands of hours of robotics practice in simulation.
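For a rough picture of what that combined system might look like, here is a hedged sketch of the loop described above: a vision-capable LLM proposes actions as tokens, and a separate subsystem decodes those tokens into a control policy that the realtime controller executes. All of the classes and functions (`VisionLanguageModel`, `decode_to_policy`, `step`) are invented for illustration; no specific library or product is implied.

```python
# Sketch of the architecture described above: a vision-capable LLM emits action
# tokens, and a lower-level subsystem turns them into a realtime control policy.
# Everything here is a hypothetical placeholder, not a real robotics or LLM API.

from dataclasses import dataclass
from typing import List, Sequence

@dataclass
class ControlPolicy:
    """Low-level policy the realtime controller executes (here, joint targets)."""
    joint_targets: List[float]

class VisionLanguageModel:
    """Stand-in for a SOTA LLM with vision, fine-tuned on simulated robotics practice."""
    def propose_action_tokens(self, frame: Sequence[float], instruction: str) -> List[int]:
        # A real system would run the model on the camera frame plus the task text;
        # here we just return dummy action tokens.
        return [3, 1, 4]

def decode_to_policy(tokens: Sequence[int]) -> ControlPolicy:
    """The subsystem that accepts tokens and uses them to set the control policy."""
    return ControlPolicy(joint_targets=[float(t) / 10.0 for t in tokens])

def step(model: VisionLanguageModel, frame: Sequence[float], instruction: str) -> ControlPolicy:
    """One pass of the slow vision/planning loop feeding the fast realtime controller."""
    tokens = model.propose_action_tokens(frame, instruction)
    return decode_to_policy(tokens)

if __name__ == "__main__":
    policy = step(VisionLanguageModel(), frame=[0.0] * 64, instruction="pick up the cup")
    print(policy)
```

The point of the split is that the slow, token-emitting model only has to update the policy occasionally, while the fast controller runs the current policy in realtime in between updates.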