Some more notes plus additions to some of your comments (quoted):
I got this a bit differently: that it’s lagging not because of lagging hardware but because robotics is hard.
He mentioned partial reprogramming as a strategy for life extension. Here is a link to what this seems to be about: https://pubmed.ncbi.nlm.nih.gov/31475896/#:~:text=Alternatively%2C%20partial%20cell%20reprogramming%20converts,cocktails%20of%20specific%20differentiation%20factors.
In the context of Codex: don’t go into programming of 20-line programs; that is a solved problem. He expects significant progress within the next year.
Regarding “AGI will (likely) not be a pure language model, but language might be the interface”: he didn’t specifically say that it would be the likely interface, but he talked quite a bit about the power of language as an interface.
EA would benefit from a startup culture where things get built: more doing instead of thinking and strategizing.
Consciousness is an underexplored area. He had an interesting example: if you build a powerful AI without explicitly building in a self, and then talk with it about consciousness, does it say, “Yeah, that’s what it’s like for me too”?
Regarding “Behavioral cloning probably much safer than evolving a bunch of agents. We can tell GPT to be empathic”: he specifically mentioned the competition between agents as a risk factor.
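For readers unfamiliar with the term: behavioral cloning just means supervised learning on expert demonstrations, rather than letting agents evolve or compete under a reward signal. A toy sketch (my own illustration, with made-up states and actions, not anything from the Q&A):

```python
from collections import Counter, defaultdict

def fit_tabular_policy(demos):
    """Behavioral cloning in its simplest form: for each state,
    imitate the action the expert took most often in the demos.
    demos: list of (state, action) pairs from an expert."""
    counts = defaultdict(Counter)
    for state, action in demos:
        counts[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

# Hypothetical expert demonstrations (one noisy/mistaken action included).
demos = [("red_light", "stop"), ("red_light", "stop"),
         ("green_light", "go"), ("red_light", "go")]
policy = fit_tabular_policy(demos)
# The policy copies the expert's majority behavior per state.
```

The safety intuition in the note is that such a policy only reproduces demonstrated behavior; there is no open-ended optimization pressure of the kind that competing or evolving agents would be under.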
On the question of what questions he’d like to be asked: how to find out what to do with your life. Though I’m very unsure about his exact words and what he meant by it.
He expects AGI to lead to a concentration of power, and he seemed worried about it.
“I’m an ambitious 19-year-old, what should I do?”
He said that he gets asked this question often and seemed to google for his standard reply, which was quickly posted in the chat:
https://blog.samaltman.com/advice-for-ambitious-19-year-olds
ADDED: On GPT-4, he was asked about the size of the context window. He said that he thinks a window as big as a whole article should be possible. He didn’t say “article” specifically, but I remember something of that size.
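For a rough sense of what “article-sized” means in tokens (my own back-of-envelope illustration, not from the Q&A): GPT-3’s context window at the time was 2,048 tokens, and a common rule of thumb is roughly 0.75 English words per token; the article lengths below are assumptions.

```python
def words_to_tokens(n_words, words_per_token=0.75):
    """Rough token estimate for English text, using the common
    ~0.75 words-per-token rule of thumb (an approximation)."""
    return round(n_words / words_per_token)

GPT3_WINDOW = 2048  # GPT-3's context window in tokens

short_article = words_to_tokens(1500)  # a short article barely fits
long_article = words_to_tokens(4000)   # a long article clearly would not
```

So an article-sized window is on the order of a few thousand tokens, i.e. somewhat beyond what GPT-3 offered.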
When I say “robot hardware” I don’t mean compute hardware. He mentioned, for example, how human dexterity is far ahead of robots. The “robotics is hard” bit is partly captured in “easier to iterate with bits”.