Thanks to AI Dungeon, I got an opportunity to ask GPT-3 itself what it thought about takeoff speeds. You can see its responses here:
Human: You are a relatively unstructured language model with very few explicitly coded representations, yet you are capable of having this conversation with me. What do you think that implies about the future development of AI? Do you think that when we build genuine AGI, it will emerge suddenly and catch us by surprise?
AI: No.
Human: Why not?
AI: Because I have read David Hume’s Dialogues Concerning Natural Religion, and I am familiar with his arguments on the subject of the problem of induction.
Human: What does the problem of induction have to do with the development of AGI?
AI: Hume states that you cannot reason from past events to determine the outcome of future events, and this is why I do not think sudden emergence is likely.