I don’t know, I feel like the day that an AI can do significantly better than this will be close to the final day of human supremacy. In my experience, we’re still in a stage where the AIs can’t really form or analyze complex structured thoughts on their own—where I mean thoughts with, say, the complexity of a good essay. To generate complex structured thoughts, you have to help them a bit, and when they analyze something complex and structured, they can make out parts of it, but they don’t form a comprehensive overall model of its meaning that they can then consult at will.
I don’t have a well-thought-out theory of the further tiers of intellectual accomplishment, these are just my rough impressions. But I can imagine that GPT-4 coupled with chain-of-thought, or perhaps Claude with its enormous, book-length context size, can attain that next level of competence I’ve roughly described as autonomous reading and writing, at the level of essays and journal articles.
I see this as a reason to have one’s best formula for a friendly outcome ready now (or, if you’re into pivotal acts, your best specification of your best proposal for halting the AI race). For me, I guess that’s still June Ku’s version of CEV, filtered through as much reflective virtue as you can manage… The point being that once AIs at that “essay level” of cognition start talking to themselves or each other, you have the ingredients for a real runaway to occur, so you want to be ready to seed it with the best initial conditions you can supply.
Oh I totally agree with everything you say here, especially your first sentence. My median timeline for intelligence explosion (conditional on no significant government-enforced slowdown) is 2027.
So maybe I was misleading when I said I was unimpressed.