My view is that human development during childhood implies AI self-improvement is possible: a child bootstraps from very limited capability to adult-level competence, which suggests a system can improve itself substantially from a modest starting point. That means we should expect "threshold" problems, where capability jumps once some level is crossed rather than growing smoothly.
One concern about AI is that it could become super-persuasive, and thus able to take over institutions. We already have people who believe dumb things because "GPT-4 said it, so it must be true." We have some powerful people who are enthusiastic about giving power to AI, more interested in that than in giving power to smart humans. We have some evidence of LLMs being more persuasive to average people than humans are. And we have many historical examples of notable high-verbal-IQ people persuading their way into institutional power.
And we have recent progress that was not predicted well in advance, which indicates we are in uncharted waters.
There are also the instrumental convergence arguments.
[redacted technical thing]