“AI will never be smarter than my dad.”
I believe that ranking the intelligence of two agents, artificial or biological, can only be done subjectively, with someone deciding what they value.
Additionally, I think there is no agreement on whether the definition of “intelligence” should include knowledge. For example, can you consider an AI “smart” if it doesn’t know anything about humans?
On the other hand, I value very highly my dad’s knowledge of my childhood and his model of my behavior built up over tens of years. Thus, I will never agree that AI is smarter than my dad; I will only agree that AI is better at certain cognitive skills while my dad is better at certain other cognitive skills, even if some of those skills only require a simple memory lookup.
Whether some relatively general AI will be better than my dad at learning a random set of cognitive tasks is a different question. If it is, I will admit that it is better on certain, or maybe all, known generality benchmarks, but only I can decide which cognitive skills I value for myself.
AI-Caused Extinction Ingredients
Below is what I see as required for AI-caused extinction to happen in the next few tens of years (roughly 2024–2050). In parentheses is my very approximate probability estimate as of 2024-07-25, assuming all previous steps have happened.
AI technologies continue to develop at approximately current speeds or faster (80%)
AI manages to reach a level where it can cause an extinction (90%)
AI that can cause an extinction does not have enough alignment mechanisms in place (90%)
AI executes an unaligned scenario (low, maybe less than 10%)
Other AIs and humans aren’t able to notice and stop the unaligned scenario in time (50-50ish)
Once the scenario is executed, humanity is never able to roll it back (50-50ish)
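Since each estimate above is conditional on all previous steps having happened, the overall probability is just the product of the individual ones. A minimal sketch, assuming midpoint values for the hedged ranges (“low, maybe less than 10%” taken as 0.10, “50-50ish” as 0.50):

```python
# Each probability is conditional on all previous steps occurring,
# so the compound probability is the product of the chain.
# Values are my reading of the estimates in the list above.
steps = {
    "AI development continues at current speed or faster": 0.80,
    "AI reaches an extinction-capable level": 0.90,
    "not enough alignment mechanisms in place": 0.90,
    "AI executes an unaligned scenario": 0.10,  # "low, maybe less than 10%"
    "other AIs and humans fail to stop it in time": 0.50,  # "50-50ish"
    "humanity is never able to roll it back": 0.50,  # "50-50ish"
}

p_total = 1.0
for step, p in steps.items():
    p_total *= p

print(f"Compound probability: {p_total:.4f}")  # 0.0162, i.e. about 1.6%
```

Under these assumed values the chain works out to roughly a 1.6% compound probability, which is why even several high individual estimates can still multiply out to a small overall number.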