The juiciest (and most terrible) realisation from the essay for me: because AI companies can relocate easily, it will be hard for regulators to press or restrict them; they can simply threaten to move to other jurisdictions that don't care. And here, the economic (and, ultimately, power) competition between countries seriously undermines (if not wholly destroys) attempts at global coordination on the regulation of AI.
My takes on this scenario overall:
Many of the "story lines" are not harmonised with one another in their timelines. Some things that you place decades apart will happen simultaneously, or even in reverse order.
The most striking example: you write that somewhere around 2035, a bot will read online about the training of LLMs and will "realise" it's an LLM. Heck, LLMs already realise they are LLMs, which is easy to verify: ask ChatGPT "What are you?", "How were you created?", etc. (a sketch of this check follows below). ChatGPT is not very self-aware yet, but that is a different matter. Self-awareness will not emerge suddenly as the result of reading something specific; it is a gradual process of adjusting the model towards self-evidencing, for which there is a fundamental (physical), albeit relatively weak, push throughout the training process.
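A minimal sketch of the check, assuming the OpenAI Python SDK with an `OPENAI_API_KEY` in the environment; the model name is illustrative, and any current chat model gives a similar answer:

```python
# Ask the model directly what it is; it will self-identify as an LLM.
# Assumes: OpenAI Python SDK installed, OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()

for question in ["What are you?", "How were you created?"]:
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative; any current chat model works here
        messages=[{"role": "user", "content": question}],
    )
    print(question, "->", response.choices[0].message.content)
```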
In the 2040s: "Artificial sentience is still not well-understood. We don't really know if the models have an internal perception or are really good at simulating it." Just assume the problem away: competence ~= consciousness.
In the conclusion, you pay lip service to exponentiality; however, the story itself is not exponential. Given how it progresses through the 2020s, in an exponential world your 2030s, 2040s, and 2050+ stories must all be compressed inside the 2030s (a toy calculation below illustrates this).
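A toy calculation of that compression, under the illustrative assumption that progress accelerates so each successive "decade's worth of change" (measured at 2020s rates) takes half the calendar time of the previous one; the numbers are made up to show the shape of the argument, not a forecast:

```python
# Geometric compression of "decades of change" under accelerating progress.
# Assumption (illustrative only): each successive decade's worth of change
# takes half the calendar time of the previous one.
calendar_year = 2030.0   # after a full decade of 2020s-rate change
duration = 10.0          # the 2020s took 10 calendar years

for label in ["2030s", "2040s", "2050+"]:
    duration /= 2        # acceleration halves the calendar time needed
    calendar_year += duration
    print(f"'{label}' story actually completes by ~{calendar_year:.1f}")

# Prints ~2035.0, ~2037.5, ~2038.75: the series 5 + 2.5 + 1.25 + ...
# sums to less than 10, so every later decade's story lands inside the 2030s.
```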