This does not sound very encouraging from the perspective of AI Notkilleveryoneism. When the announcement of the foundation model task force talks about safety, I cannot find any hint that they mean existential safety. Rather, it seems to be about safety for commercial purposes.
A lot of the money might go into building a foundation model. If they are serious about existential safety, they should at least also announce that they will not share the weights or the details of how it was built.
This might create an AI safety race to the top as a solution to the tragedy of the commons.
This seems to be the opposite of that. The announcement talks a lot about establishing the UK as a world leader, e.g. “establish the UK as a world leader in foundation models”.