This section reeks of the guaranteed safe AI agendas, and I find a lot to agree with. For example, using formal methods to harden whatever box we try to put the AI in is a kind of defensive acceleration, but it doesn't work (it's too expensive) until we reach certain pre-ASI stages of development. I'm working on formal verification agents along these lines right now.