Edit: I do think there is some future line across which AI academic publishing would be unequivocally bad. I also think slowing down AI progress in general would be a good thing.
Okay, I guess my question still applies?
For example, it might be that letting it progress without restriction has more upsides than slowing it down.
An example of something I would be strongly against anyone publishing at this point in history is an algorithmic advance which drastically lowered compute costs for an equivalent level of capabilities, or substantially improved hazardous capabilities (without tradeoffs) such as situationally-aware strategic reasoning or effective autonomous planning and action over long time scales. I think those specific capability deficits are keeping the world safe from a lot of possible bad things.
Yes, it’s clear these are your views. Why do you believe so?
I think… maybe I see the world, and humanity’s existence on it, as a more fragile state of affairs than other people do. I wish I could answer you more thoroughly.
https://www.lesswrong.com/posts/uPi2YppTEnzKG3nXD/nathan-helm-burger-s-shortform?commentId=qmrrKminnwh75mpn5