When you say you don’t think you could have written it, do you mean that you couldn’t have written it without all the things you’ve learned from talking to Yudkowsky, or that you couldn’t have written it even now? Most of this list consists of things I’ve seen Yudkowsky write before, so if it’s the latter, that surprises me.
My thoughts after the first cycle: I found listing my bugs, Yoda Timers, and TAPs very helpful. The later days were interesting to read, but they did not help me solve many of my bugs (time calibration is a very useful skill, but I was already pretty good at it, at least for the short projects I can get quick feedback on). I think my largest benefits from reading this came from setting aside time to fix various things I had been procrastinating on, and from the motivation to actually start doing things I had been planning to do for a while.
This is better, thank you!
Some thoughts after reading Artificial Intelligence: A Modern Approach
The best change I made was keeping a notebook nearby while I read on my computer. This makes me more likely to take notes, which helps with 1) remembering what I read, 2) noticing when I don’t understand it, and 3) noticing that if I have nothing I want to write down about something, I may not be learning much from reading it.
I picked standing up as my sapience spell, but I’m not sure having something that happens at unpredictable times is more helpful than taking a few minutes to focus at the start of each day.
I found the idea of TAPs useful: rehearsing a habit I want to build several times in a row when I first start it was something I hadn’t thought of doing before, and it seems very helpful.
I ordered a new clock on Amazon, cleared my desk, sent several emails, and cancelled some email subscriptions I no longer wanted. I also decided on specific later times to do a few things, but I will have to wait and see whether I actually do them.
I just found this sequence, and am going to try going through it. I came up with about 90 bugs. I was surprised by how many easy-to-fix problems I had been procrastinating on; the motivation to just solve all of those now was an immediate benefit of reading this. I haven’t been able to think of a strange bug-fix story, but one general thing that has helped me is keeping things I might need close by: for example, having multiple pencils and a tissue box on my desk rather than in a different room.
Why does burning all GPUs succeed at preventing unaligned AGI, rather than just delaying it? It seems like you would need to do something more like burning all GPUs now, burning any that get created in the future, monitoring for any other forms of hardware powerful enough to run AGI and for any algorithmic progress that allows creating AGI with weaker hardware, and then destroying that other hardware too. Maybe this is what you meant by “burn all GPUs”, but that seems harder to make an AI do safely than a one-time action, because you would need to allow the AI to defend itself indefinitely against people who don’t want it to keep destroying GPUs.