Was there anything you didn’t understand this week?
I don’t really understand why AGI is so different from currently existing software. Current software seems docile—we worry more about getting it to do anything in the first place, and less about it accidentally doing totally unrelated things. Yet AGI seems to be the exact opposite. It seems we think of AGI as being ‘like humans, only more so’ rather than ‘like software, only more so’. Indeed, in many cases it seems that knowing about conventional software actually inhibits one’s ability to think about AGI. But I don’t really understand why this should be the case.