(Though admittedly, I lost a bet that it would lose to Lee Sedol.)
Condolences :( I often try to make money off of future knowledge, only to lose to precise timing or some other specific detail.
I wonder why I missed deep learning. Idk whether I was wrong to, actually. It obviously isn’t AGI. It still can’t do math, and so it still can’t check its own outputs. It was obvious that symbolic reasoning was important. I guess I didn’t realize the path to getting my “dreaming brain-stuff” to write proofs well would be long, spectacular, and profitable.
Hmm, given the way humans’ utility function is shattered and strewn across a bunch of different behaviors that don’t talk to each other, I wonder whether that will always happen in ML too (at least until symbolic reasoning arrives, and training happens in its presence).