BTW one of the things our theory tells us is you can never build half an AI. It will jump straight from very minimal functionality to universal functionality, just as computer programming languages do. (The “jump to universality” is discussed by David Deutsch in The Beginning of Infinity). One thing this means is there is no way to know how far along we are—the jump could come at any time with one new insight.
That sounds pretty bizarre. So much for the idea of progress via better and better compression and modeling. However, it seems pretty unlikely to me that you actually know what you are talking about here.
Insulting my expertise is not an argument. (And given you know nothing about my expertise, it’s a silly one too. Concluding that people aren’t experts because you disagree with them is biased and closed-minded.)
Are you familiar with the topic? Do you want me to give you a lecture on it? Will you read about it?