Theoretical systems are useful so long as you keep track of where they depart from reality.
Consider the following exchange:
Engineer: The program is acquiring more memory than it is releasing, so it will eventually fill the memory and crash.
Computer Scientist: No it won’t, the memory is infinite.
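Neither speaker names a language, but the engineer’s scenario is easy to make concrete. Here is a minimal sketch in C (the 1 MiB block size is an arbitrary illustration): on the computer scientist’s idealised machine the loop runs forever, while on a real machine it ends in exactly the failure the engineer predicts.

    #include <stdlib.h>

    /* The engineer's complaint in miniature: every iteration acquires
     * memory that is never released. */
    int main(void) {
        for (;;) {
            void *block = malloc(1024 * 1024);  /* acquire 1 MiB */
            if (block == NULL)
                return 1;  /* allocation fails once real memory is exhausted */
            /* block is never freed, so usage grows without bound;
             * in practice the OS may kill the process before malloc
             * ever returns NULL */
        }
    }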
Do the MIRI crowd make similar errors? Sure, consider Bostrom’s response to Oracle AI. He assumes that an Oracle can only be a general intelligence coupled to a utility function that makes it want to answer questions and do nothing else.
I take your point that theorists can appear to be concerned with problems that have very little impact. On the other hand, there are some great theoretical results and concepts that can prevent us from wasting our time on futile approaches and guide us to areas where success is more likely.
I think your criticism is a little harsh. Turing machines are impossible to implement as well, but they are still a useful theoretical concept.
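That comparison can be pushed further in the spirit of the opening comment: the theoretical Turing machine assumes an unbounded tape, and any runnable version has to substitute a finite buffer. A minimal sketch in C (the alphabet, the input, and the fixed TAPE_SIZE are illustrative assumptions, not anything from the thread):

    #include <stdio.h>
    #include <string.h>

    #define TAPE_SIZE 64  /* the "infinite" tape, made finite */

    /* A one-rule Turing machine that inverts a binary string:
     * read a symbol, write its complement, move right, halt on blank. */
    int main(void) {
        char tape[TAPE_SIZE];
        memset(tape, '_', TAPE_SIZE);      /* '_' marks a blank cell */
        memcpy(tape, "1011", 4);           /* initial tape contents */

        for (int head = 0; head < TAPE_SIZE; head++) {
            if (tape[head] == '_')
                break;                     /* halt on blank */
            tape[head] = (tape[head] == '0') ? '1' : '0';  /* invert, move right */
        }
        printf("%.*s\n", TAPE_SIZE, tape); /* prints 0100 followed by blanks */
        return 0;
    }

The departure from the ideal is the fixed TAPE_SIZE: the abstraction stays useful so long as we remember the bound is there.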
I think you’re being ungenerous to Bostrom. His paper on the possibility of Oracle-type AIs is quite nuanced, and discusses many difficulties that would have to be overcome …
http://www.nickbostrom.com/papers/oracle.pdf
To be fair to Bostrom, he doesn’t go all the way down the rabbit hole of arguing that oracles aren’t any different to agentive AGIs.