In my reading it means that there are already actual implementations of all the probability-inference operations that the authors consider in the book.
That was probably already true in the 1960s. It does not mean that the robot as a whole is feasible in terms of computational resources.
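To make that concrete, here is a minimal toy sketch of my own (nothing from the book; the names `marginal`, `conditional` and the uniform example joint are all mine). The sum and product rules each fit in a few lines of Python, yet applying them by brute force to an explicit joint table over n binary propositions already costs 2^n entries before any interesting reasoning has happened:

```python
from itertools import product

# Toy "robot": exact inference over an explicit joint table of n binary
# variables.  The inference rules themselves are a few lines each; the
# cost is that the table (and every marginalisation) has 2**n entries.

def marginal(joint, keep):
    """Sum rule: sum the joint over every variable not listed in `keep`."""
    out = {}
    for assignment, p in joint.items():
        key = tuple(assignment[i] for i in keep)
        out[key] = out.get(key, 0.0) + p
    return out

def conditional(joint, query, given, given_vals):
    """Product rule rearranged: P(query | given) = P(query, given) / P(given)."""
    num = marginal(joint, tuple(query) + tuple(given))
    den = marginal(joint, tuple(given))
    return {q: num.get(q + tuple(given_vals), 0.0) / den[tuple(given_vals)]
            for q in product((0, 1), repeat=len(query))}

# A uniform joint over n binary variables: already 2**16 = 65536 table
# entries for a mere 16 propositions, and it doubles with every new one.
n = 16
joint = {bits: 1.0 / 2 ** n for bits in product((0, 1), repeat=n)}
print(conditional(joint, query=(0,), given=(1,), given_vals=(1,)))
# -> {(0,): 0.5, (1,): 0.5}
```

So the rules are trivially implementable; the blow-up is in what you have to apply them to.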
An analogy: it is not hard to implement all the (non-probabilistic) logical derivation rules, and it is straightforward to use them to enumerate all provable mathematical theorems (e.g. within ZFC). However, this does not give us a practical (i.e. efficient) general-purpose theorem prover. It yields an algorithm that eventually proves every provable theorem, but its run-time consumption makes the approach practically useless.
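For the flavour of why "eventually" is not good enough, here is another toy sketch of mine, using Hofstadter's MIU string-rewriting system as a stand-in for logical inference rules (the function names and the step budget are arbitrary choices of this example). Breadth-first closure under the rules is complete, so any derivable string does turn up eventually, but the search budget is exhausted combinatorially fast:

```python
from collections import deque

# Toy enumerator: breadth-first closure of an axiom under rewrite rules
# (Hofstadter's MIU system, standing in for logical inference rules).
# The enumeration is complete -- every derivable string shows up
# eventually -- but that guarantee alone does not make it usable.

AXIOMS = {"MI"}

def successors(s):
    """Apply every rule at every applicable position."""
    out = set()
    if s.endswith("I"):
        out.add(s + "U")                      # rule 1: xI  -> xIU
    if s.startswith("M"):
        out.add(s + s[1:])                    # rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            out.add(s[:i] + "U" + s[i + 3:])  # rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            out.add(s[:i] + s[i + 2:])        # rule 4: UU  -> (drop)
    return out

def derivable(target, max_expansions=10_000):
    """Breadth-first search; returns True once `target` is generated,
    None when the (arbitrary) budget runs out first."""
    seen, queue = set(AXIOMS), deque(AXIOMS)
    for _ in range(max_expansions):
        if not queue:
            return False
        s = queue.popleft()
        if s == target:
            return True
        for t in successors(s) - seen:
            seen.add(t)
            queue.append(t)
    return None

print(derivable("MUIIU"))  # True: derivable, found after a short search
print(derivable("MU"))     # None: budget exhausted (MU is in fact underivable)
```

Completeness guarantees the first answer; it does nothing to help with the second, and real proof systems branch far worse than this toy one.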