Normal Artificial Neural Networks are Turing complete with a certain number of hidden layers (I think 4, but it has been a long time and I don’t know the reference offhand; this says 1 for universal approximation (paywalled)). A bit of googling says that recurrent neural networks are Turing complete.
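For what it’s worth, here is a rough sketch (my own illustration, not from the linked paper) of the one-hidden-layer universal-approximation idea: a single tanh hidden layer fit to sin(x) by plain gradient descent on squared error. The hidden width, learning rate, and step count are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(X)

H = 32                                   # hidden units (arbitrary)
W1 = rng.normal(0, 1.0, (1, H))
b1 = np.zeros(H)
W2 = rng.normal(0, 0.1, (H, 1))
b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    # forward pass: one hidden layer, tanh nonlinearity, linear output
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y

    # backward pass for mean squared error
    d_pred = 2 * err / len(X)
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0)
    dh = d_pred @ W2.T
    dz = dh * (1 - h ** 2)               # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dz
    db1 = dz.sum(axis=0)

    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final MSE:", float(np.mean(err ** 2)))  # shrinks as the fit improves
```

Widening the hidden layer (larger H) gives the network more bumps to work with, which is the intuition behind the approximation result; none of this says anything about how hard the fit is to find.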
Feed-forward neural networks can represent any computable function between their inputs and their outputs. They are not Turing complete with respect to the mapping from past inputs to outputs, the way AIXI is.
Note that this doesn’t say anything about the training data needed to get the network to represent the function, or about how big the network would need to be; it only says the representation is possible.
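To make the stateless/stateful distinction concrete, here is a minimal sketch (mine, not the commenter’s): the running parity of a bit stream is a function of the whole input history, so no fixed feed-forward map of the current bit alone can compute it, while a recurrent update that carries state across steps can.

```python
def feedforward(x):
    # Any fixed function of the current input alone; here, identity.
    return x

def recurrent(state, x):
    # New state depends on the old state *and* the current input (XOR = running parity).
    return state ^ x

bits = [1, 0, 1, 1, 0, 1]

state = 0
for x in bits:
    ff_out = feedforward(x)      # only ever sees the current bit
    state = recurrent(state, x)  # summarizes the whole history so far

print(ff_out)  # 1 -> just echoes the last bit
print(state)   # 0 -> parity of all six bits (four 1s, so even)
```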
Couldn’t “not” negatively reinforce a hidden node layer between the input and the output?
I’d like to hear what an expert like Phil has to say on this topic.