Now this is really interesting! If we take this and extrapolate it the same way as we did our previous misconception, it seems like having so little complexity to work with is an important factor in causing the generality!
Predictions from this:
A species with a lower mutation rate and more selection pressure, while it would seem much better off at first glance, would have to advance much further before reaching similar amounts of generality. (makes for great scifi!)
Approaches to AI that keep things very minimal, accessible at a low level from within, and entangle the AI with every other function of the actual physical computer may be a better idea than one would otherwise expect. (which, depending on what you’d expect, might still not be much)
it seems like having so little complexity to work with is an important factor in causing the generality
Probably. My favourite example here is first-class functions in programming languages. There is talk about “currying”, “anonymous functions”, “closures”… which needlessly complicates the issue. They look like additional features that complicate the language and make people wonder why they would ever need them.
On the other hand, you can turn this reasoning on its head if you think of functions as mere mathematical objects, like integers. Then the things you can do with integers but can’t do with functions (arithmetic aside) are restrictions. Lifting those restrictions would make your programming language both simpler and more powerful.
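To make that concrete, here is a minimal sketch in Python (my choice of language, not one singled out above) of treating functions as plain values, the same way you treat integers: store them in variables, pass them as arguments, return them from other functions. The scary vocabulary falls out of that: the inner function that remembers `x` is a “closure”, and `adder(5)(2)` is currying done by hand.

```python
def add(x, y):
    return x + y

# Store a function in a variable, just like an integer.
operation = add
print(operation(1, 2))  # 3

# Pass a function as an argument, just like an integer.
def apply_twice(f, value):
    return f(f(value))

print(apply_twice(lambda n: n + 3, 10))  # 16

# Return a function from a function: the inner function
# closes over `x`, and adder(x)(y) is add(x, y) curried by hand.
def adder(x):
    def add_x(y):
        return x + y
    return add_x

add_five = adder(5)
print(add_five(2))  # 7
```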
Now there’s a catch: not all complexity lies in arbitrary quirks or restrictions. You need a minimum amount to do something useful. So I’m not sure to what extent “simplify as much as you can” generalizes. It sure is very helpful when writing programs.
There’s a catch, however: the complexity I removed here was completely destructive. Using the general formulae for the edge cases merely lifted restrictions! I’m not sure that’s always the case. You do need a minimum amount of complexity to do anything. For instance, Windows could fit in a book if Microsoft cared about that, so maybe that’s why it (mostly) doesn’t crash down in flames. On the other hand, something that really cannot fit in fewer than 10 thousand books is probably beyond our comprehension. Hopefully a seed FAI will not need more than 10 books. But we still don’t know everything about morality and intelligence.
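For what it’s worth, here is a minimal sketch in Python of the kind of thing I mean (an assumed illustration, not the specific formulae from the earlier discussion): letting the general formula cover the edge case lifts a restriction instead of adding a special case.

```python
from functools import reduce

def product_special_cased(xs):
    # Treats the empty list as a forbidden edge case: an extra restriction.
    if not xs:
        raise ValueError("empty product not allowed")
    return reduce(lambda a, b: a * b, xs)

def product_general(xs):
    # The general formula: fold with the multiplicative identity as the seed.
    # The empty product is simply 1, so no special handling is needed.
    return reduce(lambda a, b: a * b, xs, 1)

print(product_general([2, 3, 4]))  # 24
print(product_general([]))         # 1
```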
Intuitively, the complexity of the program would have to match the complexity of the problem domain. If it’s less, you get a lack of features and customizability. If it’s more, you get bloat.
What about 10 thousand cat videos? :p
But yeah, upvoted.