But in practice we have a fuzzier definition of ‘function’, along the lines of ‘predict the outcome as accurately as you can without actually affecting it’, and prediction markets run into an AI-alignment-esque issue when pursuing that goal.
Part of the issue is that “predict the outcome as accurately as you can without actually affecting it” is a causal concept, and prediction markets can’t do anything about causality because they can’t model counterfactuals, such as what would have happened had someone not been allowed to make the bet. That makes the problem fundamentally unsolvable as long as one is restricted to purely mechanical rules like those of a prediction market.
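To make the “purely mechanical rules” point concrete, here is a toy sketch (my own construction, not anything specified in this thread: the LMSR cost function, the liquidity parameter `b`, and the share counts are all assumptions) showing that a market’s rules process a self-fulfilling bet exactly the same way they process an honest forecast:

```python
import math

class LMSRMarket:
    """Toy two-outcome logarithmic market scoring rule (LMSR) market."""

    def __init__(self, b=100.0):
        self.b = b            # liquidity parameter (assumed value)
        self.q = [0.0, 0.0]   # outstanding shares: [NO, YES]

    def _cost(self, q):
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, outcome):
        exps = [math.exp(x / self.b) for x in self.q]
        return exps[outcome] / sum(exps)

    def buy(self, outcome, shares):
        """Buy `shares` of `outcome` and return the amount paid.

        The market only sees the trade itself; nothing in these rules can
        ask the counterfactual question of what the bettor will do to the
        outcome afterwards.
        """
        new_q = list(self.q)
        new_q[outcome] += shares
        paid = self._cost(new_q) - self._cost(self.q)
        self.q = new_q
        return paid


YES = 1
market = LMSRMarket(b=100.0)

# The blogger buys YES on "I will publish a blog post" while it is cheap...
paid = market.buy(YES, 200)
print(f"YES price after trade: {market.price(YES):.2f}, paid: {paid:.2f}")

# ...then simply publishes the post, so every YES share pays out $1.
payout = 200 * 1.0
print(f"guaranteed profit from controlling the outcome: {payout - paid:.2f}")
```

Nothing in the mechanism distinguishes the blogger’s trade from a genuine prediction; telling them apart would require exactly the counterfactual modelling the rules don’t have.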
If by ‘function’ you mean ‘successfully predict an outcome’, the example above is great! We started out unsure if you would publish a blog post, then you entered the market, and now we are certain of the result! Hooray!
Note that this doesn’t work in general; see my response to Dagon.