Do you see a potential conceptual distinction between my idea and classic paperclip maximization?
No. Not without a lot more work, because markets, evolution, gradient descent, Bayesian inference, and logical inference/prediction markets all have various isomorphisms and formal identities, which can make their ‘differences’ more a matter of nominalist preference, notation, and emphasis than necessarily any genuine conceptual distinction. You can define AIs which are quite explicitly architected as ‘markets’ of various sorts, like the ‘Hayek machine’ or the ‘neural bucket brigade’, or interpret them as natural selection if you prefer on agents with log utility (evolutionary finance), and so on; are those “markets”, which can trade paperclips? Sure, why not.
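To make the ‘formal identities’ point concrete, here is a minimal sketch (my own illustration, not anything from this exchange) of one standard isomorphism: a parimutuel market of log-utility (Kelly) bettors performs exact Bayesian updating, because paying each agent in proportion to the probability it assigned to the realized outcome is the same multiplicative update as Bayes' rule. Reading wealth shares as population shares turns the same loop into replicator dynamics, i.e. natural selection on log-utility agents, as in evolutionary finance.

```python
import numpy as np

# A "market" of agents, each committed to one hypothesis about a biased coin.
# Each round, every agent stakes its whole bankroll on its predicted outcome
# probabilities (Kelly / log-utility betting) and is paid in proportion to
# the probability it assigned to what actually happened.
# Claim illustrated: wealth shares evolve exactly like Bayesian posteriors.

rng = np.random.default_rng(0)

hypotheses = np.array([0.2, 0.5, 0.8])               # each agent's P(heads)
wealth = np.ones(len(hypotheses)) / len(hypotheses)  # uniform "prior" as wealth shares
posterior = wealth.copy()                            # explicit Bayesian posterior

true_p = 0.8
for _ in range(100):
    heads = rng.random() < true_p
    likelihood = hypotheses if heads else (1 - hypotheses)
    # Market settlement: payout proportional to probability placed on the outcome.
    wealth = wealth * likelihood
    wealth /= wealth.sum()
    # Bayes' rule: prior times likelihood, renormalized -- the identical update.
    posterior = posterior * likelihood
    posterior /= posterior.sum()

assert np.allclose(wealth, posterior)   # the two processes never diverge
print(wealth.round(3))                  # wealth concentrates on the 0.8 agent
```

The settlement rule here just is Bayes' rule, which is why asking whether such a system is ‘really’ a market, an inference engine, or a population under selection is largely a matter of notation and emphasis.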
Thank you for taking the time to answer!

I see that I need a full post to explain myself properly. On the other hand, I worry about posting too soon (maybe it's better to discuss things beforehand?). For now I've decided to post this comment. I know it isn't formal, but I wanted to show what kind of thinking I have in mind for the AI. And sorry in advance for an annoying semantic nitpick.
> Not without a lot more work, because markets, evolution, gradient descent, Bayesian inference, and logical inference/prediction markets all have various isomorphisms and formal identities, which can make their ‘differences’ more a matter of nominalist preference, notation, and emphasis than necessarily any genuine conceptual distinction.
I think we can use two metrics to compare these ideas:
1. Does this idea describe what the AI tries to achieve?
2. Does this idea describe how the AI thinks internally?
My idea is 80% about (1) and 20% about (2). Gradient descent is 100% about (2), and so are evolution, Bayesian inference, and prediction markets.
Because of this, I feel there's only a 20% chance these ideas are equivalent, or only about a 20% overlap between them.
So I feel these ideas are different enough: “an AI that works like a market” versus “an AI that seeks markets in the world and analyzes their properties”.
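To restate that distinction in code terms, here is a purely hypothetical sketch (every class, method, and field name is invented for illustration) contrasting a system whose internal mechanism is a market with a system whose objective refers to markets out in the world, its internal mechanism left unspecified.

```python
from dataclasses import dataclass

# Reading (2): "an AI that works like a market".
# The market is the internal mechanism: sub-agents bid for control.
@dataclass
class Expert:
    name: str
    wealth: float
    def bid(self, signal: float) -> float:
        return self.wealth * signal  # toy confidence-weighted bid

class MarketStructuredAI:
    def __init__(self) -> None:
        self.experts = [Expert("cautious", 1.0), Expert("bold", 2.0)]
    def act(self, signal: float) -> str:
        winner = max(self.experts, key=lambda e: e.bid(signal))
        return f"action chosen by internal expert {winner.name!r}"

# Reading (1): "an AI that seeks markets in the world".
# Markets are the subject of the objective; how the AI thinks internally
# (gradient descent, search, whatever) is deliberately left open.
class MarketSeekingAI:
    def act(self, world: dict) -> str:
        markets = {name for name, props in world.items()
                   if props.get("is_market")}
        return f"found and analyzed external markets: {sorted(markets)}"

print(MarketStructuredAI().act(0.7))
print(MarketSeekingAI().act({
    "stock exchange": {"is_market": True},
    "forest": {"is_market": False},
}))
```

This only formalizes the wording of the distinction; it doesn't settle whether the two descriptions can be made formally equivalent, which is exactly the point at issue in the parent comment.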