AI is complex. Complexity means bugs. Bugs in smart contracts are exactly what you need to avoid.
What is needed most is a way to mathematically prove the code correct.
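To illustrate what that means, here is a toy Lean 4 sketch (the transfer function and theorem are invented for illustration, not taken from any real contract): the code ships with a machine-checked proof that a transfer conserves the total balance.

    -- A transfer function plus a machine-checked proof that it conserves
    -- the total balance, given the sender can cover the amount.
    def transfer (a b amt : Nat) : Nat × Nat := (a - amt, b + amt)

    theorem transfer_conserves (a b amt : Nat) (h : amt ≤ a) :
        (transfer a b amt).1 + (transfer a b amt).2 = a + b := by
      simp only [transfer]
      omega

Doing this for every property of a real contract is the hard part; the point is that the proof fails to compile if the code has the bug.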
For certain contract types you’re going to need some way of confirming that, say, physical goods have been delivered, but you gain nothing by adding AI to the mix.
Without AI you have a switch someone has to toggle, or some other signal that someone might hack. With AI you just have some other input stream that someone might tamper with. Either way you have to accept information into the system somehow, and it may not be accurate. AI does not solve that problem; it just adds complexity, which makes mistakes more likely.
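To make that concrete, here is a minimal Python sketch (every name and stub value is invented for illustration): from the contract’s point of view, a human-toggled switch and an AI verdict are the same thing, one untrusted bit.

    def manual_oracle() -> bool:
        # A human flips a switch; an attacker who steals their key flips it instead.
        return True  # stand-in for whatever signal the human sends

    def ai_oracle(photo: bytes) -> bool:
        # An AI classifies a delivery photo; an attacker feeds it a doctored photo.
        return len(photo) > 0  # stand-in for a model's verdict

    def release_payment(delivered: bool) -> None:
        print("funds released" if delivered else "funds held")

    # Either way, the contract acts on one bit it cannot verify itself:
    release_payment(manual_oracle())
    release_payment(ai_oracle(b"claimed-delivery-photo"))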
When all you have is a hammer, everything looks like a nail; when all you have is AI theories, everything looks like a problem to throw AI at.
Security is one problem with smart contracts, but lack of applications is another. AI may make the security problem worse, but it’s needed for many potential applications of smart contracts. For example, suppose I want to pay someone to build a website for me that is standards-conforming, informative, and aesthetically pleasing. To create a smart contract where “the code is the contract” without an AI that can make human-like judgements, I’d have to mathematically define each of those adjectives, which would be impossibly difficult, or many orders of magnitude more costly than just building the website.
The solution to this would be to have each of the contracting parties provide evidence to the AI, which could include digitally signed (authenticated) data from third parties (security camera operators, shipping companies, etc.), and have the AI make judgments about them the same way a human judge would.
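The authentication step is the easy, already-solved part. A minimal sketch using Ed25519 via the pyca/cryptography library (the evidence format and party names are invented):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The shipping company signs its attestation off-chain...
    shipper_key = Ed25519PrivateKey.generate()
    evidence = b'{"shipment": "12345", "status": "delivered"}'
    signature = shipper_key.sign(evidence)

    # ...and whatever judges the dispute (human or AI) first checks that the
    # evidence really came from the shipper before weighing it as testimony.
    try:
        shipper_key.public_key().verify(signature, evidence)
        print("evidence authenticated; weigh it as testimony")
    except InvalidSignature:
        print("evidence rejected")

The hard part is the judgment on top of the verified evidence, which is where the AI would come in.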
If you’re going to rely on signed data from third parties, then you’re still trusting third parties.
In a dozen or so lines of code you could create a system that collects signed, weighted opinions from a collection of individuals or organisations, making arbitration simple to encode (does the delivery company say they delivered it, etc.).
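Something like this Python sketch (the weights, quorum, and Opinion structure are invented for illustration, and signature checking is assumed to have happened already):

    from dataclasses import dataclass

    @dataclass
    class Opinion:
        party: str       # e.g. "delivery_company", "buyer", "inspector"
        delivered: bool  # the party's (already signature-verified) claim
        weight: float    # how much the contract trusts this party

    def arbitrate(opinions: list[Opinion], quorum: float = 0.5) -> bool:
        """Release funds iff the weighted vote for 'delivered' clears the quorum."""
        total = sum(o.weight for o in opinions)
        in_favour = sum(o.weight for o in opinions if o.delivered)
        return total > 0 and in_favour / total > quorum

    opinions = [
        Opinion("delivery_company", True, 0.5),
        Opinion("buyer", False, 0.3),
        Opinion("inspector", True, 0.2),
    ]
    print(arbitrate(opinions))  # True: 0.7 of the weight says "delivered"

No AI needed, and small enough to audit.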
You’re just kicking the trust can down the road.
On the other hand, it’s unlikely we’ll see any reasonably smart AIs built from anything less than millions of lines of code (or code and data), and a flaw anywhere in them can destroy the security of the whole system.
This is not a great use for AI until we (1) actually have notable AI, and (2) have proven the code that makes it up, which is a far larger undertaking.