My original thought was selling access to lawyers who are preparing cases. It could also be valuable to people who are trying to maneuver in complex legal environments—executives and politicians and such.
It seems to me that there should be a limited cheap or free version, but I’m not sure how that would work.
Hmmm. Okay. So the reason this is profitable is because it’s gotten SO hard to keep track of all the laws that even lawyers would be willing to pay for software that can help them check their legal ideas against the database of existing laws?
There’s probably a bit of money in distilling legalese into simpler language. Nolo Press, for instance, is in that field.
The real money in lawyering, however, is in applying the law to the available evidence in a very specific case. This is why some BigLaw firms charge hourly fees measured by the boatload. A brilliant entrepreneur able to develop an artificial intelligence application which could apply the facts to the law as effectively as a BigLaw firm should eventually be able to cut into some BigLaw action. That’s a lot of money.
This is a hard problem. My personal favorite Aesop’s fable about applying the facts to the law is Isaac Asimov’s short story “Runaround”. Worth reading all the way through, but for our purposes, the law is very clear and simple: the three laws of robotics. The fact situation is that the human master has casually and lightly ordered the robot to do something which was unexpectedly very dangerous to the robot. The robot then goes nuts, spinning around in a circle. Asimov says it better, of course:
Powell’s radio voice was tense in Donovan’s ear: “Now, look, let’s start with the three fundamental Rules of Robotics—the three rules that are built most deeply into a robot’s positronic brain.” In the darkness, his gloved fingers ticked off each point.
“We have: One, a robot may not injure a human being, or, through inaction, allow a human being to come to harm.”
“Right!”
“Two,” continued Powell, “a robot must obey the orders given it by human beings except where such orders would conflict with the First Law.”
“Right!”
“And three, a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.”
“Right! Now where are we?”
“Exactly at the explanation. The conflict between the various rules is ironed out by the different positronic potentials in the brain. We’ll say that a robot is walking into danger and knows it. The automatic potential that Rule 3 sets up turns him back. But suppose you order him to walk into that danger. In that case, Rule 2 sets up a counterpotential higher than the previous one and the robot follows orders at the risk of existence.”
“Well, I know that. What about it?”
“Let’s take Speedy’s case. Speedy is one of the latest models, extremely specialized, and as expensive as a battleship. It’s not a thing to be lightly destroyed.”
“So?”
“So Rule 3 has been strengthened - that was specifically mentioned, by the way, in the advance notices on the SPD models - so that his allergy to danger is unusually high. At the same time, when you sent him out after the selenium, you gave him his order casually and without special emphasis, so that the Rule 2 potential set-up was rather weak. Now, hold on; I’m just stating facts.”
“All right, go ahead. I think I get it.”
“You see how it works, don’t you? There’s some sort of danger centering at the selenium pool. It increases as he approaches, and at a certain distance from it the Rule 3 potential, unusually high to start with, exactly balances the Rule 2 potential, unusually low to start with.”
Donovan rose to his feet in excitement. “And it strikes an equilibrium. I see. Rule 3 drives him back and Rule 2 drives him forward - ”
“So he follows a circle around the selenium pool, staying on the locus of all points of potential equilibrium. And unless we do something about it, he’ll stay on that circle forever, giving us the good old runaround.”
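To make the mechanism Powell describes a little more concrete, here is a minimal sketch in Python. The inverse-distance shape of the Rule 3 potential, the constant Rule 2 potential, and all the numbers are my own assumptions for illustration; Asimov gives no actual math.

```python
# Illustrative model only (my assumptions, not Asimov's math): Rule 2's pull
# toward the selenium pool is a weak constant, while Rule 3's push away grows
# as Speedy gets closer to the danger. He settles where the two balance.

RULE2_POTENTIAL = 1.0    # order given "casually and without special emphasis"
RULE3_STRENGTH = 30.0    # strengthened self-preservation on the SPD models

def rule3_potential(distance: float) -> float:
    """Assumed inverse-distance danger potential: stronger when closer."""
    return RULE3_STRENGTH / max(distance, 1e-9)

def equilibrium_radius(step: float = 0.01, max_radius: float = 1_000.0) -> float:
    """Walk outward from the pool until the obedience and self-preservation
    potentials balance; Speedy circles at roughly this radius."""
    r = step
    while r < max_radius and rule3_potential(r) > RULE2_POTENTIAL:
        r += step
    return r

print(f"Speedy circles at roughly r = {equilibrium_radius():.1f}")
# Raising RULE2_POTENTIAL (a firmer order) shrinks the circle; raising
# RULE3_STRENGTH widens it - the trade-off Powell is describing.
```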
In the real world, courts hardly ever decide that the law is indecipherable, and so the plaintiff should run around in a circle singing nonsense songs (but see Ashford v Thornton, (1818) 106 ER 149). The moral of the story, however, is that there is ambiguity in the application of even the simplest and clearest of laws.
And so the whole human race spins in circles. Yes, I see. (: So, do you propose that this software would also resolve ambiguity? Do you see a way around that other than specifying exactly what to do in every situation? BTW, I rewrote the intro on the OP—any suggestions?
Now that I think about it, a program which can do a good job of finding laws which are relevant to a case, and/or ranking laws by relevance, would probably be valuable—even if it’s not as good as the best lawyers.
It might also be a good way of making money.
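For what it’s worth, here is a minimal sketch of how such a relevance ranker might start out, assuming plain TF-IDF plus cosine similarity over statute text via scikit-learn. The statute snippets and the case description are invented for illustration; a real tool would need case law, citations, and far better retrieval.

```python
# Hypothetical sketch: rank statutes by textual relevance to a case description.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

statutes = {
    "Statute A": "It is unlawful to operate a vehicle while intoxicated ...",
    "Statute B": "A contract requires offer, acceptance, and consideration ...",
    "Statute C": "Whoever knowingly makes a false statement to investigators ...",
}

case_facts = "Defendant signed an agreement but claims there was no acceptance."

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(statutes.values()) + [case_facts])

# Last row is the case description; compare it against every statute.
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for name, score in sorted(zip(statutes, scores), key=lambda x: -x[1]):
    print(f"{score:.3f}  {name}")
```

Swapping TF-IDF for a better retrieval model would be the obvious next step, but the ranking-by-relevance framing stays the same.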
So that we can see your vision, please describe how this would work?