Today, thousands of people link their machine learning projects up to the internet without a thought for the supposed risk. For them, the “box” is the rest of the internet, which won’t let them do very much and locks down most types of access on security grounds. The problem these programmers usually face is not their baby getting too much power, but its being ignored and neglected. If only having too much power were a problem for most machine learning systems. The best machine learning systems are doing work in the world: answering queries, making investment decisions, and so on. That’s the situation today. Project it forwards, and see how relevant you think all this is going to be.
Also, programmers routinely use test harnesses today to keep their programs offline during testing. It is a pretty trivial and commonplace thing to do. If we have smarter programs, surely we will have smarter boxes to put them in.
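For what it’s worth, here is a minimal sketch of the sort of harness meant here, assuming a Python project tested with the standard unittest and unittest.mock modules; the class and exception names are illustrative, not from any particular library:

```python
# A sketch only: a test harness that keeps the code under test offline by
# refusing to create network sockets. Names here (NetworkDisabled,
# OfflineTestCase) are illustrative, not from an existing library.
import socket
import unittest
from unittest import mock


class NetworkDisabled(Exception):
    """Raised when the code under test tries to open a network connection."""


def _refuse_socket(*args, **kwargs):
    raise NetworkDisabled("network access is blocked inside this test harness")


class OfflineTestCase(unittest.TestCase):
    """Base class whose tests run with socket creation disabled."""

    def setUp(self):
        # Replace socket.socket for the duration of each test, so any attempt
        # to reach the network fails fast instead of silently going online.
        patcher = mock.patch("socket.socket", side_effect=_refuse_socket)
        patcher.start()
        self.addCleanup(patcher.stop)


class ExampleTest(OfflineTestCase):
    def test_program_cannot_reach_the_internet(self):
        with self.assertRaises(NetworkDisabled):
            socket.socket(socket.AF_INET, socket.SOCK_STREAM)


if __name__ == "__main__":
    unittest.main()
```

Anything the tests genuinely need from the network then has to be stubbed out explicitly, which is exactly the point of the box.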
As in curve-fit and then extrapolate, while ignoring all other information?
Using this method, how long until transistors become subatomic particles?
That doesn’t seem like a very sympathetic interpretation. It was intended more like an invitation to do your best job.
If you have a projection involving a totalitarian government clampdown on research into machine intelligence (or similar), perhaps share the scenario—so others can better visualise how realistic it is.
The smarter boxes have to come from somewhere. Paul is proposing such a solution. Assuming that someone will take care of the problem in the future seems in general to be a poor approach.
It’s not an “approach”; it’s a projection. For a long time we have been able to construct good-quality prisons with whatever level of security we desire. The jailers have the force of society behind them. More intelligence is unlikely to change the situation, since the jailers will be lifted by it just as much as the incarcerated.