I’ll go on record as disagreeing with Vinge(?) here; a robot cleaning a bachelor’s bathroom can very plausibly be done with lizard-level general intelligence and is not necessarily a sign of FOOM. On the other hand, most humans are not paid to clean bachelor’s bathrooms most of the time, so I also don’t think it would necessarily lead to a mass unemployment crisis.
Well, if it has the ability to clean a bathroom, similar systems could cook, clean, drive, construct, do pretty much any routine task, and that sounds like a lot of jobs to me. Now, could a lizard-level intelligence clean a randomly chosen bathroom? Said robot would need a lot of common-sense notions of how to treat objects, very good visual perception, proprioception, and object classification, even the ability to use tools. That sounds closer to higher-mammal intelligence to me. As I haven’t spent my life studying AI, I’m perfectly willing to replace my opinion on this with your own, but I’m having trouble seeing how cleaning a randomly chosen bathroom is a lizard-level task.
Hm. On reflection, my previous sentence is incorrect; considering the number of jobs that could potentially be replaced by ‘clean a bachelor pad’-level intelligence, we would be looking at a potential disemployment shock that would be considered large in the US. Not a complete disemployment shock, but it would probably qualify as ‘mass unemployment’ if reemployment failed.
If a generally lizard-level intelligence were hooked up to a petabyte database of special cases, scraped by slightly smarter algorithms from security footage of previous bathroom cleanings, it could do it. This isn’t how an AI theorist would attempt the problem, but it is more or less how Google Translate works, and quite possibly how the first bachelor-bathroom-cleaning robot will work. Such an AI would be nowhere near capable of self-improvement.
I don’t see why a lizard-level intelligence would necessarily be unable to self-improve.
In this case… because all its domain expertise is in getting the dirt off the tiles, and it would not recognize code or hardware if accompanied by explanatory placards.
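For concreteness, here is a minimal sketch of that lookup-table flavour of robot (all names and numbers are invented for illustration): its whole ‘mind’ is nearest-neighbour retrieval over recorded situation–action pairs, so its competence lives entirely in the database, and nothing in it represents code or hardware at all.

```python
# Hypothetical sketch: the "petabyte of special cases" cleaner is just
# nearest-neighbour retrieval over recorded (situation, action) pairs.
# Nothing here models code or hardware, so self-improvement is simply
# outside its domain.
import math

# Each case pairs a scene-feature vector (say: dirt level, clutter level,
# mirror-smudge level) with the action that worked in past footage.
CASE_DATABASE = [
    ((0.9, 0.1, 0.0), "scrub_tiles"),
    ((0.2, 0.8, 0.0), "pick_up_clutter"),
    ((0.1, 0.1, 0.9), "wipe_mirror"),
]

def choose_action(scene):
    """Return the action from the closest recorded situation."""
    features, action = min(CASE_DATABASE,
                           key=lambda case: math.dist(case[0], scene))
    return action

# A freshly perceived scene: dirty tiles, a little clutter.
print(choose_action((0.85, 0.2, 0.05)))  # -> scrub_tiles
```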
Also: A priori and in advance of learning the true outcome, I’m betting most would have thought that highway and city driving was a more difficult application for AI than cleaning a bachelor pad.
I realize this doesn’t exactly contradict you, but even if true (and it probably is/was), I think those “most” would not in fact be thinking of difficulty but of how well you need to solve the problem. That is, a bathroom-cleaning robot that misplaces the shampoo five percent of the time might be considered a “solved problem”, but a self-driving car that “misplaces” the car even one percent of the time would sound very scary. I think it’s this difference in “acceptance criteria”, rather than relative difficulty, that makes people misrank the tasks.
Really? I think of roads and highways as simple prepared environments, on which even the unexpected can be handled with relatively few actions—swerve, stop. A bathroom can be messy in a ridiculous variety of ways.
I think that’s because driving has to be done perfectly or there are dire consequences, which might mask the fact that it isn’t as complex. Cleaning a bathroom, by contrast, involves many tasks that may or may not need doing, depending on the standard imposed.
Driving is considerably more complex than cleaning a bathroom, primarily because you need to interact with a large number of humans whose mental state ranges from fairly rational to OMGWTF.
Yes, but in context there are still a fairly limited number of things they can do (stop, reverse, speed up, slow down, change direction, and so on), even if it is hard to predict which of these they will do, and when.
Nope. I’m talking about humans, not drivers.
That involves pedestrians, people on bicycles and skateboards, kids playing ball near the street, panhandlers who want to wash your windshield, etc. etc.
I’d wager that Lumifer comes from a place where drivers are much crazier than where you come from. There are huge differences in stuff like that from city to city.
Yes, but are those differences anything beyond “changes in acceleration” (taking acceleration as a vector)?
Just because you can measure something with three real numbers doesn’t mean that their prior probability distribution isn’t all over the place.
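Both points can hold at once. Here is a toy illustration (all numbers invented) of how a distribution over just three real numbers can still be ‘all over the place’: a lumpy mixture of cruising, hard braking, and swerving modes.

```python
# Toy illustration (numbers invented): a driver's acceleration is just a
# 3-vector, but its prior can still be a lumpy mixture of very different modes.
import random

MODES = [
    # (probability, mean acceleration (ax, ay, az) in m/s^2, spread)
    (0.90, (0.0, 0.0, 0.0), 0.3),   # cruising: near zero
    (0.07, (-6.0, 0.0, 0.0), 1.0),  # hard braking
    (0.03, (0.0, 4.0, 0.0), 1.5),   # sudden swerve
]

def sample_acceleration():
    """Draw one acceleration vector from the mixture of driving modes."""
    r = random.random()
    for p, mean, sigma in MODES:
        if r < p:
            return tuple(random.gauss(m, sigma) for m in mean)
        r -= p
    return (0.0, 0.0, 0.0)  # numerical fallback; probabilities sum to 1

for s in (sample_acceleration() for _ in range(5)):
    print(tuple(round(x, 2) for x in s))
```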
This really depends on how you interpret “a robot can autonomously clean a bachelor’s bathroom”. If you interpret it the same way as “Roomba can autonomously sweep a floor”, then lizard-level intelligence seems enough (Roomba is barely insect-level). Roomba can sweep a floor, provided you moved all the toys and cables and papers out of his way first, put the chairs on the table, and closed the doors of the rooms you don’t want him to visit, and are okay with it taking ten times as long as a human would, and with the occasional failure: missed corners, unswept spots, Roomba locking himself in a room, or Roomba finding a roll of toilet paper on the floor and scattering the shreds all over your home.
So if the future Bathroomba can clean your bathroom (wipe the sinks and mirrors, clean the floor with water, pick up stray stuff, etc.) but with a similar list of caveats (it will take him half a day, you need to prepare the bathroom a bit, there are some situations he just can’t handle), then lizard-level intelligence (meaning roughly better than today’s robots, but still far from FOOM) seems enough.
… and not necessarily an unemployment crisis, because most of your examples (driving, cooking, building) are in domains where mistakes can be very costly, much worse than “insufficiently sweeping the floor” or “knocking over an open bottle of shampoo”. You may be able to hack together a commercially successful robot in areas where mistakes are of little consequence just by finding shortcuts around the really difficult problems. Roomba is really good at that: people had been working on Simultaneous Localization and Mapping (SLAM) algorithms for at least 15 years before Roomba was released with a really straightforward algorithm that was basically “screw this, just bump around randomly”, with a bit of fine-tuning and a clever trick of estimating the size of the place it’s in by tracking how long it takes to bump into something.
But even then, if we get to the Bathroomba level, there may be enough other domains where clever hacks could displace jobs.
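To make the ‘just bump around randomly’ idea above concrete, here is a toy simulation in that spirit (parameters invented; this is the flavour of the approach, not iRobot’s actual code): the robot wanders in straight lines, bounces off walls at random angles, and gauges the size of the room from the average free run between bumps.

```python
# Toy sketch of random-bounce coverage with the "time between bumps" trick.
import math
import random

ROOM_W, ROOM_H = 5.0, 4.0   # true room size, hidden from the robot
SPEED = 0.3                  # m/s

def run(steps=2000, dt=0.1):
    x, y = 2.0, 2.0
    heading = random.uniform(0, 2 * math.pi)
    bump_intervals, since_bump = [], 0.0
    for _ in range(steps):
        x += SPEED * dt * math.cos(heading)
        y += SPEED * dt * math.sin(heading)
        since_bump += dt
        if not (0 <= x <= ROOM_W and 0 <= y <= ROOM_H):  # hit a wall
            x = min(max(x, 0.0), ROOM_W)
            y = min(max(y, 0.0), ROOM_H)
            heading = random.uniform(0, 2 * math.pi)     # bounce at random
            bump_intervals.append(since_bump)
            since_bump = 0.0
    # The "clever trick": longer average runs between bumps => bigger room.
    mean_run = sum(bump_intervals) / len(bump_intervals)
    print(f"estimated room scale ~ {SPEED * mean_run:.2f} m per free run")

run()
```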