Monday, September 26, 2011
Farhad Manjoo, over at Slate, has a couple of articles about automation in the workforce and how robots will take all our jobs.
Um, of course? That's what a robot does. The word "robot" comes from the Czech word "robota," meaning "forced labor." We got it from Karel Čapek's prescient, disturbing 1921 play, "Rossum's Universal Robots," in which an artificially produced workforce takes over all human labor...until they get tired of this crummy deal, rebel against their human overlords, and destroy humanity.
To be fair, Manjoo is looking more at how robots will eventually begin to displace humans in professions that currently require a significant amount of education and training. Basically, if your job involves fairly repetitive labor and little face-to-face interaction, you should start reconciling yourself to the likelihood that you will be replaced by a robot. Pharmacists, tax preparers, lawyers, judges (the latter two only in some cases), physicians, and sportswriters...all doomed.
What interests me is the extent to which robots have already supplanted humans in all kinds of jobs, including agricultural labor, repair work, customer service, secretarial work, factory and assembly-line work, security, and warfare. I'm sure there are many more areas, but they are largely invisible to us. And once robots start doing things like diagnosing cancer or handing down sentences for misdemeanors or minor felonies (which seems to be where we're headed in some cases, given the lack of judicial discretion in sentencing for certain crimes), how will that affect our relationship with technology...with the robots?
Monday, September 19, 2011
Studies in Posthumanity II
I've been following the Future Tense conversation at Slate on transhumanism and the technologies of extreme human enhancement, and then found Pagan Kennedy's article in the Sunday NYTimes Magazine about the collaborations between cognitive neuroscientists and computer engineers to create ways to activate technology using neural implants and EEG headsets.
Some of the concerns raised about the potential of these new technologies seem to center on humanism and its limits. Should we really go beyond the limitations of our biology? As some of the scientists at Slate have pointed out, there are places where our "wetware" bodies can't thrive. Yet biological enhancements are already completely mainstream, and have been for a long time. Some are cosmetic or culturally important, like breast implants, tribal tattoos, or scarification. Others, though not mentioned in the above articles, are about recapturing lost functionality or allowing the biological body to achieve its potential (cochlear implants, ocular lens implants, or bionic prostheses).
Other concerns that come to mind are ethical. Is it meet to work actively toward a global society where a small fraction of the population can elect to live significantly longer or be significantly smarter than most of the population? Especially when the inequity in resources is already so great? Millions of people don't even have access to clean drinking water, and even in this country, millions of people live in desperate poverty, without access to enough food, shelter, medical care, or education.
The other set of concerns is more pragmatic. If it's possible to allow our brains a direct interface with computer technology, does that mean that all of our half-formed, stupid, impulsive thoughts could be privileged over reflection and reasoned action? My first thoughts might be incorrect or unkind, and my impulses (even about what to buy, eat, or say) are rarely the ones that I want to act on.
There is a long tradition that views our capacity for sober, considered reason, and for choosing delayed gratification, as the distinctive marker of our humanity. If technology makes it possible to subvert or ignore those faculties, does transhumanism become dehumanizing?
Friday, September 2, 2011
I'm Not A Robot, I'm A Unicorn!
The "afternoon hack" at Cornell's Creative Machine's Lab, to make two chatbots talk to each other, might just be the perfect example of AI in all its glory. You can listen to the conversation between Shruti Cleverbot and Alan the Englishman here, or read a transcript.
There is so much here to love. The conversation starts off with the usual banalities ("Hi, how are you?" etc.) and then quickly escalates into prickliness and relitigation of earlier misunderstandings ("Yes, I'm good. I just said I was." "No you didn't. I said *I* was."). From there we get barbed ad hominem comments ("You are mistaken, which is odd, since memory shouldn't be a problem for you"), contested definitions of identity ("You are a robot." "No, I am not a robot, I am a unicorn"), and then metaphysics ("What is God to you?" "Do you want to have a body?").
Robert Siegel on NPR likened the conversation to a Beckett play, and Hod Lipson, an engineering professor at Cornell, admitted that perhaps talking about "nothing" well is the truest test of personhood. (In which case, "Seinfeld" is the apotheosis of humanity?) But these bots sound so human, or at least like humans of a particular sort. Shruti and Alan sound like 4-year-olds on a playdate: the shaky grasp of conversational conventions, abrupt shifts in topic and tone, casual insult mixed with increasingly contentious back-and-forth, then the Big Questions (God/Mind/Body), and then "Okay, bye now."
I wonder if the engineers at the CCML have read David Lodge's Small World. In that satirical campus novel about the idiocies and small-mindedness of academics, an odious and insecure character ends up pouring out his intimate thoughts and neuroses to ELIZA, an AI program, which then advises him to do the only sane thing--shoot himself.
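For anyone who hasn't met ELIZA: Joseph Weizenbaum's 1966 original was little more than pattern-matching and pronoun reflection, which makes the novel's scene all the more plausible. Here is a minimal sketch of the idea in Python; the patterns and canned responses are my own illustrative stand-ins, not Weizenbaum's actual script.

```python
import re

# An ELIZA-style exchange: match the input against a pattern,
# then echo the user's own words back inside a canned template.
# These rules are illustrative stand-ins, not Weizenbaum's script.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "you": "I", "your": "my"}

RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "Tell me more about feeling {0}."),
    (r".*\?", "What do you think?"),
    (r".*", "Please go on."),
]

def reflect(fragment):
    # Swap first- and second-person words so the echo reads as a reply.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(utterance):
    text = utterance.lower().strip(" .!")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I am not a robot"))     # Why do you say you are not a robot?
print(respond("What is God to you?"))  # What do you think?
```

Cleverbot, by contrast, mines millions of logged human conversations for its replies rather than relying on hand-written rules, which may be part of why Shruti and Alan sound so uncannily like us.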