Is the very thing we demand of our computers the thing that will make them intolerant of our humanity, if and when they awaken to an artificial intelligence? One of the fundamental problems in achieving the computational agility and independence that would allow us to say a synthetic entity has acquired ‘artificial intelligence’ is the problem of autonomy. If we give real autonomy to artificially intelligent machines, can we trust them to cooperate with us in the ways we, as human beings, prefer?
This is an ethical question as well as a practical one. There are real ethical risks in creating devices, or even independently mobile entities, that use their own store of learned intelligence and independent judgment to interact with human beings or to make decisions that affect the conditions of human life. Consigning human well-being or liberties to a system that privileges artificial intelligence for the sake of one kind of expediency or another might reduce the range of free choice available to human individuals.
The real question implies a double ethical bind: on the one hand, is it fair for human beings to create artificially intelligent beings intended solely to serve human needs? On the other, is it reckless to create artificially intelligent beings that might not respect or make room for human emotional reality? The question involves innovations that sound like science fiction, yet they are neither impossible nor even unlikely to come to pass.
The inventor, scientist, and entrepreneur Ray Kurzweil, who has developed many of the intelligent machines that have filtered into the fabric of today’s technological universe (including flatbed scanners, voice recognition, and text-to-speech systems), writes of the coming “age of spiritual machines”, which he calls “an inexorable emergence”. He foresees that the most advanced computers will achieve “the memory capacity and computing speed of the human brain by around the year 2020” (ASM 3).
Because neural networks, computers designed to mimic human neurological circuitry and pattern retention, must be reverse engineered from an organ that has evolved over millions of years, artificial intelligence is necessarily less flexible, less agile, less “supple” than human intelligence.
It is widely thought among cognitive scientists that when artificial intelligence reaches not only the brain’s level of circuit complexity but also its simultaneous processing power, sensory awareness, and mental agility, emotional responses will emerge. At that point, the ethical questions regarding the “experience” of artificially intelligent beings come into play. If such an event is in fact an “inexorable emergence”, then we must also consider the correlative ethical question: how much ethical consideration can we expect from such entities regarding human experience?
Human beings waffle; we meditate; we take pains to make better choices, not just logical ones. We answer an often imperceptible but ever-present ethical call, a summons demanding that we recognize the rights of the other. We are sometimes weak and give in to temptation, but that is part of having individual liberties. We may pay a price for those mistakes, but they are ours to make. If we don’t have that liberty, if we don’t have the room to maneuver intellectually, then we are not fully human: this is part of the modern ethic and fundamental to democracy.
What seems like a strictly philosophical, or even science-fictional, question now genuinely needs to be asked: will artificial intelligence recognize and understand our human way of thinking and choosing, or will we be discounted by a system increasingly oriented toward logical data processing? Can we, for instance, put off the advent of self-reflective artificial intelligence until we are sure we have programmed in emotional, or at least human-deferential, responses?
Ultimately, we cannot decide these points one by one, in isolation. We have to filter the entire scope of artificial intelligence research through an ethical and moral prism that asks: Will it be good for people? Will it respect human liberty? Will human well-being be improved by this technology? Will the weak among us benefit, or be cast aside in an ever more technically oriented built environment?