TheHotSpring.com :: Is the very thing we demand of our computers the thing that will make them intolerant of our humanity, if and when they awaken to artificial intelligence? One of the fundamental problems in achieving the kind of computational agility and independence that would allow us to say a synthetic entity has acquired 'artificial intelligence' is the problem of autonomy. If we grant real autonomy to artificially intelligent machines, can we trust them to cooperate with us in the ways we, as human beings, prefer?
This is an ethical question as well as a practical one. There are real ethical risks inherent in creating devices, or even independently mobile entities, that use their own store of learned intelligence and independent decision-making to interact with human beings or to make decisions that affect the conditions of human life. Consigning human well-being or liberties to a system that privileges artificial intelligence for the sake of expediency might reduce the range of free choice available to human individuals.