There is, in the fields of artificial intelligence, cognitive science and robotics, a burning question: can machines be taught to understand what we call “truth”, and then to discern it from amongst a cosmos of possibilities, according to their programmed, and evolving, understanding of the concept? In Tempe, Arizona, earlier this month, the Future Tense project explored this question. The problem, however, is that human beings already struggle to define “truth” for themselves.
So, we have to ask ourselves a series of burning questions, the kind we prefer not to ask, even as we pretend to address them head-on:
- What is intelligence?
- What is cognition?
- What is truth?
- What will machines do with this information, if they can “learn” it?
- Does truth actually carry an emotional quality?
This last one we have been, for the most part, unwilling to answer, as a species and as a civilization. Some people believe that western psychology has treated the question either ably or with reckless abandon; some view this as the gateway to “moral relativism”. It is possible to say, as Gary Zukav does of the Copenhagen Interpretation of Quantum Mechanics in The Dancing Wu Li Masters, that “it does not matter what quantum mechanics is about! The important thing is that it works in all possible experimental situations…”
Zukav argued that no longer would science search for one “absolute truth somewhere ‘out there'”. Quantum mechanics shows us that the hallmark of Einstein’s relativity theory—that space and time are a continuum and that our experience of them is just that: experiential, perspectival, contingent—leaves us, at the most elementary scale, with no certainty about individual events, only a statistical way to comprehend group events narrowly construed.
We could also look to ancient Hindu mythology, where everything in our experience, whether physical or metaphysical, including our own selves, our own souls, even our karma and our experience of the divine, is illusion—only the divine, which permeates all things (some might say: energy), is true. All illusions eventually break down and lead back to Brahman, the one truth—what Native Americans called “the great spirit that moves through all things” (Wakan Tanka, to the Lakota).
But none of these explorations gives us a definition of truth. The last, for instance, virtually guarantees that we can never teach machines about truth, at least not until we can spark the upheaval of emotion in genuinely intelligent thinking machines. (Ray Kurzweil believes we will pass this threshold, maybe in the next few decades, where machines become not only “intelligent”, but also “spiritual”.)
Psychology is as much an exploration of emotion as it is of truth. In fact, it is often an exploration of what merely appears to be true, as a result of certain perceptual misconceptions, which “healthy thinking” can and should correct, while recognizing their origin and root structure. Philosophically, and pragmatically, we have to first say that no matter how we define the concept “truth”, we can deal with it without sliding into “moral relativism”; there is always room for ethical inquiry.
So, can we teach machines to know what truth is?
Arithmetic is usually a go-to pathway for a straightforward definition of truth. Because simple arithmetical answers always work out to the same quantity, they appear to carry transcendent truth. 1+1 “always” equals 2. Is “always equals” a criterion for truth? Is truth most comfortable, most secure, in a formulaic setting? There are two problems with this…
- That something is true once and not again does not make it less true.
- To craft a formula, we have to sculpt the truth we express—cut away all of the details of the universe that would interfere with that expression of truth.
Point 1 is simple enough to understand. Point 2 is more difficult. If we talk about people, we can understand how a statement like “All Spaniards love ham” is not entirely truthful. It is a stereotype and a generalization, a cultural anachronism and a prejudice. There are vegetarians in Spain, and vegans; there are people who like chicken, but not ham. It is an absurdity to make such a sweeping claim, but that claim is an attempt to craft a working formula: “Spaniard + ham = love”.
When we posit that 1+1=2, we are doing something very similar. First of all, we accept, without evidence, that two ones are the same as one another; second, we accept that the number 2 exists, though it is, technically, just the incidence of two ones. The statement is both true and also contrived. It is, in this sense, convention more than truth.
St. Augustine argued that the number 1 was evidence of a single universal creator, because like God, 1 is the only numerical truth, and it is everywhere. Even a fraction like 1/365 is still nothing more than the number 1… specifically one out of a group of 365 ones… one day out of one year full of days… or just one amid a sea of ones. All other numbers, no matter how massive or how awkward, are all manifestations of the number 1, in different forms and groupings.
To get to 1+1=2, we have to perform a number of cognitive tricks:
- We must accept that 1 and 2 refer only to category—for instance, one apple plus one apple together make two apples, though one apple might be more massive than the other.
- We must also decide that we do not mean, in the first case, one thing and, in the second, one group of things—for instance, one family of one person plus one family of eight people equals not two people but nine. It is the category family that matters here.
- We must agree that such statements, despite their deficiencies, express truths we can depend on.
- We must agree to conventions in language that help us to see truth where it does not necessarily reside—for instance “equals” equals “works out to be true”, with the added implicit qualifier “reliably”.
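The category trick in the list above can be made concrete in a few lines of code. The sketch below is purely illustrative (the names and groupings are invented): the “truth” of one plus one equals two depends entirely on which category we have agreed to count.

```python
# Two "ones" that are not the same as one another:
# a family of one person and a family of eight people.
families = [
    ["Ana"],                          # one family, one member
    ["Ben", "Cara", "Dev", "Eli",
     "Fay", "Gus", "Hana", "Ivo"],    # one family, eight members
]

# Counting by the category "family": one plus one equals two.
assert len(families) == 2

# Counting by the category "person": the very same items sum to nine.
assert sum(len(f) for f in families) == 9
```

The arithmetic only “always works” once we have quietly fixed the category; change what counts as a unit, and the same addition yields a different truth.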
To make a formula, we must exclude inconvenient information. This requires perception, and discernment, and the ability to distinguish between item, category, fact and convention. To teach computers “truth”, we need to teach them to feel truth. They need to see how experience is layered, and how incidences of experience overlap, and are defined more by that overlapping than by our cognitive intentions. But computers don’t do this; they take all data and harbor one massive bank of information.
We can teach them to formulate spreadsheets, but that is our consciousness, not theirs. And computers cannot tell us whether column B or column G is more indicative of “the truth”. Truth flows through its manifestations; experience interests us, because it is always, somehow, different, so we need to pay attention. And there are functions of human reasoning that are difficult to make computers understand, because they don’t properly “understand” or “comprehend”; they only accumulate and calculate.
One problem is the so-called “ramification” problem: If my car is traveling at 55 mph, so am I, and so are my shoes and my wristwatch, and my iPad. Another is McCarthy’s “frame” problem, which asks us to understand which elements of “reality” remain the same in a changing, evolutionary universe. Computers “want” to calculate these aspects of our reality; but they are so complex, they very quickly become unwieldy as calculations.
For instance, if I enter a room, so do my shoes. If I am driving a car that is moving at 55 mph, then so am I moving at 55 mph, as are my shoes, and my wristwatch and my iPad. But I am driving over the Earth, which is speeding around the Sun at speeds almost impossible for human beings to contemplate, and the Sun is speeding around the Milky Way galaxy at even more impossible rates of speed, and the Milky Way is speeding away from the point where the Big Bang occurred…
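The way one stated fact silently entails many others can be sketched in code. This is a minimal, hypothetical illustration (the objects and their nesting are invented, and real formalizations of the ramification problem are far richer): a single assertion about the car’s speed propagates to everything the car contains, and each new layer of containment (Earth, Sun, galaxy) would multiply the facts further.

```python
# A toy containment hierarchy: each object "carries" the objects inside it.
contains = {
    "car": ["driver"],
    "driver": ["shoes", "wristwatch", "iPad"],
}

def ramify(obj, speed_mph, facts):
    """Record obj's speed, then propagate it to everything obj contains."""
    facts[obj] = speed_mph
    for inner in contains.get(obj, []):
        ramify(inner, speed_mph, facts)
    return facts

# One statement -- "the car moves at 55 mph" -- becomes five facts.
facts = ramify("car", 55, {})
assert facts == {"car": 55, "driver": 55,
                 "shoes": 55, "wristwatch": 55, "iPad": 55}
```

Add the Earth, the Sun and the Milky Way as outer containers and the fact base grows again; the calculation never ends, which is exactly the unwieldiness the essay describes.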
And, somehow, all of us are holding together, which might seem impossible, if we were to calculate…
It gets away from us, if calculation is all we have to go on. Some theorists aspire to a moment of inexplicable accident, where the speed of computation of which neural nets and other advanced forms of artificial intelligence are capable is so great that they simply “become aware”, begin to think intelligently, and as they develop a sense of self, develop the emotional sensibilities that make “truth” as a concept gather importance.
But this is not an answer. It is not a law of physics. It is not theory. It is barely hypothesis. It is a wish.
The question, of course, of whether it is possible to teach an entity that does not think or feel, or have a self, or a life of the soul, how to comprehend, and to value, and to look for and to reliably formulate and reformulate, according to context, what we call truth, cannot, with what we know, be solved. And the uncomfortable truth about that is that we still don’t really fully understand it ourselves.
In some sense, the truth about truth is both what holds our attention and what we least like to acknowledge: truth is our search for truth; it exists in the space where we seek it, but desire does not correspond to satisfaction; in fact, they are mutually exclusive though mutually paramount. Truth is to our conscious pursuits what satisfaction is to desire. In an emotional-spiritual sense, we understand and accept this, and so we think and search and act, but we do not know how to make this into a genuine definition, much less how to teach computers to feel it as real.
For the time being, our cognitive superiority to machines is, in this sense, safe. But the Copenhagen Interpretation may also apply to artificial intelligence: it may not matter whether we can teach machines to understand the concept of truth; what matters is that we program them not to lie, not to divulge sensitive information and that they convince us they know us as persons, even where they are just computing and manifesting computation.
For instance, part of what we like about our machines is that they are not independent of us. While it is frustrating when they consume too much of our time, for maintenance and upgrades and glitches, and the like, it is rewarding to have “smart” devices which we command, and which do as we desire. We feel empowered by this, and this, in many ways, is more important to us, and a better way to keep us connecting to actual human beings, than if we had fully autonomous thinking machines.
– – –
Originally published March 10, 2013, at TheHotSpring.net