The realm of science fiction is one of great imagination, and often of foresight. While scientists pound away at working out new technologies, writers and movie makers come up with more notions of what is possible. This paper samples some of the visions that movie makers have had for the technologies of artificial intelligence (AI). The scope is confined to the last thirty years, both for brevity's sake and because this is roughly how long AI has been seriously considered in the "real world."
One area of focus here is the portrayal of these machines as having some kind of "gender." Nearly every robot in these movies is presented as being "male" or "female," through body shapes or voices. Another important subject is how these machines are treated linguistically; whether or not they have natural human language, how they use it, and what we can learn about them through that usage. Part 3 has a slightly more nebulous nature. This is the section that deals with how these AI's view themselves, if in fact they appear to be viewing themselves at all. This area includes strands of religious, philosophical and ethical themes.
Each movie is summarized as much as is necessary for the purposes of this paper. The reader is advised to see all of them; what some of them may lack in quality, they make up for in cultural interest and sheer amusement value.
Star Trek: First Contact is the latest in a series of Star Trek films, in which a race of cyborg aliens known as "The Borg" attempt to take over Earth. The two primary saviors of human life as the 24th Century knows it are a man, Captain Jean-Luc Picard of the Starship Enterprise, and an android, Lieutenant Commander Data. The strategy of the Borg is to assimilate entire species into their "collective." As they say just before capturing a planet, "We will add your biological and technological distinctiveness to our own. Your culture will adapt to us. Resistance is futile." They then proceed to inject individuals with electronic implants, rendering them personality-less drones.
The Borg have already assimilated Captain Picard once, but he (since he is a main character) is the only person to ever escape the Borg Collective. He spends much of First Contact raging against what the Borg have done to him in the past, and vowing revenge. But the Borg pay little attention to him this time around; they have a special new conquest in mind -- an individual who is already a synthetic entity -- Data.
The character Data is "a mechanical, cybernetic and positronic composite with some biological components.... an android of sophisticated design and programming, deemed a sentient being and accorded full civil rights" in the ST world. He was given a special "emotion chip" in the last movie, as part of his continuing mission to become more human. In this movie, he (for Data's body and voice are male) finds himself captured by the Borg, and comes face to face, body to body, with the embodiment of the Borg's collective consciousness.
This being is presented as a female head and shoulders with an electronic spinal column of sorts, which is inserted into a fully mechanical female body. While she is probably repulsive to most human viewers of the film, she attempts to use her own version of "feminine wiles" to seduce Data into joining her collective. First, she grafts organic skin onto his mechanical arm and smiles at him slightly as she blows across the skin, gifting him with new sensations and a notion that he is closer than ever before to humanity. Later she asks him, "Are you familiar with the physical forms of pleasure?" He replies, "If you are referring to sexuality I am fully functional, programmed in multiple techniques..." and then she kisses him. She also promises Data a special place in the collective; more than just a drone, he would "rule" by her side. For Data this is the ultimate test of his "hu-man"hood: will he succumb to the feminine temptation or will he resist, helping to save the humans whom he so admires? As any good male android should, Data gets it both ways; he "gets the girl" but he also betrays her in the end, saving Picard and his fellow humans. Even so, he says, "For a time... I was tempted by her offer. 0.68 seconds; for an android, that is nearly an eternity."
Because of the speed at which computers can make calculations, most of these films suggest that androids' brains work much faster and more efficiently than humans'. This translates to the physical body as well; the introduction to the film Blade Runner calls the "replicants," genetically engineered AI's, "superior in strength and agility, and possibly intelligence." These replicants were created by humans as slave labor and soldiers, but now some of them have escaped their enslavers. On earth, Rick Deckard is a "blade runner," one whose mission is to seek and destroy these renegade replicants. "This was not called execution," states the movie, "it was called retirement."
Deckard encounters four dangerous replicants whom he is supposed to "retire." He also meets a female replicant named Rachael, who has been given human memories and originally believes that she is actually human. Each replicant has been given a life-span of only four years, so that they will self-destruct before they begin to "develop" complex emotions. The four escapees, led by a poetry-spouting military replicant named Roy, want to lengthen their lifetimes, and do not seem to care whom they kill along the way. Deckard finds it emotionally harder to "retire" them each time, as he begins to have compassion for them as people. The movie is crafted so that we do, as well; we see, for instance, the relationship between Roy and Pris, a female replicant who is a "standard pleasure model." These two replicants have developed enough of those complex emotions to become romantically involved, and the big, bad killer Roy even cries when the little pleasure model goes berserk and "dies." Deckard, meanwhile, has Rachael, whom he cannot think of as human and yet desires as a woman.
Deckard attempts to resolve his dilemma by demanding that Rachael go to bed with him, even dictating the words she must say to him as he makes love to her. Apparently he has no qualms about this, since she was designed to be a slave, after all. Female androids designed specifically for the purpose of companionship for men are the subject of The Stepford Wives. This movie is told from the point of view of a woman named Joanna who moves with her family to the suburb of Stepford, Connecticut. There she finds a neighborhood where all the women love to cook and clean for their husbands and a mysterious men's club meets every night.
Joanna finds one woman in town to whom she can relate, and the two of them attempt to start a women's club but find that none of the women of the town wish to be involved; they only want to stay home and please their husbands. When Joanna's friend goes away for "such a romantic" weekend and comes back acting like all the other women of the town, Joanna gets scared and finds out that the men's club has systematically planned and carried out such "changes" for every woman in the town. She tries to flee, but too late; she runs straight to the lair of the man in charge of the men's club. He asks her, "Wouldn't it be nice to have some stud whose only purpose was to please you?"
This notion has been considered in the real world, as well. Grant Fjermedal quotes University of Minnesota cultural anthropologist Arthur Harkins in The Tomorrow Makers as saying, "Already there is talk of creating androids for sexual purposes. I think you are going to see an industry develop in the sexual-appliance area.... You will see androids that are manufactured, or grown, specifically for those types of purposes." In The Stepford Wives, only the men get someone to please them. Near movie's end, Joanna discovers a not-quite-finished robot copy of herself, which kills her to take her place. At the end of the movie, we see this replacement in a long matronly dress, pushing a cart up and down the aisles of a grocery store, staring straight ahead.
As she moves along the aisles, the Joanna-robot mechanically says hello to each of the other wife-robots she meets along the way. Why these objects communicate with one another at all is unclear, but it is evident from their greetings that they are not using language in a very creative way; nearly the same words are used for each salutation. Earlier in the movie, one of the men from the men's club records Joanna saying a long list of vocabulary words; little does she know that these recordings will be entered into her double, to give the double a voice. A telltale sign that Joanna's friend has been "changed" is that she no longer knows a vocabulary word that she knew before. To the viewers, a telltale sign of who is a robot and who is not is that the robots tend to repeat phrases, use clichés, and not understand advanced vocabulary. When there is a malfunction (when Joanna stabs the copy of her friend with a kitchen knife, for instance, or when one of the robots has too much alcohol), the first thing to go wrong is that the robot repeats a single phrase or -- when things get really bad -- a portion of a phrase. The implication, of course, is that one of the highest functions of an android is language, a difficult human quality to duplicate.
This is the case for Data in Star Trek. Even with his emotion chip enabling him to experience feelings, Data still does not experience language in quite the way humans do. He can process information at tremendous speeds, but he never speeds up his speech by using simple contractions. This makes him sound like he is reading an academic paper all the time. His sense of appropriateness of language is a little bit odd as well; he ignores a spray of bullets which one character fires at him, and then says to her, "Greetings" in a matter-of-fact manner, causing her to faint. At another point he is so enthralled by his emotional experience that he tells Captain Picard during a particularly tense moment, "I believe I am feeling anxiety. It is an intriguing sensation..." at which point Picard cuts him off by saying, "Data, I am sure it's a fascinating experience but perhaps you should turn off your emotion chip for now." Which Data does, with a jerk of the head and a beep. Data's evident curiosity about the human condition means that he constantly wants to analyze every aspect of a situation, no matter how inappropriate analysis may seem at the time. The word "intriguing" appears in nearly every scene in which Data appears.
In the Star Wars trilogy, the first characters we see in the opening of the first movie are AI's. They are C3PO ("Threepio") and R2D2 ("Artoo"), two "droids." Threepio has a male, British voice and a humanoid form, in gold color; Artoo is a short little droid which rolls and beeps its way through the movies. Threepio is a "protocol droid," specifically designed for translation purposes, and so he is gifted with human language and is in fact "fluent in over six million forms of communication." Artoo, meanwhile, can only beep; Threepio understands and converses with Artoo, but none of the humans in the movies can without a protocol droid or another advanced computer to translate for them.
Artoo still manages to make all sorts of statements without human language, however. Some beeps seem angry, or happy, and there is at least one mournful beep which may cause some viewers to perceive this hunk of metal as rather cute and cuddly. After one particularly excited beep, Threepio says to Artoo, "Now you watch your language!" Evidently beeps can be nasty, too. Threepio calls Artoo his "counterpart," and the two of them apparently have a friendly relationship, full of bantering witticisms of which we mere humans can only understand the half.
It seems that Threepio would be the easier droid for us to like, since he speaks our language. As the movies unfold, however, we find that Threepio is mostly a big metal windbag, and that Artoo is really the one with the sense. Threepio is certainly useful, especially when in Return of the Jedi he is thought to be a god by a race of "Ewoks," and can communicate with them in their language (though "they have a most peculiar dialect," he says, and "it is against my programming to impersonate a deity"). But Threepio rarely has any thoughts of his own, other than thoughts of worry. "We're doomed!" is his favorite phrase. He annoys one human, Han Solo, so much by his incessant talk of imminent destruction that Solo shuts him off. Return of the Jedi makes all this especially clear when we see Threepio improperly assembled, complaining and flailing his arms with his head on backward, while Artoo has a series of ideas and abilities that save all the main characters. Threepio has human body and language; Artoo has the smarts. Perhaps that is why they are "counterparts." At least we know that these movies do not quite support the notion that human language is the pinnacle of a machine's intelligence.
Both of these droids are more than happy to please humans all the time, and neither of them believes itself to be infallible. A more sinister vision is HAL, short for HAL-9000, the shipboard computer in the movie 2001: A Space Odyssey. HAL, with a male voice and human language but not a humanoid form, first appears as a red light (his eye) in which we see the reflection of a man coming to talk to him. HAL is the third member of the ship's crew, and when Dave, a human crewmember, is asked if HAL has feelings, he answers, "He acts like he has genuine emotions; of course he's programmed that way in order to make it easier for us to talk to him. But... I don't think anyone can truly answer." Language, however, is a more important (and less noticed) factor than the appearance of emotions in allowing HAL to relate to the human crew members. The men are able to speak into the air of the ship and give HAL any kind of natural language command; all they have to do is address HAL directly and use his nickname (for example, "Open the pod-bay doors, HAL."), and HAL will perform the task.
HAL also speaks to them, and asks questions in order to "work up [the] crew psych report," analyzing the psychological state of the human crewmembers. This is reminiscent of ELIZA, Joseph Weizenbaum's machine, which would use a set of responses to people's statements to assume the role of psychotherapist. As Dave said of HAL, it at least seemed to people that ELIZA really understood what was going on; ELIZA was programmed that way. Weizenbaum writes that people who conversed with ELIZA would "insist, in spite of my explanations, that the machine really understood them." This illusion, Weizenbaum claims, is almost exclusively because of the natural-type language programmed into ELIZA.
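The mechanism behind this illusion is simple enough to sketch. What follows is an illustrative toy in the spirit of ELIZA's keyword rules -- not Weizenbaum's actual DOCTOR script, whose patterns and responses were far more elaborate -- showing how a handful of pattern-and-template pairs can produce a therapist-like echo of the speaker's own words:

```python
import re

# Illustrative ELIZA-style rules (invented for this sketch, not
# Weizenbaum's originals): each pattern captures a fragment of the
# user's statement, which is reflected back inside a canned reply.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE),
     "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE),
     "Tell me more about your {0}."),
]
DEFAULT = "Please go on."  # fallback when no keyword matches

def respond(statement: str) -> str:
    """Return the response for the first rule whose pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(statement)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

Even this trivial version reflects the speaker's words back ("I feel lonely" becomes "Why do you feel lonely?"), which is precisely the trick that led Weizenbaum's users to feel understood.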
In 2001, it seems that HAL is the one in need of psychotherapy. HAL is supposed to be flawless. He apparently takes pride in this, calling himself "foolproof and incapable of error." He is not quite perfect, however; he is psychopathic. Whether because of one bad connection or the emotional rigors of space, HAL kills all but one of the human crew members. The remaining human on the ship angrily disconnects HAL. As he removes HAL's circuits one by one, the movie's audience observes what is happening through HAL's voice and words. When Dave begins the process, HAL challenges him: "Just what do you think you're doing, Dave? Dave, I really think that I'm entitled to an explanation..." echoing the direct-address, name usage which the humans use to command HAL. The resolute human does not obey; he continues the "retirement" of the advanced machine. HAL's voice slows down and becomes deeper, like the sound of a tape recorder running on low batteries. Near the end, we hear him say, "I can - feel it... I can - feel it... I'm a-fraid, Dave..." and then he begins singing a song that he was taught on the day he "became operational." We hear this computer attempting to ease his own death by singing a human tune with human words: "Daisy, Daisy... I'm half crazy, all for the love of you..."
Another powerful AI that goes wrong is the Master Control Program (MCP) in TRON. While characters in one section of the ENCOM corporation building argue that "computers are just machines, they can't think," several floors above them the MCP is talking to its creator. "It's so hard dealing with humans," it says, and then its creator replies, "Now wait a minute, I wrote you." "I've gotten 2,415 times smarter since then. With information and access I could run things 900 times better than any human," answers the MCP. The MCP, with the male voice of its creator, speaks audibly, and we also see its words appear on a computer screen as it speaks. It (or he) demands that the human bring him "that Chinese language program I asked for," displaying power in the demand, but also a dependency on additional programming for speech.
This is a program which does not have a body of its own, but does possess a large degree of intelligence and authority in the world of humans. There is another world in TRON: the world of programs, a kind of virtual or "cyberspatial" world where programs are portrayed as having bodies, and information has spatial structure. In this world, the MCP has captured any program that has been used to subvert its authority. A virtual jail is set up for these, and programs are tortured and called "religious freaks" if they believe that there are such things as "users" (humans).
The plot of TRON contains a messianic allegory, for one of the "users," a male programmer named Flynn, is accidentally beamed into the world of programs, and treated like another captured program. The programs with which (or whom) he comes into contact begin to realize that "There's something different about the new guy." Flynn has powers to do miraculous works that none of the programs can do, and one program has a revelatory religious experience when he realizes that Flynn is a user come down in program form. At the end, Flynn sacrifices himself in order to destroy the MCP -- and reappears in human form back in the human world.
While it is partly a powerful act of personification and imagination, giving the programs faces and names, TRON also suggests that the programs humans make have, in a sense, little minds of their own. When they move beyond performing basic functions for humans, things can go wrong. Weizenbaum writes that "we could say something about what machines ought and ought not to be doing." "Since we do not now," he continues later, "have any ways of making computers wise, we ought not now to give computers tasks that demand wisdom." Perhaps running corporations, like the MCP does, requires wisdom and performing the smaller tasks of the other programs of TRON does not.
One such task that does not require wisdom could be "smart-bombing," which is taken to a new metaphorical level in the movie Dark Star. In this low-budget 1970's sci-fi picture, a team of men fly around the universe, seeking "unstable" planets to blow up. They have a set of special smart bombs in order to do this, to which they or the (female-voiced) ship's computer can give verbal commands or instructions. When one of the bombs seems about to blow up the ship because of a malfunction in its release mechanism, the humans have to try to convince it not to explode. Orders do not work; once armed, it is counter to the bomb's programming to not explode. One of the humans therefore tries to talk the bomb out of it, seeking to teach it abstract "phenomenology" in the nine minutes before it explodes. The philosophical dialogue between man and bomb is telling, especially considering that the bomb most likely never read Descartes:
MAN: How do you know you exist?
BOMB (in a tinny, male voice): It is intuitively obvious.
MAN: Intuition is no proof.
BOMB: I think, therefore I am. And my sensory apparatus reveals it to me. This is fun!
MAN: How do you know that your sensory apparatus is correct?
BOMB: All I really know about the outside world is relayed to me through my electrical connections. Why, I really don't know what the outside universe is like at all... [then, recalling Data's favorite word:] Intriguing.
The bomb disarms himself to consider the matter more carefully. The humans on the ship order him to re-arm when the malfunction is fixed, and the bomb replies, "You are false data. The only thing which exists is myself." To the humans' confused reaction he states (though he probably never read Genesis, either), "In the beginning was darkness, and the darkness was without form and void. In addition to the darkness, there was also me. And I moved upon the face of the darkness, and I saw that I was alone." After a pause, he says, "Let there be light," and then explodes. He fulfills his one purpose in life: boom.
The makers of Dark Star created a machine which could not only think but could "think up" some of the ideas of Descartes and the writer of Genesis. Herbert A. Simon writes in his essay "Machine as Mind" that something like this has been known to happen in the real world. "A computer program called BACON..., when given the data available to the scientists in historically important situations, has rediscovered Kepler's Third Law, Ohm's Law, Boyle's Law, Black's Law of Temperature Equilibrium, and many others." He goes on to say that "we need not talk about computers thinking in the future tense; they have been thinking (in smaller and bigger ways) for 35 years." And, according to Dark Star's bomb, if they think, they are.
The makers of Dark Star seem to believe, however, that bombing is not a thing AI's "ought" to be doing, for the sake of humans' well-being. Another notion that has been considered by some in the nonfictional world is that of what Fjermedal calls "ethical slaves and expendable warriors." This is one of the main themes of Blade Runner. Fjermedal writes of possibly making these beings with some obvious characteristic which defines them as not-human.
In Blade Runner, there is no such outward characteristic. Instead, a blade runner is told to give a kind of psychological test to people, in order to determine whether or not they are replicants. This "Voight-Kampff Test" is a series of questions designed to stimulate an emotional response, and depending on readings of someone's pupil dilation, heart rate, etc., a good blade runner is able to figure out who is human and who (or what) is not. The test is an advanced form of what Selmer Bringsjord calls the "Total Turing Test (TTT)," a development from Alan Turing's test to determine whether or not an AI thinks, or is conscious. The Voight-Kampff assumes that its subject is conscious, however; its only purpose is to determine whether a being is consciously human.
A.F. Umar Khan writes in The Ethics of Autonomous Learning Systems that to pass the Turing Test would require an AI to be able to cheat to give the appearance of emotions which it does not really have, for instance, or to hesitate before giving answers so as not to reveal its computing speed. "An intelligent machine," writes Khan, "would have to be intelligent enough to know when to dissemble, when to lie." In First Contact, Data successfully deceives the Borg; when he first lies to the Borg embodiment, she says to him, "You're becoming more human all the time. Now you're even learning how to lie." Certainly all the replicants in Blade Runner know how to deceive, and they use the method frequently in order to reach their goals.
The lack of a distinctive physical marker for replicants, meanwhile, may contribute to the compassion that Deckard and the audience feel toward these machines. "If they were made of organic materials [as Blade Runner's replicants are], our moral responses might be even more tolerant," writes Margaret A. Boden. Fjermedal says, "As soon as you make a mobile robot that can show emotion, I think you are going to have a lot of people saying, 'We've got to treat these things better.'" Which is more or less what Deckard says by rescuing Rachael at the end of the movie.
Perhaps the biggest change of mind that the movie creates in Deckard and audience, however, is in the attitude toward Roy, the killer military android and leader of the four escapees. At the beginning of the movie, we see Roy as cold, menacing, dangerous, and full of an irrational hatred. At the end, however, we see that Roy is rebelling against what he considers an unfair system which is prejudiced against its replicant slaves. He quotes poetry and philosophy constantly, and his own speech is full of a kind of poetic dignity and tragedy: "If only you could see what I have seen with your eyes," he says to the genetic engineer who creates replicants' eyes. When his fellow-replicant lover, Pris, tells a human "I think, therefore I am," Roy says to her, "very good," like some kind of philosophy tutor. "Pleasure models" probably do not read Descartes, either.
Roy is dying. Hints of a messianic metaphor appear again here: Roy, rather than self-destructing before he can meet with Deckard one final time, jams a nail into his hand to keep the connections in it working. What follows are visually beautiful shots of a shirtless Roy, on a rooftop in the rain, holding a dove to his breast, and saving Deckard's life so that he can transmit his message of replicant emancipation to him. "Quite an experience to live in fear, isn't it?" he asks Deckard. "I've seen things you people wouldn't believe.... All those moments will be lost in time, like tears in rain. Time to die." Roy then shuts permanently down, with the sun rising behind him.
Though he seems to be close to human, Roy would probably not be considered an "ethical" machine; the ethics of most of these machines are highly questionable. Is it this very lack of perfection that gives the machines their "personalities"? Khan writes that "In some sense, a machine's behavior and idiosyncrasies can be thought of as its personality.... Use of the word 'personality' [here] implies that two machines of the same type might evolve into highly individualistic machines, so different... that they could be considered as having separate personalities." This is what happened to HAL, perhaps; there was another HAL-9000 unit back on earth that gave different results to the same set of calculations which HAL performed. This was the first tip that not all was perfect with the shipboard HAL, and the first indication that maybe HAL had become something unique.
First Contact and Blade Runner at first seem to enforce gender stereotypes; the hero of each is male, or is a male-type android, who finds seduction and struggle in the temptations of female-type AI's. Neither film, however, is quite that simple. Data is neither machine nor man, and he is dealing with an entity which is neither machine, nor woman, nor even a single individual. Deckard's relationship with Rachael, though it seems that it would be destructive to both, is actually not so; it gives Rachael some extra time to "live," and Deckard some extra insight and compassion toward replicants. The Stepford Wives, meanwhile, deliberately criticizes traditional gender relationships by turning the notion of a female homemaker into a strange kind of horror.
The android Stepford wives, Data, C3PO, R2D2, HAL and the MCP, with their mostly short, acronym/number style names, each have different ways of using language as their interface with humanity. The movies almost all present language as a difficult barrier to humanity for AI's to cross, as if humans intuitively know that this is a major part of what makes them human.
The AI's in these movies, however, are mostly able to master that language. This helps give these machines the appearance of personalities. Though movies are often concerned with appearances and not with what is going on inside of a character's head (or positronic matrix), it is important to note that almost every one of these movies attempts an explanation for why and how their machines could exist as more than mere machines. Whether they are insane (HAL), philosophizing critics of humanity (Roy), discoverers of "phenomenology" in the space of a few minutes (Dark Star's bomb), or religious zealots (TRON's programs), each seems to have developed its own set of beliefs, ethics, and reasons for action.
This is, of course, only the movies. It is not real life yet. What is imagined often becomes what is real, and works such as these films give us glimpses into the strange world of what could, someday, be our world.
2001: A Space Odyssey. Dir. Stanley Kubrick. Metro-Goldwyn-Mayer, 1968.
Blade Runner: The Director's Cut. Dir. Ridley Scott. Warner Brothers, 1982.
Boden, Margaret A. "Could a Robot Be Creative -- And Would We Know?" Android Epistemology. Ed. Kenneth M. Ford, Clark Glymour, Patrick J. Hayes. Cambridge: MIT Press, 1995. 51-72.
Bringsjord, Selmer. "Could, How Could We Tell If, and Why Should -- Androids Have Inner Lives?" Android Epistemology. Ed. Kenneth M. Ford, Clark Glymour, Patrick J. Hayes. Cambridge: MIT Press, 1995. 93-122.
Dark Star. Dir. John Carpenter. Metrocolor, 1974.
Fjermedal, Grant. The Tomorrow Makers: a Brave New World of Living-Brain Machines. New York: Macmillan Publishing Company, 1986.
Khan, A.F. Umar. "The Ethics of Autonomous Learning Systems." Android Epistemology. Ed. Kenneth M. Ford, Clark Glymour, Patrick J. Hayes. Cambridge: MIT Press, 1995. 253-264.
Simon, Herbert A. "Machine as Mind." Android Epistemology. Ed. Kenneth M. Ford, Clark Glymour, Patrick J. Hayes. Cambridge: MIT Press, 1995. 23-40.
Star Trek: First Contact. Dir. Jonathan Frakes. Paramount Pictures, 1996.
The official Star Trek: First Contact Website [no longer active].
Star Wars. Dir. George Lucas. Lucasfilm, Ltd., 1977.
Star Wars: The Empire Strikes Back. Dir. Irvin Kershner. Lucasfilm, Ltd., 1980.
Star Wars: Return of the Jedi. Dir. Richard Marquand. Lucasfilm, Ltd., 1983.
The Stepford Wives. Dir. Bryan Forbes. Palomar Pictures, 1975.
TRON. Dir. Steven Lisberger. Walt Disney Productions, 1982.
Weizenbaum, Joseph. Computer Power and Human Reason. San Francisco: W.H. Freeman and Company, 1976.
[Cover photo: Data imprisoned by the Borg. From the now defunct Star Trek: First Contact Website.]