How Now, HAL?

Book Review: HAL's Legacy: 2001 as Dream and Reality
Edited by David G. Stork
Foreword by Arthur C. Clarke
MIT Press, 1997
ISBN 0-262-19378-7
352 pages.
$22.50 USD (cloth)

Reviewed by David Porush

Science and the arts have always been locked in a robust, fertile feedback loop with each other. One of the surest genres to encourage this embrace has been science fiction film and literature. Science fiction film tends to lag behind fiction in its vision and relevance to real science, but every once in a while a film comes along that expresses and visualizes the future - or should we say the relationship between a culture and its idea of its own future - in a way that fiction could not. "Blade Runner" (1982) was one; "2001: A Space Odyssey" was another. Just as I remember the intricate and sensuous decay depicted in Ridley Scott's cinema more vividly than any literary evocation, so I remember the deep silences Kubrick deploys in a way forbidden in fiction, which always babbles from the page, even when it invokes silence.

It wasn't until the 1980s that science fiction began to be generally accepted in some university curricula, and even then only marginally. Disparaged as pulp or pop (in fiction) or grade B in cinema, sf was thought unworthy of study. But as often happens, the mood and interests of the culture at large overtook the academy, and the line between belles lettres and sf was blurred by such "canonical" pomo authors of the 1960s and 70s as Thomas Pynchon, Italo Calvino, John Barth, Donald Barthelme, Kurt Vonnegut, Jr., Joseph McElroy, Don DeLillo, Jorge Luis Borges, Samuel Beckett and others in response to an increasingly technologized/scientized reality. What we learned from them was that science and technology drive our culture's dreaming as surely as, or maybe more surely than, literary accretions. (Within a decade, these authors and others inspired a generation of scholars to found the Society for Literature and Science, in acknowledgment of the fact that scientific perspectives could shed more light on some literature than intra-disciplinary ones, and that, in turn, literature had traditionally formulated potent critiques of - and resistance to - the hegemony of science and technology.)

We find in the work of those literary authors a common concern with cybernetics and the increasingly intimate, intricate, and mythologically dramatic relationship between humans and computers. Long before the universal prefix "cyber" became a nauseating cliché, these authors were telling us that we lived in a Cybernetic (not a mere Information) Age. "Cybernetics" signals a living, dynamic system or web of relations, whereas mere inert "information" leads only to a vista of endless ones and zeroes, a dispiriting datopia. Today, in the era of the Web, we think of ourselves as a cyberculture.

1968 was at the cusp of these literary events, so it is no wonder that Stanley Kubrick, one of the most visionary and experimental of major movie directors, and Arthur C. Clarke, known for his sf realism (his 1945 suggestion that the geostationary ring around the Earth could be used for communications satellites led to a multi-billion dollar industry that is still burgeoning; that belt is now known as the Clarke Belt), teamed up that year to express on film a weird mixture of mundane technicalism and mythological realism.

HAL, the superintelligent computer turned murderer in the movie "2001," would have been born on January 12, 1997. On the occasion of his birthday, MIT Press has issued an extraordinary collection of essays. Disguised as a coffee table book in praise of Stanley Kubrick's and Arthur C. Clarke's vision of the future of computers in their 1968 movie, HAL's Legacy is in reality a potent critique of AI - sometimes unwittingly so - from some of its major proponents. The volume is also a good bit of fun as it explores arcane cinematic details and allusions with the kind of precision only supernerds like these authors, armed with highly technical knowledge, could muster. (My favorite is that in the chess game between HAL and astronaut Frank Poole, played by Gary Lockwood, Poole concedes the match to HAL even though HAL has blundered in the assessment of his own position, one of the first signs that HAL has a deep flaw in his programming.)

As David Stork, editor of the book and an expert on computer speech, tells us, the explicit purposes of the volume are to analyze, discuss, and reflect on the original prophecies about intelligent computers made in 2001 and to measure them against the technical progress made since then. The testimonies by the scientist authors of the essays in this volume close the feedback loop between science and literary culture, since they admit that just as fiction often reflects science, so do science and technology sometimes strive to make tangible the dreams in fiction. To put it more simply, Stork writes, the volume asks the question "Why don't we have AI?" even though HAL reflected and helped impel a growing and confident movement, thirty years old now, which held that a full-blooded artificial intelligence was achievable by the end of this millennium. Just how fatigued those hopes have become, the authors of these essays themselves expose by measuring the reality of artificial intelligence projects today against the dream/nightmare of HAL.

As this volume shows, very specific advances in AI research anticipated, paralleled, and in some cases were originally inspired by the movie itself. In one essay, David Kuck predicts, plotting along the trajectory of Moore's law (that the number of transistors that fit in a given space of memory doubles roughly every 18 months), that within the next few years we are likely to build a machine with the raw computational power of HAL. In a very intelligent essay that combines technical material about computer chess strategy with a brief history of the relationship between computers and chess and a reflection on the nature of human vs. computer intelligence, Murray Campbell, one of the chief architects of Deep Blue, concludes that computers simply do not - and probably cannot - "play chess the way humans do" [79], and that this difference is crucial. Chess machines accomplish their superiority by brute computational force, sorting through millions of bits of data and hundreds of possible moves very quickly; human grandmasters, by contrast, play "'trappy' chess." Trappy chess means the player is aware of his or her opponent's strengths and weaknesses and lays traps to use them against the opponent. This, in turn, requires that the player be able to understand itself - to have self-consciousness. But as Campbell points out, even Deep Blue "is unable to appreciate its own moves."
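
What "brute computational force" means in practice is easy to sketch. The toy search below (my own illustration in Python, and nothing like Deep Blue's actual code) simply grinds out every line of play to a fixed depth and keeps the best score; no model of the opponent appears anywhere in it, so there is no possibility of a "trap" in Campbell's sense. The game attached to it is a trivial stone-taking game, purely to have something runnable.

# A minimal brute-force game search (a sketch of the general technique,
# not Deep Blue's algorithm). It examines every line of play to a fixed
# depth and keeps the best score; nothing in it models the opponent's
# habits, so it cannot lay a "trap" in Campbell's sense.

def negamax(state, depth, moves, apply_move, evaluate):
    """Return (score, best_move) for the side to move."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_score, best_move = float("-inf"), None
    for m in legal:
        score, _ = negamax(apply_move(state, m), depth - 1,
                           moves, apply_move, evaluate)
        score = -score                      # good for the opponent is bad for us
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Toy stand-in for chess: a pile of 7 stones, take 1-3, taking the last wins.
moves = lambda n: [m for m in (1, 2, 3) if m <= n]
apply_move = lambda n, m: n - m
evaluate = lambda n: -1 if n == 0 else 0    # side to move with no stones has lost
print(negamax(7, 10, moves, apply_move, evaluate))   # finds the winning move: (1, 3)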

The result is a system that gives the "appearance" of playing brilliant chess, beating brilliant human opponents, but in fact it is playing a very different sort of game. It's still John Henry vs. the steam hammer. (My own artificial intelligence research stemmed from this premise. In building the "Gameworld" system at RPI, we explicitly tried to build a computer that would mimic human story-telling behavior in its output while explicitly creating software strategies that were non-literary: machine tricks, really.) Even though Deep Blue gave Garry Kasparov a run for his money, Campbell here admits that the triumph still poses no challenge to human intelligence. "Does a machine need to be intelligent to play chess?" asks Campbell on the first page of his essay. His answer is evidently, "No." I guess we could say that three decades of AI research have led to the creation of intelligence, all right, but of a very alien sort.
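
Kuck's extrapolation, by the way, is back-of-the-envelope arithmetic: pick a starting capacity, double it every 18 months, and see how long it takes to close a given gap. A minimal sketch, with invented numbers rather than Kuck's figures:

# Illustrative only: the doubling arithmetic behind a Kuck-style projection.
# The starting capacity and the 1000x target are assumptions, not his numbers.

DOUBLING_PERIOD_MONTHS = 18          # the doubling period the review cites

def projected_capacity(start_capacity, years):
    """Capacity after `years`, doubling every DOUBLING_PERIOD_MONTHS."""
    doublings = (years * 12) / DOUBLING_PERIOD_MONTHS
    return start_capacity * 2 ** doublings

years = 0
while projected_capacity(1.0, years) < 1000:   # how long to close a 1000x gap?
    years += 1
print(f"About {years} years of 18-month doublings close a 1000x gap.")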

Similarly, Roger Schank in his essay shows how with a few simple tricks we can duplicate some of HAL's linguistic competence, but these would be just tricks. Admitting the extent to which he was inspired and challenged by the movie, he now in retrospect also admits to youthful over-confidence. "Thirty years of research on getting computers to process natural language," he writes, "has taught me what I did not know in 1968: that understanding natural language depends on a great deal more than simply understanding words" [172]. He, too, concludes that computers may be very good at giving the "illusion of intelligence," but in order to be really intelligent as we understand it humanly, computers have to understand what they are saying. For this to be so, Schank argues, HAL and any other imaginable computer "would need a complete model of the world." Schank's illustration is telling: he chooses the word "watching," which describes part of HAL's mission: he is watching over the crew. It seems like such a simple word, but think of what HAL needs to understand about "watching" and its responsibilities in order to really fulfill that duty intelligently!

Schank's analysis shows that any imaginable computer using a purely logical-rational approach could not possibly understand all the nuances of the word "watching" in anything approaching even what my children's baby-sitter already knows, implicitly, when she comes over on a Saturday night to watch my children. In short, HAL cannot possibly fulfill his duties as watchman, and he exposes that failure even as he states, overconfidently, that he "is the most reliable computer ever made." Indeed, this strikes to the very heart of the mythology of the computer that Clarke and Kubrick give us. It is the very inhuman certainty of infallibility that makes the computer so fallible, and virtually every other essay in this volume makes the same point in one fashion or another, implicitly or explicitly.
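
The gap Schank and the baby-sitter point to can be made painfully concrete. The toy program below (my own illustration, not anything from the book) "disambiguates" the word watch by keyword matching, which is roughly all a dictionary-plus-rules approach amounts to; it can label the baby-sitter's sense of the word, but it has no idea what the duty of watching children actually entails, and a sentence without a cue word leaves it mute.

# Illustrative sketch only: a shallow word-sense "disambiguator" for "watch".
# It matches cue words; it has no model of children, danger, or responsibility,
# so it can never know what the duty of watching actually involves.

SENSES = {
    "observe":   {"movie", "game", "screen", "stars"},
    "guard":     {"children", "kids", "baby", "house", "prisoner"},
    "timepiece": {"wrist", "strap", "battery"},
}

def sense_of_watch(sentence):
    words = set(sentence.lower().replace(".", "").replace(",", "").split())
    scores = {sense: len(words & cues) for sense, cues in SENSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(sense_of_watch("Please watch the children on Saturday night."))  # guard
print(sense_of_watch("We will watch the game tonight."))               # observe
print(sense_of_watch("HAL, watch over the crew."))                     # unknown

That blindness to what infallible watching would actually require is the point nearly every essay in the volume circles back to.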

For instance, Ravishankar K. Iyer's essay, "'Foolproof and Incapable of Error?' Reliable Computing and Fault Tolerance," the most technical essay in the lot, analyzes how it is that the shipboard computer that makes the boast of the title "runs amok" even while its twin down on Earth continues to function flawlessly. The answer lies in a version of the "Byzantine generals problem" - a notoriously difficult conundrum in which competing goals cannot be evaluated, so the computer is led to paralysis, breakdown, or deep error. Variations on this scenario, by the way, are a favorite of computer-human mythologies. We see it in the famous Star Trek episode about a space probe that returns to menace its makers, a scenario that in turn became the premise for the first Star Trek movie: the supercomputer is defeated by leading it to two competing and contrary conclusions. We can even trace it to the earlier clichés of the Robot on "Lost in Space," who often uttered "that does not compute" when faced with trying to evaluate human situations.
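
The engineering answer to unreliability, as Iyer's field frames it, is not a single infallible machine but redundancy with voting. A minimal sketch of that idea (mine, not from his essay): run the computation on independent units and take the majority answer. It masks one faulty unit, but it is no help in HAL's predicament, where every unit may be faithfully executing contradictory instructions.

# A minimal sketch of majority voting over redundant units (triple modular
# redundancy). It masks a single faulty unit; it says nothing about the harder
# case in which the specification itself is contradictory and every unit is
# "working" exactly as instructed.
from collections import Counter

def majority_vote(results):
    """Return the most common result and whether it had a strict majority."""
    winner, count = Counter(results).most_common(1)[0]
    return winner, count > len(results) / 2

# Three hypothetical units report on the antenna unit; one of them is faulty.
unit_outputs = ["AE-35 OK", "AE-35 OK", "AE-35 FAILURE IMMINENT"]
answer, reliable = majority_vote(unit_outputs)
print(answer, "(faulty unit masked)" if reliable else "(no reliable majority)")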

HAL reads lips. In fact, it is this talent, unknown to the crew, which plays a big role in the plot, as crew members conspire to dismantle HAL, forcing the computer to murder them in order to preserve himself and therefore, in his twisted logic, the mission. David Stork writes about his work in trying to get computers to recognize speech by lip-reading ("speechreading by sight and sound"). He concludes that the problem is insurmountable with current conceptions of computing, because "it is limited by the problems of representing semantics, common sense, and world knowledge."
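
The appeal of adding sight to sound is easy to show in caricature. The sketch below (my own, with made-up scores, nothing like Stork's actual system) fuses an acoustic guess with a lip-shape guess by simple weighting; the visual channel can rescue a noisy acoustic one, but semantics, common sense, and world knowledge appear nowhere in the table.

# Illustrative only: "late fusion" of audio and visual word scores, with
# invented numbers. Real audiovisual speech recognition is far more involved;
# this shows only why a second channel (lip shapes) can help a noisy one.

AUDIO_SCORES  = {"completely": 0.40, "complacently": 0.38, "complicity": 0.22}
VISUAL_SCORES = {"completely": 0.55, "complacently": 0.15, "complicity": 0.30}

def fuse(audio, visual, audio_weight=0.5):
    """Weighted combination of two per-word score tables."""
    return {w: audio_weight * audio[w] + (1 - audio_weight) * visual[w]
            for w in audio}

fused = fuse(AUDIO_SCORES, VISUAL_SCORES)
print(max(fused, key=fused.get))   # "completely": the lip shapes break the tie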

Doug Lenat, one of the leaders of the world-famous CYC AI research program, comes to a similar conclusion. "If you have the necessary common-sense knowledge" - such as being able to parse the sentence "deadly pastimes suggest adventurousness" and understand it implicitly even before it is uttered - then you can make inferences about which pastimes are deadly and which humans are adventurous. But "if you lack it, you can't solve the problem at all. Ever." [208]
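
Lenat's point has a simple skeleton: given the background facts, the inference is one trivial rule application; without them, nothing follows at all. A toy forward-chaining sketch (mine, not CYC's representation):

# Toy sketch of Lenat's asymmetry (not CYC's actual machinery): with the
# background facts present, "deadly pastimes suggest adventurousness" yields
# a new conclusion in one step; delete the facts and nothing can be inferred.

facts = {
    ("pastime", "free solo climbing"),
    ("deadly", "free solo climbing"),
    ("enjoys", "alex", "free solo climbing"),   # hypothetical person and hobby
}

def infer_adventurous(facts):
    """Rule: anyone who enjoys a deadly pastime is adventurous."""
    conclusions = set()
    for fact in facts:
        if fact[0] == "enjoys":
            _, person, activity = fact
            if ("pastime", activity) in facts and ("deadly", activity) in facts:
                conclusions.add(("adventurous", person))
    return conclusions

print(infer_adventurous(facts))   # {('adventurous', 'alex')}
print(infer_adventurous(set()))   # set(): without the knowledge, nothing follows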

HAL's eerie voice haunted my generation. He said the most horrible things in a most soothing baritone. Indeed, this sense of the sinister added to the feeling that HAL was insurmountably intelligent, inexorably superior to the crew. Can computers today speak naturalistically? Joe Olive explores the symmetrical problem of speech generation - the kind of synthesis used in reading machines for the blind - and comes to a similar conclusion. These systems do a pretty adequate job of uttering recognizable sounds after scanning individual words or short segments of text. But they are terrible at communicating subtleties of stress and intonation. Blind users have to learn to "read" the sounds in order to get any real use out of them.
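
A caricature of how such reading machines work (my own, with invented "pronunciations," not Olive's system): look up each word's stored pronunciation and glue the pieces together. Everything comes out pronounceable, and everything comes out flat, because sentence-level stress and intonation never enter the pipeline.

# Caricature of concatenative text-to-speech (not Olive's actual system):
# each word maps to a stored pronunciation, and the output is the pieces
# glued together. Every rendering of a word is identical regardless of its
# role in the sentence, so no prosodic contour is ever produced.

LEXICON = {   # hypothetical, hand-made "pronunciations"
    "open": "OW-P-AH-N", "the": "DH-AH", "pod": "P-AA-D",
    "bay": "B-EY", "doors": "D-AO-R-Z", "hal": "HH-AE-L",
}

def speak(sentence):
    words = sentence.lower().replace(",", "").replace(".", "").split()
    return " | ".join(LEXICON.get(w, "?") for w in words)

print(speak("Open the pod bay doors, HAL."))
print(speak("HAL, open the pod bay doors."))   # same flat units, merely reordered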

HAL can see with the acuity not of a camera but of a human, perhaps even a superhuman: he not only registers every detail in his multiple fields of vision throughout the spaceship, he understands what he sees, integrates the images, compares them, and draws conclusions from them, to the extent that he renders judgment on Dave's progress in making sketches of his crew members. Azriel Rosenfeld, in his essay "Eyes for Computers," concludes that HAL would be very proud to have many of the capabilities of today's vision recognition systems. But he glosses over the question of how much computers "understand" what they see.

A consistent theme emerges in these wonderfully intelligent and revealing essays, with the exception of the entries by Ray Kurzweil, who is in perpetual sales mode for a visionary and gleaming technofuture, and Azriel Rosenfeld (what does it mean that the latter's first name is contained anagrammatically in the former's name?). And this theme goes beyond the overly simplistic conclusion that computers don't have common sense. Rather, lurking in these generous self-critiques and assessments of the field lies a deeper challenge to the entire epistemology of rationalism itself. The computer is not just a wonderful tool and communications device: it is the ongoing test case of the entire Western project of rationalism, and the field of battle is the human mind itself, dressed up these days in the word "cognition." Can the human mind and its various intelligences and faculties be modeled - indeed embodied - in a purely logical machine, operating according to reductively mechanistic principles? Thirty years after 2001 and billions, perhaps trillions, of research and development dollars later, these authors, luminaries in the field, seem to be saying no, though they dare not even whisper the heresy, since they remain supplicants and priests in the Church of Reason, dependent on it for livelihood and sinecures. Stork captures the statement of faith, the metaphysical bedrock, of this postmodern fundamentalism: "Nothing in principle prevents us from creating artificial intelligence."

Well, if reason doesn't work, what then? Contained in the multiple failures of AI research and its various culs-de-sac, even as expressed by its most successful proponents, lies a hint of a wholly different view of things. There it is, in the unparsability of a single word absent its ecological context. There it is, lurking in the brute dumbness of Deep Blue as it sorts through a million possible moves without having a clue about what it's doing, unable to make up a single "trap" designed to snare a particular opponent in a particular context. There it is, in the inability of the computer to give proper inflection to a word because it cannot relate the word to its context in order to "get" (and then communicate) its meaning. There it is in the ongoing shift of AI in pursuit of Minsky's dream - an interview with him is included here, since he advised on the making of 2001 and is one of the fathers of current AI - of a "general intelligence computer."

What is this general intelligence? Why is it so elusive? Call it Porush's Limit: all AI is limited by the inability of a merely logical system to parse meaning from simple atoms of the total system. Only natural intelligence sees and understands the system and its individual parts simultaneously, in dynamic looping feedback among them. Look at the curve of a lip, see the face, understand the intention, "read" the curve of the lip. That's intelligence, and no artificial system founded on reason can get there from bits and on-off switches and symbolic logic. Meaning, in short, emerges from the self-contextualizing properties of the system, and only the natural mind is equipped to appreciate this show. In fact, that's a pretty good definition of intelligence-in-the-world: a kind of ongoing dynamic becoming, navigating a space of multiple voices, messages, meanings, and ambiguities and somehow, through a complex feedback looping between brain and world, bringing meaning into pulsing, stelliferous manifestation all in the soaring dome around us, to paraphrase Pynchon - or to suggest a kind of existential websurfing.

This admittedly fanciful description may yet also be the most evocative and accurate reflection of the processes of the brain (not the mind, but the more-than-mechanical organic brain) itself, which are neither simple nor linear, but self-contextualizing always already. That's the element these AI experts always seem to be groping for in their critiques of AI's shortfall, and the element missing from the computer systems they build, however successful: it's a certain style, a style of knowing, of meaning-making. OK, now you can kick me out of church.

This mystical "general intelligence" to which Minsky appeals is so elusive because logical analysis and human intelligence - and maybe the larger reality from which the brain itself is projected and from which human intelligence arises - have so little to do with each other. In all postmodern fields - physics, genetics, neurophysiology, mathematics, and computing among them - some limit is being or has already been approached; call it the limit of meaning. In physics, it is Heisenberg's Uncertainty Principle. In mathematics, Gödel's Incompleteness Theorem. In genetics, it is the inability to explain morphology and cell specialization - the turning on and off of certain genes in certain regions of the body - from within DNA itself. (The amount of information needed to determine cell fate exceeds the amount of information contained within the organism's genome.) In neurophysiology, it is apparent that the central nervous system is much more than a series of on-off switches, and that neurons float in a frothing, chaotic, complex bath of hormones and neurotransmitters like 5-hydroxytryptamine (serotonin), so that nerve pathways evolve and the brain acts much more holistically. And so on through the sciences.

These limits are the limits of pure logic and analysis themselves. On the other side of the wall lies the new onto-epistemology of emergence, in which meaning inheres in the total system and emerges autopoietically, so that any single phoneme can only make sense in the context of the word, sentence, paragraph, story, world...ahh, so that's how I shall intone that phoneme, hit it hard, "how NICE." So that any single gene only makes sense in the context of the emergent, growing organism, feeding back information into the expression of the genes in the intimacy of the cell...ahh, so that's how I shall let this cell become a nerve in the dorsal raphe nucleus, ready to receive intimations from the mystical beyond. So that the firing of any single nerve carries no information in itself but can have a cascading butterfly effect on the meaning of the moment, and then, perhaps, on thought itself...ahh, so now it's my turn to pop off, says the nerve speaking to the brain-mind which conducts this concert, this consort. OK, you get the message. So that any electron only obtains its position after some mind has observed it, absurd as it sounds, implying either multiple universes splitting off every time an electron obtains its position or an unnameable and nameless godmind inhering in this, our single common universe, sustaining it by "watching" it, much as HAL aspired - but failed - to do on board his fateful mission: tirelessly, ubiquitously, and infallibly, down to every last electron-about-to-become.

One last anticlimactic note: the film that helped inspire the work of AI and these essays about it is an evocative and often beautiful meditation on the marriage of humans to their tools, that complex relationship. HAL is the closest to us in a long line of tools that began with the bone weapon and evolves eventually into the magical monolith of some presumably superior, though not necessarily benevolent, far-flung race. That meditation is filled not with happy thoughts but with disruption, hallucination, excess, apocalyptic transformations, and violence, even beneath the calm veneer of the near future portrayed here. At the interface between human and tool, in that positive feedback loop between hard tech and soft body-mind, lies danger and violence for vulnerable flesh. This is no less true for the human-seeming computer than it was for the first bludgeon, as soothing and familiar as HAL's voice is. There's violence in the suppressed hyphen between "inter" and "face."

In this context it seems funny, then, that HAL should be thought of as an inspiration, much as other dark techno-dystopias became the inspiration for new breakthroughs, like Gibson's Neuromancer and cyberspace. Something is getting lost in the translation; let's call it the forest for the trees. Kubrick and Clarke, despite the pains they took to "get things right" in their technological predictions - a form of celebration - are using materially convincing portraits of technology to sound a warning, not least about conceding victory to our tools prematurely, much as Frank concedes the chess victory to HAL prematurely. In their own way, the authors of these essays are saying the same thing ex cathedra.

David Porush (porusd@rpi.edu) is a professor in the School of Humanities and Social Sciences at Rensselaer Polytechnic Institute. He is the author of The Soft Machine: Cybernetic Fiction and numerous essays about cyberculture. He is working on a book about the origin of the alphabet and virtual reality, Telepathy.

Copyright © 1997 by David Porush. All Rights Reserved.


Contents Archive Sponsors Studies Contact

CMC
Magazine

May 1997 http://www.december.com/cmc/mag/1997/may/porush.html


How Now, HAL?

Book Review: HAL's Legacy: 2001 as Dream and Reality
Edited by David G. Stork
Foreward by Arthur C. Clarke
MIT Press, 1997
ISBN 0-262-19378-7
352 pages.
$22.50 USD (cloth)

Reviewed by David Porush

Science and the arts have always been locked in a robust, fertile feedback loop with each other. One of the surest genres to encourage this embrace has been science fiction film and literature. Science fiction film tends to lag behind fiction in its vision and relevance to real science, but every once in awhile a film comes along that expresses and visualizes the future or should we say the relationship between a culture and its idea of its own future - in a way that fiction could not. "Blade Runner" (1983) was one;"2001: A Space Odyssey" was another. Just as I remember the intricate and sensuous decay depicted in Ridley Scott's cinema more vividly than any literary evocation, so I remember the deep silences Kubrick deploys in a way forbidden in fiction, which always babbles from the page, even when it invokes silence.

It wasn't until the 1980s that science fiction began to be generally accepted in some university curricula, and even then only marginally. Disparaged as pulp or pop (in fiction) or grade B in cinema, sf was thought unworthy for study. But as often happens, the mood and interests of the culture at large overtook the academy, and the line between belle lettres and sf was blurred by such "canonical" pomo authors of the 1960s and 70s as Thomas Pynchon, Italo Calvino, John Barth, Donald Barthelme, Kurt Vonnegut, Jr., Joseph McElroy, Don Delillo, Jorge Luis Borges, Samuel Beckett and others in response to an increasingly technologized/scientized reality. What we learned from them was that science and technology drive our culture's dreaming as surely, or maybe more surely, than literary accretions. (Within a decade, these authors and others inspired a generation of scholars to found the Society for Literature and Science, in acknowledgment to the fact that scientific perspectives could shed more light on some literature than infra-disciplinary ones, and that in turn, literature had traditionally formulated potent critiques of - and resistance to - the hegemony of science and technology).

We find in the work of those literary authors a common concern with cybernetics and the growingly intimate, intricate, and mythologically dramatic relationship between humans and computers. Long before the universal prefix "cyber" became a nauseating cliché, these authors were telling us that we lived in a Cybernetic (not a mere Information) Age. "Cybernetics" signals a living, dynamic system or web of relations, whereas mere inert "information" leads only to a vista of endless ones and zeroes, a dispiriting datopia. Today, in the era of the Web, we think of ourselves as a cyberculture.

1968 was at the cusp of these literary events, so it is no wonder that Stanley Kubrick, one of the most visionary and experimental big movie directors, and Arthur C. Clarke, known for his sf realism (his 1948 suggestion that the geo-stationary ring around the Earth could be used for communication satellites led to a multi-trillion dollar industry that is just burgeoning; that belt is now known as the Clarke Belt) teamed up that year to express on film a weird mixture of mundane technicalism and mythological realism.

HAL, the superintelligent computer turned murderer in the movie "2001" would have been born on January 12, 1997. On the occasion of his birthday, MIT Press has issued an extraordinary collection of essays. Disguised as a coffee table book in praise of Stanley Kubrick's and Arthur C. Clarke's vision of the future of computers in their 1968 movie, in reality, HAL'S Legacy is a potent critique of AI - sometimes unwittingly so - from some of its major proponents. The volume is also a good bit of fun as it explores arcane cinematic details and allusions with the kind of precision only supernerds like these authors could, armed with highly technical knowledge. (My favorite is that in the chess game between HAL and astronaut Frank Poole (played by Gary Lockwood), Poole concedes the match to HAL even though HAL has made a blunder in his assessment of his own position, one of the first signs that HAL has a deep flaw in his programming.

As David Stork, editor of the book and an expert on computer speech, tells us, the explicit purposes of the volume are to analyze, discuss, and reflect on the original prophesies about intelligent computers made in 2001 and measure them against the technical progress made since then. The testimonies by the scientist authors of the essays in this volume close the feedback loop between science and literary culture, since it admits that just as fiction often reflects science, so do science and technology sometimes strive to make tangible the dreams in fiction. To put it more simply, Stork writes, the volume asks the question "Why don't we have AI?" even though HAL reflected and helped impel a growing and confident movement, thirty years old now, that a full blooded artificial intelligence was achievable by the end of this millennium. Just how fatigued these hopes are the authors of these essays themselves expose by measuring the reality of artificial intelligence projects today against the dream/nightmare of HAL.

As this volume shows, very specific advancements in AI research anticipated, paralleled, and in some cases were originally inspired by the movie itself. In one essay, David Kuck predicts, plotting along the trajectory of Moore's law (that the number of transistors fit in a given space - memory - doubles every 18 months) that within the next few years we are likely to build a machine with the raw computational power of HAL. In a very intelligent essay that combines technical material about computer chess strategy with a brief history of the relationship between computers and chess and a reflection on the nature of human vs. computer intelligence, Murray Campbell, one of the chief architects of Deep Blue, concludes that computers simply do not - and probably cannot -- "play chess the way humans do" [79], and that this difference is crucial. Chess machines accomplish their superiority by brute computational force, sorting through millions of bits of data and hundred of possible moves very quickly; human grand masters, by contrast, play "'trappy' chess." Trappy chess means the player is aware of his or her opponent's strengths and weaknesses and lays traps to use them against the opponent. This in turn, requires that the player be able to understand itself, to have self-consciousness. But as Campbell points out, even Deep Blue "is unable to appreciate its own moves."

The result is a system that gives the "appearance" of playing brilliant chess, beating brilliant human opponents, but in fact is playing a very different sort of game. It's still John Henry vs. the steam hammer. (My own artificial intelligence research stemmed from this premise. In building the "Gameworld" system at RPI, we explicitly tried to build a computer that would mimic human story-telling behavior in its output while explicitly creating software strategies that were non- literary: machine tricks, really.) Even though Deep Blue gave Gary Kasparov a run for his money, Campbell here admits that the triumph still poses no challenge to human intelligence. "Does a machine need to be intelligent to play chess"? asks Campbell on the first page of his essay. His answer is evidently, "No." I guess we could say that three decades of AI research has led to the creation of intelligence, alright, but of a very alien sort.

Similarly, Roger Schank in his essay shows how with a few simple tricks we can duplicate some of HAL's linguistic competence, but these would just be tricks. Admitting the extent to which he was inspired and challenged by the movie, he now in retrospect also admits to youthful over-confidence. "Thirty years of research on getting computers to process natural language," he writes, has taught me what I did not know in 1968: that understanding natural language depends a great deal more than simply understanding words" [172]. He, too , concludes that computers may be very good at giving the "illusion of intelligence," but in order to really be intelligent as we understand it humanly, computers have to understand what they are saying. In order for this to be so, Schank argues, HAL and any other imaginable computer, "would need a complete model of the world." Schank's illustration is telling: he chooses the word "watching" that describes part of HAL's mission: he is watching over them. It seems like such a simple word, but think of what HAL needs to understand about "watching" and its responsibilities in order to really fulfill that duty intelligently!

Schank's analysis shows that any imaginable computer using a simply logical-rational approach could not possibly understand all the nuances of the word watching in anything approaching even what my children's baby-sitter already knows, implicitly, when she comes over on a Saturday night to watch my children. In short, HAL cannot possibly fulfill his duties as watchman and exposes that even as he states, over confidently, that he "is the most reliable computer ever made." Indeed, this strikes to the very heart of the mythology of the computer that Clarke and Kubrick give us. It is the very inhuman certainty of infallibility that makes the computer so fallible, and virtually every other essay in this volume makes the same point in one fashion or another, implicitly or explicitly.

For instance, Ravishankar K. Iyer's essay, "'Foolproof and Incapable of Error?' Reliable Computing and Fault Tolerance," the most technical essay in the lot, analyzes how it is that the spaceboard computer who makes the boast of the title "runs amok" even while its twin down on earth continues to function flawlessly. The answer lies in the "Byzantine general's decision" - a notoriously difficult conundrum in which competing goals cannot be evaluated, so the computer is led to paralysis, breakdown or deep error. Variations on this scenario, by the way, are a favorite of computer-human mythologies. We see it in the famous Star Trek episode regarding the Return of Voyager, which in turn became the premise for the first Star Trek movie: the super computer is defeated by leading it to two competing and contrary conclusions. We can even trace it to the earlier cliches of Robby the Robot on "Lost in Space" who often uttered "that does not compute" when faced with trying to evaluate human situations.

HAL reads lips. In fact, it is this talent, unknown to the crew, which plays a big role in the plot, as crew members conspire to dismantle HAL, forcing the computer to murder them in order to preserve himself and therefore, in his twisted logic, the mission. David Stork writes about his work in trying to get computers to recognize speech by lip-reading ("speechreading by sight and sound"). He concludes that the problem is insurmountable with current conceptions of computing, because "it is limited by the problems of representing semantics, common sense, and world knowledge."

Doug Lenat, one of the leaders of the world-famous CYC AI research program, comes to a similar conclusion. "If you have the necessary common-sense knowledge" such as being able to parse the sentence 'deadly pastimes suggest adventurousness' and understand it implicitly even before it is uttered, then you can make inferences about which pastimes are deadly and which humans are adventurous. But "if you lack it, you can't solve the problem at all. Ever." [208]

HAL's eerie voice haunted my generation. He said the most horrible things in a most soothing baritone. Indeed, this sense of the sinister added to the feeling that HAL was insurmountably intelligent, inexorably superior to the crew. Can computers today speak naturalistically? The symmetrical problem of speech generation - such as those used in reading machines for the blind - explored by Joe Olive, comes to a similar conclusion. These AI systems do a pretty adequate job of uttering recognizable sounds after scanning individual words or short segments of text. But they are terrible at communicating subtleties of stress, intonation. Blind users have to learn to "read" the sounds in order to get any real use out of them.

HAL can see with the acuity not of a camera but of a human, perhaps even a superhuman: he not only registers every detail in his multiple fields of vision throughout the spaceship, he understands what he sees, integrates the images, compares them, and draws conclusions from them, to the extent that he renders judgment on Dave's progress in making sketches of his crew members. Azriel Roseberg, in his essay "Eyes for Computers," concludes that HAL would be very proud to have many of the capabilities of today's vision recognition systems. But he glosses the question of how much computers "understand" what they see.

A consistent theme emerges in these wonderfully intelligent and revealing essays, with the exceptions of those entries by Ray Kurzweil, who is on perpetual sales mode for a visionary and gleaming technofuture, and Azriel Rosenberg (what does it mean that the latter's first name is contained anagrammatically in the former's name?). And this theme goes beyond the overly simplistic resolution that computers don't have common sense. Rather, lurking in these generous self-critiques and assessments of the field lies a deeper challenge to the entire epistemology of rationalism itself. The computer is not just a wonderful tool and communications device: it is the ongoing test case of the entire Western project of rationalism, and the field of battle is for the human mind itself, dressed up these days in the word "cognition." Can the human mind and its various intelligences and facilities be modeled - indeed embodied - in a simply logical machine, operating according to reductively mechanistic principles? Thirty years after 2001 and billions, perhaps trillions of research and design dollars later, these authors, luminaries in the field, seem to be saying no, though they dare not even whisper the heresy, since they remain supplicants and priests in the Church of Reason, dependent on it for livelihood and sinecures. Stork captures the statement of faith, the metaphysical bedrock, of this postmodern fundamentalism: "Nothing in principle prevents us from creating artificial intelligence."

Well, if reason doesn't work, what then? Contained in the multiple failures of AI reset and its various cul de sacs, even as expressed by its most successful proponents, lies a hint of a wholly different view of things. There it is, in the unparsability of a single word absent its ecological context. There it is, lurking in the brute dumbness of Deep Blue as it sorts through a million possible moves without having a clue about what it's doing, unable to make up a single "trap" designed to snare a particular opponent in a particular context. There it is, in the inability of the computer to give proper inflection to the word because it cannot relate the word to its context in order to "get" (and then communicate) its meaning. There it is in the ongoing shift of AI in pursuit of Minsky's dream - an interview is included herein since he worked on the set of 2001 and is one of the fathers of current AI - of a "general intelligence computer."

What is this general intelligence? Why is it so elusive? Call it Porush's Limit: All AI is limited by the inability of a merely logical system to parse meaning from simple atoms of the total system. Only natural intelligence sees and understands the system and its individual parts simultaneously, in dynamic looping feedback among them. Look at the curve of lip, see at the face, understand the intention, "read" the curve of the lip. That's intelligence, and no artificial system founded on reason can get there from bits and on off switches and symbolic logic. Meaning, in short, emerges, from the self-contextualizing properties of system, and only the natural mind is equipped to appreciate this show. In fact, that's a pretty good definition of intelligence-in-the-world, a kind of ongoing dynamic becoming, navigating a space of multiple voices, messages, meanings, ambiguities and somehow, through a complex feedback looping between brain and world, bringing meaning into pulsing, stelliferous manifestation all in the soaring dome around us, to paraphrase Pynchon - or to suggest a kind of existential websurfing.

This admittedly fanciful description may yet also be the most evocative and accurate reflection of the processes of the brain (not the mind, but the more-than-mechanical organic brain) itself, which are neither simple nor linear, but self-contextualizing always already. That's the element which these AI experts always seem to be groping for in their critiques of AI's shortfall, and from the computer systems they build, however successful: it's a certain style, a style of knowing, of meaning-making. OK, now you can kick me out of church.

This mystical "general intelligence" to which Minsky appeals is so elusive because logical analysis and human intelligence -- and maybe the larger reality from which the brain itself is projected and from which human intelligence arises -- have so little to do with each other. In all postmodern fields -- physics, genetics, neurophysiology, mathematics and computing among them -- some limit is being or has already been approached, call it the limit of meaning. In physics, it is Heisenberg's Uncertainty Principle. In mathematics, Gödel's Incompleteness Theorem. In genetics, it is the inability to explain morphology and cell specialization -- the turning on and off of certain genes in certain regions of the bodies - from within DNA itself. (The amount of information needed to determine cell determination exceeds the amount of information contained within the organism's genome). In neurophysiology, it is apparent that the central nervous system is much more than a series of on-off switches, and that neurons float in a frothing, chaotic, complex bath of hormones - neurotransmitters like 5-hydroxytriptamine (serotonin) - so that nerve pathways evolve and the brain acts much more holistically. And so on through the sciences.

These limits are the limits of pure logic and analysis themselves. On the other side of the wall lies the new onto-epistemology of emergence, in which meaning inheres in the total system and emerges autopoetically, so that any single phoneme can only make sense in the context of the word, sentence, paragraph, story, world...ahh, so that's how I shall intone that phoneme, hit it hard, "how NICE." So that any single gene only makes sense in the context of the emergent, growing organism, feeding back information into the expression of the genes in the intimacy of the cell... ahh, so that's how I shall let this cell become a nerve in the dorsal raphe nucleus, ready to receive intimations from the mystical beyond. So that the firing of any single nerve carries no information in itself but can have a cascading butterfly effect on the meaning of the moment, and then, perhaps, thought itself...ahh, so now it's my turn to pop off, says the nerve speaking to the brain-mind which conducts this concert, this consort. OK, you get the message. So that any electron only obtains its position after some mind has observed it, absurd as it sounds, implying either multiple universes splitting off every time an electron obtains its position or an unnameable and nameless godmind inhering in this, our single common universe, sustaining it by "watching" it, much as HAL aspired - - but failed -- to on board his fateful mission: tirelessly, ubiquitously, and infallibly, down to every last electron-about-to-become.

One last anticlimactic note: the film which helped inspire the work of AI and these essays about them is an evocative and often beautiful meditation on the marriage of humans to their tools, that complex relationship. HAL is the closest to us in a long line of tools that began with the bone weapon and evolves eventually to the magical monolith of some presumably superior, though not necessarily benevolent, far-flung race. That meditation is filled not with happy thoughts but disruption, hallucination, excess, apocalyptic transformations, and violence, even beneath the calm veneer of the near future portrayed here. At the interface between human and tool, in that positive feedback loop between hard tech and soft body-mind, lies danger and violence for vulnerable flesh. This is no less true for the human-seeming computer than it was for the first bludgeon, as soothing and familiar as HAL's voice is. There's violence in the suppressed hyphen between inter and face.

In this context it seems funny, then, that HAL should be thought of as an inspiration, much as other dark techno-dystopias became the inspiration for new breakthroughs, like Gibson's Neuromancer and cyberspace. Something is getting lost in the translation, let's call it the forest for the trees. Kubrick and Clarke, despite the pains they took to "get things right" in their technological prediction, a form of celebration - are using the materially convincing portraits of technology to sound a warning, not least about conceding victory to our tools prematurely much as Frank concedes chess victory to HAL prematurely. In their own way, these authors are saying the same thing ex cathedra.

David Porush (porusd@rpi.edu) is a professor in the School of Humanities and Social Sciences at Rensselaer Polytechnic Institute. He is the author of The Soft Machine: Cybernetic Fiction and numerous essays about cyberculture. He is working on a book about the origin of the alphabet and virtual reality, Telepathy.

Copyright © 1997 by David Porush. All Rights Reserved.


Contents Archive Sponsors Studies Contact

CMC
Magazine

May 1997 http://www.december.com/cmc/mag/1997/may/porush.html


How Now, HAL?

Book Review: HAL's Legacy: 2001 as Dream and Reality
Edited by David G. Stork
Foreward by Arthur C. Clarke
MIT Press, 1997
ISBN 0-262-19378-7
352 pages.
$22.50 USD (cloth)

Reviewed by David Porush

Science and the arts have always been locked in a robust, fertile feedback loop with each other. One of the surest genres to encourage this embrace has been science fiction film and literature. Science fiction film tends to lag behind fiction in its vision and relevance to real science, but every once in awhile a film comes along that expresses and visualizes the future or should we say the relationship between a culture and its idea of its own future - in a way that fiction could not. "Blade Runner" (1983) was one;"2001: A Space Odyssey" was another. Just as I remember the intricate and sensuous decay depicted in Ridley Scott's cinema more vividly than any literary evocation, so I remember the deep silences Kubrick deploys in a way forbidden in fiction, which always babbles from the page, even when it invokes silence.

It wasn't until the 1980s that science fiction began to be generally accepted in some university curricula, and even then only marginally. Disparaged as pulp or pop (in fiction) or grade B in cinema, sf was thought unworthy for study. But as often happens, the mood and interests of the culture at large overtook the academy, and the line between belle lettres and sf was blurred by such "canonical" pomo authors of the 1960s and 70s as Thomas Pynchon, Italo Calvino, John Barth, Donald Barthelme, Kurt Vonnegut, Jr., Joseph McElroy, Don Delillo, Jorge Luis Borges, Samuel Beckett and others in response to an increasingly technologized/scientized reality. What we learned from them was that science and technology drive our culture's dreaming as surely, or maybe more surely, than literary accretions. (Within a decade, these authors and others inspired a generation of scholars to found the Society for Literature and Science, in acknowledgment to the fact that scientific perspectives could shed more light on some literature than infra-disciplinary ones, and that in turn, literature had traditionally formulated potent critiques of - and resistance to - the hegemony of science and technology).

We find in the work of those literary authors a common concern with cybernetics and the growingly intimate, intricate, and mythologically dramatic relationship between humans and computers. Long before the universal prefix "cyber" became a nauseating cliché, these authors were telling us that we lived in a Cybernetic (not a mere Information) Age. "Cybernetics" signals a living, dynamic system or web of relations, whereas mere inert "information" leads only to a vista of endless ones and zeroes, a dispiriting datopia. Today, in the era of the Web, we think of ourselves as a cyberculture.

1968 was at the cusp of these literary events, so it is no wonder that Stanley Kubrick, one of the most visionary and experimental big movie directors, and Arthur C. Clarke, known for his sf realism (his 1948 suggestion that the geo-stationary ring around the Earth could be used for communication satellites led to a multi-trillion dollar industry that is just burgeoning; that belt is now known as the Clarke Belt) teamed up that year to express on film a weird mixture of mundane technicalism and mythological realism.

HAL, the superintelligent computer turned murderer in the movie "2001" would have been born on January 12, 1997. On the occasion of his birthday, MIT Press has issued an extraordinary collection of essays. Disguised as a coffee table book in praise of Stanley Kubrick's and Arthur C. Clarke's vision of the future of computers in their 1968 movie, in reality, HAL'S Legacy is a potent critique of AI - sometimes unwittingly so - from some of its major proponents. The volume is also a good bit of fun as it explores arcane cinematic details and allusions with the kind of precision only supernerds like these authors could, armed with highly technical knowledge. (My favorite is that in the chess game between HAL and astronaut Frank Poole (played by Gary Lockwood), Poole concedes the match to HAL even though HAL has made a blunder in his assessment of his own position, one of the first signs that HAL has a deep flaw in his programming.

As David Stork, editor of the book and an expert on computer speech, tells us, the explicit purposes of the volume are to analyze, discuss, and reflect on the original prophesies about intelligent computers made in 2001 and measure them against the technical progress made since then. The testimonies by the scientist authors of the essays in this volume close the feedback loop between science and literary culture, since it admits that just as fiction often reflects science, so do science and technology sometimes strive to make tangible the dreams in fiction. To put it more simply, Stork writes, the volume asks the question "Why don't we have AI?" even though HAL reflected and helped impel a growing and confident movement, thirty years old now, that a full blooded artificial intelligence was achievable by the end of this millennium. Just how fatigued these hopes are the authors of these essays themselves expose by measuring the reality of artificial intelligence projects today against the dream/nightmare of HAL.

As this volume shows, very specific advancements in AI research anticipated, paralleled, and in some cases were originally inspired by the movie itself. In one essay, David Kuck predicts, plotting along the trajectory of Moore's law (that the number of transistors fit in a given space - memory - doubles every 18 months) that within the next few years we are likely to build a machine with the raw computational power of HAL. In a very intelligent essay that combines technical material about computer chess strategy with a brief history of the relationship between computers and chess and a reflection on the nature of human vs. computer intelligence, Murray Campbell, one of the chief architects of Deep Blue, concludes that computers simply do not - and probably cannot -- "play chess the way humans do" [79], and that this difference is crucial. Chess machines accomplish their superiority by brute computational force, sorting through millions of bits of data and hundred of possible moves very quickly; human grand masters, by contrast, play "'trappy' chess." Trappy chess means the player is aware of his or her opponent's strengths and weaknesses and lays traps to use them against the opponent. This in turn, requires that the player be able to understand itself, to have self-consciousness. But as Campbell points out, even Deep Blue "is unable to appreciate its own moves."

The result is a system that gives the "appearance" of playing brilliant chess, beating brilliant human opponents, but in fact is playing a very different sort of game. It's still John Henry vs. the steam hammer. (My own artificial intelligence research stemmed from this premise. In building the "Gameworld" system at RPI, we explicitly tried to build a computer that would mimic human story-telling behavior in its output while explicitly creating software strategies that were non- literary: machine tricks, really.) Even though Deep Blue gave Gary Kasparov a run for his money, Campbell here admits that the triumph still poses no challenge to human intelligence. "Does a machine need to be intelligent to play chess"? asks Campbell on the first page of his essay. His answer is evidently, "No." I guess we could say that three decades of AI research has led to the creation of intelligence, alright, but of a very alien sort.

Similarly, Roger Schank in his essay shows how with a few simple tricks we can duplicate some of HAL's linguistic competence, but these would just be tricks. Admitting the extent to which he was inspired and challenged by the movie, he now in retrospect also admits to youthful over-confidence. "Thirty years of research on getting computers to process natural language," he writes, has taught me what I did not know in 1968: that understanding natural language depends a great deal more than simply understanding words" [172]. He, too , concludes that computers may be very good at giving the "illusion of intelligence," but in order to really be intelligent as we understand it humanly, computers have to understand what they are saying. In order for this to be so, Schank argues, HAL and any other imaginable computer, "would need a complete model of the world." Schank's illustration is telling: he chooses the word "watching" that describes part of HAL's mission: he is watching over them. It seems like such a simple word, but think of what HAL needs to understand about "watching" and its responsibilities in order to really fulfill that duty intelligently!

Schank's analysis shows that any imaginable computer using a simply logical-rational approach could not possibly understand all the nuances of the word watching in anything approaching even what my children's baby-sitter already knows, implicitly, when she comes over on a Saturday night to watch my children. In short, HAL cannot possibly fulfill his duties as watchman and exposes that even as he states, over confidently, that he "is the most reliable computer ever made." Indeed, this strikes to the very heart of the mythology of the computer that Clarke and Kubrick give us. It is the very inhuman certainty of infallibility that makes the computer so fallible, and virtually every other essay in this volume makes the same point in one fashion or another, implicitly or explicitly.

For instance, Ravishankar K. Iyer's essay, "'Foolproof and Incapable of Error?' Reliable Computing and Fault Tolerance," the most technical essay in the lot, analyzes how it is that the spaceboard computer who makes the boast of the title "runs amok" even while its twin down on earth continues to function flawlessly. The answer lies in the "Byzantine general's decision" - a notoriously difficult conundrum in which competing goals cannot be evaluated, so the computer is led to paralysis, breakdown or deep error. Variations on this scenario, by the way, are a favorite of computer-human mythologies. We see it in the famous Star Trek episode regarding the Return of Voyager, which in turn became the premise for the first Star Trek movie: the super computer is defeated by leading it to two competing and contrary conclusions. We can even trace it to the earlier cliches of Robby the Robot on "Lost in Space" who often uttered "that does not compute" when faced with trying to evaluate human situations.

HAL reads lips. In fact, it is this talent, unknown to the crew, which plays a big role in the plot, as crew members conspire to dismantle HAL, forcing the computer to murder them in order to preserve himself and therefore, in his twisted logic, the mission. David Stork writes about his work in trying to get computers to recognize speech by lip-reading ("speechreading by sight and sound"). He concludes that the problem is insurmountable with current conceptions of computing, because "it is limited by the problems of representing semantics, common sense, and world knowledge."

Doug Lenat, one of the leaders of the world-famous CYC AI research program, comes to a similar conclusion. "If you have the necessary common-sense knowledge" such as being able to parse the sentence 'deadly pastimes suggest adventurousness' and understand it implicitly even before it is uttered, then you can make inferences about which pastimes are deadly and which humans are adventurous. But "if you lack it, you can't solve the problem at all. Ever." [208]

HAL's eerie voice haunted my generation. He said the most horrible things in a most soothing baritone. Indeed, this sense of the sinister added to the feeling that HAL was insurmountably intelligent, inexorably superior to the crew. Can computers today speak naturalistically? The symmetrical problem of speech generation - such as those used in reading machines for the blind - explored by Joe Olive, comes to a similar conclusion. These AI systems do a pretty adequate job of uttering recognizable sounds after scanning individual words or short segments of text. But they are terrible at communicating subtleties of stress, intonation. Blind users have to learn to "read" the sounds in order to get any real use out of them.

HAL can see with the acuity not of a camera but of a human, perhaps even a superhuman: he not only registers every detail in his multiple fields of vision throughout the spaceship, he understands what he sees, integrates the images, compares them, and draws conclusions from them, to the extent that he renders judgment on Dave's progress in making sketches of his crew members. Azriel Roseberg, in his essay "Eyes for Computers," concludes that HAL would be very proud to have many of the capabilities of today's vision recognition systems. But he glosses the question of how much computers "understand" what they see.

A consistent theme emerges in these wonderfully intelligent and revealing essays, with the exceptions of the entries by Ray Kurzweil, who is in perpetual sales mode for a visionary and gleaming technofuture, and Azriel Rosenfeld (what does it mean that the latter's first name is contained anagrammatically in the former's name?). And this theme goes beyond the overly simplistic conclusion that computers don't have common sense. Rather, lurking in these generous self-critiques and assessments of the field lies a deeper challenge to the entire epistemology of rationalism itself. The computer is not just a wonderful tool and communications device: it is the ongoing test case of the entire Western project of rationalism, and the field of battle is the human mind itself, dressed up these days in the word "cognition." Can the human mind and its various intelligences and faculties be modeled - indeed embodied - in a simply logical machine, operating according to reductively mechanistic principles? Thirty years after 2001 and billions, perhaps trillions, of research and development dollars later, these authors, luminaries in the field, seem to be saying no, though they dare not even whisper the heresy, since they remain supplicants and priests in the Church of Reason, dependent on it for livelihood and sinecures. Stork captures the statement of faith, the metaphysical bedrock, of this postmodern fundamentalism: "Nothing in principle prevents us from creating artificial intelligence."

Well, if reason doesn't work, what then? Contained in the multiple failures of AI research and its various culs-de-sac, even as expressed by its most successful proponents, lies a hint of a wholly different view of things. There it is, in the unparsability of a single word absent its ecological context. There it is, lurking in the brute dumbness of Deep Blue as it sorts through a million possible moves without having a clue about what it's doing, unable to set a single "trap" designed to snare a particular opponent in a particular context. There it is, in the inability of the computer to give proper inflection to a word because it cannot relate the word to its context in order to "get" (and then communicate) its meaning. There it is, in the ongoing shift of AI in pursuit of Minsky's dream - an interview with him is included here, since he advised Kubrick during the making of 2001 and is one of the fathers of current AI - of a "general intelligence computer."
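
To appreciate just how blind that sorting is, here is a minimal sketch of exhaustive game-tree search - mine, over a trivial take-away game, and nothing like Deep Blue's actual engine - in which the program plays perfectly by grinding through every line while representing no plan, no opponent, and no "trap" at all.

# A toy sketch, nothing like Deep Blue's engine: exhaustive search over a
# take-away game. Each player removes 1-3 stones; whoever takes the last stone
# wins. The program plays perfectly without anything resembling understanding.

from functools import lru_cache

@lru_cache(maxsize=None)
def value(stones):
    # +1 if the player to move can force a win, -1 if not.
    if stones == 0:
        return -1  # the previous player took the last stone; the mover has lost
    return max(-value(stones - take) for take in (1, 2, 3) if take <= stones)

def best_move(stones):
    # Pick the move with the best exhaustively computed score - brute force,
    # with no notion of why the move is good.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: -value(stones - take))

print(best_move(10))  # 2: it leaves a multiple of four, though the search never "knows" that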

What is this general intelligence? Why is it so elusive? Call it Porush's Limit: All AI is limited by the inability of a merely logical system to parse meaning from simple atoms of the total system. Only natural intelligence sees and understands the system and its individual parts simultaneously, in dynamic looping feedback among them. Look at the curve of a lip, see the face, understand the intention, "read" the curve of the lip. That's intelligence, and no artificial system founded on reason can get there from bits and on-off switches and symbolic logic. Meaning, in short, emerges from the self-contextualizing properties of the system, and only the natural mind is equipped to appreciate this show. In fact, that's a pretty good definition of intelligence-in-the-world: a kind of ongoing dynamic becoming, navigating a space of multiple voices, messages, meanings, ambiguities and somehow, through a complex feedback looping between brain and world, bringing meaning into pulsing, stelliferous manifestation all in the soaring dome around us, to paraphrase Pynchon - or to suggest a kind of existential websurfing.

This admittedly fanciful description may yet also be the most evocative and accurate reflection of the processes of the brain (not the mind, but the more-than-mechanical organic brain) itself, which are neither simple nor linear, but self-contextualizing always already. That's the element these AI experts always seem to be groping for in their critiques of AI's shortfall, and the element missing from the computer systems they build, however successful: a certain style, a style of knowing, of meaning-making. OK, now you can kick me out of church.

This mystical "general intelligence" to which Minsky appeals is so elusive because logical analysis and human intelligence - and maybe the larger reality from which the brain itself is projected and from which human intelligence arises - have so little to do with each other. In all postmodern fields - physics, genetics, neurophysiology, mathematics and computing among them - some limit is being or has already been approached; call it the limit of meaning. In physics, it is Heisenberg's Uncertainty Principle. In mathematics, Gödel's Incompleteness Theorem. In genetics, it is the inability to explain morphology and cell specialization - the turning on and off of certain genes in certain regions of the body - from within DNA itself. (The amount of information needed to determine cell fate exceeds the amount of information contained within the organism's genome.) In neurophysiology, it is apparent that the central nervous system is much more than a series of on-off switches, and that neurons float in a frothing, chaotic, complex bath of hormones and neurotransmitters - like 5-hydroxytryptamine (serotonin) - so that nerve pathways evolve and the brain acts much more holistically. And so on through the sciences.
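
For readers who want the formal statements behind the first two of those limits (my gloss, not the book's), they can be put compactly. Heisenberg's uncertainty relation bounds how sharply position and momentum can be jointly determined:

    \Delta x \, \Delta p \ge \frac{\hbar}{2}

And Gödel's first incompleteness theorem, as an informal schema: for any consistent, effectively axiomatized theory T that includes basic arithmetic, there is a sentence G_T such that

    T \nvdash G_T \quad \text{and} \quad T \nvdash \neg G_T

that is, a true-but-unprovable statement the system can neither prove nor refute from within.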

These limits are the limits of pure logic and analysis themselves. On the other side of the wall lies the new onto-epistemology of emergence, in which meaning inheres in the total system and emerges autopoietically, so that any single phoneme can only make sense in the context of the word, sentence, paragraph, story, world...ahh, so that's how I shall intone that phoneme, hit it hard, "how NICE." So that any single gene only makes sense in the context of the emergent, growing organism, feeding back information into the expression of the genes in the intimacy of the cell... ahh, so that's how I shall let this cell become a nerve in the dorsal raphe nucleus, ready to receive intimations from the mystical beyond. So that the firing of any single nerve carries no information in itself but can have a cascading butterfly effect on the meaning of the moment, and then, perhaps, thought itself...ahh, so now it's my turn to pop off, says the nerve speaking to the brain-mind which conducts this concert, this consort. OK, you get the message. So that any electron only obtains its position after some mind has observed it, absurd as it sounds, implying either multiple universes splitting off every time an electron obtains its position or an unnameable and nameless godmind inhering in this, our single common universe, sustaining it by "watching" it, much as HAL aspired - but failed - to do on board his fateful mission: tirelessly, ubiquitously, and infallibly, down to every last electron-about-to-become.

One last anticlimactic note: the film that helped inspire the work of AI, and these essays about it, is an evocative and often beautiful meditation on the marriage of humans to their tools, that complex relationship. HAL is the closest to us in a long line of tools that begins with the bone weapon and evolves eventually into the magical monolith of some presumably superior, though not necessarily benevolent, far-flung race. That meditation is filled not with happy thoughts but with disruption, hallucination, excess, apocalyptic transformations, and violence, even beneath the calm veneer of the near future portrayed here. At the interface between human and tool, in that positive feedback loop between hard tech and soft body-mind, lies danger and violence for vulnerable flesh. This is no less true for the human-seeming computer than it was for the first bludgeon, as soothing and familiar as HAL's voice is. There's violence in the suppressed hyphen between inter and face.

In this context it seems funny, then, that HAL should be thought of as an inspiration, much as other dark techno-dystopias - Gibson's Neuromancer and its cyberspace, for instance - became the inspiration for new breakthroughs. Something is getting lost in the translation; let's call it the forest for the trees. Kubrick and Clarke, despite the pains they took to "get things right" in their technological predictions - a form of celebration in itself - are using their materially convincing portraits of technology to sound a warning, not least about conceding victory to our tools prematurely, much as Frank concedes the chess match to HAL prematurely. In their own way, these authors are saying the same thing ex cathedra.

David Porush (porusd@rpi.edu) is a professor in the School of Humanities and Social Sciences at Rensselaer Polytechnic Institute. He is the author of The Soft Machine: Cybernetic Fiction and numerous essays about cyberculture. He is working on a book about the origin of the alphabet and virtual reality, Telepathy.

Copyright © 1997 by David Porush. All Rights Reserved.

