Bruce Sterling

Essays. FSF Columns

OUTER CYBERSPACE

Dreaming of space-flight, and predicting its future, have always been favorite pastimes of science fiction. In my first science column for F&SF, I can't resist the urge to contribute a bit to this grand tradition.

A science-fiction writer in 1991 has a profound advantage over the genre's pioneers. Nowadays, space-exploration has a past as well as a future. "The conquest of space" can be judged today, not just by dreams, but by a real-life track record.

Some people sincerely believe that humanity's destiny lies in the stars, and that humankind evolved from the primordial slime in order to people the galaxy. These are interesting notions: mystical and powerful ideas with an almost religious appeal. They also smack a little of Marxist historical determinism, which is one reason why the Soviets found them particularly attractive.

Americans can appreciate mystical blue-sky rhetoric as well as anybody, but the philosophical glamor of "storming the cosmos" wasn't enough to motivate an American space program all by itself. Instead, the Space Race was a creation of the Cold War -- its course was firmly set in the late '50s and early '60s. Americans went into space *because* the Soviets had gone into space, and because the Soviets were using Sputnik and Yuri Gagarin to make a case that their way of life was superior to capitalism.

The Space Race was a symbolic tournament for the newfangled intercontinental rockets whose primary purpose (up to that point) had been as instruments of war. The Space Race was the harmless, symbolic, touch-football version of World War III. For this reason alone: that it did no harm, and helped avert a worse clash -- in my opinion, the Space Race was worth every cent. But the fact that it was a political competition had certain strange implications.

Because of this political aspect, NASA's primary product was never actual "space exploration." Instead, NASA produced public-relations spectaculars. The Apollo project was the premier example. The astonishing feat of landing men on the moon was a tremendous public-relations achievement, and it pretty much crushed the Soviet opposition, at least as far as "space-racing" went.

On the other hand, like most "spectaculars," Apollo delivered rather little in the way of permanent achievement. There was flag-waving, speeches, and plaque-laying; a lot of wonderful TV coverage; and then the works went into mothballs. We no longer have the capacity to fly human beings to the moon. No one else seems particularly interested in repeating this feat, either; even though the Europeans, Indians, Chinese and Japanese all have their own space programs today. (Even the Arabs, Canadians, Australians and Indonesians have their own satellites now.)

In 1991, NASA remains firmly in the grip of the "Apollo Paradigm." The assumption was (and is) that only large, spectacular missions with human crews aboard can secure political support for NASA, and deliver the necessary funding to support its eleven-billion-dollar-a-year bureaucracy. "No Buck Rogers, no bucks."

The march of science -- the urge to actually find things out about our solar system and our universe -- has never been the driving force for NASA. NASA has been a very political animal; the space-science community has fed on its scraps.

Unfortunately for NASA, a few historical home truths are catching up with the high-tech white-knights.

First and foremost, the Space Race is over. There is no more need for this particular tournament in 1992, because the Soviet opposition is in abject ruins. The Americans won the Cold War. In 1992, everyone in the world knows this. And yet NASA is still running space-race victory laps.

What's worse, the Space Shuttle, one of which blew up in 1986, is clearly a white elephant. The Shuttle is overly complex, over-designed, the creature of bureaucratic decision-making which tried to provide all things for all constituents, and ended up with an unworkable monster. The Shuttle was grotesquely over-promoted, and it will never fulfill the outrageous promises made for it in the '70s. It's not and never will be a "space truck." It's rather more like a Ming vase.

Space Station Freedom has very similar difficulties. It costs far too much, and is destroying other and more useful possibilities for space activity. Since the Shuttle takes up half NASA's current budget, the Shuttle and the Space Station together will devour most *all* of NASA's budget for *years to come* -- barring unlikely large-scale increases in funding.

Even as a political stage-show, the Space Station is a bad bet, because the Space Station cannot capture the public imagination. Very few people are honestly excited about this prospect. The Soviets *already have* a space station. They've had a space station for years now. Nobody cares about it. It never gets headlines. It inspires not awe but tepid public indifference. Rumor has it that the Soviets (or rather, the *former* Soviets) are willing to sell their "Space Station Peace" to any bidder for eight hundred million dollars, about one fortieth of what "Space Station Freedom" will cost -- and nobody can be bothered to buy it!

Manned space exploration itself has been oversold. Space-flight is simply not like other forms of "exploring." "Exploring" generally implies that you're going to venture out someplace, and tangle hand-to-hand with wonderful stuff you know nothing about. Manned space flight, on the other hand, is one of the most closely regimented of human activities. Most everything that is to happen on a manned space flight is already known far in advance. (Anything not predicted, not carefully calculated beforehand, is very likely to be a lethal catastrophe.)

Reading the personal accounts of astronauts does not reveal much in the way of "adventure" as that idea has been generally understood. On the contrary, the historical and personal record reveals that astronauts are highly trained technicians whose primary motivation is not to "boldly go where no one has gone before," but rather to do *exactly what is necessary* and above all *not to mess up the hardware.*

Astronauts are not like Lewis and Clark. Astronauts are the tiny peak of a vast human pyramid of earth-bound technicians and mission micro-managers. They are kept on a very tight (*necessarily* tight) electronic leash by Ground Control. And they are separated from the environments they explore by a thick chrysalis of space-suits and space vehicles. They don't tackle the challenges of alien environments, hand-to-hand -- instead, they mostly tackle the challenges of their own complex and expensive life-support machinery.

The years of manned space-flight have provided us with the interesting discovery that life in free-fall is not very good for people. People in free-fall lose calcium from their bones -- about half a percent of it per month. Having calcium leach out of one's bones is the same grim phenomenon that causes osteoporosis in the elderly -- "dowager's hump." It makes one's bones brittle. No one knows quite how bad this syndrome can get, since no one has been in orbit much longer than a year; but after a year, the loss of calcium shows no particular sign of slowing down. The human heart shrinks in free-fall, along with a general loss of muscle tone and muscle mass. This loss of muscle, over a period of months in orbit, causes astronauts and cosmonauts to feel generally run-down and feeble.
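A quick back-of-the-envelope sketch of what that figure implies over a year-long mission. The half-percent-a-month rate is the column's; the assumption that the loss compounds month over month is mine, for illustration only.

```python
# Rough illustration, not a physiological model: if bone calcium
# drops by about 0.5% per month in free-fall, the losses compound
# over a twelve-month stay in orbit.
monthly_loss = 0.005          # 0.5% per month (the figure cited above)

remaining = 1.0
for month in range(12):       # one year in orbit
    remaining *= (1.0 - monthly_loss)

lost_percent = (1.0 - remaining) * 100
print(f"Calcium lost after 12 months: {lost_percent:.1f}%")  # about 5.8%
```

Nearly six percent of a skeleton's calcium in a single year, with no sign of the curve flattening -- which is why the syndrome worries mission planners.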

There are other syndromes as well. Lack of gravity causes blood to pool in the head and upper chest, producing the pumpkin-faced look familiar from Shuttle videos. Eventually, the body reacts to this congestion by reducing the volume of blood. The long-term effects of this are poorly understood. About this time, red blood cell production falls off in the bone marrow. Those red blood cells which are produced in free-fall tend to be interestingly malformed.

And then, of course, there's the radiation hazard. No one in space has been severely nuked yet, but if a solar flare caught a crew in deep space, the results could be lethal.

These are not insurmountable medical challenges, but they *are* real problems in real-life space experience. Actually, it's rather surprising that an organism that evolved for billions of years in gravity can survive *at all* in free-fall. It's a tribute to human strength and plasticity that we can survive and thrive for quite a while without any gravity. However, we now know what it would be like to settle in space for long periods. It's neither easy nor pleasant.

And yet, NASA is still committed to putting people in space. They're not quite sure why people should go there, nor what people will do in space once they're there, but they are bound and determined to do this despite all obstacles.

If there were big money to be made from settling people in space, that would be a different prospect. A commercial career in free-fall would probably be safer, happier, and more rewarding than, say, bomb-disposal, or test-pilot work, or maybe even coal-mining. But the only real moneymaker in space commerce (to date, at least) is the communications satellite industry. The comsat industry wants nothing to do with people in orbit.

Consider this: it costs $200 million to make one shuttle flight. For $200 million you can start your own communications satellite business, just like GE, AT&T, GTE and Hughes Aircraft. You can join the global Intelsat consortium and make a hefty 14% regulated profit in the telecommunications business, year after year. You can do quite well by "space commerce," thank you very much, and thousands of people thrive today by commercializing space. But the Space Shuttle, with humans aboard, costs $30 million a day! There's nothing you can make or do on the Shuttle that will remotely repay that investment. After years of Shuttle flights, there is still not one single serious commercial industry anywhere whose business it is to rent workspace or make products or services on the Shuttle.
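The comparison above can be run as simple arithmetic. Every figure here is the column's own: $200 million per Shuttle flight, $30 million per day aloft, and a 14% regulated annual return on a comsat venture.

```python
# Back-of-the-envelope version of the cost comparison above.
shuttle_flight_cost = 200_000_000   # dollars, one Shuttle flight
shuttle_day_cost = 30_000_000       # dollars, one day of Shuttle flight
comsat_startup = 200_000_000        # dollars, starting a comsat business
regulated_return = 0.14             # Intelsat-style regulated annual profit

# One Shuttle flight buys an entire comsat business outright...
assert shuttle_flight_cost == comsat_startup

# ...and that business then earns steadily, year after year.
annual_profit = comsat_startup * regulated_return
print(f"Comsat annual profit: ${annual_profit / 1e6:.0f}M")
print(f"Shuttle days that profit buys: {annual_profit / shuttle_day_cost:.1f}")
```

A year of comsat profit -- $28 million -- buys less than a single day of Shuttle time, which is the whole commercial argument in one line.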

The era of manned spectaculars is visibly dying by inches. It's interesting to note that a quarter of the top and middle management of NASA, the heroes of Apollo and its stalwarts of tradition, are currently eligible for retirement. By the turn of the century, more than three-quarters of the old guard will be gone.

This grim and rather cynical recital may seem a dismal prospect for space enthusiasts, but the situation's not actually all that dismal. In the meantime, unmanned space development has quietly continued apace. It's a little-known fact that America's *military* space budget today is *twice the size* of NASA's entire budget! This is the poorly publicized, hush-hush, national security budget for militarily vital technologies like America's "national technical means of verification," i.e. spy satellites. And then there are military navigational aids like Navstar, a relatively obscure but very impressive national asset. The much-promoted Strategic Defense Initiative is a Cold War boondoggle, and SDI is almost surely not long for this world, in either budgets or rhetoric -- but both Navstar and spy satellites have very promising futures, in and/or out of the military. They promise and deliver solid and useful achievements, and are in no danger of being abandoned.

And communications satellites have come a very long way since Telstar; the Intelsat 6 model, for instance, can carry thirty thousand simultaneous phone calls plus three channels of cable television. There is enormous room for technical improvement in comsat technologies; they have a well-established market, much pent-up demand, and are likely to improve drastically in the future. (The satellite launch business is no longer a superpower monopoly; comsats are being launched by Chinese and Europeans. Newly independent Kazakhstan, home of the Soviet launching facilities at Baikonur, is anxious to enter the business.)

Weather satellites have proven vital to public safety and commercial prosperity. NASA or no NASA, money will be found to keep weather satellites in orbit and improve them technically -- not for reasons of national prestige or flag-waving status, but because it makes a lot of common sense and it really pays.

But a look at the budget decisions for 1992 shows that the Apollo Paradigm still rules at NASA. NASA is still utterly determined to put human beings in space, and actual space science gravely suffers for this decision. Planetary exploration, life science missions, and astronomical surveys (all unmanned) have been cancelled, or curtailed, or delayed in the 1992 budget. All this, in the hope of continuing the big-ticket manned 50-billion-dollar Space Shuttle, and of building the manned 30-billion-dollar Space Station Freedom.

The dire list of NASA's sacrifices for 1992 includes an asteroid probe; an advanced x-ray astronomy facility; a space infrared telescope; and an orbital unmanned solar laboratory. We would have learned a very great deal from these projects (assuming that they would have actually worked). The Shuttle and the Station, in stark contrast, will show us very little that we haven't already seen.

There is nothing inevitable about these decisions, about this strategy. With imagination, with a change of emphasis, the exploration of space could take a very different course.

In 1951, when writing his seminal non-fiction work THE EXPLORATION OF SPACE, Arthur C. Clarke created a fine imaginative scenario of unmanned spaceflight.

"Let us imagine that such a vehicle is circling Mars," Clarke speculated. "Under the guidance of a tiny yet extremely complex electronic brain, the missile is now surveying the planet at close quarters. A camera is photographing the landscape below, and the resulting pictures are being transmitted to the distant Earth along a narrow radio beam. It is unlikely that true television will be possible, with an apparatus as small as this, over such ranges. The best that could be expected is that still pictures could be transmitted at intervals of a few minutes, which would be quite adequate for most purposes."

This is probably as close as a science fiction writer can come to true prescience. It's astonishingly close to the true-life facts of the early Mars probes. Mr. Clarke well understood the principles and possibilities of interplanetary rocketry, but like the rest of mankind in 1951, he somewhat underestimated the long-term potentials of that "tiny yet extremely complex electronic brain" -- as well as that of "true television." In the 1990s, the technologies of rocketry have effectively stalled; but the technologies of "electronic brains" and electronic media are exploding exponentially.

Advances in computers and communications now make it possible to speculate on the future of "space exploration" along entirely novel lines. Let us now imagine that Mars is under thorough exploration, sometime in the first quarter of the twenty-first century. However, there is no "Martian colony." There are no three-stage rockets, no pressure-domes, no tractor-trailers, no human settlers.

Instead, there are hundreds of insect-sized robots, every one of them equipped not merely with "true television," but something much more advanced. They are equipped for *telepresence.* A human operator can see what they see, hear what they hear, even guide them about at will (granted, of course, that there is a steep transmission lag). These micro-rovers, crammed with cheap microchips and laser photo-optics, are so exquisitely monitored that one can actually *feel* the Martian grit beneath their little scuttling claws. Piloting one of these babies down the Valles Marineris, or perhaps some unknown cranny of the Moon -- now *that* really feels like "exploration." If they were cheap enough, you could dune-buggy them.

No one lives in space stations, in this scenario. Instead, our entire solar system is saturated with cheap monitoring devices. There are no "rockets" any more. Most of these robot surrogates weigh less than a kilogram. They are fired into orbit by small rail-guns mounted on high-flying aircraft. Or perhaps they're launched by laser-ignition: ground-based heat-beams that focus on small reaction-chambers and provide their thrust. They might even be literally shot into orbit by Jules Vernian "space guns" that use the intriguing, dirt-cheap technology of Gerald Bull's Iraqi "super-cannon." This wacky but promising technique would be utterly impractical for launching human beings, since the acceleration g-load would shatter every bone in their bodies; but these little machines are *tough.*

And small robots have many other advantages. Unlike manned craft, robots can go into harm's way: into Jupiter's radiation belts, or into the shrapnel-heavy rings of Saturn, or onto the acid-bitten smoldering surface of Venus. They stay on their missions, operational, not for mere days or weeks, but for decades. They are extensions, not of human population, but of human senses.

And because they are small and numerous, they should be cheap. The entire point of this scenario is to create a new kind of space-probe that is cheap, small, disposable, and numerous: as cheap and disposable as their parent technologies, microchips and video, while taking advantage of new materials like carbon-fiber, fiber-optics, ceramic, and artificial diamond.

The core idea of this particular vision is "fast, cheap, and out of control." Instead of gigantic, costly, ultra-high-tech, one-shot efforts like NASA's Hubble Telescope (crippled by bad optics) or NASA's Galileo (currently crippled by a flaw in its communications antenna) these micro-rovers are cheap, and legion, and everywhere. They get crippled every day; but it doesn't matter much; there are hundreds more, and no one's life is at stake. People, even quite ordinary people, *rent time on them* in much the same way that you would pay for satellite cable-TV service. If you want to know what Neptune looks like today, you just call up a data center and *have a look for yourself.*

This is a concept that would truly involve "the public" in space exploration, rather than the necessarily tiny elite of astronauts. This is a potential benefit that we might derive from abandoning the expensive practice of launching actual human bodies into space. We might find a useful analogy in the computer revolution: "mainframe" space exploration, run by a NASA elite in labcoats, is replaced by a "personal" space exploration run by grad students and even hobbyists.

In this scenario, "space exploration" becomes similar to other digitized, computer-assisted media environments: scientific visualization, computer graphics, virtual reality, telepresence. The solar system is saturated, not by people, but by *media coverage.* Outer space becomes *outer cyberspace.*

Whether this scenario is "realistic" isn't clear as yet. It's just a science-fictional dream, a vision for the exploration of space: *circumsolar telepresence.* As always, much depends on circumstance, lucky accidents, and imponderables like political will. What does seem clear, however, is that NASA's own current plans are terribly far-fetched: they have outlived all contact with the political, economic, social and even technical realities of the 1990s. There is no longer any real point in shipping human beings into space in order to wave flags.

"Exploring space" is not an "unrealistic" idea. That much, at least, has already been proven. The struggle now is over why and how and to what end. True, "exploring space" is not as "important" as was the life-and-death Space Race struggle for Cold War pre- eminence. Space science cannot realistically expect to command the huge sums that NASA commanded in the service of American political prestige. That era is simply gone; it's history now.

However: astronomy does count. There is a very deep and genuine interest in these topics. An interest in the stars and planets is not a fluke, it's not freakish. Astronomy is the most ancient of human sciences. It's deeply rooted in the human psyche, has great historical continuity, and is spread all over the world. It has its own constituency, and if its plans were modest and workable, and played to visible strengths, they might well succeed brilliantly.

The world doesn't actually need NASA's billions to learn about our solar system. Real, honest-to-goodness "space exploration" never got more than a fraction of NASA's budget in the first place.

Projects of this sort would no longer be created by gigantic federal military-industrial bureaucracies. Micro-rover projects could be carried out by universities, astronomy departments, and small-scale research consortia. They would play to the impressive strengths of the thriving communications and computer tech of the nineties, rather than the dying, centralized, militarized, politicized rocket-tech of the sixties.

The task at hand is to create a change in the climate of opinion about the true potentials of "space exploration." Space exploration, like the rest of us, grew up in the Cold War; like the rest of us, it must now find a new way to live. And, as history has proven, science fiction has a very real and influential role in space exploration. History shows that true space exploration is not about budgets. It's about vision. At its heart it has always been about vision.

Let's create the vision.

BUCKYMANIA

Carbon, like every other element on this planet, came to us from outer space. Carbon and its compounds are well-known in galactic gas-clouds, and in the atmosphere and core of stars, which burn helium to produce carbon. Carbon is the sixth element in the periodic table, and forms about two-tenths of one percent of Earth's crust. Earth's biosphere (most everything that grows, moves, breathes, photosynthesizes, or reads F&SF) is constructed mostly of waterlogged carbon, with a little nitrogen, phosphorus and such for leavening.

There are over a million known and catalogued compounds of carbon: the study of these compounds, and their profuse and intricate behavior, forms the major field of science known as organic chemistry.

Since prehistory, "pure" carbon has been known to humankind in three basic flavors. First, there's smut (lampblack or "amorphous carbon"). Then there's graphite: soft, grayish-black, shiny stuff (pencil "lead" and lubricant). And third is that surpassing anomaly, "diamond," which comes in extremely hard translucent crystals.

Smut is carbon atoms that are poorly linked. Graphite is carbon atoms neatly linked in flat sheets. Diamond is carbon linked in strong, regular, three-dimensional lattices: tetrahedra that form ultrasolid little carbon pyramids.

Today, however, humanity rejoices in possession of a fourth and historically unprecedented form of carbon. Researchers have created an entire class of these simon-pure carbon molecules, now collectively known as the "fullerenes." They were named in August 1985, in Houston, Texas, in honor of the American engineer, inventor, and delphically visionary philosopher, R. Buckminster Fuller.

"Buckminsterfullerene," or C60, is the best-known fullerene. It's very round, the roundest molecule known to science. Sporting what is technically known as "truncated icosahedral structure," C60 is the most symmetric molecule possible in three-dimensional Euclidean space. Each and every molecule of "Buckminsterfullerene" is a hollow, geodesic sphere of sixty carbon atoms, all identically linked in a spherical framework of twelve pentagons and twenty hexagons. This molecule looks exactly like a common soccerball, and was therefore nicknamed a "buckyball" by delighted chemists.

A free buckyball rotates merrily through space at one hundred million revolutions per second. It's just over one nanometer across. Buckminsterfullerene by the gross forms a solid crystal, is stable at room temperature, and is an attractive mustard-yellow color. Crystallized buckyballs stack very much like pool balls, and are as soft as graphite. It's thought that buckyballs will make good lubricants -- something like molecular ball bearings.

When compressed, crystallized buckyballs squash and flatten readily, down to about seventy percent of their volume. They then refuse to compress any further and become extremely hard. Just *how* hard is not yet established, but according to chemical theory, compressed buckyballs may be considerably harder than diamond. They may make good shock absorbers, or good armor.

But this is only the beginning of carbon's multifarious oddities in the playful buckyball field. Because buckyballs are hollow, their carbon framework can be wrapped around other, entirely different atoms, forming neat molecular cages. This has already been successfully done with certain metals, creating the intriguing new class of "metallofullerites." Then there are buckyballs with a carbon or two knocked out of the framework, and replaced with metal atoms. This "doping" process yields a galaxy of so-called "dopeyballs." Some of these dopeyballs show great promise as superconductors. Other altered buckyballs seem to be organic ferromagnets.

A thin film of buckyballs can double the frequency of laser light passing through it. Twisted or deformed buckyballs might act as optical switches for future fiber-optic networks. Buckyballs with dangling branches of nickel, palladium, or platinum may serve as new industrial catalysts.

The electrical properties of buckyballs and their associated compounds are very unusual, and therefore very promising. Pure C60 is an insulator. Add three potassium atoms, and it becomes a low- temperature superconductor. Add three more potassium atoms, and it becomes an insulator again! There's already excited talk in industry of making electrical batteries out of buckyballs.

Then there are the "buckybabies:" C28, C32, C44, and C52. The lumpy, angular buckybabies have received very little study to date, and heaven only knows what they're capable of, especially when doped, bleached, twisted, frozen or magnetized. And then there are the *big* buckyballs: C240, C540, C960. Molecular models of these monster buckyballs look like giant chickenwire beachballs.
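The monster sizes listed above aren't arbitrary. One well-known series of highly symmetric, icosahedral fullerenes contains 60 times n-squared atoms -- an observation of mine, added for illustration; n = 1 gives C60 itself, and the column's giants follow in order.

```python
# The icosahedral fullerene series: 60 * n^2 atoms per cage.
# n = 1 is buckminsterfullerene; n = 2, 3, 4 give the "monsters."
sizes = [60 * n**2 for n in range(1, 5)]
print(sizes)  # [60, 240, 540, 960]
```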

There doesn't seem to be any limit to the upper size of a buckyball. If wrapped around one another for internal support, buckyballs can (at least theoretically) accrete like pearls. A truly titanic buckyball might be big enough to see with the naked eye. Conceivably, it might even be big enough to kick around on a playing field, if you didn't mind kicking an anomalous entity with unknown physical properties.

Carbon-fiber is a high-tech construction material which has been seeing a lot of use lately in tennis rackets, bicycles, and high-performance aircraft. It's already the strongest fiber known. This makes the discovery of "buckytubes" even more striking. A buckytube is carbon-fiber with a difference: it's a buckyball extruded into a long continuous cylinder composed of one single superstrong molecule.

C70, a buckyball cousin shaped like a rugby ball, seems to be useful in producing high-tech films of artificial diamond. Then there are "fuzzyballs" with sixty strands of hydrogen hair, "bunnyballs" with twin ears of butylpyridine, fluorinated "teflonballs" that may be the slipperiest molecules ever produced.

This sudden wealth of new high-tech slang indicates the potential riches of this new and multidisciplinary field of study, where physics, electronics, chemistry and materials-science are all overlapping, right now, in an exhilarating microsoccerball scrimmage.

Today there are more than fifty different teams of scientists investigating buckyballs and their relations, including industrial heavy-hitters from AT&T, IBM and Exxon. SCIENCE magazine voted buckminsterfullerene "Molecule of the Year" in 1991. Buckyball papers have also appeared in NATURE, NEW SCIENTIST, SCIENTIFIC AMERICAN, even FORTUNE and BUSINESS WEEK. Buckyball breakthroughs are coming well-nigh every week, while the fax machines sizzle in labs around the world. Buckyballs are strange, elegant, beautiful, very intellectually sexy, and will soon be commercially hot.

In chemical terms, the discovery of buckminsterfullerene -- a carbon sphere -- may well rank with the discovery of the benzene ring -- a carbon ring -- in the 19th century. The benzene ring (C6H6) brought the huge field of aromatic chemistry into being, and with it an enormous number of industrial applications.

But what was this "discovery," and how did it come about?

In a sense, like carbon itself, buckyballs also came to us from outer space. Donald Huffman and Wolfgang Kratschmer were astrophysicists studying interstellar soot. Huffman worked for the University of Arizona in Tucson, Kratschmer for the Max Planck Institute in Heidelberg. In 1982, these two gentlemen were superheating graphite rods in a low-pressure helium atmosphere, trying to replicate possible soot-making conditions in the atmosphere of red-giant stars. Their experiment was run in a modest bell-jar zapping apparatus about the size and shape of a washing-machine. Among a great deal of black gunk, they actually manufactured minuscule traces of buckminsterfullerene, which behaved oddly in their spectrometer. At the time, however, they didn't realize what they had.

In 1985, buckminsterfullerene surfaced again, this time in a high-tech laser-vaporization cluster-beam apparatus. Robert Curl and Richard Smalley, two professors of chemistry at Rice University in Houston, knew that a round carbon molecule was theoretically possible. They even knew that it was likely to be yellow in color. And in August 1985, they made a few nanograms of it, detected it with mass spectrometers, and had the honor of naming it, along with their colleagues Harry Kroto, Jim Heath and Sean O'Brien.

In 1985, however, there wasn't enough buckminsterfullerene around to do much more than theorize about. It was "discovered," and named, and argued about in scientific journals, and was an intriguing intellectual curiosity. But this exotic substance remained little more than a lab freak.

And there the situation languished. But in 1988, Huffman and Kratschmer, the astrophysicists, suddenly caught on: this "C60" from the chemists in Houston was probably the very same stuff they'd made by a different process, back in 1982. Harry Kroto, who had moved to the University of Sussex in the meantime, replicated their results in his own machine in England, and was soon producing enough buckminsterfullerene to actually weigh on a scale, and measure, and purify!

The Huffman/Kratschmer process made buckminsterfullerene by whole milligrams. Wow! Now the entire arsenal of modern chemistry could be brought to bear: X-ray diffraction, crystallography, nuclear magnetic resonance, chromatography. And results came swiftly, and were published. Not only were buckyballs real, they were weird and wonderful.

In 1990, the Rice team discovered a yet simpler method to make buckyballs, the so-called "fullerene factory." In a thin helium atmosphere inside a metal tank, a graphite rod is placed near a graphite disk. Enough simple, brute electrical power is blasted through the graphite to generate an electrical arc between the disk and the tip of the rod. When the end of the rod boils off, you just crank the stub a little closer and turn up the juice. The resultant exotic soot, which collects on the metal walls of the chamber, is up to 45 percent buckyballs.

In 1990, the buckyball field flung open its stadium doors for anybody with a few gas-valves and enough credit for a big electric bill. These buckyball "factories" sprang up all over the world in 1990 and '91. The "discovery" of buckminsterfullerene was not the big kick-off in this particular endeavor. What really counted was the budget, the simplicity of manufacturing. It wasn't the intellectual breakthrough that made buckyballs a sport -- it was the cheap ticket in through the gates. With cheap and easy buckyballs available, the research scene exploded.

Sometimes Science, like other overglamorized forms of human endeavor, marches on its stomach.

As I write this, pure buckyballs are sold commercially for about $2000 a gram, but the market price is in free-fall. Chemists suggest that buckminsterfullerene will be as cheap as aluminum some day soon -- a few bucks a pound. Buckyballs will be a bulk commodity, like oatmeal. You may even *eat* them some day -- they're not poisonous, and they seem to offer a handy way to package certain drugs.

Buckminsterfullerene may have been "born" in an interstellar star-lab, but it'll become a part of everyday life, your life and my life, like nylon, or latex, or polyester. It may become more famous, and will almost certainly have far more social impact, than Buckminster Fuller's own geodesic domes, those glamorously high-tech structures of the '60s that were the prophetic vision for their molecule-size counterparts.

This whole exciting buckyball scrimmage will almost certainly bring us amazing products yet undreamt-of, everything from grease to superhard steels. And, inevitably, it will bring a concomitant set of new problems -- buckyball junk, perhaps, or bizarre new forms of pollution, or sinister military applications. This is the way of the world.

But maybe the most remarkable thing about this peculiar and elaborate process of scientific development is that buckyballs never were really "exotic" in the first place. Now that sustained attention has been brought to bear on the phenomenon, it appears that buckyballs are naturally present -- in tiny amounts, that is -- in almost any sooty, smoky flame. Buckyballs fly when you light a candle, they flew when Bogie lit a cigarette in "Casablanca," they flew when Neanderthals roasted mammoth fat over the cave fire. Soot we knew about, diamonds we prized -- but all this time, carbon, good ol' Element Six, has had a shocking clandestine existence. The "secret" was always there, right in the air, all around all of us.

But when you come right down to it, it doesn't really matter how we found out about buckyballs. Accidents are not only fun, but crucial to the so-called march of science, a march that often moves fastest when it's stumbling down some strange gully that no one knew existed. Scientists are human beings, and human beings are flexible: not a hard, rigidly locked crystal like diamond, but a resilient network. It's a legitimate and vital part of science to recognize the truth -- not merely when looking for it with brows furrowed and teeth clenched, but when tripping over it headlong.

Thanks to science, we did find out the truth. And now it's all different. Because now we know!

THINK OF THE PRESTIGE

The science of rocketry, and the science of weaponry, are sister sciences. It's been cynically said of German rocket scientist Wernher von Braun that "he aimed at the stars, and hit London."

After 1945, Wernher von Braun made a successful transition to American patronage and, eventually, to civilian space exploration. But another ambitious space pioneer -- an American citizen -- was not so lucky as von Braun, though his equal in scientific talent. His story, by comparison, is little known.

Gerald Vincent Bull was born on March 9, 1928, in Ontario, Canada. He died in 1990. Dr. Bull was the most brilliant artillery scientist of the twentieth century. Bull was a prodigiously gifted student, and earned a Ph.D. in aeronautical engineering at the age of 24.

Bull spent the 1950s researching supersonic aerodynamics in Canada, personally handcrafting some of the most advanced wind-tunnels in the world.

Bull's work, like that of his predecessor von Braun, had military applications. Bull found patronage with the Canadian Armament Research and Development Establishment (CARDE) and the Canadian Defence Research Board.

However, Canada's military-industrial complex lacked the panache, and the funding, of that of the United States. Bull, a visionary and energetic man, grew impatient with what he considered the pedestrian pace and limited imagination of the Canadians. As an aerodynamics scientist for CARDE, Bull's salary in 1959 was only $17,000. In comparison, in 1961 Bull earned $100,000 by consulting for the Pentagon on nose-cone research. It was small wonder that by the early 1960s, Bull had established lively professional relationships with the US Army's Ballistics Research Laboratory (as well as the Army's Redstone Arsenal, Wernher von Braun's own postwar stomping grounds).

It was the great dream of Bull's life to fire cannon projectiles from the earth's surface directly into outer space. Amazingly, Dr. Bull enjoyed considerable success in this endeavor. In 1961, Bull established Project HARP (High Altitude Research Project). HARP was an academic, nonmilitary research program, funded by McGill University in Montreal, where Bull had become a professor in the mechanical engineering department. The US Army's Ballistic Research Lab was a quiet but very useful co-sponsor of HARP; the US Army was especially generous in supplying Bull with obsolete military equipment, including cannon barrels and radar.

Project HARP found a home on the island of Barbados, downrange of its much better-known (and vastly better-financed) rival, Cape Canaveral. In Barbados, Bull's gigantic space-cannon fired its projectiles out to an ocean splashdown, with little risk of public harm. Its terrific boom was audible all over Barbados, but the locals were much pleased at their glamorous link to the dawning Space Age.

Bull designed a series of new supersonic shells known as the "Martlets." The Mark II Martlets were cylindrical finned projectiles, about eight inches wide and five feet six inches long. They weighed 475 pounds. Inside the barrel of the space-cannon, a Martlet was surrounded by a precisely machined wooden casing known as a "sabot." The sabot soaked up combustive energy as the projectile flew up the space-cannon's sixteen-inch, 118-ft long barrel. As it cleared the barrel, the sabot split and the precisely streamlined Martlet was off at over a mile per second. Each shot produced a huge explosion and a plume of fire gushing hundreds of feet into the sky.

The Martlets were scientific research craft. They were designed to carry payloads of metallic chaff, chemical smoke, or meteorological balloons. They sported telemetry antennas for tracing the flight.

By the end of 1965, the HARP project had fired over a hundred such missiles over fifty miles high, into the ionosphere -- the airless fringes of space. On November 19, 1966, the US Army's Ballistics Research Lab, using a HARP gun designed by Bull, fired a 185-lb Martlet missile one hundred and eleven miles high. This was, and remains, a world altitude record for any fired projectile. Bull now entertained ambitious plans for a Martlet Mark IV, a rocket-assisted projectile that would ignite in flight and drive itself into actual orbit.

Ballistically speaking, space cannon offer distinct advantages over rockets. Rockets must lift, not only their own weight, but the weight of their fuel and oxidizer. Cannon "fuel," which is contained within the gunbarrel, offers far more explosive bang for the buck than rocket fuel. Cannon projectiles are very accurate, thanks to the fixed geometry of the gun-barrel. And cannon are far simpler and cheaper than rockets.
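The fuel advantage can be made concrete with the rocket equation; here is a back-of-envelope sketch (the 8 km/s and 3 km/s figures are round illustrative numbers, not values from the text):

```python
import math

# Why a cannon beats a rocket on fuel: a rocket must carry its own
# propellant, so its launch mass grows exponentially with the velocity
# change it needs. A cannon's "fuel" burns in the barrel and never
# leaves the ground. Figures below are illustrative round numbers.

def mass_ratio(delta_v, exhaust_velocity):
    """Tsiolkovsky rocket equation: launch mass / final mass."""
    return math.exp(delta_v / exhaust_velocity)

# To reach roughly 8 km/s (low orbit) with a ~3 km/s chemical exhaust:
ratio = mass_ratio(8000, 3000)
propellant_fraction = 1 - 1 / ratio
print(f"{ratio:.1f}x launch mass, {propellant_fraction:.0%} of it propellant")
```

On these assumptions the rocket is over ninety percent propellant at liftoff, which is the whole of the cannon's case in one number.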

There are grave disadvantages, of course. First, the payload must be slender enough to fit into a gun-barrel. The most severe drawback is the huge acceleration force of a cannon blast, which in the case of Bull's exotic arsenal could top 10,000 Gs. This rules out manned flights from the mouth of a space-cannon. Jules Verne overlooked this unpoetic detail when he wrote his prescient tale of space artillery, FROM THE EARTH TO THE MOON (1865). (Dr. Bull was fascinated by Verne, and often spoke of Verne's science fiction as one of the foremost inspirations of his youth.)
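That acceleration figure is simple kinematics to check; a quick sketch, assuming (unrealistically) uniform acceleration along HARP's 118-foot barrel:

```python
# Back-of-envelope check of the G-load in a space-cannon launch,
# assuming uniform acceleration along the barrel.
# Kinematics: v^2 = 2*a*L  =>  a = v^2 / (2*L)

G = 9.81  # m/s^2, one gravity

def barrel_g_force(muzzle_velocity_ms, barrel_length_m):
    """Average acceleration, in gravities, for a given muzzle velocity."""
    return muzzle_velocity_ms ** 2 / (2 * barrel_length_m) / G

# The HARP gun: a 118-ft (~36 m) barrel, "over a mile per second" (~1,600 m/s)
print(round(barrel_g_force(1600, 36)))   # a few thousand gravities
# Push the muzzle velocity toward 3 km/s and the load tops 10,000 G
print(round(barrel_g_force(3000, 36)))
```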

Bull was determined to put a cannon-round into orbit. This burning desire of his was something greater than any merely pragmatic or rational motive. The collapse of the HARP project in 1967 left Bull in command of his own fortunes. He reassembled the wreckage of his odd academic/military career, and started a commercial operation, "Space Research Corporation." In the years to follow, Bull would try hard to sell his space-cannon vision to a number of sponsors, including NATO, the Pentagon, Canada, China, Israel, and finally, Iraq.

In the meantime, the Vietnam War was raging. Bull's researches on projectile aerodynamics had made him, and his company Space Research Corporation, into a hot military-industrial property. In pursuit of space research, Bull had invented techniques that lent much greater range and accuracy to conventional artillery rounds. With Bull's ammunition, for instance, US Naval destroyers would be able to cruise miles off the shore of North Vietnam, destroying the best Russian-made shore batteries without any fear of artillery retaliation. Bull's Space Research Corporation was manufacturing the necessary long-range shells in Canada, but his lack of American citizenship was a hindrance in the Pentagon arms trade.

Such was Dr. Bull's perceived strategic importance that this hindrance was neatly avoided; with the sponsorship of Senator Barry Goldwater, Bull became an American citizen by act of Congress. This procedure was a rare honor, previously reserved only for Winston Churchill and the Marquis de Lafayette.

Despite this Senatorial fiat, however, the Navy arms deal eventually fell through. But although the US Navy scorned Dr. Bull's wares, others were not so short-sighted. Bull's extended-range ammunition, and the murderously brilliant cannon that he designed to fire it, found ready markets in Egypt, Israel, Holland, Italy, Britain, Canada, Venezuela, Chile, Thailand, Iran, South Africa, Austria and Somalia.

Dr. Bull created a strange private reserve on the Canadian-American border: a private arms manufactory with its own US and Canadian customs units. This arrangement was very useful, since the arms-export laws of the two countries differed, and SRC's military products could be shipped out over either national border at will. In this distant enclave on the rural northern border of Vermont, the arms genius built his own artillery range, his own telemetry towers and launch-control buildings, his own radar tracking station, workshops, and machine shops. At its height, the Space Research Corporation employed over three hundred people at this site, and boasted some $15 million worth of advanced equipment.

The downfall of HARP had left Bull disgusted with the government-supported military-scientific establishment. He referred to government researchers as "clowns" and "cocktail scientists," and decided that his own future must lie in the vigorous world of free enterprise. Instead of exploring the upper atmosphere, Bull dedicated his ready intelligence to the refining of lethal munitions. Bull would not sell to the Soviets or their client states, whom he loathed; but he would sell to most anyone else. Bull's cannon are credited with being of great help to Jonas Savimbi's UNITA war in Angola; they were also extensively used by both sides in the Iran-Iraq war.

Dr. Gerald V. Bull, Space Researcher, had become a professional arms dealer. Dr. Bull was not a stellar success as an arms dealer, because by all accounts he had no real head for business. Like many engineers, Bull was obsessed not by entrepreneurial drive, but by the exhilarating lure of technical achievement. The atmosphere at Space Research Corporation was, by all accounts, very collegial; Bull as professor, employees as cherished grad-students. Bull's employees were fiercely loyal to him and felt that he was brilliantly gifted and could accomplish anything.

SRC was never as great a commercial success as Bull's technical genius merited. Bull stumbled badly in 1980. The Carter Administration, annoyed by Bull's extensive deals with the South African military, put Bull in prison for customs violation. This punishment, rather than bringing Bull "to his senses," affected him traumatically. He felt strongly that he had been singled out as a political scapegoat to satisfy the hypocritical, left-leaning, anti-apartheid bureaucrats in Washington. Bull spent seven months in an American prison, reading extensively, and, incidentally, successfully re-designing the prison's heating-plant. Nevertheless, the prison experience left Bull embittered and cynical. While still in prison, Bull was already accepting commercial approaches from the Communist Chinese, who proved to be among his most avid customers.

After his American prison sentence ended, Bull abandoned his strange enclave on the US-Canadian border to work full-time in Brussels, Belgium. Space Research Corporation was welcomed there, in Europe's foremost nexus of the global arms trade, a city where almost anything goes in the way of merchandising war.

In November 1987, Bull was politely contacted in Brussels by the Iraqi Embassy, and offered an all-expenses-paid trip to Baghdad.

From 1980 to 1989, during their prolonged, lethal, and highly inconclusive war with Iran, Saddam Hussein's regime had spent some eighty billion dollars on weapons and weapons systems. Saddam Hussein was especially fond of his Soviet-supplied "Scud" missiles, which had shaken Iranian morale severely when fired into civilian centers during the so-called "War of the Cities." To Saddam's mind, the major trouble with his Scuds was their limited range and accuracy, and he had invested great effort in gathering the tools and manpower to improve the Iraqi art of rocketry.

The Iraqis had already bought many of Bull's 155-millimeter cannon from the South Africans and the Austrians, and they were most impressed. Thanks to Bull's design genius, the Iraqis actually owned better, more accurate, and longer-range artillery than the United States Army did.

Bull did not want to go to jail again, and was reluctant to break the official embargo on arms shipments to Iraq. He told his would-be sponsors so, in Baghdad, and the Iraqis were considerate of their guest's qualms. To Bull's great joy, they took his idea of a peaceful space cannon very seriously. "Think of the prestige," Bull suggested to the Iraqi Minister of Industry, and the thought clearly intrigued the Iraqi official.

The Israelis, in September 1988, had successfully launched their own Shavit rocket into orbit, an event that had much impressed, and depressed, the Arab League. Bull promised the Iraqis a launch system that could place dozens, perhaps hundreds, of Arab satellites into orbit. *Small* satellites, granted, and unmanned ones; but their launches would cost as little as five thousand dollars each. Iraq would become a genuine space power; a minor one by superpower standards, but the only Arab space power.

And even small satellites were not just for show. Even a minor space satellite could successfully perform certain surveillance activities. The American military had proved the usefulness of spy satellites to Saddam Hussein by passing him spysat intelligence during the worst heat of the Iran-Iraq war.

The Iraqis felt they would gain a great deal of widely applicable, widely useful scientific knowledge from their association with Bull, whether his work was "peaceful" or not. After all, it was through peaceful research on Project HARP that Bull himself had learned techniques that he had later sold for profit on the arms market. The design of a civilian nose-cone, aiming for the stars, is very little different from that of one descending with a supersonic screech upon sleeping civilians in London.

For the first time in his life, Bull found himself the respected client of a generous patron with vast resources -- and with an imagination of a grandeur to match his own. By 1989, the Iraqis were paying Bull and his company five million dollars a year to redesign their field artillery, with much greater sums in the wings for "Project Babylon" -- the Iraqi space-cannon. Bull had the run of ominous weapons bunkers like the "Saad 16" missile-testing complex in north Iraq, built under contract by Germans, and stuffed with gray-market high-tech equipment from Tektronix, Scientific Atlanta and Hewlett-Packard.

Project Babylon was Bull's grandest vision, now almost within his grasp. The Iraqi space-launcher was to have a barrel five hundred feet long, and would weigh 2,100 tons. It would be supported by a gigantic concrete tower with four recoil mechanisms, these shock-absorbers weighing sixty tons each. The vast, segmented cannon would fire rocket-assisted projectiles the size of a phone booth, into orbit around the Earth.

In August 1989, a smaller prototype, the so-called "Baby Babylon," was constructed at a secret site in Jabal Hamrayn, in central Iraq. "Baby Babylon" could not have put payloads into orbit, but it would have had an international, perhaps intercontinental range. The prototype blew up on its first test-firing.

The Iraqis continued undaunted on another prototype super-gun, but their smuggling attempts were clumsy. Bull himself had little luck in maintaining the proper discretion for a professional arms dealer, as his own jailing had proved. When flattered, Bull talked; and when he talked, he boasted.

Word began to leak out within the so-called "intelligence community" that Bull was involved in something big; something to do with Iraq and with missiles. Word also reached the Israelis, who were very aware of Bull's scientific gifts, having dealt with him themselves, extensively.

The Iraqi space cannon would have been nearly useless as a conventional weapon. Five hundred feet long and completely immobile, it would have been easy prey for any Israeli F-15. It would have been impossible to hide, for any launch would have thrown a column of flame hundreds of feet into the air, a blazing signal for any spy satellite or surveillance aircraft. The Babylon space cannon, faced with determined enemies, could have been destroyed after a single launch.

However, that single launch might well have served to dump a load of nerve gas, or a nuclear bomb, onto any capital in the world.

Bull wanted Project Babylon to be entirely peaceful; despite his rationalizations, he was never entirely at ease with military projects. What Bull truly wanted from his Project Babylon was *prestige.* He wanted the entire world to know that he, Jerry Bull, had created a working space program, more or less all by himself. He had never forgotten what it meant to world opinion to hear the Sputnik beeping overhead.

For Saddam Hussein, Project Babylon was more than any merely military weapon: it was a *political* weapon. The prestige Iraq might gain from the success of such a visionary leap was worth any number of mere cannon-fodder battalions. It was Hussein's ambition to lead the Arab world; Bull's cannon was to be a symbol of Iraqi national potency, a symbol that the long war with the Shi'ite mullahs had not destroyed Saddam's ambitions for transcendent greatness.

The Israelis, however, had already proven their willingness to thwart Saddam Hussein's ambitions by whatever means necessary. In 1981, they had bombed his Osirak nuclear reactor into rubble. In 1980, a Mossad hit-team had cut the throat of Iraqi nuclear scientist Yahya El Meshad, in a Paris hotel room.

On March 22, 1990, Dr. Bull was surprised at the door of his Brussels apartment. He was shot five times, in the neck and in the back of the head, with a silenced 7.65 millimeter automatic pistol.

His assassin has never been found.

FOR FURTHER READING:

ARMS AND THE MAN: Dr. Gerald Bull, Iraq, and the Supergun by William Lowther (McClelland-Bantam, Inc., Toronto, 1991)

BULL'S EYE: The Assassination and Life of Supergun Inventor Gerald Bull by James Adams (Times Books, New York, 1992)

ARTIFICIAL LIFE

The new scientific field of study called "Artificial Life" can be defined as "the attempt to abstract the logical form of life from its material manifestation."

So far, so good. But what is life?

The basic thesis of "Artificial Life" is that "life" is best understood as a complex systematic process. "Life" consists of relationships and rules and interactions. "Life" as a property is potentially separate from actual living creatures.

Living creatures (as we know them today, that is) are basically made of wet organic substances: blood and bone, sap and cellulose, chitin and ichor. A living creature -- a kitten, for instance -- is a physical object that is made of molecules and occupies space and has mass.

A kitten is indisputably "alive" -- but not because it has the "breath of life" or the "vital impulse" somehow lodged inside its body. We may think and talk and act as if the kitten "lives" because it has a mysterious "cat spirit" animating its physical cat flesh. If we were superstitious, we might even imagine that a healthy young cat had *nine* lives. People have talked and acted just this way for millennia.

But from the point-of-view of Artificial Life studies, this is a very halting and primitive way of conceptualizing what's actually going on with a living cat. A kitten's "life" is a *process,* with properties like reproduction, genetic variation, heredity, behavior, learning, the possession of a genetic program, the expression of that program through a physical body. "Life" is a thing that *does,* not a thing that *is* -- life extracts energy from the environment, grows, repairs damage, reproduces.

And this network of processes called "Life" can be picked apart, and studied, and mathematically modelled, and simulated with computers, and experimented upon -- outside of any creature's living body.

"Artificial Life" is a very young field of study. The use of this term dates back only to 1987, when it was used to describe a conference in Los Alamos, New Mexico, on "the synthesis and simulation of living systems." Artificial Life as a discipline is saturated by computer-modelling, computer-science, and cybernetics. It's conceptually similar to the earlier field of study called "Artificial Intelligence." Artificial Intelligence hoped to extract the basic logical structure of intelligence, to make computers "think." Artificial Life, by contrast, hopes to make computers only about as "smart" as an ant -- but as "alive" as a swarming anthill.

Artificial Life as a discipline uses the computer as its primary scientific instrument. Like telescopes and microscopes before them, computers are making previously invisible aspects of the world apparent to the human eye. Computers today are shedding light on the activity of complex systems, on new physical principles such as "emergent behavior," "chaos," and "self-organization."

For millennia, "Life" has been one of the greatest of metaphysical and scientific mysteries, but now a few novel and tentative computerized probes have been stuck into the fog. The results have already proved highly intriguing.

Can a computer or a robot be alive? Can an entity which only exists as a digital simulation be "alive"? If it looks like a duck, quacks like a duck, waddles like a duck, but it in fact takes the form of pixels on a supercomputer screen -- is it a duck? And if it's not a duck, then what on earth is it? What exactly does a thing have to do and be before we say it's "alive"?

It's surprisingly difficult to decide when something is "alive." There's never been a definition of "life," whether scientific, metaphysical, or theological, that has ever really worked. Life is not a clean either/or proposition. Life comes on a kind of scale, apparently, a kind of continuum -- maybe even, potentially, *several different kinds of continuum.*

One might take a pragmatic, laundry-list approach to defining life. To be "living," a thing must grow. Move. Reproduce. React to its environment. Take in energy, excrete waste. Nourish itself, die, and decay. Have a genetic code, perhaps, or be the result of a process of evolution. But there are grave problems with all of these concepts. All these things can be done today by machines or programs. And the concepts themselves are weak and subject to contradiction and paradox.

Are viruses "alive"? Viruses can thrive and reproduce, but not by themselves -- they have to use a victim cell in order to manufacture copies of themselves. Some dormant viruses can crystallize into a kind of organic slag that's dead for all practical purposes, and can stay that way indefinitely -- until the virus gets another chance at infection, and then the virus comes seething back.

How about a frozen human embryo? It can be just as dormant as a dormant virus, and certainly can't survive without a host, but it can become a living human being. Some people who were once frozen embryos may be reading this magazine right now! Is a frozen embryo "alive" -- or is it just the *potential* for life, a genetic life-program halted in mid-execution?

Bacteria are simple, as living things go. Most people, however, would agree that germs are "alive." But there are many other entities in our world today that act in lifelike fashion and are easily as complex as germs, and yet we don't call them "alive" -- except "metaphorically" (whatever *that* means).

How about a national government, for instance? A government can grow and adapt and evolve. It's certainly a very powerful entity that consumes resources and affects its environment and uses enormous amounts of information. When people say "Long Live France," what do they mean by that? Is the Soviet Union now "dead"?

Amoebas aren't "mortal" and don't age -- they just go right on splitting in half indefinitely. Does that mean that all amoebas are actually pieces of one super-amoeba that's three billion years old?

And where's the "life" in an ant-swarm? Most ants in a swarm never reproduce; they're sterile workers -- tools, peripherals, hardware. All the individual ants in a nest, even the queen, can die off one by one, but as long as new ants and new queens take their place, the swarm itself can go on "living" for years without a hitch or a stutter.

Questioning "life" in this way may seem so much nit-picking and verbal sophistry. After all, one may think, people can easily tell the difference between something living and dead just by having a good long look at it. And in point of fact, this seems to be the single strongest suit of "Artificial Life." It is very hard to look at a good Artificial Life program in action without perceiving it as, somehow, "alive."

Only living creatures perform the behavior known as "flocking." A gigantic wheeling flock of cranes or flamingos is one of the most impressive sights that the living world has to offer.

But the "logical form" of flocking can be abstracted from its "material manifestation" in a flocking group of actual living birds. "Flocking" can be turned into rules implemented on a computer. The rules look like this:

1. Stay with the flock -- try to move toward where it seems thickest.

2. Try to move at the same speed as the other local birds.

3. Don't bump into things, especially the ground or other birds.

In 1987, Craig Reynolds, who works for a computer-graphics company called Symbolics, implemented these rules for abstract graphic entities called "bird-oids" or "boids." After a bit of fine-tuning, the result was, and is, uncannily realistic. The darn things *flock!*
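The three rules translate into code quite directly. Here is a hypothetical toy version in Python -- not Reynolds' actual program, and the weighting constants are invented -- just to show how little machinery is involved:

```python
import random

# A toy 2-D "boids" sketch of the three flocking rules.
# Not Craig Reynolds' code; all constants are arbitrary.

class Boid:
    def __init__(self):
        self.x, self.y = random.uniform(0, 100), random.uniform(0, 100)
        self.vx, self.vy = random.uniform(-1, 1), random.uniform(-1, 1)

def step(boids):
    for b in boids:
        others = [o for o in boids if o is not b]
        n = len(others)
        # Rule 1: stay with the flock -- drift toward where it's thickest
        b.vx += (sum(o.x for o in others) / n - b.x) * 0.01
        b.vy += (sum(o.y for o in others) / n - b.y) * 0.01
        # Rule 2: match speed -- nudge velocity toward the others' average
        b.vx += (sum(o.vx for o in others) / n - b.vx) * 0.05
        b.vy += (sum(o.vy for o in others) / n - b.vy) * 0.05
        # Rule 3: don't bump into things -- veer away from close neighbors
        for o in others:
            if abs(o.x - b.x) < 2 and abs(o.y - b.y) < 2:
                b.vx -= (o.x - b.x) * 0.1
                b.vy -= (o.y - b.y) * 0.1
    for b in boids:
        b.x += b.vx
        b.y += b.vy

def spread(boids):
    # average distance from the flock's center: a rough cohesion measure
    cx = sum(b.x for b in boids) / len(boids)
    cy = sum(b.y for b in boids) / len(boids)
    return sum(((b.x - cx) ** 2 + (b.y - cy) ** 2) ** 0.5
               for b in boids) / len(boids)

random.seed(1)
flock = [Boid() for _ in range(20)]
before = spread(flock)
for _ in range(100):
    step(flock)
after = spread(flock)
# with no leader and no plan, the scattered boids pull together
```

Nobody tells the flock to cohere; cohesion simply falls out of twenty individuals each following the three rules.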

They meander around in an unmistakably lifelike, lively, organic fashion. There's nothing "mechanical" or "programmed-looking" about their actions. They bumble and swarm. The boids in the middle shimmy along contentedly, and the ones on the fringes tag along anxiously jockeying for position, and the whole squadron hangs together, and wheels and swoops and maneuvers, with amazing grace. (Actually they're neither "anxious" nor "contented," but when you see the boids behaving in this lifelike fashion, you can scarcely help but project lifelike motives and intentions onto them.)

You might say that the boids simulate flocking perfectly -- but according to the hard-dogma position of A-Life enthusiasts, it's not "simulation" at all. This is real "flocking" pure and simple -- this is exactly what birds actually do. Flocking is flocking -- it doesn't matter if it's done by a whooping crane or a little computer-sprite.

Clearly the birdoids themselves aren't "alive" -- but it can be argued, and is argued, that they're actually doing something that is a genuine piece of the life process. In the words of scientist Christopher Langton, perhaps the premier guru of A-Life: "The most important thing to remember about A-Life is that the part that is artificial is not the life, but the materials. Real things happen. We observe real phenomena. It is real life in an artificial medium."

The great thing about studying flocking with boids, as opposed to say whooping cranes, is that the Artificial Life version can be experimented upon, in controlled and repeatable conditions. Instead of just *observing* flocking, a life-scientist can now *do* flocking. And not just flocks -- with a change in the parameters, you can study "schooling" and "herding" as well.

The great hope of Artificial Life studies is that Artificial Life will reveal previously unknown principles that directly govern life itself -- the principles that give life its mysterious complexity and power, its seeming ability to defy probability and entropy. Some of these principles, while still tentative, are hotly discussed in the field.

For instance: the principle of *bottom-up* initiative rather than *top-down* orders. Flocking demonstrates this principle well. Flamingos do not have blueprints. There is no squadron-leader flamingo barking orders to all the other flamingos. Each flamingo makes up its own mind. The extremely complex motion of a flock of flamingos arises naturally from the interactions of hundreds of independent birds. "Flocking" consists of many thousands of simple actions and simple decisions, all repeated again and again, each action and decision affecting the next in sequence, in an endless systematic feedback.

This involves a second A-Life principle: *local* control rather than *global* control. Each flamingo has only a vague notion of the behavior of the flock as a whole. A flamingo simply isn't smart enough to keep track of the entire "big picture," and in fact this isn't even necessary. It's only necessary to avoid bumping the guys right at your wingtips; you can safely ignore the rest.

Another principle: *simple* rules rather than *complex* ones. The complexity of flocking, while real, takes place entirely outside of the flamingo's brain. The individual flamingo has no mental conception of the vast impressive aerial ballet in which it happens to be taking part. The flamingo makes only simple decisions; it is never required to make complex decisions requiring a lot of memory or planning. *Simple* rules allow creatures as downright stupid as fish to get on with the job at hand -- not only successfully, but swiftly and gracefully.

And then there is the most important A-Life principle, also perhaps the foggiest and most scientifically controversial: *emergent* rather than *prespecified* behavior. Flamingos fly from their roosts to their feeding grounds, day after day, year in year out. But they will never fly there exactly the same way twice. They'll get there all right, predictable as gravity; but the actual shape and structure of the flock will be whipped up from scratch every time. Their flying order is not memorized, they don't have numbered places in line, or appointed posts, or maneuver orders. Their orderly behavior simply *emerges,* different each time, in a ceaselessly varying shuffle.

Ants don't have blueprints either. Ants have become the totem animals of Artificial Life. Ants are so 'smart' that they have vastly complex societies with actual *institutions* like slavery and agriculture and aphid husbandry. But an individual ant is a profoundly stupid creature. Entomologists estimate that individual ants have only fifteen to forty things that they can actually "do." But if they do these things at the right time, to the right stimulus, and change from doing one thing to another when the proper trigger comes along, then ants as a group can work wonders.

There are anthills all over the world. They all work, but they're all different; no two anthills are identical. That's because they're built bottom-up and emergently. Anthills are built without any spark of planning or intelligence. An ant may feel the vague instinctive need to wall out the sunlight. It begins picking up bits of dirt and laying them down at random. Other ants see the first ant at work and join in; this is the A-Life principle known as "allelomimesis," imitating the others (or rather not so much "imitating" them as falling mechanically into the same instinctive pattern of behavior).

Sooner or later, a few bits of dirt happen to pile up together. Now there's a wall. The ant wall-building sub-program kicks into action. When the wall gets high enough, it's roofed over with dirt and spit. Now there's a tunnel. Do it again and again and again, and the structure can grow seven feet high, and be of such fantastic complexity that to draw it on an architect's table would take years. This emergent structure, "order out of chaos," "something out of nothing" -- appears to be one of the basic "secrets of life."

These principles crop up again and again in the practice of life-simulation. Predator-prey interactions. The effects of parasites and viruses. Dynamics of population and evolution. These principles even seem to apply to internal living processes, like plant growth and the way a bug learns to walk. The list of applications for these principles has gone on and on.

It's not hard to understand that many simple creatures, doing simple actions that affect one another, can easily create a really big mess. The thing that's *hard* to understand is that those same, bottom-up, unplanned, "chaotic" actions can and do create living, working, functional order and system and pattern. The process really must be seen to be believed. And computers are the instruments that have made us see it.

Most any computer will do. Oxford zoologist Richard Dawkins has created a simple, popular Artificial Life program for personal computers. It's called "The Blind Watchmaker," and demonstrates the inherent power of Darwinian evolution to create elaborate pattern and structure. The program accompanies Dr. Dawkins' 1986 book of the same title (quite an interesting book, by the way), but it's also available independently.

The Blind Watchmaker program creates patterns from little black-and-white branching sticks, which develop according to very simple rules. The first time you see them, the little branching sticks seem anything but impressive. They look like this:

Fig 1. Ancestral A-Life Stick-Creature

After a pleasant hour with Blind Watchmaker, I myself produced these very complex forms -- what Dawkins calls "Biomorphs."

Fig. 2 -- Six Dawkins Biomorphs

It's very difficult to look at such biomorphs without interpreting them as critters -- *something* alive-ish, anyway. It seems that the human eye is *trained by nature* to interpret the output of such a process as "life-like." That doesn't mean it *is* life, but there's definitely something *going on there.*
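The spirit of Dawkins' program is easy to capture in a sketch. In the Python toy below, a handful of numeric "genes" -- hypothetical names of my own choosing, not Dawkins' actual parameters -- controls a recursive branching pattern, and mutating a single gene reshapes the whole biomorph:

```python
import math

# Toy "biomorph" in the spirit of Dawkins' Blind Watchmaker:
# a few integer genes control a recursively branching stick
# figure; a one-gene mutation changes the creature's whole form.

def grow(genes, x=0.0, y=0.0, angle=90.0, depth=None, segments=None):
    """Recursively grow a branching figure; returns its line segments."""
    if depth is None:
        depth = genes["depth"]
    if segments is None:
        segments = []
    if depth == 0:
        return segments
    length = genes["length"] * depth
    nx = x + length * math.cos(math.radians(angle))
    ny = y + length * math.sin(math.radians(angle))
    segments.append(((x, y), (nx, ny)))
    # two child branches, splayed apart by the "spread" gene
    grow(genes, nx, ny, angle - genes["spread"], depth - 1, segments)
    grow(genes, nx, ny, angle + genes["spread"], depth - 1, segments)
    return segments

parent = {"depth": 6, "length": 2, "spread": 25}
child = dict(parent, spread=40)   # one mutated gene
print(len(grow(parent)))          # 63 segments: a full binary tree, 2**6 - 1
```

Feed the segments to any plotting routine and the parent and child look like different "species" -- yet they differ by a single number, which is the whole point of Dawkins' demonstration.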

*What* is going on is the subject of much dispute. Is a computer-simulation actually an abstracted part of life? Or is it technological mimicry, or mechanical metaphor, or clever illusion?

We can model thermodynamic equations very well also, but an equation isn't hot, it can't warm us or burn us. A perfect model of heat isn't heat. We know how to model the flow of air on an airplane's wings, but no matter how perfect our simulations are, they don't actually make us fly. A model of motion isn't motion. Maybe "Life" doesn't exist either, without that real-world carbon-and-water incarnation. A-Life people have a term for these carbon-and-water chauvinists. They call them "carbaquists."

Artificial Life maven Rodney Brooks designs insect-like robots at MIT. Using A-Life bottom-up principles -- "fast, cheap, and out of control" -- he is trying to make small multi-legged robots that can behave as deftly as an ant. He and his busy crew of graduate students are having quite a bit of success at it. And Brooks finds the struggle over definitions beside the real point. He envisions a world in which robots as dumb as insects are everywhere; dumb, yes, but agile and successful and pragmatically useful. Brooks says: "If you want to argue if it's living or not, fine. But if it's sitting there existing twenty-four hours a day, three hundred sixty-five days of the year, doing stuff which is tricky to do and doing it well, then I'm going to be happy. And who cares what you call it, right?"

Ontological and epistemological arguments are never easily settled. However, "Artificial Life," whether it fully deserves that term or not, is at least easy to see, and rather easy to get your hands on. "Blind Watchmaker" is the A-Life equivalent of using one's computer as a home microscope and examining pondwater. Best of all, the program costs only twelve bucks! It's cheap and easy to become an amateur A-Life naturalist.

Because of the ubiquity of powerful computers, A-Life is "garage-band science." The technology's out there for almost anyone interested -- it's hacker-science. Much of A-Life practice basically consists of picking up computers, pointing them at something promising, and twiddling with the focus knobs until you see something really gnarly. *Figuring out what you've seen* is the tough part, the "real science"; this is where actual science, reproducible, falsifiable, formal, and rigorous, parts company from the intoxicating glamor of the intellectually sexy. But in the meantime, you have the contagious joy and wonder of just *gazing at the unknown* -- the primal thrill of discovery and exploration.

A lot has been written already on the subject of Artificial Life. The best and most complete journalistic summary to date is Steven Levy's brand-new book, ARTIFICIAL LIFE: THE QUEST FOR A NEW CREATION (Pantheon Books 1992).

The easiest way for an interested outsider to keep up with this fast-breaking field is to order books, videos, and software from an invaluable catalog: "Computers In Science and Art," from Media Magic. Here you can find the Proceedings of the first and second Artificial Life Conferences, where the field's most influential papers, discussions, speculations and manifestos have seen print.

But learned papers are only part of the A-Life experience. If you can see Artificial Life actually demonstrated, you should seize the opportunity. Computer simulation of such power and sophistication is a truly remarkable historical advent. No previous generation had the opportunity to see such a thing, much less ponder its significance. Media Magic offers videos about cellular automata, virtual ants, flocking, and other A-Life constructs, as well as personal software "pocket worlds" like CA Lab, Sim Ant, and Sim Earth. This very striking catalog is available free from Media Magic, P.O. Box 507, Nicasio CA 94946.

"INTERNET" [aka "A Short History of the Internet"]

Some thirty years ago, the RAND Corporation, America's foremost Cold War think-tank, faced a strange strategic problem. How could the US authorities successfully communicate after a nuclear war?

Postnuclear America would need a command-and-control network, linked from city to city, state to state, base to base. But no matter how thoroughly that network was armored or protected, its switches and wiring would always be vulnerable to the impact of atomic bombs. A nuclear attack would reduce any conceivable network to tatters.

And how would the network itself be commanded and controlled? Any central authority, any network central citadel, would be an obvious and immediate target for an enemy missile. The center of the network would be the very first place to go.

RAND mulled over this grim puzzle in deep military secrecy, and arrived at a daring solution. The RAND proposal (the brainchild of RAND staffer Paul Baran) was made public in 1964. In the first place, the network would *have no central authority.* Furthermore, it would be *designed from the beginning to operate while in tatters.*

The principles were simple. The network itself would be assumed to be unreliable at all times. It would be designed from the get-go to transcend its own unreliability. All the nodes in the network would be equal in status to all other nodes, each node with its own authority to originate, pass, and receive messages. The messages themselves would be divided into packets, each packet separately addressed. Each packet would begin at some specified source node, and end at some other specified destination node. Each packet would wind its way through the network on an individual basis.

The particular route that the packet took would be unimportant. Only final results would count. Basically, the packet would be tossed like a hot potato from node to node to node, more or less in the direction of its destination, until it ended up in the proper place. If big pieces of the network had been blown away, that simply wouldn't matter; the packets would still stay airborne, lateralled wildly across the field by whatever nodes happened to survive. This rather haphazard delivery system might be "inefficient" in the usual sense (especially compared to, say, the telephone system) -- but it would be extremely rugged.
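A toy version of this hot-potato routing is easy to write down. In the Python sketch below -- grid size, damage pattern, and forwarding odds are all invented for illustration -- a packet crosses a bomb-damaged grid of nodes by being lateralled, mostly toward its destination, from survivor to survivor:

```python
import random

random.seed(42)

# Toy RAND-style network: nodes on a 5x5 grid, linked only to their
# surviving grid neighbors. Five nodes are "blown away," and a packet
# is tossed node-to-node, mostly in the direction of its destination.
# The route is improvised as it goes; only arrival matters.

SIZE = 5
destroyed = {(1, 1), (3, 1), (2, 2), (1, 3), (3, 3)}
alive = {(x, y) for x in range(SIZE) for y in range(SIZE)} - destroyed
src, dst = (0, 0), (4, 4)

def neighbors(node):
    x, y = node
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [c for c in candidates if c in alive]

def route(packet, max_hops=10_000):
    """Forward the packet hop by hop until it reaches dst (or gives up)."""
    path = [packet]
    while packet != dst and len(path) < max_hops:
        options = neighbors(packet)
        if not options:
            return None                      # stranded: no surviving links
        # usually toss it toward the destination, sometimes anywhere at all
        options.sort(key=lambda n: abs(n[0] - dst[0]) + abs(n[1] - dst[1]))
        packet = options[0] if random.random() < 0.8 else random.choice(options)
        path.append(packet)
    return path if packet == dst else None

path = route(src)
```

The path taken is different on every run and usually far from the shortest one -- "inefficient" in exactly the sense described above -- but as long as *some* chain of survivors connects source to destination, the packet gets through.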

During the '60s, this intriguing concept of a decentralized, blastproof, packet-switching network was kicked around by RAND, MIT and UCLA. The National Physical Laboratory in Great Britain set up the first test network on these principles in 1968. Shortly afterward, the Pentagon's Advanced Research Projects Agency decided to fund a larger, more ambitious project in the USA. The nodes of the network were to be high-speed supercomputers (or what passed for supercomputers at the time). These were rare and valuable machines which were in real need of good solid networking, for the sake of national research-and-development projects.

In fall 1969, the first such node was installed at UCLA. By December 1969, there were four nodes on the infant network, which was named ARPANET, after its Pentagon sponsor.

The four computers could transfer data on dedicated high-speed transmission lines. They could even be programmed remotely from the other nodes. Thanks to ARPANET, scientists and researchers could share one another's computer facilities by long-distance. This was a very handy service, for computer-time was precious in the early '70s. In 1971 there were fifteen nodes in ARPANET; by 1972, thirty-seven nodes. And it was good.

By the second year of operation, however, an odd fact became clear. ARPANET's users had warped the computer-sharing network into a dedicated, high-speed, federally subsidized electronic post-office. The main traffic on ARPANET was not long-distance computing. Instead, it was news and personal messages. Researchers were using ARPANET to collaborate on projects, to trade notes on work, and eventually, to downright gossip and schmooze. People had their own personal user accounts on the ARPANET computers, and their own personal addresses for electronic mail. Not only were they using ARPANET for person-to-person communication, but they were very enthusiastic about this particular service -- far more enthusiastic than they were about long-distance computation.

It wasn't long before the invention of the mailing-list, an ARPANET broadcasting technique in which an identical message could be sent automatically to large numbers of network subscribers. Interestingly, one of the first really big mailing-lists was "SF-LOVERS," for science fiction fans. Discussing science fiction on the network was not work-related and was frowned upon by many ARPANET computer administrators, but this didn't stop it from happening.

Throughout the '70s, ARPA's network grew. Its decentralized structure made expansion easy. Unlike standard corporate computer networks, the ARPA network could accommodate many different kinds of machine. As long as individual machines could speak the packet-switching lingua franca of the new, anarchic network, their brand-names, and their content, and even their ownership, were irrelevant.

ARPA's original standard for communication was known as NCP, "Network Control Protocol," but as time passed and the technique advanced, NCP was superseded by a higher-level, more sophisticated standard known as TCP/IP. TCP, or "Transmission Control Protocol," converts messages into streams of packets at the source, then reassembles them back into messages at the destination. IP, or "Internet Protocol," handles the addressing, seeing to it that packets are routed across multiple nodes and even across multiple networks with multiple standards -- not only ARPA's pioneering NCP standard, but others like Ethernet, FDDI, and X.25.
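Conceptually, the TCP half of the job looks something like the Python sketch below -- a cartoon of the idea, not the actual protocol: chop a message into numbered packets at the source, let the network deliver them in any order, and reassemble the stream by sequence number at the destination.

```python
import random

# Cartoon of TCP's job: packetize, deliver out of order, reassemble.

message = "NCP was superseded by a higher-level standard known as TCP/IP."
PACKET_SIZE = 8

# source side: divide the message into numbered packets
packets = [{"seq": i // PACKET_SIZE, "data": message[i:i + PACKET_SIZE]}
           for i in range(0, len(message), PACKET_SIZE)]

random.shuffle(packets)          # the network delivers them in any order

# destination side: sort by sequence number, stitch the stream back together
received = "".join(p["data"] for p in sorted(packets, key=lambda p: p["seq"]))
```

Because each packet carries its own sequence number, the haphazard hot-potato delivery described earlier does no harm: the destination can always put the stream back in order.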

As early as 1977, TCP/IP was being used by other networks to link to ARPANET. ARPANET itself remained fairly tightly controlled, at least until 1983, when its military segment broke off and became MILNET. But TCP/IP linked them all. And ARPANET itself, though it was growing, became a smaller and smaller neighborhood amid the vastly growing galaxy of other linked machines.

As the '70s and '80s advanced, many very different social groups found themselves in possession of powerful computers. It was fairly easy to link these computers to the growing network-of- networks. As the use of TCP/IP became more common, entire other networks fell into the digital embrace of the Internet, and messily adhered. Since the software called TCP/IP was public-domain, and the basic technology was decentralized and rather anarchic by its very nature, it was difficult to stop people from barging in and linking up somewhere-or-other. In point of fact, nobody *wanted* to stop them from joining this branching complex of networks, which came to be known as the "Internet."

Connecting to the Internet cost the taxpayer little or nothing, since each node was independent, and had to handle its own financing and its own technical requirements. The more, the merrier. Like the phone network, the computer network became steadily more valuable as it embraced larger and larger territories of people and resources.

A fax machine is only valuable if *everybody else* has a fax machine. Until they do, a fax machine is just a curiosity. ARPANET, too, was a curiosity for a while. Then computer-networking became an utter necessity.

In 1984 the National Science Foundation got into the act, through its Office of Advanced Scientific Computing. The new NSFNET set a blistering pace for technical advancement, linking newer, faster, shinier supercomputers, through thicker, faster links, upgraded and expanded, again and again, in 1986, 1988, 1990. And other government agencies leapt in: NASA, the National Institutes of Health, the Department of Energy, each of them maintaining a digital satrapy in the Internet confederation.

The nodes in this growing network-of-networks were divvied up into basic varieties. Foreign computers, and a few American ones, chose to be denoted by their geographical locations. The others were grouped by the six basic Internet "domains": gov, mil, edu, com, org and net. (Graceless abbreviations such as this are a standard feature of the TCP/IP protocols.) Gov, Mil, and Edu denoted governmental, military and educational institutions, which were, of course, the pioneers, since ARPANET had begun as a high-tech research exercise in national security. Com, however, stood for "commercial" institutions, which were soon bursting into the network like rodeo bulls, surrounded by a dust-cloud of eager nonprofit "orgs." (The "net" computers served as gateways between networks.)

ARPANET itself formally expired in 1989, a happy victim of its own overwhelming success. Its users scarcely noticed, for ARPANET's functions not only continued but steadily improved. The use of TCP/IP standards for computer networking is now global. In 1969, a mere twenty-three years ago, there were only four nodes in the ARPANET network. Today there are tens of thousands of nodes in the Internet, scattered over forty-two countries, with more coming on-line every day. Three million, possibly four million people use this gigantic mother-of-all-computer-networks.

The Internet is especially popular among scientists, and is probably the most important scientific instrument of the late twentieth century. The powerful, sophisticated access that it provides to specialized data and personal communication has sped up the pace of scientific research enormously.

The Internet's pace of growth in the early 1990s is spectacular, almost ferocious. It is spreading faster than cellular phones, faster than fax machines. Last year the Internet was growing at a rate of twenty percent a *month.* The number of "host" machines with direct connection to TCP/IP has been doubling every year since 1988. The Internet is moving out of its original base in military and research institutions, into elementary and high schools, as well as into public libraries and the commercial sector.

Why do people want to be "on the Internet?" One of the main reasons is simple freedom. The Internet is a rare example of a true, modern, functional anarchy. There is no "Internet Inc." There are no official censors, no bosses, no board of directors, no stockholders. In principle, any node can speak as a peer to any other node, as long as it obeys the rules of the TCP/IP protocols, which are strictly technical, not social or political. (There has been some struggle over commercial use of the Internet, but that situation is changing as businesses supply their own links).

The Internet is also a bargain. The Internet as a whole, unlike the phone system, doesn't charge for long-distance service. And unlike most commercial computer networks, it doesn't charge for access time, either. In fact the "Internet" itself, which doesn't even officially exist as an entity, never "charges" for anything. Each group of people accessing the Internet is responsible for their own machine and their own section of line.

The Internet's "anarchy" may seem strange or even unnatural, but it makes a certain deep and basic sense. It's rather like the "anarchy" of the English language. Nobody rents English, and nobody owns English. As an English-speaking person, it's up to you to learn how to speak English properly and make whatever use you please of it (though the government provides certain subsidies to help you learn to read and write a bit). Otherwise, everybody just sort of pitches in, and somehow the thing evolves on its own, and somehow turns out workable. And interesting. Fascinating, even. Though a lot of people earn their living from using and exploiting and teaching English, "English" as an institution is public property, a public good. Much the same goes for the Internet. Would English be improved if the "The English Language, Inc." had a board of directors and a chief executive officer, or a President and a Congress? There'd probably be a lot fewer new words in English, and a lot fewer new ideas.

People on the Internet feel much the same way about their own institution. It's an institution that resists institutionalization. The Internet belongs to everyone and no one.

Still, its various interest groups all have a claim. Business people want the Internet put on a sounder financial footing. Government people want the Internet more fully regulated. Academics want it dedicated exclusively to scholarly research. Military people want it spy-proof and secure. And so on and so on.

All these sources of conflict remain in a stumbling balance today, and the Internet, so far, remains in a thrivingly anarchical condition. Once upon a time, the NSFnet's high-speed, high-capacity lines were known as the "Internet Backbone," and their owners could rather lord it over the rest of the Internet; but today there are "backbones" in Canada, Japan, and Europe, and even privately owned commercial Internet backbones specially created for carrying business traffic. Today, even privately owned desktop computers can become Internet nodes. You can carry one under your arm. Soon, perhaps, on your wrist.

But what does one *do* with the Internet? Four things, basically: mail, discussion groups, long-distance computing, and file transfers.

Internet mail is "e-mail," electronic mail, faster by several orders of magnitude than the US Mail, which is scornfully known by Internet regulars as "snailmail." Internet mail is somewhat like fax. It's electronic text. But you don't have to pay for it (at least not directly), and it's global in scope. E-mail can also send software and certain forms of compressed digital imagery. New forms of mail are in the works.

The discussion groups, or "newsgroups," are a world of their own. This world of news, debate and argument is generally known as "USENET." USENET is, in point of fact, quite different from the Internet. USENET is rather like an enormous billowing crowd of gossipy, news-hungry people, wandering in and through the Internet on their way to various private backyard barbecues. USENET is not so much a physical network as a set of social conventions. In any case, at the moment there are some 2,500 separate newsgroups on USENET, and their discussions generate about 7 million words of typed commentary every single day. Naturally there is a vast amount of talk about computers on USENET, but the variety of subjects discussed is enormous, and it's growing larger all the time. USENET also distributes various free electronic journals and publications.

Both netnews and e-mail are very widely available, even outside the high-speed core of the Internet itself. News and e-mail are easily available over common phone-lines, from Internet fringe-realms like BITnet, UUCP and Fidonet. The last two Internet services, long-distance computing and file transfer, require what is known as "direct Internet access" -- using TCP/IP.

Long-distance computing was an original inspiration for ARPANET and is still a very useful service, at least for some. Programmers can maintain accounts on distant, powerful computers, run programs there or write their own. Scientists can make use of powerful supercomputers a continent away. Libraries offer their electronic card catalogs for free search. Enormous CD-ROM catalogs are increasingly available through this service. And there are fantastic amounts of free software available.

File transfers allow Internet users to access remote machines and retrieve programs or text. Many Internet computers -- some two thousand of them, so far -- allow any person to access them anonymously, and to simply copy their public files, free of charge. This is no small deal, since entire books can be transferred through direct Internet access in a matter of minutes. Today, in 1992, there are over a million such public files available to anyone who asks for them (and many more millions of files are available to people with accounts). Internet file-transfers are becoming a new form of publishing, in which the reader simply electronically copies the work on demand, in any quantity he or she wants, for free. New Internet programs, such as "archie," "gopher," and "WAIS," have been developed to catalog and explore these enormous archives of material.

The headless, anarchic, million-limbed Internet is spreading like bread-mold. Any computer of sufficient power is a potential spore for the Internet, and today such computers sell for less than $2,000 and are in the hands of people all over the world. ARPA's network, designed to assure control of a ravaged society after a nuclear holocaust, has been superseded by its mutant child the Internet, which is thoroughly out of control, and spreading exponentially through the post-Cold War electronic global village. The spread of the Internet in the '90s resembles the spread of personal computing in the 1970s, though it is even faster and perhaps more important. More important, perhaps, because it may give those personal computers a means of cheap, easy storage and access that is truly planetary in scale.

The future of the Internet bids fair to be bigger and exponentially faster. Commercialization of the Internet is a very hot topic today, with every manner of wild new commercial information-service promised. The federal government, pleased with an unsought success, is also still very much in the act. NREN, the National Research and Education Network, was approved by the US Congress in fall 1991, as a five-year, $2 billion project to upgrade the Internet "backbone." NREN will be some fifty times faster than the fastest network available today, allowing the electronic transfer of the entire Encyclopedia Britannica in one hot second. Computer networks worldwide will feature 3-D animated graphics, radio and cellular phone-links to portable computers, as well as fax, voice, and high-definition television. A multimedia global circus!

Or so it's hoped -- and planned. The real Internet of the future may bear very little resemblance to today's plans. Planning has never seemed to have much to do with the seething, fungal development of the Internet. After all, today's Internet bears little resemblance to those original grim plans for RAND's post- holocaust command grid. It's a fine and happy irony.

How does one get access to the Internet? Well -- if you don't have a computer and a modem, get one. Your computer can act as a terminal, and you can use an ordinary telephone line to connect to an Internet-linked machine. These slower and simpler adjuncts to the Internet can provide you with the netnews discussion groups and your own e-mail address. These are services worth having -- though if you only have mail and news, you're not actually "on the Internet" proper.

If you're on a campus, your university may have direct "dedicated access" to high-speed Internet TCP/IP lines. Apply for an Internet account on a dedicated campus machine, and you may be able to get those hot-dog long-distance computing and file-transfer functions. Some cities, such as Cleveland, supply "freenet" community access. Businesses increasingly have Internet access, and are willing to sell it to subscribers. The standard fee is about $40 a month -- about the same as TV cable service.

As the Nineties proceed, finding a link to the Internet will become much cheaper and easier. Its ease of use will also improve, which is fine news, for the savage UNIX interface of TCP/IP leaves plenty of room for advancements in user-friendliness. Learning the Internet now, or at least learning about it, is wise. By the turn of the century, "network literacy," like "computer literacy" before it, will be forcing itself into the very texture of your life.

For Further Reading:

The Whole Internet Catalog & User's Guide by Ed Krol. (1992) O'Reilly and Associates, Inc. A clear, non-jargonized introduction to the intimidating business of network literacy. Many computer-documentation manuals attempt to be funny. Mr. Krol's book is *actually* funny.

The Matrix: Computer Networks and Conferencing Systems Worldwide, by John Quarterman. Digital Press: Bedford, MA. (1990) Massive and highly technical compendium detailing the mind-boggling scope and complexity of our newly networked planet.

The Internet Companion by Tracy LaQuey with Jeanne C. Ryer (1992) Addison Wesley. Evangelical etiquette guide to the Internet featuring anecdotal tales of life-changing Internet experiences. Foreword by Senator Al Gore.

Zen and the Art of the Internet: A Beginner's Guide by Brendan P. Kehoe (1992) Prentice Hall. Brief but useful Internet guide with plenty of good advice on useful machines to paw over for data. Mr. Kehoe's guide bears the singularly wonderful distinction of being available in electronic form free of charge. I'm doing the same with all my F&SF Science articles, including, of course, this one. My own Internet address is bruces@well.sf.ca.us.

"Magnetic Vision"

Here on my desk I have something that can only be described as miraculous. It's a big cardboard envelope with nine thick sheets of black plastic inside, and on these sheets are pictures of my own brain.

These images are "MRI scans" -- magnetic resonance imagery from a medical scanner.

These are magnetic windows into the lightless realm inside my skull. The meat, bone, and various gristles within my head glow gently in crisp black-and-white detail. There's little of the foggy ghostliness one sees with, say, dental x-rays. Held up against a bright light, or placed on a diagnostic light table, the dark plastic sheets reveal veins, arteries, various odd fluid-stuffed ventricles, and the spongy wrinkles of my cerebellum. In various shots, I can see the pulp within my own teeth, the roots of my tongue, the bony caverns of my sinuses, and the nicely spherical jellies that are my two eyeballs. I can see that the human brain really does come in two lobes and in three sections, and that it has gray matter and white matter. The brain is a big whopping gland, basically, and it fills my skull just like the meat of a walnut.

It's an odd experience to look long and hard at one's own brain. Though it's quite a privilege to witness this, it's also a form of narcissism without much historical parallel. Frankly, I don't think I ever really believed in my own brain until I saw these images. At least, I never truly comprehended my brain as a tangible physical organ, like a knuckle or a kneecap. And yet here is the evidence, laid out irrefutably before me, pixel by monochrome pixel, in a large variety of angles and in exquisite detail. And I'm told that my brain is quite healthy and perfectly normal -- anatomically at least. (For a science fiction writer this news is something of a letdown.)

The discovery of X-rays in 1895, by Wilhelm Roentgen, led to the first technology that made human flesh transparent. Nowadays, X-rays can pierce the body through many different angles to produce a graphic three-dimensional image. This 3-D technique, "Computerized Axial Tomography" or the CAT-scan, won a Nobel Prize in 1979 for its originators, Godfrey Hounsfield and Allan Cormack.

Sonography uses ultrasound to study human tissue through its reflection of high-frequency vibration: sonography is a sonic window.

Magnetic resonance imaging, however, is a more sophisticated window yet. It is rivalled only by the lesser-known and still rather experimental PET-scan, or Positron Emission Tomography. PET-scanning requires an injection of radioactive isotopes into the body so that their decay can be tracked within human tissues. Magnetic resonance, though it is sometimes known as Nuclear Magnetic Resonance, does not involve radioactivity.

The phenomenon of "nuclear magnetic resonance" was discovered in 1946 by Edward Purcell of Harvard, and Felix Bloch of Stanford. Purcell and Bloch were working separately, but published their findings within a month of one another. In 1952, Purcell and Bloch won a joint Nobel Prize for their discovery.

If an atom has an odd number of protons or neutrons, it will have what is known as a "magnetic moment:" it will spin, and its axis will tilt in a certain direction. When that tilted nucleus is put into a magnetic field, the axis of the tilt will change, and the nucleus will also wobble at a certain speed. If radio waves are then beamed at the wobbling nucleus at just the proper wavelength, they will cause the wobbling to intensify -- this is the "magnetic resonance" phenomenon. The resonant frequency is known as the Larmor frequency, and the Larmor frequencies vary for different atoms.

Hydrogen, for instance, has a Larmor frequency of 42.58

megahertz. Hydrogen, which is a major constituent of water and of

carbohydrates such as fat, is very common in the human body. If radio

waves at this Larmor frequency are beamed into magnetized hydrogen

atoms, the hydrogen nuclei will absorb the resonant energy until they

reach a state of excitation. When the beam goes off, the hydrogen

nuclei will relax again, each nucleus emitting a tiny burst of radio

energy as it returns to its original state. The nuclei will also relax at

slightly different rates, depending on the chemical circumstances

around the hydrogen atom. Hydrogen behaves differently in different

kinds of human tissue. Those relaxation bursts can be detected, and

timed, and mapped.
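The resonance relation behind all this is simple enough to sketch in a few lines of code. This is an illustration, not something from the essay: the Larmor frequency scales linearly with field strength, and hydrogen's quoted 42.58 megahertz corresponds to a one-tesla field. The field strengths in the loop are examples of my choosing.

```python
# Illustrative sketch of the Larmor relation: f = gamma * B, where gamma is
# the gyromagnetic ratio expressed in MHz per tesla. The 42.58 MHz/T value
# for hydrogen (protons) is a standard physical constant.
GAMMA_HYDROGEN_MHZ_PER_TESLA = 42.58

def larmor_frequency_mhz(field_tesla, gamma=GAMMA_HYDROGEN_MHZ_PER_TESLA):
    """Resonant (Larmor) frequency in MHz for a given field strength in tesla."""
    return gamma * field_tesla

# Example field strengths (my assumption, not figures from the essay):
for b in (0.5, 1.0, 1.5):
    print(f"{b:.1f} tesla -> {larmor_frequency_mhz(b):.2f} MHz")
```

Tune the radio beam to that frequency for the field you have built, and the hydrogen nuclei resonate; miss it, and they ignore you.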

The enormously powerful magnetic field within an MRI machine

can permeate the human body; but the resonant Larmor frequency is

beamed through the body in thin, precise slices. The resulting images

are neat cross-sections through the body. Unlike X-rays, magnetic

resonance doesn't ionize and possibly damage human cells. Instead, it

gently coaxes information from many different types of tissue, causing

them to emit tell-tale signals about their chemical makeup. Blood, fat,

bones, tendons, all emit their own characteristics, which a computer

then reassembles as a graphic image on a computer screen, or prints

out on emulsion-coated plastic sheets.

An X-ray is a marvelous technology, and a CAT-scan more

marvelous yet. But an X-ray does have limits. Bones cast shadows in X-

radiation, making certain body areas opaque or difficult to read. And X-

ray images are rather stark and anatomical; an X-ray image cannot

even show if the patient is alive or dead. An MRI scan, on the other

hand, will reveal a great deal about the composition and the health of

living tissue. For instance, tumor cells handle their fluids differently

than normal tissue, giving rise to a slightly different set of signals. The

MRI machine itself was originally invented as a cancer detector.

After the 1946 discovery of magnetic resonance, MRI techniques

were used for thirty years to study small chemical samples. However, a

cancer researcher, Dr. Raymond Damadian, was the first to build an MRI

machine large enough and sophisticated enough to scan an entire

human body, and then produce images from that scan. Many scientists,

perhaps most, believed and said that such a technology was decades

away, or even technically impossible. Damadian had a tough,

prolonged struggle to find funding for his visionary technique, and

he was often dismissed as a zealot, a crackpot, or worse. Damadian's

struggle and eventual triumph is entertainingly detailed in his 1985

biography, A MACHINE CALLED INDOMITABLE.

Damadian was not much helped by his bitter and public rivalry

with his foremost competitor in the field, Paul Lauterbur. Lauterbur,

an industrial chemist, was the first to produce an actual magnetic-

resonance image, in 1973. But Damadian was the more technologically

ambitious of the two. His machine, "Indomitable," (now in the

Smithsonian Museum) produced the first scan of a human torso, in 1977.

(As it happens, it was Damadian's own torso.) Once this proof-of-

concept had been thrust before a doubting world, Damadian founded a

production company, and became the father of the MRI scanner

industry.

By the end of the 1980s, medical MRI scanning had become a

major enterprise, and Damadian had won the National Medal of

Technology, along with many other honors. As MRI machines spread

worldwide, the market for CAT-scanning began to slump in comparison.

Today, MRI is a two-billion dollar industry, and Dr Damadian and his

company, Fonar Corporation, have reaped the fruits of success. (Some

of those fruits are less sweet than others: today Damadian and Fonar

Corp. are suing Hitachi and General Electric in federal court, for

alleged infringement of Damadian's patents.)

MRIs are marvelous machines -- perhaps, according to critics, a

little too marvelous. The magnetic fields emitted by MRIs are extremely

strong, strong enough to tug wheelchairs across the hospital floor, to

wipe the data off the magnetic strips in credit cards, and to whip a

wrench or screwdriver out of one's grip and send it hurtling across the

room. If the patient has any metal imbedded in his skin -- welders and

machinists, in particular, often do have tiny painless particles of

shrapnel in them -- then these bits of metal will be wrenched out of the

patient's flesh, producing a sharp bee-sting sensation. And in the

invisible grip of giant magnets, heart pacemakers can simply stop.

MRI machines can weigh ten, twenty, even one hundred tons.

And they're big -- the scanning cavity, in which the patient is inserted,

is about the size and shape of a sewer pipe, but the huge plastic hull

surrounding that cavity is taller than a man and longer than a plush

limo. A machine of that enormous size and weight cannot be moved

through hospital doors; instead, it has to be delivered by crane, and its

shelter constructed around it. That shelter must not have any iron

construction rods in it or beneath its floor, for obvious reasons. And yet

that floor had better be very solid indeed.

Superconductive MRIs present their own unique hazards. The

superconductive coils are supercooled with liquid helium.

Unfortunately there's an odd phenomenon known as "quenching," in

which a superconductive magnet, for reasons rather poorly understood,

will suddenly become merely-conductive. When a "quench" occurs, an

enormous amount of electrical energy suddenly flashes into heat,

which makes the liquid helium boil violently. The MRI's technicians

might be smothered or frozen by boiling helium, so it has to be vented

out the roof, requiring the installation of specialized vent-stacks.

Helium leaks, too, so it must be resupplied frequently, at considerable

expense.

The MRI complex also requires expensive graphic-processing

computers, CRT screens, and photographic hard-copy devices. Some

scanners feature elaborate telecommunications equipment. Like the

giant scanners themselves, all these associated machines require

power-surge protectors, line conditioners, and backup power supplies.

Fluorescent lights, which produce radio-frequency noise pollution, are

forbidden around MRIs. MRIs are also very bothered by passing CB

radios, paging systems, and ambulance transmissions. It is generally

considered a good idea to sheathe the entire MRI cubicle (especially the

doors, windows, electrical wiring, and plumbing) in expensive, well-

grounded sheet-copper.

Despite all these drawbacks, the United States today rejoices in

possession of some two thousand MRI machines. (There are hundreds in

other countries as well.) The cheaper models cost a solid million dollars

each; the top-of-the-line models, two million. Five million MRI scans

were performed in the United States last year, at prices ranging from

six hundred dollars, to twice that price and more.

In other words, in 1991 alone, Americans sank some five billion

dollars in health care costs into the miraculous MRI technology.
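That five-billion-dollar figure follows from the essay's own numbers. As a back-of-the-envelope check (the thousand-dollar average per scan is my assumption, splitting the quoted price range):

```python
# Rough check of the arithmetic: five million scans in 1991, at prices quoted
# as running from $600 to "twice that price and more." The $1,000 average is
# an assumed midpoint, not a figure from the essay.
scans_per_year = 5_000_000
assumed_average_price = 1_000  # dollars per scan (assumption)

total = scans_per_year * assumed_average_price
print(f"${total / 1e9:.0f} billion")  # on the order of the $5 billion cited
```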

Today America's hospitals and diagnostic clinics are in an MRI

arms race. Manufacturers constantly push new and improved machines

into the market, and other hospitals feel a dire need to stay with the

state-of-the-art. They have little choice in any case, for the balky,

temperamental MRI scanners wear out in six years or less, even when

treated with the best of care.

Patients have little reason to refuse an MRI test, since insurance

will generally cover the cost. MRIs are especially good for testing for

neurological conditions, and since a lot of complaints, even quite minor

ones, might conceivably be neurological, a great many MRI scans are

performed. The tests aren't painful, and they're not considered risky.

Having one's tissues briefly magnetized is considered far less risky than

the fairly gross ionization damage caused by X-rays. The most common

form of MRI discomfort is simple claustrophobia. MRIs are as narrow as

the grave, and also very loud, with sharp mechanical clacking and

buzzing.

But the results are marvels to behold, and MRIs have clearly

saved many lives. And the tests will eliminate some potential risks to

the patient, and put the physician on surer ground with his diagnosis.

So why not just go ahead and take the test?

MRIs have gone ahead boldly. Unfortunately, miracles rarely

come cheap. Today the United States spends thirteen percent of its Gross

National Product on health care, and health insurance costs are

drastically outstripping the rate of inflation.

High-tech, high-cost resources such as MRIs generally go to

the well-to-do and the well-insured. This practice has sad

repercussions. While some lives are saved by technological miracles --

and this is a fine thing -- other lives are lost, that might have been

rescued by fairly cheap and common public-health measures, such as

better nutrition, better sanitation, or better prenatal care. As advanced

nations go, the United States has a rather low general life expectancy, and a

quite bad infant-death rate; conspicuously worse, for instance, than

Italy, Japan, Germany, France, and Canada.

MRI may be a true example of a technology genuinely ahead of

its time. It may be that the genius, grit, and determination of Raymond

Damadian brought into the 1980s a machine that might have been better

suited to the technical milieu of the 2010s. What MRI really requires for

everyday workability is some cheap, simple, durable, powerful

superconductors. Those are simply not available today, though they

would seem to be just over the technological horizon. In the meantime,

we have built thousands of magnetic windows into the body that will do

more or less what CAT-scan x-rays can do already. And though they do

it better, more safely, and more gently than x-rays can, they also do it

at a vastly higher price.

Damadian himself envisioned MRIs as a cheap mass-produced

technology. "In ten to fifteen years," he is quoted as saying in 1985,

"we'll be able to step into a booth -- they'll be in shopping malls or

department stores -- put a quarter in it, and in a minute it'll say you

need some Vitamin A, you have some bone disease over here, your blood

pressure is a touch high, and keep a watch on that cholesterol." A

thorough medical checkup for twenty-five cents in 1995! If one needed

proof that Raymond Damadian was a true visionary, one could find it

here.

Damadian even envisioned a truly advanced MRI machine

capable of not only detecting cancer, but of killing cancerous cells

outright. These machines would excite not hydrogen atoms, but

phosphorus atoms, common in cancer-damaged DNA. Damadian

speculated that certain Larmor frequencies in phosphorus might be

specific to cancerous tissue; if that were the case, then it might be

possible to pump enough energy into those phosphorus nuclei so that

they actually shivered loose from the cancer cell's DNA, destroying the

cancer cell's ability to function, and eventually killing it.

That's an amazing thought -- a science-fictional vision right out

of the Gernsback Continuum. Step inside the booth -- drop a quarter --

and have your incipient cancer not only diagnosed, but painlessly

obliterated by invisible Magnetic Healing Rays.

Who the heck could believe a visionary scenario like that?

Some things are unbelievable until you see them with your own

eyes. Until the vision is sitting right there in front of you. Where it

can no longer be denied that they're possible.

A vision like the inside of your own brain, for instance.

SUPERGLUE

This is the Golden Age of Glue.

For thousands of years, humanity got by with natural glues like

pitch, resin, wax, and blood; products of hoof and hide and treesap

and tar. But during the past century, and especially during the past

thirty years, there has been a silent revolution in adhesion.

This stealthy yet steady technological improvement has been

difficult to fully comprehend, for glue is a humble stuff, and the

better it works, the harder it is to notice. Nevertheless, much of the

basic character of our everyday environment is now due to advanced

adhesion chemistry.

Many popular artifacts from the pre-glue epoch look clunky

and almost Victorian today. These creations relied on bolts, nuts,

rivets, pins, staples, nails, screws, stitches, straps, bevels, knobs, and

bent flaps of tin. No more. The popular demand for consumer

objects ever lighter, smaller, cheaper, faster and sleeker has led to

great changes in the design of everyday things.

Glue determines much of the difference between our

grandparent's shoes, with their sturdy leather soles, elaborate

stitching, and cobbler's nails, and the eerie-looking modern jogging-

shoe with its laminated plastic soles, fabric uppers and sleek foam

inlays. Glue also makes much of the difference between the big

family radio cabinet of the 1940s and the sleek black hand-sized

clamshell of a modern Sony Walkman.

Glue holds this very magazine together. And if you happen to

be reading this article off a computer (as you well may), then you

are even more indebted to glue; modern microelectronic assembly

would be impossible without it.

Glue dominates the modern packaging industry. Glue also has

a strong presence in automobiles, aerospace, electronics, dentistry,

medicine, and household appliances of all kinds. Glue infiltrates

grocery bags, envelopes, books, magazines, labels, paper cups, and

cardboard boxes; there are five different kinds of glue in a common

filtered cigarette. Glue lurks invisibly in the structure of our

shelters, in ceramic tiling, carpets, counter tops, gutters, wall siding,

ceiling panels and floor linoleum. It's in furniture, cooking utensils,

and cosmetics. This galaxy of applications doesn't even count the

vast modern spooling mileage of adhesive tapes: package tape,

industrial tape, surgical tape, masking tape, electrical tape, duct tape,

plumbing tape, and much, much more.

Glue is a major industry and has been growing at

twice the rate of GNP for many years, as adhesives leak and stick

into areas formerly dominated by other fasteners. Glues also create

new markets all their own, such as Post-it Notes (premiered in

April 1980, and now omnipresent in over 350 varieties).

The global glue industry is estimated to produce about twelve

billion pounds of adhesives every year. Adhesion is a $13 billion

market in which every major national economy has a stake. The

adhesives industry has its own specialty magazines, such as

Adhesives Age and SAMPE Journal; its own trade groups, like the

Adhesives Manufacturers Association, The Adhesion Society, and the

Adhesives and Sealant Council; and its own seminars, workshops and

technical conferences. Adhesives corporations like 3M, National

Starch, Eastman Kodak, Sumitomo, and Henkel are among the world's

most potent technical industries.

Given all this, it's amazing how little is definitively known

about how glue actually works -- the actual science of adhesion.

There are quite good industrial rules-of-thumb for creating glues;

industrial technicians can now combine all kinds of arcane

ingredients to design glues with well-defined specifications:

qualities such as shear strength, green strength, tack, electrical

conductivity, transparency, and impact resistance. But when it

comes to actually describing why glue is sticky, it's a different

matter, and a far from simple one.

A good glue has low surface tension; it spreads rapidly and

thoroughly, so that it will wet the entire surface of the substrate.

Good wetting is a key to strong adhesive bonds; bad wetting leads

to problems like "starved joints," and crannies full of trapped air,

moisture, or other atmospheric contaminants, which can weaken the

bond.

But it is not enough just to wet a surface thoroughly; if that

were the case, then water would be a glue. Liquid glue changes

form; it cures, creating a solid interface between surfaces that

becomes a permanent bond.

The exact nature of that bond is pretty much anybody's guess.

There are no less than four major physico-chemical theories about

what makes things stick: mechanical theory, adsorption theory,

electrostatic theory and diffusion theory. Perhaps molecular strands

of glue become physically tangled and hooked around irregularities

in the surface, seeping into microscopic pores and cracks. Or, glue

molecules may be attracted by covalent bonds, or acid-base

interactions, or exotic van der Waals forces and London dispersion

forces, which have to do with arcane dipolar resonances between

magnetically imbalanced molecules. Diffusion theorists favor the

idea that glue actually blends into the top few hundred molecules of

the contact surface.

Different glues and different substrates have very different

chemical constituents. It's likely that all of these processes may have

something to do with the nature of what we call "stickiness" -- that

everybody's right, only in different ways and under different

circumstances.

In 1989 the National Science Foundation formally established

the Center for Polymeric Adhesives and Composites. This Center's

charter is to establish "a coherent philosophy and systematic

methodology for the creation of new and advanced polymeric

adhesives" -- in other words, to bring genuine detailed scientific

understanding to a process hitherto dominated by industrial rules of

thumb. The Center has been inventing new adhesion test methods

involving vacuum ovens, interferometers, and infrared microscopes,

and is establishing computer models of the adhesion process. The

Center's corporate sponsors -- Amoco, Boeing, DuPont, Exxon,

Hoechst Celanese, IBM, Monsanto, Philips, and Shell, to name a few of

them -- are wishing them all the best.

We can study the basics of glue through examining one typical

candidate. Let's examine one well-known superstar of modern

adhesion: that wondrous and well-nigh legendary substance known

as "superglue." Superglue, which also travels under the aliases of

SuperBonder, Permabond, Pronto, Black Max, Alpha Ace, Krazy Glue

and (in Mexico) Kola Loka, is known to chemists as cyanoacrylate

(C5H5NO2).

Cyanoacrylate was first discovered in 1942 in a search for

materials to make clear plastic gunsights for the Second World War.

The American researchers quickly rejected cyanoacrylate because

the wretched stuff stuck to everything and made a horrible mess. In

1951, cyanoacrylate was rediscovered by Eastman Kodak researchers

Harry Coover and Fred Joyner, who ruined a perfectly useful

refractometer with it -- and then recognized its true potential.

Cyanoacrylate became known as Eastman compound #910. Eastman

910 first captured the popular imagination in 1958, when Dr Coover

appeared on the "I've Got a Secret" TV game show and lifted host

Gary Moore off the floor with a single drop of the stuff.

This stunt still makes very good television and cyanoacrylate

now has a yearly commercial market of $325 million.

Cyanoacrylate is an especially lovely and appealing glue,

because it is (relatively) nontoxic, very fast-acting, extremely strong,

needs no other mixer or catalyst, sticks with a gentle touch, and does

not require any fancy industrial gizmos such as ovens, presses, vises,

clamps, or autoclaves. Actually, cyanoacrylate does require a

chemical trigger to cause it to set, but with amazing convenience, that

trigger is the hydroxyl ions in common water. And under natural

atmospheric conditions, a thin layer of water is naturally present on

almost any surface one might want to glue.

Cyanoacrylate is a "thermosetting adhesive," which means that

(unlike sealing wax, pitch, and other "hot melt" adhesives) it cannot

be heated and softened repeatedly. As it cures and sets,

cyanoacrylate becomes permanently crosslinked, forming a tough

and permanent polymer plastic.

In its natural state in its native Superglue tube from the

convenience store, a molecule of cyanoacrylate looks something like

this:

       CN
      /
CH2=C
      \
       COOR

The R is a variable (an "alkyl group") which slightly changes

the character of the molecule; cyanoacrylate is commercially

available in ethyl, methyl, isopropyl, allyl, butyl, isobutyl,

methoxyethyl, and ethoxyethyl cyanoacrylate esters. These

chemical variants have slightly different setting properties and

degrees of gooiness.

After setting or "ionic polymerization," however, Superglue

looks something like this:

     CN     CN     CN
     |      |      |
- CH2C -(CH2C)-(CH2C)-  (etc. etc. etc.)
     |      |      |
    COOR   COOR   COOR

The single cyanoacrylate "monomer" joins up like a series of

plastic popper-beads, becoming a long chain. Within the thickening

liquid glue, these growing chains whip about through Brownian

motion, a process technically known as "reptation," named after the

crawling of snakes. As the reptating molecules thrash, then wriggle,

then finally merely twitch, the once-thin and viscous liquid becomes

a tough mass of fossilized, interpenetrating plastic molecular

spaghetti.

And it is strong. Even pure cyanoacrylate can lift a ton with a

single square-inch bond, and one advanced elastomer-modified '80s

mix, "Black Max" from Loctite Corporation, can go up to 3,100 pounds.

This is enough strength to rip the surface right off most substrates.

Unless it's made of chrome steel, the object you're gluing will likely

give up the ghost well before a properly anchored layer of Superglue

will.
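Those two strength figures can be put on a common footing. "A ton with a single square-inch bond" works out to 2,000 pounds per square inch, against Black Max's 3,100. A trivial conversion, assuming the essay's "ton" is a US short ton:

```python
# Comparing the essay's two tensile-strength figures in pounds per
# square inch (psi). A US short ton of 2,000 pounds is assumed.
SHORT_TON_LB = 2_000

plain_cyanoacrylate_psi = 1 * SHORT_TON_LB  # "a ton with a single square-inch bond"
black_max_psi = 3_100                       # Loctite's elastomer-modified mix

print(black_max_psi / plain_cyanoacrylate_psi)  # 1.55: roughly half again stronger
```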

Superglue quickly found industrial uses in automotive trim,

phonograph needle cartridges, video cassettes, transformer

laminations, circuit boards, and sporting goods. But early superglues

had definite drawbacks. The stuff dispersed so easily that it

sometimes precipitated as vapor, forming a white film on surfaces

where it wasn't needed; this is known as "blooming." Though

extremely strong under tension, superglue was not very good at

sudden lateral shocks or "shear forces," which could cause the glue-

bond to snap. Moisture weakened it, especially on metal-to-metal

bonds, and prolonged exposure to heat would cook all the strength

out of it.

The stuff also coagulated inside the tube with annoying speed,

turning into a useless and frustrating plastic lump that no amount of

squeezing or pinpoking could budge -- until the tube burst and

the thin slippery gush cemented one's fingers, hair, and desk in a

mummified membrane that only acetone could cut.

Today, however, through a quiet process of incremental

improvement, superglue has become more potent and more useful

than ever. Modern superglues are packaged with stabilizers and

thickeners and catalysts and gels, improving heat capacity, reducing

brittleness, improving resistance to damp and acids and alkalis.

Today the wicked stuff is basically getting into everything.

Including people. In Europe, superglue is routinely used in

surgery, actually gluing human flesh and viscera to replace sutures

and hemostats. And Superglue is quite an old hand at attaching fake

fingernails -- a practice that has sometimes had grisly consequences

when the tiny clear superglue bottle is mistaken for a bottle of

eyedrops. (I haven't the heart to detail the consequences of this

mishap, but if you're not squeamish you might try consulting The

Journal of the American Medical Association, May 2, 1990 v263 n17

p2301).

Superglue is potent and almost magical stuff, the champion of

popular glues and, in its own quiet way, something of an historical

advent. There is something pleasantly marvelous, almost Arabian

Nights-like, about a drop of liquid that can lift a ton; and yet one can

buy the stuff anywhere today, and it's cheap. There are many urban

legends about terrible things done with superglue; car-doors locked

forever, parking meters welded into useless lumps, and various tales

of sexual vengeance that are little better than elaborate dirty jokes.

There are also persistent rumors of real-life superglue muggings, in

which victims are attached spreadeagled to cars or plate-glass

windows, while their glue-wielding assailants rifle their pockets at

leisure and then stroll off, leaving the victim helplessly immobilized.

While superglue crime is hard to document, there is no

question about its real-life use for law enforcement. The detection

of fingerprints has been revolutionized with special kits of fuming

ethyl-gel cyanoacrylate. The fumes from a ripped-open foil packet of

chemically smoking superglue will settle and cure on the skin oils

left in human fingerprints, turning the smear into a visible solid

object. Thanks to superglue, the lightest touch on a weapon can

become a lump of plastic guilt, cementing the perpetrator to his

crime in a permanent bond.

And surely it would be simple justice if the world's first

convicted superglue mugger were apprehended in just this way.

"Creation Science"

In the beginning, all geologists and biologists were creationists.

This was only natural. In the early days of the Western scientific

tradition, the Bible was by far the most impressive and potent source

of historical and scientific knowledge.

The very first Book of the Bible, Genesis, directly treated

matters of deep geological import. Genesis presented a detailed

account of God's creation of the natural world, including the sea, the

sky, land, plants, animals and mankind, from utter nothingness.

Genesis also supplied a detailed account of a second event of

enormous import to geologists: a universal Deluge.

Theology was queen of sciences, and geology was one humble

aspect of "natural theology." The investigation of rocks and the

structure of the landscape was a pious act, meant to reveal the full

glory and intricacy of God's design. Many of the foremost geologists

of the 18th and 19th century were theologians: William Buckland,

John Pye Smith, John Fleming, Adam Sedgwick. Charles Darwin

himself was a one-time divinity student.

Eventually the study of rocks and fossils, meant to complement

the Biblical record, began to contradict it. There were published

rumblings of discontent with the Genesis account as early as the

1730s, but real trouble began with the formidable and direct

challenges of Lyell's uniformitarian theory of geology and his disciple

Darwin's evolution theory in biology. The painstaking evidence

heaped in Lyell's *Principles of Geology* and Darwin's *Origin of

Species* caused enormous controversy, but eventually carried the

day in the scientific community.

But convincing the scientific community was far from the end

of the matter. For "creation science," this was only the beginning.

Most Americans today are "creationists" in the strict sense of

that term. Polls indicate that over 90 percent of Americans believe

that the universe exists because God created it. A Gallup poll in

1991 established that a full 47 percent of the American populace

further believes that God directly created humankind, in the present

human form, less than ten thousand years ago.

So "creationism" is not the view of an extremist minority in our

society -- quite the contrary. The real minority are the fewer than

five percent of Americans who are strictly non-creationist. Rejecting

divine intervention entirely leaves one with few solid or comforting

answers, which perhaps accounts for this view's unpopularity.

Science offers no explanation whatever as to why the universe exists.

It would appear that something went bang in a major fashion about

fifteen billion years ago, but the scientific evidence for that -- the

three-degree background radiation, the Hubble constant and so forth

-- does not at all suggest *why* such an event should have happened

in the first place.

One doesn't necessarily have to invoke divine will to explain

the origin of the universe. One might speculate, for instance, that

the reason there is Something instead of Nothing is because "Nothing

is inherently unstable" and Nothingness simply exploded. There's

little scientific evidence to support such a speculation, however, and

few people in our society are that radically anti-theistic. The

commonest view of the origin of the cosmos is "theistic creationism,"

the belief that the Cosmos is the product of a divine supernatural

action at the beginning of time.

The creationist debate, therefore, has not generally been

between strictly natural processes and strictly supernatural ones, but

over *how much* supernaturalism or naturalism one is willing to

admit into one's worldview.

How does one deal successfully with the dissonance between

the word of God and the evidence in the physical world? Or the

struggle, as Stephen Jay Gould puts it, between the Rock of Ages and

the age of rocks?

Let us assume, as a given, that the Bible as we know it today is

divinely inspired and that there are no mistranslations, errors,

ellipses, or deceptions within the text. Let us further assume that

the account in Genesis is entirely factual and not metaphorical, poetic

or mythical.

Genesis says that the universe was created in six days. This

divine process followed a well-defined schedule.

Day 1. God created a dark, formless void of deep waters, then

created light and separated light from darkness.

Day 2. God established the vault of Heaven over the formless watery

void.

Day 3. God created dry land amidst the waters and established

vegetation on the land.

Day 4. God created the sun, the moon, and the stars, and set them

into the vault of heaven.

Day 5. God created the fish of the sea and the fowl of the air.

Day 6. God created the beasts of the earth and created one male and

one female human being.

On Day 7, God rested.

Humanity thus began on the sixth day of creation. Mankind is

one day younger than birds, two days younger than plants, and

slightly younger than mammals. How are we to reconcile this with

scientific evidence suggesting that the earth is over 4 billion years

old and that life started as a single-celled ooze some three billion

years ago?

The first method of reconciliation is known as "gap theory."

The very first verse of Genesis declares that God created the heaven

and the earth, but God did not establish "Day" and "Night" until the

fifth verse. This suggests that there may have been an immense

span of time, perhaps eons, between the creation of matter and life,

and the beginning of the day-night cycle. Perhaps there were

multiple creations and cataclysms during this period, accounting for

the presence of oddities such as trilobites and dinosaurs, before a

standard six-day Edenic "restoration" around 4,000 BC.

"Gap theory" was favored by Biblical scholar Cyrus Scofield,

prominent '30s barnstorming evangelist Harry Rimmer, and well-

known modern televangelist Jimmy Swaggart, among others.

The second method of reconciliation is "day-age theory." In

this interpretation, the individual "days" of the Bible are considered

not modern twenty-four hour days, but enormous spans of time.

Day-age theorists point out that the sun was not created until Day 4,

more than halfway through the process. It's difficult to understand

how or why the Earth would have a contemporary 24-hour "day"

without a Sun. The Beginning, therefore, likely took place eons ago,

with matter created on the first "day," life emerging on the third

"day," the fossil record forming during the eons of "days" four, five,

and six. Humanity, however, was created directly by divine fiat and

did not "evolve" from lesser animals.

Perhaps the best-known "day-age" theorist was William

Jennings Bryan, three-time US presidential candidate and a

prominent figure in the Scopes evolution trial in 1925.

In modern creation-science, however, both gap theory and

day-age theory are in eclipse, supplanted and dominated by "flood

geology." The most vigorous and influential creation-scientists

today are flood geologists, and their views (though not the only

views in creationist doctrine) have become synonymous with the

terms "creation science" and "scientific creationism."

"Flood geology" suggests that this planet is somewhere between

6,000 and 15,000 years old. The Earth was entirely lifeless until the

six literal 24-hour days that created Eden and Adam and Eve. Adam

and Eve were the direct ancestors of all human beings. All fossils,

including so-called pre-human fossils, were created about 3,000 BC

during Noah's Flood, which submerged the entire surface of the Earth

and destroyed all air-breathing life that was not in the Ark (with the

possible exception of air-breathing mammalian sea life). Dinosaurs,

which did exist but are probably badly misinterpreted by geologists,

are only slightly older than the human race and were co-existent

with the patriarchs of the Old Testament. Actually, the Biblical

patriarchs were contemporaries with all the creatures in the fossil

record, including trilobites, pterosaurs, giant ferns, nine-foot sea

scorpions, dragonflies two feet across, tyrannosaurs, and so forth.

The world before the Deluge had a very rich ecology.

Modern flood geology creation-science is a stern and radical

school. Its advocates have not hesitated to carry the war to their

theological rivals. The best known creation-science text (among

hundreds) is probably *The Genesis Flood: The Biblical Record and

its Scientific Implications* by John C. Whitcomb and Henry M.

Morris (1961). Much of this book's argumentative energy is devoted

to demolishing gap theory and, especially, the more popular and

therefore more pernicious day-age theory.

Whitcomb and Morris point out with devastating logic that

plants, created on Day Three, could hardly have been expected to

survive for "eons" without any daylight from the Sun, created on Day

Four. Nor could plants pollinate without bees, moths and butterflies

-- winged creatures that were products of Day Five.

Whitcomb and Morris marshal a great deal of internal Biblical

testimony for the everyday, non-metaphorical, entirely real-life

existence of Adam, Eve, Eden, and Noah's Flood. Jesus Christ Himself

refers to the reality of the Flood in Luke 17, and to the reality of

Adam, Eve, and Eden in Matthew 19.

Creationists have pointed out that without Adam, there is no

Fall; with no Fall, there is no Atonement for original sin; without

Atonement, there can be no Savior. To lack faith in the historical

existence and the crucial role of Adam, therefore, is necessarily to

lack faith in the historical existence and the crucial role of Jesus.

Taken on its own terms, this is a difficult piece of reasoning to refute,

and is typical of Creation-Science analysis.

To these creation-scientists, the Bible is very much all of a

piece. To begin pridefully picking and choosing within God's Word

about what one may or may not choose to believe is to risk an utter

collapse of faith that can only result in apostasy -- "going to the

apes." These scholars are utterly and soberly determined to believe

every word of the Bible, and to use their considerable intelligence to

prove that it is the literal truth about our world and our history as a

species.

Cynics might wonder if this activity were some kind of

elaborate joke, or perhaps a wicked attempt by clever men to garner

money and fame at the expense of gullible fundamentalist

supporters. Any serious study of the lives of prominent Creationists

establishes that this is simply not so. Creation scientists are not

poseurs or hypocrites. Many have spent many patient decades in

quite humble circumstances, often enduring public ridicule, yet still

working selflessly and doggedly in the service of their beliefs.

When they state, for instance, that evolution is inspired by Satan and

leads to pornography, homosexuality, and abortion, they are entirely

in earnest. They are describing what they consider to be clear and

evident facts of life.

Creation-science is not standard, orthodox, respectable science.

There is, and always has been, a lot of debate about what qualities an

orthodox and respectable scientific effort should possess. It can be

stated, though, that science should have at least two basic

requirements: (A) the scientist should be willing to follow the data

where it leads, rather than bending the evidence to fit some

preconceived rationale, and (B) explanations of phenomena should

not depend on unique or nonmaterial factors. It also helps a lot if

one's theories are falsifiable, reproducible by other researchers,

openly published and openly testable, and free of obvious internal

contradictions.

Creation-science does not fit that description at all. Creation-

science considers it sheer boneheaded prejudice to eliminate

miraculous, unique explanations of world events. After all, God, a

living and omnipotent Supreme Being, is perfectly capable of

directing mere human affairs into any direction He might please. To

simply eliminate divine intervention as an explanation for

phenomena, merely in order to suit the intellectual convenience of

mortal human beings, is not only arrogant and arbitrary, but absurd.

Science has accomplished great triumphs through the use of

purely naturalistic explanations. Over many centuries, hundreds of

scientists have realized that some questions can be successfully

investigated using naturalistic techniques. Questions that cannot be

answered in this way are not science, but instead are philosophy, art,

or theology. Scientists assume as a given that we live in a natural

universe that obeys natural laws.

It's conceivable that this assumption might not be the case.

The entire cognitive structure of science hinges on this assumption of

natural law, but it might not actually be true. It's interesting to

imagine the consequences for science if there were to be an obvious,

public, irrefutable violation of natural law.

Imagine that such a violation took place in the realm of

evolutionary biology. Suppose, for instance, that tonight at midnight

Eastern Standard Time every human being on this planet suddenly

had, not ten fingers, but twelve. Suppose that all our children were

henceforth born with twelve fingers also and we now found

ourselves a twelve-fingered species. This bizarre advent would

violate Neo-Darwinian evolution, many laws of human metabolism,

the physical laws of conservation of mass and energy, and quite a

few other such laws. If such a thing were to actually happen, we would

simply be wrong about the basic nature of our universe. We

thought we were living in a world where evolution occurred through

slow natural processes of genetic drift, mutation, and survival of the

fittest; but we were mistaken. When the time had come for our

species to evolve to a twelve-fingered status, we simply did it in an

instant all at once, and that was that.

This would be a shock to the scientific worldview equivalent to

the terrible shock that the Christian worldview has sustained

through geology and Darwinism. If a shock of this sort were to strike

the scientific establishment, it would not be surprising to see

scientists clinging, quite irrationally, to their naturalist principles --

despite the fact that genuine supernaturalism was literally right at

hand. Bizarre rationalizations would surely flourish -- queer

"explanations" that the sixth fingers had somehow grown there

naturally without our noticing, or perhaps that the fingers were mere

illusions and we really had only ten after all, or that we had always

had twelve fingers and that all former evidence that we had once

had ten fingers was an evil lie spread by wicked people to confuse us.

The only alternative would be to fully face the terrifying fact that a

parochial notion of "reality" had been conclusively toppled, thereby

robbing all meaning from the lives and careers of scientists.

This metaphor may be helpful in understanding why it is that

Whitcomb and Morris's *Genesis Flood* can talk quite soberly about

Noah storing dinosaurs in the Ark. They would have had to be

*young* dinosaurs, of course.... If we assume that one Biblical cubit

equals 17.5 inches, a standard measure, then the Ark had a volume

of 1,396,000 cubic feet, a carrying capacity equal to that of 522

standard railroad stock cars. Plenty of room!
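Whitcomb and Morris's figure is easy to check. A minimal sketch of the arithmetic, assuming the 300-by-50-by-30-cubit Ark dimensions of Genesis 6:15 (which their 1,396,000-cubic-foot total implies they used):

```python
# Checking the Ark arithmetic: 300 x 50 x 30 cubits at 17.5 inches
# per cubit. The per-stock-car capacity is simply inferred from the
# book's 522-car comparison, not an independent figure.
CUBIT_FT = 17.5 / 12            # one Biblical cubit in feet (~1.458 ft)

length = 300 * CUBIT_FT          # 437.5 ft
width = 50 * CUBIT_FT            # ~72.9 ft
height = 30 * CUBIT_FT           # 43.75 ft

volume = length * width * height
print(round(volume))             # ~1,395,671 cubic feet, matching the
                                 # book's rounded 1,396,000
print(round(volume / 522))       # ~2,674 cubic feet per stock car
```

The totals do come out as advertised; whether young dinosaurs fit comfortably into 522 stock cars is another question.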

Many other possible objections to the Ark story are met head-

on, in similar meticulous detail. Noah did not have to search the

earth for wombats, pangolins, polar bears and so on; all animals,

including the exotic and distant ones, were brought through divine

instinct to the site of the Ark for Noah's convenience. It seems

plausible that this divine intervention was, in fact, the beginning of

the migratory instinct in the animal kingdom. Similarly, hibernation

may have been created by God at this time, to keep the thousands of

animals quiet inside the Ark and also reduce the need for gigantic

animal larders that would have overtaxed Noah's crew of eight.

Evidence in the Biblical genealogies shows that pre-Deluge

patriarchs lived far longer than those after the Deluge, suggesting a

radical change in climate, and not for the better. Whitcomb and

Morris make the extent of that change clear by establishing that

before the Deluge it never rained. There had been no rainbows

before the Flood -- Genesis states clearly that the rainbow came into

existence as a sign of God's covenant with Noah. If we assume that

normal refraction of sunlight by water droplets was still working in

pre-Deluge time (as seems reasonable), then this can only mean that

rainfall did not exist before Noah. Instead, the dry earth was

replenished with a kind of ground-hugging mist (Genesis 2:6).

The waters of the Flood came from two sources: the "fountains

of the great deep" and "the windows of heaven." Flood geologists

interpret this to mean that the Flood waters were subterranean and

also present high in the atmosphere. Before they fell to Earth by

divine fiat, the Flood's waters once surrounded the entire planet in a

"vapor canopy." When the time came to destroy his Creation, God

caused the vapor canopy to fall from outer space until the entire

planet was submerged. That water is still here today; the Earth in

Noah's time was not nearly so watery as it is today, and Noah's seas

were probably much shallower than ours. The vapor canopy may

have shielded the Biblical patriarchs from harmful cosmic radiation

that has since reduced human lifespan well below Methuselah's 969

years.

The laws of physics were far different in Eden. The Second

Law of Thermodynamics likely began with Adam's Fall. The Second

Law of Thermodynamics is strong evidence that the entire Universe

has been in decline since Adam's sin. The Second Law of

Thermodynamics may well end with the return of Jesus Christ.

Noah was a markedly heterozygous individual whose genes had

the entire complement of modern racial characteristics. It is a

fallacy to say that human embryos recapitulate our evolution as a

species. The bumps on human embryos are not actually relic gills,

nor is the "tail" on an embryo an actual tail -- it only resembles one.

Creatures cannot evolve to become more complex because this would

violate the Second Law of Thermodynamics. In our corrupt world,

creatures can only degenerate. The sedimentary rock record was

deposited by the Flood and it is all essentially the same age. The

reason the fossil record appears to show a course of evolution is

because the simpler and cruder organisms drowned first, and were

the first to sift out in the layers of rubble and mud.

Related so baldly and directly, flood geology may seem

laughable, but *The Genesis Flood* is not a silly or comic work. It is

five hundred pages long, and is every bit as sober, straightforward

and serious as, say, a college text on mechanical engineering.

*The Genesis Flood* has sold over 200,000 copies and gone

through 29 printings. It is famous all over the world. Today Henry

M. Morris, its co-author, is the head of the world's most influential

creationist body, the Institute for Creation Research in Santee,

California.

It is the business of the I.C.R. to carry out scientific research on

the physical evidence for creation. Members of the I.C.R. are

accredited scientists, with degrees from reputable mainstream

institutions. Dr. Morris himself has a Ph.D. in engineering and has

written a mainstream textbook on hydraulics. The I.C.R.'s monthly

newsletter, *Acts and Facts,* is distributed to over 100,000 people.

The Institute is supported by private donations and by income from

its frequent seminars and numerous well-received publications.

In February 1993, I called the Institute by telephone and had

an interesting chat with its public relations officer, Mr. Bill Hoesch.

Mr. Hoesch told me about two recent I.C.R. efforts in field research.

The first involves an attempt to demonstrate that lava flows at the

top and the bottom of Arizona's Grand Canyon yield incongruent

ages. If this were proved factual, it would strongly imply that the

thousands of layers of sedimentary rock in this world-famous mile-

deep canyon were in fact all deposited at the same time and that

conventional radiometric methods are, to say the least, gravely

flawed. A second I.C.R. effort seeks to demonstrate that certain ice-

cores from Greenland, which purport to show 160 thousand years of

undisturbed annual snow layers, are in fact only two thousand years

old and have been misinterpreted by mainstream scientists.

Mr. Hoesch expressed some amazement that his Institute's

efforts are poorly and privately funded, while mainstream geologists

and biologists often receive comparatively enormous federal funding.

In his opinion, if the Institute for Creation Research were to receive

equivalent funding with their rivals in uniformitarian and

evolutionary so-called science, then creation-scientists would soon be

making valuable contributions to the nation's research effort.

Other creation scientists have held that the search for oil, gas,

and mineral deposits has been confounded for years by mistaken

scientific orthodoxies. They have suggested that successful flood-

geology study would revolutionize our search for mineral resources

of all kinds.

Orthodox scientists are blinded by their naturalistic prejudices.

Carl Sagan, whom Mr. Hoesch described as a "great hypocrite," is a

case in point. Carl Sagan is helping to carry out a well-funded

search for extraterrestrial life in outer space, despite the fact that

there is no scientific evidence whatsoever for extraterrestrial

intelligence, and there is certainly no mention in the Bible of any

rival covenant with another intelligent species. Worse yet, Sagan

boasts that he could detect an ordered, intelligent signal from space

from the noise and static of mere cosmic debris. But here on earth

we have the massively ordered and intelligently designed "signal"

called DNA, and yet Sagan publicly pretends that DNA is the result of

random processes! If Sagan used the same criteria to distinguish

intelligence from chance in the study of Earth life, as he does in his

search for extraterrestrial life, then he would have to become a

Creationist!

I asked Mr. Hoesch what he considered the single most

important argument that his group had to make about scientific

creationism.

"Creation versus evolution is not science versus religion," he

told me. "It's the science of one religion versus the science of

another religion."

The first religion is Christianity; the second, the so-called

religion of Secular Humanism. Creation scientists consider this

message the single most important point they can make; far more

important than so-called physical evidence or the so-called scientific

facts. Creation scientists consider themselves soldiers and moral

entrepreneurs in a battle of world-views. It is no accident, to their

mind, that American schools teach "scientific" doctrines that are

inimical to fundamentalist, Bible-centered Christianity. It is not a

question of value-neutral facts that all citizens in our society should

quietly accept; it is a question of good versus evil, of faith versus

nihilism, of decency versus animal self-indulgence, and of discipline

versus anarchy. Evolution degrades human beings from immortal

souls created in God's Image to bipedal mammals of no more moral

consequence than other apes. People who do not properly value

themselves or others will soon lose their dignity, and then their

freedom.

Science education, for its part, degrades the American school

system from a localized, community-responsible, democratic

institution teaching community values, to an amoral indoctrination-

machine run by remote and uncaring elitist mandarins from Big

Government and Big Science.

Most people in America today are creationists of a sort. Most

people in America today care little if at all about the issue of creation

and evolution. Most people don't really care much if the world is six

billion years old, or six thousand years old, because it doesn't

impinge on their daily lives. Even radical creation-scientists have

done very little to combat the teaching of evolution in higher

education -- university level or above. They are willing to let Big

Science entertain its own arcane nonsense -- as long as they and

their children are left in peace.

But when world-views collide directly, there is no peace. The

first genuine counter-attack against evolution came in the 1920s,

when high-school education suddenly became far more widely

spread. Christian parents were shocked to hear their children

openly contradicting God's Word and they felt they were losing

control of the values taught their youth. Many state legislatures in

the USA outlawed the teaching of evolution in the 1920s.

In 1925, a Dayton, Tennessee high school teacher named John

Scopes deliberately disobeyed the law and taught evolution to his

science class. Scopes was accused of a crime and tried for it, and his

case became a national cause célèbre. Many people think the

"Scopes Monkey Trial" was a triumph for science education, and it

was a moral victory in a sense, for the pro-evolution side

successfully made their opponents into objects of national ridicule.

Scopes was found guilty, however, and fined. The teaching of

evolution was soft-pedalled in high-school biology and geology texts

for decades thereafter.

A second resurgence of creationist sentiment took place in the

1960s, when the advent of Sputnik forced a reassessment of

American science education. Fearful of falling behind the Soviets in

science and technology, the federal National Science Foundation

commissioned the production of state-of-the-art biology texts in

1963. These texts were fiercely resisted by local religious groups

who considered them tantamount to state-supported promotion of

atheism.

The early 1980s saw a change of tactics as fundamentalist

activists sought equal time in the classroom for creation-science -- in

other words, a formal acknowledgement from the government that

their world-view was as legitimate as that of "secular humanism."

Fierce legal struggles in 1982, 1985 and 1987 saw the defeat of this

tactic in state courts and the Supreme Court.

This legal defeat has by no means put an end to creation-

science. Creation advocates have merely gone underground, no

longer challenging the scientific authorities directly on their own

ground, or the legal ground of the courts, but concentrating on grass-

roots organization. Creation scientists find their messages received

with attention and gratitude all over the Christian world.

Creation-science may seem bizarre, but it is no more irrational

than many other brands of cult archeology that find ready adherents

everywhere. All over the USA, people believe in ancient astronauts,

the lost continents of Mu, Lemuria or Atlantis, the shroud of Turin,

the curse of King Tut. They believe in pyramid power, Velikovskian

catastrophism, psychic archeology, and dowsing for relics. They

believe that America was the cradle of the human race, and that

PreColumbian America was visited by Celts, Phoenicians, Egyptians,

Romans, and various lost tribes of Israel. In the high-tech 1990s, in

the midst of headlong scientific advance, people believe in all sorts of

odd things. People believe in crystals and telepathy and astrology

and reincarnation, in ouija boards and the evil eye and UFOs.

People don't believe these things because they are reasonable.

They believe them because these beliefs make them feel better.

They believe them because they are sick of believing in conventional

modernism with its vast corporate institutions, its secularism, its

ruthless consumerism and its unrelenting reliance on the cold

intelligence of technical expertise and instrumental rationality.

They believe these odd things because they don't trust what they are

told by their society's authority figures. They don't believe that

what is happening to our society is good for them, or in their

interests as human beings.

The clash of world views inherent in creation-science has

mostly taken place in the United States. It has been an ugly clash in

some ways, but it has rarely been violent. Western society has had a

hundred and forty years to get used to Darwin. Many of the

sternest opponents of creation-science have in fact been orthodox

American Christian theologians and church officials, wary of a

breakdown in traditional American relations of church and state.

It may be that the most determined backlash will come not

from Christian fundamentalists, but from the legions of other

fundamentalist movements now rising like deep-rooted mushrooms

around the planet: from Moslem radicals both Sunni and Shi'ite, from

Hindu groups like Vedic Truth and Hindu Nation, from militant

Sikhs, militant Theravada Buddhists, or from a formerly communist

world eager to embrace half-forgotten orthodoxies. What loyalty do

these people owe to the methods of trained investigation that made

the West powerful and rich?

Scientists believe in rationality and objectivity -- even though

rationality and objectivity are far from common human attributes,

and no human being practices these qualities flawlessly. As it

happens, the scientific enterprise in Western society currently serves

the political and economic interests of scientists as human beings.

As a social group in Western society, scientists have successfully

identified themselves with the practice of rational and objective

inquiry, but this situation need not go on indefinitely. How would

scientists themselves react if their admiration for reason came into

direct conflict with their human institutions, human community, and

human interests?

One wonders how scientists would react if truly rational, truly

objective, truly nonhuman Artificial Intelligences were winning all

the tenure, all the federal grants, and all the Nobels. Suppose that

scientists suddenly found themselves robbed of cultural authority,

their halting efforts to understand made the object of public ridicule

in comparison to the sublime efforts of a new power group --

superbly rational computers. Would the qualities of objectivity and

rationality still receive such acclaim from scientists? Perhaps we

would suddenly hear a great deal from scientists about the

transcendent values of intuition, inspiration, spiritual understanding

and deep human compassion. We might see scientists organizing to

assure that the Pursuit of Truth should slow down enough for them

to keep up. We might perhaps see scientists struggling with mixed

success to keep Artificial Intelligence out of the schoolrooms. We

might see scientists stricken with fear that their own children were

becoming strangers to them, losing all morality and humanity as they

transferred their tender young brains into cool new racks of silicon

ultra-rationality -- all in the name of progress.

But this isn't science. This is only bizarre speculation.

For Further Reading:

THE CREATIONISTS by Ronald L. Numbers (Alfred A. Knopf, 1992).

Sympathetic but unsparing history of Creationism as movement and

doctrine.

THE GENESIS FLOOD: The Biblical Record and its Scientific

Implications by John C. Whitcomb and Henry M. Morris (Presbyterian

and Reformed Publishing Company, 1961). Best-known and most

often-cited Creationist text.

MANY INFALLIBLE PROOFS: Practical and Useful Evidences of

Christianity by Henry M. Morris (CLP Publishers, 1974). Dr Morris

goes beyond flood geology to offer evidence for Christ's virgin birth,

the physical transmutation of Lot's wife into a pillar of salt, etc.

CATALOG of the Institute for Creation Research (P O Box 2667, El

Cajon, CA 92021). Free catalog listing dozens of Creationist

publications.

CULT ARCHAEOLOGY AND CREATIONISM: Understanding

Pseudoscientific Beliefs About the Past edited by Francis B. Harrold

and Raymond A. Eve (University of Iowa Press, 1987). Indignant

social scientists tie into highly nonconventional beliefs about the

past.

"Robotica '93"

We are now seven years away from the twenty-first century. Where are all our robots?

A faithful reader of SF from the 1940s and '50s might be surprised to learn that we're not hip-deep in robots by now. By this time, robots ought to be making our breakfasts, fetching our newspapers, and driving our atomic-powered personal helicopters. But this has not come to pass, and the reason is simple.

We don't have any robot brains.

The challenge of independent movement and real-time perception in a natural environment has simply proved too daunting for robot technology. We can build pieces of robots in plenty. We have thousands of robot arms in 1993. We have workable robot wheels and even a few workable robot legs. We have workable sensors for robots and plenty of popular, industrial, academic and military interest in robotics. But a workable robot brain remains beyond us.

For decades, the core of artificial-intelligence research has involved programming machines to build elaborate symbolic representations of the world. Those symbols are then manipulated, in the hope that this will lead to a mechanical comprehension of reality that can match the performance of organic brains.

Success here has been very limited. In the glorious early days of AI research, it seemed likely that if a machine could be taught to play chess at grandmaster level, then a "simple" task like making breakfast would be a snap. Alas, we now know that advanced reasoning skills have very little to do with everyday achievements such as walking, seeing, touching and listening. If humans had to "reason out" the process of getting up and walking out the front door through subroutines and logical deduction, then we'd never budge from the couch. These are things we humans do "automatically," but that doesn't make them easy -- they only seem easy to us because we're organic. For a robot, "advanced" achievements of the human brain, such as logic and mathematical skill, are relatively easy to mimic. But skills that even a mouse can manage brilliantly are daunting in the extreme for machines.

In 1993, we have thousands of machines that we commonly call "robots." We have robot manufacturing companies and national and international robot trade associations. But in all honesty, those robots of 1993 scarcely deserve the name. The term "robot" was invented in 1921 by the Czech playwright Karel Capek, for his stage drama *R.U.R.* The word "robot" came from the Czech term for "drudge" or "serf." Capek's imaginary robots were made of manufactured artificial flesh, not metal, and were very humanlike, so much so that they could actually have sex and reproduce (after exterminating the humans that created them). Capek's "robots" would probably be called "androids" today, but they established the general concept for robots: a humanoid machine.

If you look up the term "robot" in a modern dictionary, you'll find that "robots" are supposed to be machines that resemble human beings and do mechanical, routine tasks in response to commands.

Robots of this classic sort are vanishingly scarce in 1993. We simply don't have any proper brains for them, and they can scarcely venture far off the drawing board without falling all over themselves. We do, however, have enormous numbers of mechanical robot arms in daily use today. The robot industry in 1993 is mostly in the business of retailing robot arms.

There's a rather narrow range in modern industry for robot arms. The commercial niche for robotics is menaced by cheap human manual labor on one side and by so-called "hard automation" on the other. This niche may be narrow, but it's nevertheless very real; in the US alone, it's worth about 500 million dollars a year. Over the past thirty years, a lot of useful technological lessons have been learned in the iron-arms industry.

Japan today possesses over sixty percent of the world's entire robot population. Japanese industry won this success by ignoring much of the glamorized rhetoric of classic robots and concentrating on actual workaday industrial uses for a brainless robot arm. European and American manufacturers, by contrast, built overly complex, multi-purpose, sophisticated arms with advanced controllers and reams of high-level programming code. As a result, their reliability was poor, and in the grueling environment of the assembly line, they frequently broke down. Japanese robots were less like the SF concept of robots, and therefore flourished rather better in the real world. The simpler Japanese robots were highly reliable, low in cost, and quick to repay their investment.

Although Americans own many of the basic patents in robotics, today there are no major American robot manufacturers. American robotics concentrates on narrow, ultra-high-tech, specialized applications and, of course, military applications. The robot population in the United States in 1992 was about 40,000, most of them in automobile manufacturing. Japan by contrast has a whopping 275,000 robots (more or less, depending on the definition). Every First World economy has at least some machines they can proudly call robots; Germany about 30,000, Italy 9,000 or so, France around 13,000, Britain 8,000 and so forth. Surprisingly, there are large numbers in Poland and China.
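The national counts quoted above square with the "over sixty percent" claim. A quick consistency check (the per-country figures are from the text; everything outside these six economies is left as an unknown remainder):

```python
# Robot populations circa 1992-93, as quoted in the text.
robots = {
    "Japan": 275_000,
    "USA": 40_000,
    "Germany": 30_000,
    "France": 13_000,
    "Italy": 9_000,
    "Britain": 8_000,
}

listed_total = sum(robots.values())        # 375,000 across these six
japan_share = robots["Japan"] / listed_total

# Japan holds ~73% of the listed six alone, so "over sixty percent"
# of the world total survives even after adding Poland, China and
# the rest of the First World.
print(f"{japan_share:.0%}")
```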

Robot arms have not grown much smarter over the years. Making them smarter has so far proved to be commercially counterproductive. Instead, robot arms have become much better at their primary abilities: repetition and accuracy. Repetition and accuracy are the real selling-points in the robot arm biz. A robot arm was once considered a thing of loveliness if it could reliably shove products around to within a tenth of an inch or so. Today, however, robots have moved into microchip assembly, and many are now fantastically accurate. IBM's "fine positioner," for instance, has a gripper that floats on a thin layer of compressed air and moves in response to computer-controlled electromagnetic fields. It has an accuracy of two tenths of a micron. One micron is one millionth of a meter. On this scale, grains of dust loom like monstrous boulders.

CBW Automation's T-190 model arm is not only accurate, but wickedly fast. This arm plucks castings from hot molds in less than a tenth of a second, repeatedly whipping the products back and forth from 0 to 30 miles per hour in half the time it takes to blink.

Despite these impressive achievements, however, most conventional robot arms in 1993 have very pronounced limits. Few robot arms can move a load heavier than 10 kilograms without severe problems in accuracy. The links and joints within the arm flex in ways difficult to predict, especially as wear begins to mount. Of course it's possible to stiffen the arm with reinforcements, but then the arm itself becomes ungainly and full of unpredictable inertia. Worse yet, the energy required to move a heavier arm adds to manufacturing costs. Thanks to this surprising flimsiness in a machine's metal arm, the major applications for industrial robots today are welding, spraying, coating, sealing, and gluing. These are activities that involve a light and steady movement of relatively small amounts of material.

Robots thrive in the conditions known in the industry as "The 3 D's": Dirty, Dull, and Dangerous. If it's too hot, too cold, too dark, too cramped, or, best of all, if it's toxic and/or smells really bad, then a robot may well be just your man for the job!

When it comes to Dirty, Dull and Dangerous, few groups in the world can rival the military. It's natural therefore that military-industrial companies such as Grumman, Martin Marietta and Westinghouse are extensively involved in modern military-robotics. Robot weaponry and robot surveillance fit in well with modern US military tactical theory, which emphasizes "force multipliers" to reduce US combat casualties and offset the relative US weakness in raw manpower.

In a recent US military wargame, the Blue or Friendly commander was allowed to fortify his position with experimental smart mines, unmanned surveillance planes, and remote-controlled unmanned weapons platforms. The Red or Threat commander adamantly refused to take heavy casualties by having his men battle mere machinery. Instead, the Threat soldiers tried clumsily to maneuver far around the flanks so as to engage the human soldiers in the Blue Force. In response, though, the Blue commander simply turned off the robots and charged into the disordered Red force, clobbering them.

This demonstrates that "dumb machines" needn't be very smart at all to be of real military advantage. They don't even necessarily have to be used in battle -- the psychological advantage alone is very great. The US military benefits enormously if it can exchange the potential loss of mere machinery for suffering and damaged morale in the human enemy.

Among the major robotics initiatives in the US arsenal today are Navy mine-detecting robots, autonomous surveillance aircraft, autonomous surface boats, and remotely-piloted "humvee" land vehicles that can carry and use heavy weaponry. American tank commanders are especially enthused about this idea, particularly for lethally dangerous slots like point-tank in assaults on fortified positions.

None of these military "robots" look at all like a human being. They don't have to look human, and in fact work much better if they don't. And they're certainly not programmed to obey Asimov's Three Laws of Robotics. If they had enough of a "positronic brain" to respect the lives of their human masters, then they'd be useless.

Recently there's been a remarkable innovation in the "no-brain" approach to robotics. This is the robotic bug. Insects have been able to master many profound abilities that frustrate even the "smartest" artificial intelligences. MIT's famous Insect Lab is a world leader in this research, building tiny and exceedingly "stupid" robots that can actually rove and scamper about in rough terrain with impressively un-robot-like ease.

These bug robots are basically driven by simple programs of "knee-jerk reflexes." Robot bugs have no centralized intelligence and no high-level programming. Instead, they have a decentralized network of simple abilities that are only loosely coordinated. These robugs have no complex internal models, and no comprehensive artificial "understanding" of their environment. They're certainly not human-looking, and they can't follow spoken orders. It's been suggested though that robot bugs might be of considerable commercial use, perhaps cleaning windows, scavenging garbage, or repeatedly vacuuming random tiny paths through the carpet until they'd cleaned the whole house.
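For the programming-minded, the flavor of this "knee-jerk reflex" scheme can be sketched in a few lines of Python. Everything here -- the sensor names, the reflexes, the priority ranking -- is invented for illustration; the point is only that each reflex is trivial, there is no model of the world anywhere, and the first reflex with an opinion wins:

```python
import random

def avoid(sensors):
    # Highest-priority reflex: veer away from anything dead ahead.
    if sensors["obstacle_ahead"]:
        return "turn"
    return None

def unstick(sensors):
    # If a leg is snagged, thrash until it comes free.
    if sensors["leg_stuck"]:
        return "wiggle"
    return None

def wander(sensors):
    # Lowest priority: just keep moving in some direction or other.
    return random.choice(["forward", "forward", "turn"])

REFLEXES = [avoid, unstick, wander]  # ranked, most urgent first

def step(sensors):
    """One tick of the bug: the first reflex with an opinion wins."""
    for reflex in REFLEXES:
        action = reflex(sensors)
        if action is not None:
            return action

print(step({"obstacle_ahead": True, "leg_stuck": False}))  # turn
print(step({"obstacle_ahead": False, "leg_stuck": True}))  # wiggle
```

Coordination emerges from the ranking, not from any central plan: the bug avoids, then frees itself, then wanders, and nothing in the program "knows" what a room is.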

If you owned robot bugs, you'd likely never see them. They'd come with the house, just like roaches or termites, and they'd emerge only at night. But instead of rotting your foundation and carrying disease, they'd modestly tidy up for you.

Today robot bugs are being marketed by IS Robotics of Cambridge, MA, which is selling them for research and also developing a home robotic vacuum cleaner.

A swarm of bugs is a strange and seemingly rather far-fetched version of the classic "household robot." But the bug actually seems rather more promising than the standard household robot in 1993, such as the Samsung "Scout-About." This dome-topped creation, which weighs 16 lbs and is less than a foot high, is basically a mobile home-security system. It rambles about the house on its limited battery power, sensing for body-heat, sudden motion, smoke, or the sound of breaking glass. Should anything untoward occur, Scout-About calls the police and/or sets off alarms. It costs about a thousand dollars. Sales of home-security robots have been less than stellar. It appears that most people with a need for such a device would still rather get themselves a dog.

There is an alternative to the no-brain approach in contemporary robotics. That's to use the brain of a human being, remotely piloting a robot body. The robot then becomes "the tele-operated device." Tele-operated robots face much the same series of career opportunities as their brainless cousins -- Dirty, Dull and Dangerous. In this case, though, the robot may be able to perform some of the Dull parts on its own, while the human pilot successfully avoids the Dirt and Danger. Many applications for military robotics are basically tele-operation, where a machine can maintain itself in the field but is piloted by human soldiers during important encounters. Much the same goes for undersea robotics, which, though not a thriving field, does have niches in exploration, oceanography, underwater drilling-platform repair, and underwater cable inspection. The wreck of the *Titanic* was discovered and explored through such a device.

One of the most interesting new applications of tele-operated robotics is in surgical tele-operations. Surgery is, of course, a notoriously delicate and difficult craft. It calls for the best dexterity humans can manage -- and then some. A table-mounted iron arm can be of great use in surgery, because of its swiftness and its microscopic precision. Unlike human surgeons, a robot arm can grip an instrument and hold it in place for hours, then move it again swiftly at a moment's notice without the least tremor. Robot arms today, such as the ROBODOC Surgical Assistant System, are seeing use in hip replacement surgery.

Often the tele-operated robot's grippers are tiny and at the end of a long flexible cable. The "laparoscope" is a surgical cable with a tiny light, camera and cutters at one end. It's inserted through a small hole in the patient's abdominal wall. The use of laparoscopes is becoming common, since their use much reduces the shock and trauma of major surgery.

"Laparoscopy" usually requires two human surgeons, though; one to cut, and one to guide the cable and camera. There are obvious potential problems here from missed communications or simple human exhaustion. With Britain's "Laparobot," however, a single surgeon can control the camera angle through a radio-transmitting headband. If he turns his head, the laparoscope camera pans; if he raises or lowers his head it tilts up and down, and if he leans in, then it zooms. And he still has his hands free to control the blades. The Laparobot is scheduled for commercial production in late 1993.

Tele-operation has made remarkable advances recently with the advent of fiber-optics and high-speed computer networking. However, tele-operation still has very little to do with the classic idea of a human-shaped robot that can understand and follow orders. Periodically, there are attempts to fit the human tele-operator into a human- shaped remote shell -- something with eyes and arms, something more traditionally robotlike. And yet, the market for such a machine has never really materialized. Even the military, normally not disturbed by commercial necessity, has never made this idea work (though not from lack of trying).

The sensory abilities of robots are still very primitive. Human hands have no less than twenty different kinds of nerve fiber. Eight kinds of nerve control muscles, blood vessels and sweat-glands, while the other twelve kinds sense aspects of pain, temperature, texture, muscle condition and the angles of knuckles and joints. No remote-controlled robot hand begins to match this delicate and sophisticated sensory input.

If robot hands this good existed, they would obviously do very well as medical prosthetics. It's still questionable whether there would be a real-world use and real-world market for a remotely-controlled tele-operated humanlike robot. There are many industrial uses for certain separate aspects of humanity -- our grip, our vision, our propensity for violence -- but few for a mechanical device with the actual shape and proportions of a human being.

It seems that our fascination with humanoid robots has little to do with industry, and everything to do with society. Robots are appealing for social reasons. Robots are romantic and striking. Robots have good image.

Even "practical" industrial robots, mere iron arms, have overreached themselves badly in many would-be applications. There have been waves of popular interest and massive investment in robotics, but even during its boom years, the robot industry has not been very profitable. In the mid-1980s there were some 300 robot manufacturers; today there are less than a hundred. In many cases, robot manufacturers survive because of deliberate government subsidy. For a nation to own robots is like owning rocketships or cyclotrons; robots are a symbol of national technological prowess. Robots mark a nation as possessing advanced First World status.

Robots are prestige items. In Japan, robots can symbolize the competition among Japanese firms. This is why Japanese companies sometimes invent oddities such as "Monsieur," a robot less than a centimeter across, or a Japanese boardroom robot that can replace chairs after a meeting. (Of course one can find human office help to replace chairs at very little cost and with great efficiency. But the Japanese office robot replaces chairs with an accuracy of millimeters!)

It makes a certain sense to subsidize robots. Robots support advanced infrastructure through their demand-pull in electronics, software, sensor technology, materials science, and precision engineering. Spin-offs from robotics can vitalize an economy, even if the robots themselves turn out to be mostly decorative. Anyway, if worst comes to worst, robots have always made excellent photo-op backgrounds for politicians.

Robots truly thrive as entertainers. This is where robots began -- on the stage, in Mr. Capek's play in 1921. The best-known contemporary robot entertainers are probably "Crow" and "Tom Servo" from the cable television show MYSTERY SCIENCE THEATER 3000. These wisecracking characters who lampoon bad SF films are not "real robots," but only puppets in hardshelled drag; but Crow and Tom are actors, and actors should be forgiven a little pretense. Disney "animatronic" robots have a long history and still have a strong appeal. Lately, robot dinosaurs, robot prehistoric mammals, and robot giant insects have proved to be enormous crowd-draws, scaring the bejeezus out of small children (and, if truth be told, their parents). Mark Pauline's "Survival Research Laboratories" has won an international reputation for its violent and catastrophic robot performance-art. In Austin, Texas, the Robot Group has won a city arts grant to support its robot blimps and pneumatically-controlled junk-creations.

Man-shaped robots are romantic. They have become symbols of an early attitude toward technology which, in a more suspicious and cynical age, still has its own charm and appeal. In 1993, "robot nostalgia" has become a fascinating example of how high-tech dreams of the future can, by missing their target, define their own social period. Today, fabulous prices are paid at international antique toy auctions for children's toy robots from the '40s and '50s. These whirring, blinking creatures with their lithographed tin and folded metal tabs exert a powerful aesthetic pull on their fanciers. A mint-in-the-box Robby Robot from 1956, complete with his Space Patrol Moon Car, can bring over four thousand dollars at an auction at Christie's. Thunder Robot, a wondrous creation with machine-gun arms, flashing green eyes, and whirling helicopter blades over its head, is worth a whopping nine grand.

Perhaps we like robots better in 1993 because we can't have them in real life. In today's world, any robot politely and unquestioningly "obeying human orders" in accord with Asimov's Three Laws of Robotics would face severe difficulties. If it were worth even half of what the painted-tin Thunder Robot is worth, then a robot streetsweeper, doorman or nanny would probably be beaten sensorless and carjacked by a gang of young human unemployables. It's a long way back to yesterday's tomorrows.

"Watching the Clouds"

In the simmering depths of a Texas summer, there are few things more soothing than sprawling on a hillside and watching the clouds roll by. Summer clouds are especially bright and impressive in Texas, for reasons we will soon come to understand -- and anyhow, during a Texas summer, any activity more strenuous than lying down, staring at clouds, and chewing a grass-stem may well cause heat-stroke.

By the early nineteenth century, the infant science of meteorology had freed itself from the ancient Aristotelian dogma of vapors, humors, and essences. It was known that the atmosphere was made up of several different gases. The behavior of gases in changing conditions of heat, pressure and density was fairly well understood. Lightning was known to be electricity, and while electricity itself remained enormously mysterious, it was under intense study. Basic weather instruments -- the thermometer, barometer, rain gauge, and weathervane -- were becoming ever more accurate, and were increasingly cheap and available.

And, perhaps most importantly, a network of amateur natural philosophers was watching the clouds, and systematically using instruments to record the weather.

Farmers and sailors owed their lives and livelihoods to their close study of the sky, but their understanding was folkloric, not basic. Their rules of thumb were codified in hundreds of folk weather-proverbs. "When clouds appear like rocks and towers/ the earth's refreshed with frequent showers." "Mackerel skies and mares' tails/ make tall ships carry low sails." This beats drowning at sea, but it can't be called a scientific understanding.

Things changed with the advent of Luke Howard, "the father of British meteorology." Luke Howard was not a farmer or sailor -- he was a Quaker chemist. Luke Howard was born in metropolitan London in 1772, and he seems to have spent most of his life indoors in the big city, conducting the everyday business of his chemist's shop.

Luke Howard wasn't blessed with high birth or a formal education, but he was a man of lively and inquiring mind. While he respected folk weather-wisdom, he also regarded it, correctly, as "a confused mass of simple aphorisms." He made it his life's avocation to set that confusion straight.

Luke Howard belonged to a scientific amateur's club in London known as the Askesian Society. It was thanks to these amateur interests that Howard became acquainted with the Linnaean System. Linnaeus, an eighteenth-century Swedish botanist, had systematically ranked and classified the plants and animals, using the international language of scholarship, Latin. This highly useful act of classification and organization was known as "modification" in the scientific terminology of the time.

Though millions of people had watched, admired, and feared clouds for tens of thousands of years, it was Luke Howard's particular stroke of genius to recognize that clouds might also be classified.

In 1803, the thirty-one-year-old Luke Howard presented a learned paper to his fellow Askesians, entitled "On the Modifications of Clouds, and On the Principles of Their Production, Suspension, and Destruction."

Howard's speculative "principles" have not stood the test of time. Like many intellectuals of his period, Howard was utterly fascinated by "electrical fluid," and considered many cloud shapes to be due to static electricity. Howard's understanding of thermodynamics was similarly halting, since, like his contemporaries, he believed heat to be an elastic fluid called Caloric.

However, Howard's "modifications" -- cirrus, cumulus, and stratus -- have lasted very successfully to the present day and are part of the bedrock of modern meteorology. Howard's scholarly reputation was made by his "modifications," and he was eventually invited to join the prestigious Royal Society. Luke Howard became an author, lecturer, editor, and meteorological instrument-maker, and a learned correspondent with superstars of nineteenth-century scholarship such as Dalton and Goethe. Luke Howard became the world's recognized master of clouds. In order to go on earning a living, though, the father of British meteorology wisely remained a chemist.

Thanks to Linnaeus and his disciple Howard, cloud language abounds in elegant Latin constructions. The "genera" of clouds are cirrus, cirrocumulus, cirrostratus; altocumulus, altostratus, nimbostratus; stratocumulus, cumulus and cumulonimbus.

Clouds can also be classified into "species," by their peculiarities in shape and internal structure. A glance through the World Meteorological Organization's official *International Cloud Atlas* reveals clouds called: fibratus, uncinus, spissatus, castellanus, floccus, stratiformis, nebulosus, lenticularis, fractus, humilis, mediocris, congestus, calvus, and capillatus.

As if that weren't enough, clouds can be further divvied up into "varieties," by their "special characteristics of arrangement and transparency": intortus, vertebratus, undulatus, radiatus, lacunosus, duplicatus, translucidus, perlucidus and opacus.

And, as a final scholastic fillip, there are the nine supplementary features and appended minor cloud forms: incus, mammatus, virga, praecipitatio, arcus, tuba, pileus, velum, and pannus.

Luke Howard had quite a gift for precise language, and sternly defended his use of scholar's Latin to other amateurs who would have preferred plain English. However elegant his terms, though, Howard's primary insight was simple. He recognized that most clouds come in two basic types: "cumulus" and "stratus," or heaps and layers.

Heaps are commoner than layers. Heaps are created by local rising air, while layers tend to sprawl flatly across large areas.

Water vapor is an invisible gas. It's only when the vapor condenses, and begins to intercept and scatter sunlight as liquid droplets or solid ice crystals, that we can see and recognize a "cloud." Great columns and gushes of invisible vapor continue to enter and leave the cloud throughout its lifetime, condensing within it and evaporating at its edges. This is one reason why clouds are so mutable -- clouds are something like flames, wicking along from candles we can't see.

Who can see the wind? But even when we can't feel wind, the air is always in motion. The Earth spins ponderously beneath its thin skin of atmosphere, dragging air with it by gravity, and arcing wind across its surface with powerful Coriolis force. The strength of sunlight varies between pole and equator, powering gigantic Hadley Cells that try to equalize the difference. Mountain ranges heave air upward, and then drop it like bobsleds down their far slopes. The sunstruck continents simmer like frying pans, and the tropical seas spawn giant whirlpools of airborne damp.

Water vapor moves and mixes freely with all of these planetary surges, just like the atmosphere's other trace constituents. Water vapor, however, has a unique quality -- at Earth's temperatures, water can become solid, liquid or gas. These changes in form can store, or release, enormous amounts of heat. Clouds can power themselves by steam.

A Texas summer cumulus cloud is the child of a rising thermal, from the sun-blistered Texan earth. Heated air expands. Expanding air becomes buoyant, and rises. If no overlying layer of stable air stops it from rising, the invisible thermal will continue to rise, and cool, until it reaches the condensation level. The condensation level is what gives cumulus clouds their flat bases -- to Luke Howard, the condensation level was colorfully known as "the Vapour Plane." Depending on local heat and humidity, the condensation level may vary widely in height, but it's always up there somewhere.

At this point, the cloud's internal steam-engine kicks in. Billions of vapor molecules begin to cling to the enormous variety of trash that blesses our atmosphere: bits of ash and smoke from volcanoes and forest-fires, floating spores and pollen-grains, chips of sand and dirt kicked up by wind-gusts, airborne salt from bubbles bursting in the ocean, meteoric dust sifting down from space. As the vapor clings to these "condensation nuclei," it condenses, and liquefies, and it gives off heat.

This new gush of heat causes the air to expand once again, and propels it upward in a rising tower, topped by the trademark cauliflower bubbles of the summer cumulus.

If it's not disturbed by wind, hot dry air will cool about ten degrees centigrade for every kilometer that it rises above the earth. This rate of cooling is known to Luke Howard's modern-day colleagues as the Dry Adiabatic Lapse Rate. Hot *damp* air, however, cools at the *Wet* Adiabatic Lapse Rate, only about six degrees per kilometer of height. This four-degree difference per kilometer -- caused by the "latent heat" of the wet air -- is known in storm-chasing circles as "the juice."
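The two rates make for easy arithmetic. Here is a minimal sketch in Python; the starting temperature and the two-kilometer condensation level are made-up example figures, not measurements:

```python
DRY_LAPSE = 10.0  # degrees C lost per kilometer of ascent (dry air)
WET_LAPSE = 6.0   # degrees C lost per kilometer of ascent (damp air)

def parcel_temperature(surface_temp_c, height_km, condensation_km):
    """Temperature of a rising parcel: it cools at the dry rate below
    the condensation level, and at the slower wet rate above it."""
    if height_km <= condensation_km:
        return surface_temp_c - DRY_LAPSE * height_km
    temp_at_base = surface_temp_c - DRY_LAPSE * condensation_km
    return temp_at_base - WET_LAPSE * (height_km - condensation_km)

# A damp parcel leaving the ground at 35 C, condensing at 2 km:
for h in (0, 2, 5, 10):
    print(h, "km:", parcel_temperature(35.0, h, 2.0), "C")
```

At ten kilometers this damp parcel comes out a full thirty-two degrees warmer than dry air cooling the whole way would be -- still buoyant, still climbing. That surplus is the juice.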

When bodies of wet and dry air collide along what is known as "the dryline," the juice kicks in with a vengeance, and things can get intense. Every spring, in the High Plains of Texas and Oklahoma, dry air from the center of the continent tackles damp surging warm fronts from the soupy Gulf of Mexico. The sprawling plains that lie beneath the dryline are aptly known as "Tornado Alley."

A gram of condensing water-vapor has about 600 calories of latent heat in it. One cubic meter of hot damp air can carry up to three grams of water vapor. Three grams may not seem like much, but there are plenty of cubic meters in a cumulonimbus thunderhead, which tends to be about ten thousand meters across and can rise eleven thousand meters into the sky, forming an angry, menacing anvil hammered flat across the bottom of the stratosphere.
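It's worth multiplying those figures out. The Python sketch below treats the thunderhead, very crudely, as a box of the dimensions just given -- a deliberate oversimplification, since real clouds are neither box-shaped nor uniformly damp:

```python
CALORIES_PER_GRAM = 600.0    # latent heat in a gram of condensing vapor
GRAMS_PER_CUBIC_METER = 3.0  # vapor carried by a cubic meter of hot damp air
JOULES_PER_CALORIE = 4.184

width_m = 10_000.0   # "about ten thousand meters across"
height_m = 11_000.0  # "can rise eleven thousand meters into the sky"

# Treat the thunderhead, very roughly, as a box of damp air.
volume_m3 = width_m * width_m * height_m
total_calories = volume_m3 * GRAMS_PER_CUBIC_METER * CALORIES_PER_GRAM
total_joules = total_calories * JOULES_PER_CALORIE

print(f"{total_joules:.1e} joules of latent heat")
```

That works out to roughly eight million billion joules -- on the order of a couple of megatons of TNT, which is why a thunderhead deserves respect.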

The resulting high winds, savage downbursts, lashing hail and the occasional city-wrecking tornado can be wonderfully dramatic and quite often fatal. However, in terms of the Earth's total heat-budget, these local cumulonimbus fireworks don't compare in total power to the gentle but truly vast stratus clouds. Stratus tends to be the product of air gently rising across great expanses of the earth, air that is often merely nudged upward, at a few centimeters per second, over a period of hours. Vast weather systems can slowly pump up stratus clouds in huge sheets, layer after layer of flat overcast that sometimes covers a quarter of North America.

Fog is also a stratus cloud, usually created by warm air's contact with the cold night earth. Sometimes a gentle uplift of moving air, oozing up the long slope from the Great Plains to the foot of the Rockies, can produce vast blanketing sheets of ground-level stratus fog that cover entire states.

As it grows older, stratus cloud tends to break up into dapples or billows. The top of the stratus layer cools by radiation into space, while the bottom of the cloud tends to warm by intercepting the radiated heat from the earth. This gentle radiant heat creates a mild, slow turbulence that breaks the solid stratus into thousands of leopard-spots, or with the aid of a little wind, perhaps into long billows and parallel rolls. Thicker, low-lying stratus may not break up enough to show clear sky, but simply become a dispiriting mass of gloomy gray knobs and lumps that can last for days on end, during a quiet winter.

When vapor condenses into droplets, it gives off latent heat and rises. The cooler air from the heights, shoved aside by the ascending warm air, tends to fall. If the falling air drags some captured droplets of water with it, those droplets will evaporate on the way down. This makes the downdraft cooler and denser, and speeds its descent. It's "the juice" again, but in reverse. If there's enough of this steam-power set loose, it will create vertically circulating masses of air, or "convection cells."

Downdraft winds are invisible, but they are a vital part of the cloud system. In a patchy summer sky, downdrafts fill the patches between the clouds -- downdrafts *are* the patches. They tear droplets from the edges of clouds and consume them.

Most clouds never manage to rain or snow. They simply use the vapor-water cycle as a mechanism to carry and dissipate excess heat, doing the Earth's quiet business of entropy.

Clouds also scour the sky; they are the atmosphere's cleaning agents. A good rain always makes the air seem fresh and clean, but even clouds that never rain can nevertheless clean up billions of dust particles. Tiny droplets carry their dust nuclei with them as they collide with one another inside the cloud, and combine into large drops of water. Even if this drop then evaporates and never falls as rain, the many dust particles inside it will congeal through adhesion into a good-sized speck, which will eventually settle to earth on its own.

For a drop of water to fall successfully to earth, it has to grow enormously -- about three thousand times in width, from the micron width of a damp condensation nucleus, to the hefty three millimeters of an honest raindrop. A raindrop can grow by condensation to about a tenth of a millimeter, but after this scale is reached, condensation alone will no longer do the job, and the drop has to rely on collision and capture.
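The growth arithmetic, in round numbers, using the two endpoint sizes given above:

```python
nucleus_width_m = 1e-6   # a damp condensation nucleus: about a micron wide
raindrop_width_m = 3e-3  # an honest raindrop: about three millimeters

width_ratio = raindrop_width_m / nucleus_width_m
volume_ratio = width_ratio ** 3  # volume grows as the cube of width

print(f"{width_ratio:.0f} times wider")
print(f"{volume_ratio:.1e} times the volume")
```

Three thousand times wider means tens of billions of times the volume -- which is why condensation, good for the first tenth of a millimeter, can't finish the job on its own.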

Warm damp air rising within a typical rainstorm generally moves upward at about a meter per second. Drizzle falls about one centimeter per second and so is carried up with the wind, but as drops grow, their rate of descent increases. Eventually the larger drops are poised in midair, struggling to fall, as tiny droplets are swept up past them and against them. The drop will collide and fuse with some of the droplets in its path, until it grows too large for the draft to support. If it is then caught in a cool downdraft, it may survive to reach the earth as rain. Sometimes the sheer mass of rain can overpower the updraft, through accumulating weight and the cooling power of its own evaporation.

Raindrops can also grow as ice particles at the frigid tops of tall clouds. "Sublimation" is the process of water vapor changing directly into ice, skipping the liquid stage entirely. If the air is cold enough, an ice crystal grows much faster in saturated air than a water droplet does. An ice crystal in damp supercooled air can grow to raindrop size in only ten minutes. An upper-air snowflake, if it melts during its long descent, falls as rain.

Truly violent updrafts to great heights can create hail. Violent storms can create updrafts as fast as thirty meters a second, fast enough to buoy up the kind of grapefruit-sized hail that sometimes kills livestock and punches holes right through roofs. Some theorists believe that the abnormally fat raindrops, often the first signs of an approaching thundershower, are thin scatterings of thoroughly molten hail.

Rain is generally fatal to a cumulonimbus cloud, causing the vital loss of its "juice." The sharp, clear outlines of its cauliflower top become smudgy and sunken. The bulges flatten, and the crevasses fill in. If there are strong winds at the heights, the top of the cloud can be flattened into an anvil, which, after rain sets in, can be torn apart into the long fibrous streaks of anvil cirrus. The lower part of the cloud subsides and dissolves away with the rain, and the upper part drifts away with the prevailing wind, slowly evaporating into broken ragged fragments, "fractocumulus."

However, if there is juice in plenty elsewhere, then a new storm tower may spring up on the old storm's flank. Systems of storm will therefore often propagate at an angle across the prevailing wind, bubbling up to the right or left edge of an advancing mass of clouds. There may be a whole line of such storms, bursting into life at one end, and collapsing into senescence at the other. The youngest tower, at the far edge of the storm-line, usually has the advantage of the strongest supply of juice, and is therefore often the most violent. Storm-chasers tend to cluster at the storm's trailing edge to keep a wary eye on "Tail-End Charlie."

Because of the energy it carries, water vapor is the most influential trace gas in the atmosphere. It's the only gas in the atmosphere that can vary so drastically, plentiful at some times and places, vanishing at others. Water vapor is also the most dramatic gas, because liquid water, cloud, is the only trace constituent in our atmosphere that we can actually see.

The air is mostly nitrogen -- about 78 percent. Oxygen is about 21 percent, argon one percent. The rest is neon, helium, krypton, hydrogen, xenon, ozone and just a bit of methane and carbon dioxide. Carbon dioxide, though vital to plant life, is a vanishingly small 0.03 percent of our atmosphere.

However, thanks to decades of hard work by billions of intelligent and determined human beings, the carbon dioxide in our atmosphere has increased by twenty percent in the last hundred years. During the next fifty years, the level of carbon dioxide in the atmosphere will probably double.

It's possible that global society might take coherent steps to stop this process. But if this process actually does take place, then we will have about as much chance to influence the subsequent course of events as the late Luke Howard.

Carbon dioxide traps heat. Since clouds are our atmosphere's primary heat-engines, doubling the carbon dioxide will likely do something remarkably interesting to our clouds. Despite the best efforts of whirring supercomputers running global atmospheric models around the world, nobody really knows what this might be. There are so many unknown factors in global climatology that our best speculations on the topic are probably not much more advanced, comparatively speaking, than the bold but mistaken theorizing of Luke Howard.

One thing seems pretty likely, though. Whatever our clouds may do, quite a few of the readers of this column will be around in fifty years to watch them.

"Spires on the Skyline"

Broadcast towers are perhaps the single most obvious technological artifact of modern life. At a naive glance, they seem to exist entirely for their own sake. Nobody lives in them. There's nothing stored in them, and they don't offer shelter to anyone or anything. They're skeletal, forbidding structures that are extremely tall and look quite dangerous. They stand, usually, on the highest ground available, so they're pretty hard not to notice. What's more, they're brightly painted and/or covered with flashing lights.

And then there are those *things* attached to them. Antennas of some kind, presumably, but they're nothing like the normal, everyday receiving antennas you might have at home: a simple telescoping rod for a radio, a pair of rabbit ears for a TV. These elaborate, otherworldly appurtenances resemble big drums, or sea urchin spines, or antlers.

In this column, we're going to demystify broadcast towers, and talk about what they do, and why they look that way, and how they've earned their peculiar right to loom eerily on the skyline of every urban center in America.

We begin with the electromagnetic spectrum. Towers have everything to do with the electromagnetic spectrum. Basically, they colonize the spectrum. They legally settle various patches of it, and they use their homestead in the spectrum to make money for their owners and users.

The electromagnetic spectrum is an important natural resource. Unlike most things we think of as "resources," the spectrum is immaterial and intangible. Odder still, it is limited, and yet, it is not exhaustible. Usage of the spectrum is controlled worldwide by an international body known as the International Telecommunications Union (ITU), and controlled within the United States by an agency called the Federal Communications Commission (FCC).

Electromagnetic radiation comes in a wide variety of flavors. It's usually discussed in terms of frequency and wavelength, which are reciprocal measures of the same thing. All electromagnetic radiation moves at one uniform speed, the speed of light. If the frequency of the wave is higher, then the length of the wave must by necessity become shorter.
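
Since everything moves at light-speed, knowing the frequency fixes the wavelength, and vice versa. A back-of-the-envelope sketch in Python (the sample frequencies are illustrative choices, not anything from the regulations):

```python
C = 299_792_458.0  # speed of light in a vacuum, meters per second

def wavelength_m(freq_hz):
    """Wavelength in meters for a given frequency in hertz."""
    return C / freq_hz

# The higher the frequency, the shorter the wave:
#   9 kHz (bottom of the ITU's legal domain) -> about 33,000 meters
#   100 MHz (FM radio)                       -> about 3 meters
#   400 GHz (top of the ITU's legal domain)  -> under a millimeter
```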

Frequency is measured in hertz. One hertz is one cycle per second, named after Heinrich Hertz, a nineteenth-century German physicist who was the first in history to deliberately send a radio signal.

The International Telecommunications Union determines the legally possible uses of the spectrum from 9,000 hertz (9 kilohertz) to 400,000,000,000 hertz (400 gigahertz). This vast legal domain extends from extremely low frequency radio waves up to extremely high frequency microwaves. The behavior of electromagnetic radiation varies considerably along this great expanse of frequency. As frequency rises, the reach of the signal deteriorates; the signal travels less easily, and is more easily absorbed and scattered by rain, clouds, and foliage.

After electromagnetic radiation leaves the legal domain of the ITU, its behavior becomes even more remarkable, as it segues into infrared, then visible light, then ultraviolet, X-rays, gamma rays and cosmic rays.

From the point of view of physics, there's a strangely arbitrary quality to the political decisions of the ITU. For instance, it would seem very odd if there were an international regulatory body deciding who could license and use the color red. Visible colors are a form of electromagnetism, just like radio and microwaves. "Red" is a small piece of the electromagnetic spectrum which happens to be perceivable by the human eye, and yet it would seem shocking if somebody claimed exclusive use of that frequency. The spectrum really isn't a "territory" at all, and can't really be "owned," even though it can be, and is, literally auctioned off to private bidders by national governments for very large sums. Politics and commerce don't matter to the photons. But they matter plenty to the people who build and use towers.

The ITU holds regular international meetings, the World Administrative Radio Conferences, in which various national players jostle over spectrum usage. This is an odd and little-recognized species of diplomacy, but the United States takes it with utter seriousness, as do other countries. The resultant official protocols of global spectrum usage closely resemble international trade documents, or maybe income-tax law. They are very arcane, very specific, and absolutely riddled with archaisms, loopholes, local exceptions and complex wheeler-dealings that go back decades. Everybody and his brother has some toehold in the spectrum: ship navigation, aircraft navigation, standard time signals, various amateur ham radio bands, industrial remote-control radio bands, ship-to-shore telephony, microwave telephone relays, military and civilian radars, police radio dispatch, radio astronomy, satellite frequencies, kids' radio-controlled toys, garage-door openers, and on and on.

The spectrum has been getting steadily more crowded for decades. Once a broad and lonely frontier, inhabited mostly by nutty entrepreneurs and kids with crystal sets, it is now a thriving, uncomfortably crowded metropolis. In the past twenty years especially, there has been phenomenal growth in the number of machines spewing radio and microwave signals into space. New services keep springing up: telephones in airplanes, wireless electronic mail, mobile telephones, "personal communication systems," all of them fiercely demanding elbow-room.

AM radio, FM radio, and television all have slices of the spectrum. They stake and hold their claim with towers. Towers have evolved to fit their specialized environment: a complex interplay of financial necessity, the laws of physics, and government regulation.

Towers could easily be a lot bigger than they are. They're made of sturdy galvanized steel, and the principles of their construction are well-understood. Given four million dollars, it would be a fairly simple matter to build a broadcast tower 4,000 feet high. In practice, however, you won't see towers much over 2,100 feet in the United States, because the FCC deliberately stunts them. A broadcast antenna atop a 4,000-foot tower would hog the spectrum over too large a geographical area.

Almost every large urban antenna-tower, the kind you might see in everyday life, belongs to some commercial entity. Military and scientific-research antennas are more discreet, usually located in remote enclaves. Furthermore, they just don't look like commercial antennas. Military communication equipment is not subject to commercial restraints and has a characteristic appearance: rugged, heavy-duty, clunky, serial-numbered, basically Soviet-looking. Scientific instruments are designed to gather data with an accuracy to the last possible decimal point. They may look frazzled, but they rarely look simple. Broadcast tower equipment by contrast is designed to make money, so it looks cheerfully slimmed-down and mass-produced and gimcrack.

Of course, a commercial antenna must obey the laws of physics like other antennas, and has been designed to do that, but its true primary function is generating optimal revenue on capital investment. Towers and their antennas cost as little as possible, consonant with optimal coverage of the market area, and the likelihood of avoiding federal prosecution for sloppy practices. Modern antennas are becoming steadily more elaborate, so as to use thinner slices of spectrum and waste less radiative power. More elaborate design also reduces the annoyance of stray, unwanted signals, so-called "electromagnetic pollution."

Towers fall under the aegis of not one but two powerful bureaucracies, the FCC and the FAA, or Federal Aviation Administration. The FAA is enormously fond of massive air-traffic radar antennas, but dourly regards broadcast antennas as a "menace to air navigation." This is the main reason why towers are so flauntingly obvious. If towers were painted sky-blue they'd be almost invisible, but they're not allowed this. Towers are hazards to the skyways, and therefore they are striped in glaring "aviation white" and gruesome "international orange," as if they were big traffic sawhorses.

Both the FCC and FAA are big outfits that have been around quite a while. They may be slow and cumbersome, but they pretty well know the name of the game. Safety failures in tower management can draw savage fines of up to a hundred thousand dollars a day. FCC regional offices have mandatory tower inspection quotas, and worse yet, the fines on offenders go tidily right into the FCC's budget.

That orange and white paint costs a lot. It also peels off every couple of years, and has to be replaced, by hand. Depending on the size of the tower, it's sometimes possible to get away with using navigation-hazard lights instead of paint, especially if the lights strobe. The size of the lights, and their distribution on the tower structure, and their wattage, and even their rate and method of flashing are all spelled out in grinding detail by the FCC and FAA.

In the real world -- and commercial towers are very real-world structures -- lights aren't that much of an advantage over paint. The bulbs burn out, for one thing. Rain shorts out the line. Ice freezes solid on the high upper reaches of the tower, plummets off in big thirty-pound chunks, cracking the lights off (not to mention cracking the lower-mounted antennas, the hoods and windshields of utility trucks, and the skulls of unlucky technicians). The lights' power sometimes fails entirely.

And people shoot the lights and steal them. In the real world, people shoot towers all the time. Something about towers -- their dominating size, their lonely locales, or maybe it's that color-scheme and that pesky blinking -- seems to provoke an element of trigger-happy lunacy in certain people. Bullet damage is a major hassle for the tower owner and renter.

People, especially drunken undergraduates in college towns, often climb the towers and steal the hazard lights as trophies. If you visit the base of a tower, you will usually find it surrounded with eight-foot, padlocked galvanized fencing and a mean coil of sharp razor-wire. But that won't stop an active guy with a pickup, a ladder, and a six-pack under his belt.

The people who physically build and maintain towers refer to themselves as "tower hands." Tower engineers and designers refer to these people as "riggers." The suit-and-tie folks who actually own broadcasting stations refer to them as "tower monkeys." Tower hands are blue-collar industrial workers, mostly agile young men, mostly nonunionized. They're a special breed. Not everybody can calmly climb 2,000 feet into the air with a twenty-pound tool-belt of ohmmeters, wattmeters, voltage meters, and various wrenches, clamps, screwdrivers and specialized cutting tools. Some people get used to this and come to enjoy it, but those who don't get used to it, *never* get used to it.

While 2,000 feet in the air, these unsung knights of the airwaves must juggle large, unwieldy antennas. Quite often they work when the station is off the air -- in the midnight darkness, using helmet-mounted coal-miners' lamps. And it's hot up there on the tower, or freezing, or wet, and almost always windy.

The commonest task in the tower-hand's life is painting. It's done with "paint-mitts," big soppy gloves dipped in paint, which are stroked over every structural element in the tower, rather like grooming a horse. It takes a strong man a full day to paint a hundred feet of an average tower. (Rip-off hustlers posing as tower-hands can paint towers at "bargain rates" with amazing cheapness and speed. The rascals -- there are some in every business -- paint only the *underside* of the tower, the parts visible from the ground.)

Spray-on paint can be faster than hand-work, but with even the least breeze, paint sprayed 2,000 feet up will carry hundreds of yards to splatter the roofs, walls, and cars of angry civilians with vivid "international orange." There simply isn't much calm air 2,000 feet up in the sky. High-altitude wind doesn't have to deal with ground-level friction, so wind-speed roughly doubles about every thousand feet.
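
The column's rule of thumb can be put into numbers. A sketch that takes the "doubles about every thousand feet" rule literally (the rule itself is a rough field approximation, not a meteorological law):

```python
def wind_aloft_mph(ground_speed_mph, altitude_ft):
    # Rough rule of thumb from the text: away from ground-level
    # friction, wind speed doubles about every thousand feet.
    return ground_speed_mph * 2 ** (altitude_ft / 1000.0)

# A mild 10 mph breeze at ground level becomes roughly
# 40 mph at the top of a 2,000-foot tower.
```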

Building towers is known in the trade as "stacking steel." The towers are shipped in pieces, then bolted or welded into segments, either on-site or at the shop. The rigid sections are hauled skyward with a winch-driven 'load line,' and kept from swaying by a second steel cable, the 'tag-line.' Each section is bootstrapped up above the top of the tower, through the use of a tower-mounted crane, called the 'gin pole.' The gin pole has a 360-degree revolving device at its very top, the 'rooster head.' Each new section is deliberately hauled up, spun deftly around on the rooster head, stacked on top of all the previous sections, and securely bolted into place. Then the tower hands detach the gin pole, climb the section they just stacked, mount the gin pole up at the top again, and repeat the process till they're done.

Tower construction is a mature industry; there have not been many innovations in the last forty years. There's nothing new about galvanized steel; it's not high-tech, but it's plenty sturdy, it's easy to work and weld, and it gets the job done. The job's not cheap. In today's market, galvanized steel towers tend to cost about a million dollars per thousand feet of height.

Towers come in two basic varieties, self-supporting and guyed. The self-supporting towers are heavier and more expensive, their feet broadly splayed across the earth. Despite their slender spires, the guyed towers actually require more room. The bottom of a guyed tower is tapered and quite slender, often a narrow patch of industrial steel not much bigger than the top of a child's school-desk. But the foundations for those guy cables stretch out over a vast area, sometimes 100 percent of the tower's height, in three or four different directions. It's possible to draw the cables in toward the tower's base, but that increases the "download" on the tower structure.

Towers are generally built as lightly as possible, commensurate with the strain involved. But the strain is very considerable. Towers themselves are heavy. They need to be sturdy enough to have tower-hands climbing any part of them, at any time, safely.

Small towers sometimes use their bracing bars as natural step-ladders, but big towers have a further burden. It takes a strong man, with a clear head, 3/4 of an hour to climb a thousand feet, so any tower over that size definitely requires an elevator. That brings the full elaborate rigging of guide rails, driving mechanism, hoisting cables, counterweights, rope guards, and cab controls, all of which add to the weight and strain on the structure. Even with an elevator, one still needs a ladder for detail work. Tower hands, who have a very good head for heights, prefer their ladders out in the open air, where there are fewer encumbrances, and they can get the job done in short order. However, station engineers and station personnel, who sometimes need to whip up the tower to replace a lightbulb or such, rather prefer a ladder that's nestled inside the tower, so the structure itself forms a natural safety cage.

Besides the weight of the tower, its elevator, the power cables, the waveguides, the lights, and the antennas, there is also the grave risk of ice. Ice forms very easily on towers, much like the icing of an aircraft wing. An ice-storm can add hugely to a tower's weight, and towers must be designed for that eventuality.

Lightning is another prominent hazard, and although towers are well-grounded, lightning can be freakish and often destroys vulnerable antennas and wiring.

But the greatest single threat to a tower is wind-load. Wind has the advantage of leverage; it can attack a tower from any direction, anywhere along its length, and can twist it, bend it, shake it, pound it, and build up destructive resonant vibrations.

Towers and their antennas are built to avoid resisting wind. The structural elements are streamlined. Often the antennas have radomes, plastic weatherproof covers of various shapes. The plastic radome is transparent to radio and microwave emissions; it protects the sensitive antenna and also streamlines it to avoid wind-load.

An antenna is an interface between an electrical system and the complex surrounding world of moving electromagnetic fields. Antennas come in a bewildering variety of shapes, sizes and functions. The Andrew Corporation, prominent American tower builders and equipment specialists, sells over six hundred different models of antennas.

Antennas are classified in four basic varieties: current elements, travelling-wave antennas, antenna arrays, and radiating-aperture antennas. Elemental antennas tend to be low in the frequency range, travelling-wave antennas rather higher, arrays a bit higher yet, and aperture antennas deal with high-frequency microwaves. Antennas are designed to meet certain performance parameters: frequency, radiation pattern, gain, impedance, bandwidth, polarization, and noise temperature.

Elemental antennas are not very "elemental." They were pretty elemental back in the days of Guglielmo Marconi, the first to make any money broadcasting, but Marconi's radiant day of glory was in 1901, and his field of "Marconi wireless" has enjoyed most of a long century of brilliant innovation and sustained development. Monopole antennas are pretty elemental -- just a big metal rod, spewing out radiation in all directions -- but they quickly grow more elaborate. There are doublets and dipoles and loops; slots, stubs, rods, whips; biconical antennas, spheroidal antennas, microstrip radiators.

Then there are the travelling-wave antennas: rhombic, slotted waveguides, spirals, helices, slow wave, fast wave, leaky wave.

And the arrays: broadside, endfire, planar, circular, multiplicative, beacon, et al.

And aperture variants: the extensive microwave clan. The reflector family: single, dual, paraboloid, spherical, cylindrical, off-set, multi-beam, contoured, hybrid, tracking.... The horn family: pyramidal, sectoral, conical, biconical, box, hybrid, ridged. The lens family: metal lens, dielectric lens, Luneberg lens. Plus backfire aperture, short dielectric rods, and parabolic horns.

Electromagnetism is a difficult phenomenon. The behavior of photons doesn't make much horse sense, and is highly counterintuitive. Even the bedrock of electromagnetic understanding, Maxwell's equations, requires one to break into specialized notation, and the integral calculus follows with dreadful speed. To put it very simply: antennas come in different shapes and sizes because they are sending signals of different quality, in fields of different three-dimensional shape.

Wavelength is the most important determinant of antenna size. Low frequency radiation has a very long wavelength and works best with a very long antenna. AM broadcasting is low frequency, and in AM broadcasting the tower *is* the antenna. The AM tower itself is mounted on a block of insulation. Power is pumped into the entire tower and the whole shebang radiates. These low-frequency radio waves can bounce off the ionosphere and go amazing distances.
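
One common design makes the radiator about a quarter of a wavelength tall; this is an illustrative assumption here, since real stations use a variety of electrical heights. A sketch of what the quarter-wave rule implies for tower size:

```python
C = 299_792_458.0   # speed of light, meters per second
FT_PER_M = 3.28084  # feet per meter

def quarter_wave_tower_ft(freq_hz):
    """Height in feet of a quarter-wavelength radiator
    (an illustrative design rule, not a universal one)."""
    return (C / freq_hz / 4.0) * FT_PER_M

# A 1,000 kHz AM station wants a radiator around 246 feet tall,
# so the tower itself does the radiating. A 100 MHz FM element
# is well under three feet long, which is why FM antennas are
# bolted onto towers rather than being towers themselves.
```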

Microwaves, however, are much farther up the spectrum. Microwave radiation has a short wavelength and behaves more like light. This is why microwave antennas come as lenses and dishes, rather like the lens and retina of a human eye.

An array antenna is a group of antennas which interact in complex fashion, bouncing and shaping the radiation they emit. The upshot is a directional beam.
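
The simplest case, a row of identical in-phase elements, already shows the effect: the individual signals add up in step along one direction and cancel each other elsewhere. A sketch for a uniform linear array, with element spacing given in wavelengths:

```python
import cmath
import math

def array_factor(n_elements, spacing_wavelengths, angle_deg):
    # Sum the fields of n equally spaced, in-phase elements as seen
    # from angle_deg off broadside; return the magnitude of the sum.
    phase_step = 2 * math.pi * spacing_wavelengths * math.sin(math.radians(angle_deg))
    return abs(sum(cmath.exp(1j * k * phase_step) for k in range(n_elements)))

# Eight elements at half-wavelength spacing: straight off broadside
# the fields add to 8 times a single element; thirty degrees off,
# they cancel almost completely. Hence the directional beam.
```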

"Coverage is coverage," as the tower-hands say, so very often several different companies, or even several different industries, will share towers, bolting their equipment up and down the structure, rather like oysters, limpets and barnacles all settling on the same reef.

Here's a brief naturalist's description of some of the mechanical organisms one is likely to see on a broadcast tower.

First -- the largest and most obvious -- are things that look like big drums. These are microwave dishes under their protective membranes of radome. They may be flat on both sides, in which case they are probably two parabolic dishes mounted back-to-back. They may be flat on one side, or they may bulge out on both sides so that they resemble a flying saucer. If they are mounted so that the dish faces out horizontally, then they are relays of some kind, perhaps local telephone or a microwave long-distance service. They might be a microwave television-feed to a broadcast TV network affiliate, or a local cable-TV system. They don't broadcast for public reception, because the microwave beams from these focused dishes are very narrow. Somewhere in the distance, probably within 30 miles, is another relay in the chain.

A tower may well have several satellite microwave dishes. These will be down near the base of the tower, hooked to the tower by cable and pointed almost straight up. These satellite dishes are generally much bigger than relay microwave dishes. They're too big to fit on a tower, and there's no real reason to put them on a tower anyway; they'll scarcely get much closer to an orbiting satellite by rising a mere 2,000 feet.

Often, small microwave dishes made of metal slats are mounted to the side of the tower. These slat dishes are mostly empty space, so they're less electronically efficient than a smooth metal dish would be. However, a smooth metal dish, being cup-shaped, acts just like the cup on a wind-gauge, so if a strong wind-gust hits it, it will strain the tower violently. Slat dishes are lighter, cheaper and safer.

Then there are horns. Horns are also microwave emitters. Horns have a leg-thick, hollow metal tube called a waveguide at the bottom. The waveguide pipes in the microwave radiation, and the horn reflects this blast of microwave radiation off an interior reflector, into a narrow beam of the proper "phase," "aperture," and "directivity." Horn antennas are narrow at the bottom and spread out at the top, like acoustic horns. Some are conical, others rectangular. They tend to be mounted vertically inside the tower structure. The "noise" of the horn comes out the side of the horn, not its end, however.

One may see a number of white poles, mounted vertically, spaced parallel and rather far apart, attached to the tower but well away from it. On big towers, these poles might be half-way up; on shorter towers, they're at the top. Sometimes the vertical poles are mounted on the rim of a square or triangular platform, with catwalks for easy access by tower hands. These are antennas for land mobile radio services: paging, cellular phones, cab dispatch, and express mail services.

The tops of towers may well be thick, pipelike, featureless cylinders. These are generally TV broadcast antennas encased in a long cylindrical radome, and topped off with an aircraft beacon.

Very odd things grow from the sides of towers. One sometimes sees a tall vertically mounted rack of metal curlicues that look like a stack of omega signs. These are tubular ring antennas with one knobby stub pointing upward, one stub downward, in an array of up to sixteen. These are FM radio transmitters.

Another array of flat metal rings is linked lengthwise by two long parallel rods. These are VHF television broadcast antennas.

Another species of FM antenna is particularly odd. These witchy-looking arrays stand well out from the side of the tower, on a rod with two large, V-shaped pairs of arms. One V is out at the end of the rod, canted backward, and the other is near the butt of the rod, canted forward. The two V's are twisted at angles to one another, so that from the ground the ends of the V's appear to overlap slightly, forming a broken square. The arms are of hollow brass tubing, and they come in long sets down the side of the tower. The whole array resembles a line of children's jacks that have all been violently stepped on.

The four arms of each antenna are quarter-wavelength arms, two driven and two parasitic, so that their FM radiation is in 90-degree quadrature with equal amplitudes and a high aperture efficiency. Of course, that's easy for *you* to say...

In years to come, the ecology of towers will probably change greatly. This is due to the weird phenomenon known as the "Great Media Exchange" or the "Negroponte Flip," after MIT media theorist Nicholas Negroponte. Broadcast services such as television are going into wired distribution by cable television, where a single "broadcast" can reach 60 percent of the American population and even reach far overseas. With a combination of cable television in cities and direct satellite broadcast in rural areas, what real need remains for television towers? In the meantime, however, services formerly transferred exclusively by wire, such as telephone and fax, are going into wireless, cellular, portable applications, supported by an infrastructure of small neighborhood towers and rather modestly-sized antennas.

Antennas have a glowing future. The spectrum can only become more crowded, and the design of antennas can only become more sophisticated. It may well be, though, that another couple of decades will reduce the great steel spires of the skyline to relics. We have seen them every day of our lives, grown up with them as constant looming presences. But despite their steel and their size, their role in society may prove no more permanent than that of windmills or lighthouses. If we do lose them to the impetus of progress, our grandchildren will regard these great towers with a mixture of romance and incredulity, as the largest and most garish technological anomalies that the twentieth century ever produced.

"The New Cryptography"

Writing is a medium of communication and understanding, but there are times and places when one wants an entirely different function from writing: concealment and deliberate bafflement.

Cryptography, the science of secret writing, is almost as old as writing itself. The hieroglyphics of ancient Egypt were deliberately arcane: both writing and a cypher. Literacy in ancient Egypt was hedged about with daunting difficulty, so as to assure the elite powers of priest and scribe.

Ancient Assyria also used cryptography, including the unique and curious custom of "funerary cryptography." Assyrian tombs sometimes featured odd sets of cryptographic cuneiform symbols. The Assyrian passerby, puzzling out the import of the text, would mutter the syllables aloud, and find himself accidentally uttering a blessing for the dead. Funerary cryptography was a way to steal a prayer from passing strangers.

Julius Caesar lent his name to the famous "Caesar cypher," which he used to secure Roman military and political communications.
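
The Caesar cypher simply shifts each letter a fixed number of places down the alphabet; Caesar himself reportedly used a shift of three. A minimal sketch in Python:

```python
def caesar(text, shift):
    # Shift each letter by `shift` places, wrapping around
    # the alphabet; leave everything else untouched.
    result = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)
    return ''.join(result)

# caesar("ATTACK AT DAWN", 3) -> "DWWDFN DW GDZQ"
# Applying the opposite shift, caesar(..., -3), recovers the original.
```

With only twenty-five possible shifts, the scheme falls to simple trial; it secured Caesar's dispatches only because his enemies did not know the trick.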

Modern cryptographic science is deeply entangled with the science of computing. In 1949, Claude Shannon, the pioneer of information theory, gave cryptography its theoretical foundation by establishing the "entropy" of a message and a formal measurement for the "amount of information" encoded in any stream of digital bits. Shannon's theories brought new power and sophistication to the codebreaker's historic efforts. After Shannon, digital machinery could pore tirelessly and repeatedly over the stream of encrypted gibberish, looking for repetitions, structures, coincidences, any slight variation from the random that could serve as a weak point for attack.
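
Shannon's "entropy" is directly computable: a weighted sum over the frequencies of the symbols in the message. A small sketch, measuring the single-character distribution (the simplest case of Shannon's measure):

```python
import math
from collections import Counter

def entropy_bits_per_symbol(message):
    # Shannon entropy H = -sum(p * log2(p)), where p is the
    # observed frequency of each distinct symbol in the message.
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# "aaaaaaaa" is perfectly predictable: 0 bits per symbol.
# "abcdefgh", eight distinct symbols: 3 bits per symbol.
# Well-encrypted gibberish should read as close to the maximum
# as the codemaker can manage -- any dip below it is a structure
# the codebreaker's machinery can attack.
```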

Computer pioneer Alan Turing, mathematician and proponent of the famous "Turing Test" for artificial intelligence, was a British cryptographer in the 1940s. In World War II, Turing and his colleagues in espionage used electronic machinery to defeat the elaborate mechanical wheels and gearing of the German Enigma code- machine. Britain's secret triumph over Nazi communication security had a very great deal to do with the eventual military triumph of the Allies. Britain's code-breaking triumph further assured that cryptography would remain a state secret and one of the most jealously guarded of all sciences.

After World War II, cryptography became, and has remained, one of the crown jewels of the American national security establishment. In the United States, the science of cryptography became the high-tech demesne of the National Security Agency (NSA), an extremely secretive bureaucracy that President Truman founded by executive order in 1952, one of the chilliest years of the Cold War.

Very little can be said with surety about the NSA. The very existence of the organization was not publicly confirmed until 1962. The first appearance of an NSA director before Congress was in 1975. The NSA is said to be based in Fort Meade, Maryland. It is said to have a budget much larger than that of the CIA, but this is impossible to determine since the budget of the NSA has never been a matter of public record. The NSA is said to be the largest single employer of mathematicians in the world. The NSA is estimated to have about 40,000 employees. The acronym NSA is aptly said to stand for "Never Say Anything."

The NSA almost never says anything publicly. However, the NSA's primary role in the shadow-world of electronic espionage is to protect the communications of the US government, and crack those of the US government's real, imagined, or potential adversaries. Since this list of possible adversaries includes practically everyone, the NSA is determined to defeat every conceivable cryptographic technique. In pursuit of their institutional goal, the NSA labors (in utter secrecy) to crack codes and cyphers and invent its own less breakable ones.

The NSA also tries hard to retard civilian progress in the science of cryptography outside its own walls. The NSA can suppress cryptographic inventions through the little-known but often-used Invention Secrecy Act of 1952, which allows the Commissioner of Patents and Trademarks to withhold patents on certain new inventions and to order that those inventions be kept secret indefinitely, "as the national interest requires." The NSA also seeks to control dissemination of information about cryptography, and to control and shape the flow and direction of civilian scientific research in the field.

Cryptographic devices are formally defined as "munitions" by Title 22 of the United States Code, and are subject to the same import and export restrictions as arms, ammunition and other instruments of warfare. Violation of the International Traffic in Arms Regulations (ITAR) is a criminal affair investigated and administered by the Department of State. It is said that the Department of State relies heavily on NSA expert advice in determining when to investigate and/or criminally prosecute illicit cryptography cases (though this too is impossible to prove).

The "munitions" classification for cryptographic devices applies not only to physical devices such as telephone scramblers, but also to "related technical data" such as software and mathematical encryption algorithms. This specifically includes scientific "information" that can be "exported" in all manner of ways, including simply verbally discussing cryptography techniques out loud. One does not have to go overseas and set up shop to be regarded by the Department of State as a criminal international arms trafficker. The security ban specifically covers disclosing such information to any foreign national anywhere, including within the borders of the United States.

These ITAR restrictions have come into increasingly harsh conflict with the modern realities of global economics and everyday real life in the sciences and academia. Over a third of the grad students in computer science on American campuses are foreign nationals. Strictly applied ITAR regulations would prevent communication on cryptography, inside an American campus, between faculty and students. Most scientific journals have at least a few foreign subscribers, so an exclusively "domestic" publication about cryptography is also practically impossible. Even writing the data down on a cocktail napkin could be hazardous: the world is full of photocopiers, modems and fax machines, all of them potentially linked to satellites and undersea fiber-optic cables.

In the 1970s and 1980s, the NSA used its surreptitious influence at the National Science Foundation to shape scientific research on cryptography through restricting grants to mathematicians. Scientists reacted mulishly, so in 1978 the Public Cryptography Study Group was founded as an interface between mathematical scientists in civilian life and the cryptographic security establishment. This Group established a series of "voluntary control" measures, the upshot being that papers by civilian researchers would be vetted by the NSA well before any publication.

This was one of the oddest situations in the entire scientific enterprise, but the situation was tolerated for years. Most US civilian cryptographers felt, through patriotic conviction, that it was in the best interests of the United States if the NSA remained far ahead of the curve in cryptographic science. After all, were some other national government's electronic spies to become more advanced than those of the NSA, then American government and military transmissions would be cracked and penetrated. World War II had proven that the consequences of a defeat in the cryptographic arms race could be very dire indeed for the loser.

So the "voluntary restraint" measures worked well for over a decade. Few mathematicians were so enamored of the doctrine of academic freedom that they were prepared to fight the National Security Agency over their supposed right to invent codes that could baffle the US government. In any case, the mathematical cryptography community was a small group without much real political clout, while the NSA was a vast, powerful, well-financed agency unaccountable to the American public, and reputed to possess many deeply shadowed avenues of influence in the corridors of power.

However, as the years rolled on, the electronic exchange of information became a commonplace, and users of computer data became intensely aware of the need for electronic security over their transmissions and data. One answer was physical security -- protect the wiring, keep the physical computers behind a physical lock and key. But as personal computers spread and computer networking grew ever more sophisticated, widespread and complex, this bar-the-door technique became unworkable.

The volume and importance of information transferred over the Internet was increasing by orders of magnitude. But the Internet was a notoriously leaky channel of information -- its packet-switching technology meant that packets of vital information might be dumped into the machines of unknown parties at almost any time. If the Internet itself could not be locked up and made leakproof -- and this was impossible by the nature of the system -- then the only secure solution was to encrypt the message itself, to make that message unusable and unreadable, even if it sometimes fell into improper hands.

Computers outside the Internet were also at risk. Corporate computers faced the threat of computer-intrusion hacking, from bored and reckless teenagers, or from professional snoops and unethical business rivals both inside and outside the company. Electronic espionage, especially industrial espionage, was intensifying. The French secret services were especially bold in this regard, as American computer and aircraft executives found to their dismay as their laptops went missing during Paris air and trade shows. Transatlantic commercial phone calls were routinely tapped by French government spooks seeking commercial advantage for French companies in the computer industry, aviation, and the arms trade. And the French were far from alone when it came to government-supported industrial espionage.

Protection of private civilian data from foreign government spies required that seriously powerful encryption techniques be placed into private hands. Unfortunately, an ability to baffle French spies also meant an ability to baffle American spies. This was not good news for the NSA.

By 1993, encryption had become big business. There were one and a half million copies of legal encryption software publicly available, including widely-known and commonly-used personal computer products such as Norton Utilities, Lotus Notes, StuffIt, and several Microsoft products. People all over the world, in every walk of life, were using computer encryption as a matter of course. They were securing hard disks from spies or thieves, protecting certain sections of the family computer from sticky-fingered children, or rendering entire laptops and portables into a solid mess of powerfully-encrypted Sanskrit, so that no stranger could walk off with those accidental but highly personal life-histories that are stored in almost every PowerBook.

People were no longer afraid of encryption. Encryption was no longer secret, obscure, and arcane; encryption was a business tool. Computer users wanted more encryption, faster, sleeker, more advanced, and better.

The real wild-card in the mix, however, was the new cryptography. A new technique arose in the 1970s: public-key cryptography. This was an element the codemasters of World War II and the Cold War had never foreseen.

Public-key cryptography was invented by American civilian researchers Whitfield Diffie and Martin Hellman, who first published their results in 1976.

Conventional classical cryptographic systems, from the Caesar cipher to the Nazi Enigma machine defeated by Alan Turing, require a single key. The sender of the message uses that key to turn his plain text message into cyphertext gibberish. He shares the key secretly with the recipients of the message, who use that same key to turn the cyphertext back into readable plain text.

This is a simple scheme; but if the key is lost to unfriendly forces such as the ingenious Alan Turing, then all is lost. The key must therefore always remain hidden, and it must always be fiercely protected from enemy cryptanalysts. Unfortunately, the more widely that key is distributed, the more likely it is that some user in on the secret will crack or fink. As an additional burden, the key cannot be sent by the same channel as the communications are sent, since the key itself might be picked up by eavesdroppers.
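The single-key scheme is simple enough to sketch in a few lines. Here is the Caesar cipher mentioned above, in Python -- a toy, but it shows the essential weakness: sender and receiver must share one secret number, and anyone who learns it can read everything.

```python
# Toy single-key ("classical") cipher: both parties share one secret
# shift value, and the same key both encrypts and decrypts.
def caesar(text: str, key: int) -> str:
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            # shift the letter by the key, wrapping around the alphabet
            out.append(chr((ord(ch) - base + key) % 26 + base))
        else:
            out.append(ch)  # leave spaces and punctuation alone
    return ''.join(out)

secret_key = 3                        # agreed in advance, over a separate channel
cipher = caesar("ATTACK AT DAWN", secret_key)   # "DWWDFN DW GDZQ"
plain = caesar(cipher, -secret_key)   # the recipient reverses the shift
```

Note that decryption is just encryption run backwards with the same key -- which is exactly why the key must travel by some channel safer than the message itself.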

In the new public-key cryptography, however, there are two keys. The first is a key for writing secret text, the second the key for reading that text. The keys are related to one another through a complex mathematical dependency; they determine one another, but it is mathematically extremely difficult to deduce one key from the other.

The user simply gives away the first key, the "public key," to all and sundry. The public key can even be printed on a business card, or given away in mail or in a public electronic message. Now anyone in the public, any random personage who has the proper (not secret, easily available) cryptographic software, can use that public key to send the user a cyphertext message. However, that message can only be read by using the second key -- the private key, which the user always keeps safely in his own possession.

Obviously, if the private key is lost, all is lost. But only one person knows that private key. That private key is generated in the user's home computer, and is never revealed to anyone but the very person who created it.

To reply to a message, one has to use the public key of the other party. This means that a conversation between two people requires four keys. Before computers, all this key-juggling would have been rather unwieldy, but with computers, the chips and software do all the necessary drudgework and number-crunching.

The public/private dual keys have an interesting alternate application. Instead of the public key, one can use one's private key to encrypt a message. That message can then be read by anyone with the public key, i.e., pretty much everybody, so it is no longer a "secret" message at all. However, that message, even though it is no longer secret, now has a very valuable property: it is authentic. Only the individual holder of the private key could have sent that message.

This authentication power is a crucial aspect of the new cryptography, and may prove to be more socially important than secrecy. Authenticity means that electronic promises can be made, electronic proofs can be established, electronic contracts can be signed, electronic documents can be made tamperproof. Electronic impostors and fraudsters can be foiled and defeated -- and it is possible for someone you have never seen, and will never see, to prove his bona fides through entirely electronic means.
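The signing trick described above can be sketched with textbook-sized RSA numbers (real keys are hundreds of digits long, but the arithmetic is identical). The private exponent signs; the public exponent lets anyone verify. The specific numbers here are the standard small-prime classroom example, not anything from the essay.

```python
# Toy RSA-style signature.  The keypair comes from primes p=61, q=53:
# n = 3233 and e = 17 are public; d = 2753 is the private exponent.
n, e, d = 3233, 17, 2753

def sign(message: int) -> int:
    # only the private-key holder can compute this
    return pow(message, d, n)

def verify(message: int, signature: int) -> bool:
    # anyone with the public key (n, e) can check it
    return pow(signature, e, n) == message

sig = sign(1234)
verify(1234, sig)    # True: the message is authentic
verify(9999, sig)    # False: a tampered message fails the check
```

The asymmetry is the whole point: verification requires no secrets at all, so proof of authorship can be checked by total strangers.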

That means that economic relations can become electronic. Theoretically, it means that digital cash is possible -- that electronic mail, e-mail, can be joined by a strange and powerful new cousin, electronic cash, e-money.

Money that is made out of text -- encrypted text. At first consideration such money doesn't seem possible, since it is so far outside our normal experience. But look at this:

ASCII-picture of US dollar

This parody US banknote made of mere letters and numbers is being circulated in e-mail as an in-joke in network circles. But electronic money, once established, would be no more a joke than any other kind of money. Imagine that you could store a text in your computer and send it to a recipient; and that once gone, it would be gone from your computer forever, and registered infallibly in his. With the proper use of the new encryption and authentication, this is actually possible. Odder yet, it is possible to make the note itself an authentic, usable, fungible, transferrable note of genuine economic value, without the identity of its temporary owner ever being made known to anyone. This would be electronic cash -- like normal cash, anonymous -- but unlike normal cash, lightning-fast and global in reach.

There is already a great deal of electronic funds transfer occurring in the modern world, everything from gigantic currency-exchange clearinghouses to the individual's VISA and MASTERCARD bills. However, charge-card funds are not so much "money" per se as a purchase via proof of personal identity. Merchants are willing to take VISA and MASTERCARD payments because they know that they can physically find the owner in short order and, if necessary, force him to pay up in a more conventional fashion. The VISA and MASTERCARD user is considered a good risk because his identity and credit history are known.

VISA and MASTERCARD also have the power to accumulate potentially damaging information about the commercial habits of individuals, for instance, the video stores one patronizes, the bookstores one frequents, the restaurants one dines in, or one's travel habits and one's choice of company.

Digital cash could be very different. With proper protection from the new cryptography, even the world's most powerful governments would be unable to find the owner and user of digital cash. That cash would be secured by a "bank" (it needn't be a conventional, legally established bank) through the use of an encrypted digital signature from the bank, a signature that neither the payer nor the payee could break.

The bank could register the transaction. The bank would know that the payer had spent the e-money, and the bank could prove that the money had been spent once and only once. But the bank would not know that the payee had gained the money spent by the payer. The bank could track the electronic funds themselves, but not their location or their ownership. The bank would guarantee the worth of the digital cash, but the bank would have no way to tie the transactions together.
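The mechanism that makes this possible is the "blind signature," a technique the essay does not name but which matches the description above: the bank signs a note it cannot read. A minimal Chaum-style sketch, using the same textbook-sized RSA numbers as a stand-in for a real bank keypair:

```python
# Chaum-style blind signature with toy RSA numbers (p=61, q=53).
# The bank's public key is (n, e); its private signing exponent is d.
n, e, d = 3233, 17, 2753

m = 1000          # the payer's secret note serial number
r = 7             # random blinding factor, coprime to n

# The payer blinds the note before showing it to the bank...
blinded = (m * pow(r, e, n)) % n
# ...the bank signs the blinded note without ever seeing m...
blind_sig = pow(blinded, d, n)
# ...and the payer strips off the blinding factor.
sig = (blind_sig * pow(r, -1, n)) % n

# The result is a genuine bank signature on m -- verifiable by anyone
# with the public key -- yet the bank never learned m itself.
assert sig == pow(m, d, n)
assert pow(sig, e, n) == m
```

This is exactly the property the essay describes: the bank can certify that a note is genuine and spent only once, without ever being able to connect the note to its owner.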

The potential therefore exists for a new form of network economics made of nothing but ones and zeroes, placed beyond anyone's controls by the very laws of mathematics. Whether this will actually happen is anyone's guess. It seems likely that if it did happen, it would prove extremely difficult to stop.

Public-key cryptography uses prime numbers. It is a swift and simple matter to multiply prime numbers together and obtain a result, but it is an exceedingly difficult matter to take a large number and determine the prime numbers used to produce it. The RSA algorithm, the commonest and best-tested method in public-key cryptography, uses 256-bit and 258-bit prime numbers. These two large prime numbers ("p" and "q") are used to produce very large numbers ("d" and "e") so that (de-1) is divisible by (p-1) times (q-1). The two primes are easy to multiply together, yielding the public key, but the product is extremely difficult to pull apart mathematically to yield the private key.
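The key relationship described above can be checked directly with toy primes. The numbers below are classroom-sized stand-ins, but the condition -- (de-1) divisible by (p-1)(q-1) -- is the real one, and it is what makes encryption and decryption undo each other.

```python
# The RSA key relationship with toy primes.
p, q = 61, 53
n = p * q                    # 3233: trivial to publish, hard to factor at real sizes
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent, chosen coprime to phi
d = pow(e, -1, phi)          # private exponent: modular inverse of e, so that
                             # (e*d - 1) is divisible by phi
assert (e * d - 1) % phi == 0

message = 42
cipher = pow(message, e, n)      # anyone can encrypt with the public key (n, e)
assert pow(cipher, d, n) == 42   # only the private exponent d recovers the message
```

Everything public-key cryptography does -- secrecy one way, authentication the other -- falls out of this one modular-arithmetic relationship.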

To date, there has been no way to mathematically prove that it is inherently difficult to crack this prime-number cipher. It might be very easy to do if one knew the proper advanced mathematical technique for it, and the clumsy brute-power techniques for prime-number factorization have been improving in past years. However, mathematicians have been working steadily on prime number factorization problems for many centuries, with few dramatic advances. An advance that could shatter the RSA algorithm would mean an explosive breakthrough across a broad front of mathematical science. This seems intuitively unlikely, so prime-number public keys seem safe and secure for the time being -- as safe and secure as any other form of cryptography short of "the one-time pad." (The one-time pad is a truly unbreakable cypher. Unfortunately it requires a key that is every bit as long as the message, and that key can only be used once. The one-time pad is solid as Gibraltar, but it is not much practical use.)

Prime-number cryptography has another advantage. The difficulty of factorizing numbers becomes drastically worse as the prime numbers become larger. A 56-bit key is, perhaps, not entirely outside the realm of possibility for a nationally supported decryption agency with large banks of dedicated supercomputers and plenty of time on their hands. But a 2,048-bit key would require every computer on the planet to number-crunch for hundreds of centuries.
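The brute-force side of that arithmetic is easy to reproduce. The search rate below (a billion keys per second) is an assumed figure for illustration, not anything from the essay, but it shows why each added bit of key length doubles the attacker's work.

```python
# Rough brute-force arithmetic: trying every possible key in an
# n-bit keyspace means 2**n attempts in the worst case.
RATE = 10**9                          # keys tried per second (assumed figure)
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

def years_to_search(bits: int) -> float:
    """Worst-case years to exhaust an n-bit keyspace at RATE keys/sec."""
    return 2**bits / RATE / SECONDS_PER_YEAR

years_to_search(56)   # roughly two years at this rate -- feasible for
                      # a well-funded agency with patience
years_to_search(64)   # 256 times longer: each bit doubles the work
```

(For RSA-style keys the attack is factoring rather than exhaustive search, but the same brutal exponential scaling is what makes large keys safe.)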

Decrypting a public-keyed message is not so much a case of physical impossibility, as a matter of economics. Each key requires a huge computational effort to break it, and there are already thousands of such keys used by thousands of people. As a further blow against the decryptor, the users can generate new keys easily, and change them at will. This poses dire problems for the professional electronic spy.

The best-known public-key encryption technique, the RSA algorithm, was named after its inventors, Ronald L. Rivest, Adi Shamir and Leonard Adleman. The RSA technique was invented in the United States in the late 1970s (although, as if to spite the international trade in arms regulations, Shamir himself is an Israeli). The RSA algorithm is patented in the United States by the inventors, and the rights to implement it on American computers are held by an American company known as Public Key Partners. (Due to a patent technicality, the RSA algorithm was not successfully patented overseas.)

In 1991 an amateur encryption enthusiast named Phil Zimmerman wrote a software program called "Pretty Good Privacy" that used the RSA algorithm without permission. Zimmerman gave the program away on the Internet network via modem from his home in Colorado, because of his private conviction that the public had a legitimate need for powerful encryption programs at no cost (and, incidentally, no profit to the inventors of RSA). Since Zimmerman's action, "Pretty Good Privacy" or "PGP" has come into common use for encrypting electronic mail and data, and has won an avid international following. The original PGP program has been extensively improved by other software writers overseas, out of the reach of American patents or the influence of the NSA, and the PGP program is now widely available in almost every country on the planet -- or at least, in all those countries where floppy disks are common household objects.

Zimmerman, however, failed to register as an arms dealer when he wrote the PGP software in his home and made it publicly available. At this writing, Zimmerman is under federal investigation by the Office of Defense Trade Controls at the State Department, and is facing a possible criminal indictment as an arms smuggler. This despite the fact that Zimmerman was not, in fact, selling anything, but rather giving software away for free. Nor did he voluntarily "export" anything -- rather, people reached in from overseas via Internet links and retrieved Zimmerman's program from the United States under their own power and through their own initiative.

Even more oddly, Zimmerman's program does not use the RSA algorithm exclusively, but also depends on the perfectly legal DES or Data Encryption Standard. The Data Encryption Standard, which uses a 56-bit classical key, is an official federal government cryptographic technique, created by IBM with the expert help of the NSA. It has long been surmised, though not proven, that the NSA can crack DES at will with their legendary banks of Cray supercomputers. Recently a Canadian mathematician, Michael Wiener of Bell-Northern Research, published plans for a DES decryption machine that can purportedly crack 56-bit DES in a matter of hours, through brute force methods. It seems that the US Government's official 56-bit key -- insisted upon, reportedly, by the NSA -- is now too small for serious security uses.

The NSA, and the American law enforcement community generally, are unhappy with the prospect of privately owned and powerfully secure encryption. They acknowledge the need for secure communications, but they insist on the need for police oversight, police wiretapping, and on the overwhelming importance of national security interests and governmental supremacy in the making and breaking of cyphers.

This motive recently led the Clinton Administration to propose the "Clipper Chip" or "Skipjack," a government-approved encryption device to be placed in telephones. Sets of keys for the Clipper Chip would be placed in escrow with two different government agencies, and when the FBI felt the need to listen in on an encrypted telephone conversation, the FBI would get a warrant from a judge and the keys would be handed over.

Enthusiasts for private encryption have pointed out a number of difficulties with the Clipper Chip proposal. First of all, it is extremely unlikely that criminals, foreign spies, or terrorists would be foolish enough to use an encryption technique designed by the NSA and approved by the FBI. Second, the main marketing use for encryption is not domestic American encryption, but international encryption. Serious business users of serious encryption are far more alarmed by state-supported industrial espionage overseas, than they are about the safety of phone calls made inside the United States. They want encryption for communications made overseas to people overseas -- but few foreign business people would buy an encryption technology knowing that the US Government held the exclusive keys.

It is therefore likely that the Clipper Chip could never be successfully exported by American manufacturers of telephone and computer equipment, and therefore it could not be used internationally, which is the primary market for encryption. Machines with a Clipper Chip installed would become commercial white elephants, with no one willing to use them but American cops, American spies, and Americans with nothing to hide.

A third objection is that the Skipjack algorithm has been classified "Secret" by the NSA and is not available for open public testing. Skeptics are very unwilling to settle for a bland assurance from the NSA that the chip and its software are unbreakable except with the official keys.

The resultant controversy was described by Business Week as "Spy Vs Computer Nerd." A subterranean power-struggle has broken out over the mastery of cryptographic science, and over basic ownership of the electronic bit-stream.

Much is riding on the outcome.

Will powerful, full-fledged, state-of-the-art encryption belong to individuals, including such unsavory individuals as drug traffickers, child pornographers, black-market criminal banks, tax evaders, software pirates, and the possible future successors of the Nazis?

Or will the NSA and its allies in the cryptographic status-quo somehow succeed in stopping the march of scientific progress in cryptography, and in cramming the commercial crypto-genie back into the bottle? If so, what price will be paid by society, and what damage wreaked on our traditions of free scientific and technical inquiry?

One thing seems certain: cryptography, this most obscure and smothered of mathematical sciences, is out in the open as never before in its long history. Impassioned, radicalized cryptographic enthusiasts, often known as "cypherpunks," are suing the NSA and making it their business to spread knowledge of cryptographic techniques as widely as possible, "through whatever means necessary." Small in number, they nevertheless have daring, ingenuity, and money, and they know very well how to create a public stink. In the meantime, their more conventional suit-and-tie allies in the Software Publishers Association grumble openly that the Clipper Chip is a poorly-conceived fiasco, that cryptographic software is peddled openly all over the planet, and that "the US Government is succeeding only in crippling an American industry's exporting ability."

The NSA confronted the worst that America's adversaries had to offer during the Cold War, and the NSA prevailed. Today, however, the secret masters of cryptography find themselves confronting what are perhaps the two most powerful forces in American society: the computer revolution, and the profit motive. Deeply hidden from the American public through forty years of Cold War terror, the NSA itself is, for the first time, exposed to open question and harrowing reassessment.

Will the NSA quietly give up the struggle, and expire as secretly and silently as it lived its forty-year Cold War existence? Or will this most phantomlike of federal agencies decide to fight for its survival and its scientific pre-eminence?

And if this odd and always-secret agency does choose to fight the new cryptography, then -- how?

"The Dead Collider"

It certainly seemed like a grand idea at the time, the time being 1982, one of the break-the-bank years of the early Reagan Administration.

The Europeans at CERN, possessors of the world's largest particle accelerator, were planning to pave their massive Swiss tunnel with new, superconducting magnets. This would kick the European atom-smasher, already powerful, up to a massive 10 trillion electron volts.

In raw power, this would boost the Europeans decisively past their American rivals. America's most potent accelerator in 1982, Fermilab in Illinois, could manage a meager 2 TeV. And Fermilab's Tevatron, though upgraded several times, was an aging installation.

A more sophisticated machine, ISABELLE at Brookhaven National Laboratory in New York, had been planned in 1979 as Fermilab's successor at the forefront of American particle physics. But by 1982, it was clear that ISABELLE's ultra-sophisticated superconducting magnets had severe design troubles. The state-of-the-art bungling at Brookhaven was becoming an open embarrassment to the American particle-physics community. And even if the young ISABELLE facility overcame those problems and got its magnets to run, ISABELLE was intended to sacrifice raw power for sophistication; at best, ISABELLE would yield a feeble 0.8 TeV.

In August 1982, Leon Lederman, then director of Fermilab, made a bold and visionary proposal. In a conference talk to high-energy physicists gathered in Colorado, Lederman proposed cancelling both ISABELLE and the latest Fermilab upgrade, in pursuit of a gigantic American particle accelerator that would utterly dwarf the best the Europeans had to offer, now or in the foreseeable future. He called it "The Machine in the Desert."

The "Desertron" (as Lederman first called it) would be the largest single scientific instrument in the world, employing a staff of more than two thousand people, plus students, teachers and various properly awestruck visiting scholars from overseas. It would be 20 times more powerful than Fermilab, and full sixty times more powerful than CERN circa 1982. The accelerator's 54 miles of deep tunnels, lined with hard- vacuum beamguides and helium- refrigerated giant magnets, would be fully the size of the Washington Beltway.

The cost: perhaps 3 billion dollars. It was thought that the cash-flush Japanese, who had been very envious of CERN for some time, would be willing to help the Americans in exchange for favored status at the complex.

The goal of the Desertron, or at least its target of choice, would be the Higgs scalar boson, a hypothetical subatomic entity theoretically responsible for the fact that other elementary particles have mass. The Higgs played a prominent part at the speculative edges of quantum theory's so-called "Standard Model," but its true nature and real properties were very much in doubt.

The Higgs boson would be a glittering prize indeed, though not so glittering as the gigantic lab itself. After a year of intense debate within the American high-energy-physics community, Lederman's argument won out.

His reasoning was firmly in the tradition of 20th-century particle physics. There seemed little question that the massive power and scale of the Desertron were the necessary next step for real progress in the field.

At the beginning of the 20th century, Ernest Rutherford (who coined the memorable catch-phrase, "All science is either physics or stamp-collecting") discovered the nucleus of the atom with a mere five million electron volts. Rutherford's lab equipment was not much more sophisticated than string and sealing-wax. To get directly at neutrons and protons, however, took much more energy -- a billion electron volts and a cyclotron. To get quark effects, some decades later, required ten billion electron volts and a synchrotron. To make quarks really stand up and dance in their full quantum oddity required a hundred billion electron volts and a machine that was miles across. And to get at the Higgs boson would need at least ten trillion eV, and given that the fantastically powerful collision would be a very messy affair, a full forty trillion -- two particle beams of twenty TeV each, colliding head-on -- was a much safer bet.

Throughout the century, then, every major new advance in particle studies had required massive new infusions of power. A machine for the 1990s, the end result of decades of development, would require truly titanic amounts of juice. The physics community had hesitated at this step, and had settled for years at niggling around in the low trillions of electron volts. But the field of sub-atomic studies was looking increasingly mined-out, and the quantum Standard Model had not had a good paradigm-shattering kick in the pants in some time. From the perspective of the particle physicist, the Desertron, despite its necessarily colossal scale, made perfect scientific sense.

The Department of Energy, the bureaucratic descendant of the Atomic Energy Commission and the traditional federal patron of high-energy physics, had more or less recovered from its last major money-wasting debacle, the Carter Administration's synthetic fuels program. Under new leadership, the DoE was sympathetic to an ambitious project with some workable and sellable rationale.

Lederman's tentative scheme was developed, over three years, in great detail, by an expert central design group of federally-sponsored physicists and engineers from Lawrence Berkeley labs, Brookhaven and Fermilab. The "Desertron" was officially renamed the "Superconducting Super Collider." In 1986 the program proposal was carried to Ronald Reagan, then in his second term. While Reagan's cabinet seemed equally split on the merits of the SSC versus a much more modest research program, the Gipper decided the issue with one of his favorite football metaphors: "Throw deep."

Reagan's SSC was a deep throw indeed. The collider ring of Fermilab in Illinois was visible from space, and the grounds of Fermilab were big enough to boast their own herd of captive buffalo. But the ring of the mighty Super Collider made Fermilab's circumference look like a nickel on a dinner plate. One small section of the Super Collider, the High Energy Booster, was the size of Fermilab all by itself, but this Booster was only a humble injection device for the Super Collider.

The real action was to be in the fifty-four-mile, 14-ft-diameter Super Collider ring.

As if this titanic underground circus were not enough, the SSC also boasted two underground halls each over 300 feet long, to be stuffed with ultrasophisticated particle detectors so huge as to make their hard-helmeted minders resemble toy dolls. Along with the fifty-four miles of Collider were sixteen more miles of injection devices: the Linear Accelerator, the modest Low Energy Booster, the large Medium Energy Booster, the monster High Energy Booster, the Boosters acting like a set of gears to drive particles into ever-more frenzied states of relativistic overdrive, before their release into the ferocious grip of the main Super Collider ring.

Along the curves and arcs of these wheels-within-wheels, and along the Super Collider ring itself, were more than forty vertical access shafts, some of them two hundred feet deep. Up on the surface, twelve separate refrigeration plants would pipe tons of ultra-frigid liquid helium to more than ten thousand superconducting magnets, buried deep within the earth. All by itself, the SSC would more than double the amount of helium refrigeration taking place on the entire planet.

The site would have miles of new-paved roads, vast cooling ponds of fresh water, brand-new electrical utilities. Massive new office complexes were to be built for support and research, including two separate East and West campuses at opposite ends of the Collider, and two offsite research labs. With thousands of computers: personal computers, CAD workstations, network servers, routers, massively parallel supercomputing simulators. Office and laboratory networking including Internet and videoconferencing. Assembly buildings, tank farms, archives, libraries, security offices, cafeterias. The works.

There were, of course, dissenters from the dream. Some physicists feared that the project, though workable and probably quite necessary for any real breakthrough in their field, was simply too much to ask. Enemies from outside the field likened the scheme to Reagan's Star Wars -- an utter scientific farce -- and to the Space Station, a political pork-barrel effort with scarcely a shred of real use in research -- and to the hapless Space Shuttle, an overdesigned gobboon.

Within the field of high-energy-physics, though, the logic was too compelling and the traditional arc of development too strong. A few physicists -- Freeman Dyson among them -- quietly suggested that it might be time for a radically new tack; time to abandon the tried-and-true collider technology entirely, to try daringly novel, small-scale particle-acceleration schemes such as free-electron lasers, gyroklystrons, or wake-field accelerators. But that was not Big Thinking; and particle physics was the very exemplar of Big Science.

In the 1920s and 1930s, particle physicist Ernest Lawrence had practically invented "Big Science" with the Berkeley cyclotrons, each of them larger, more expensive, demanding greater resources and entire teams of scientists. Particle physics, in pursuit of ever-more-elusive particles, by its nature built huge, centralized facilities of ever greater complexity and ever greater expense for ever-larger staffs of researchers. There just wasn't any other way to do particle physics but the big way.

And then there was the competitive angle, the race for international prestige: high-energy physics as the arcane, scholarly equivalent of the nuclear arms race. The nuclear arms race itself was, of course, a direct result of progress in 20th-century high-energy physics. For Cold Warriors, nuclear science, with its firm linkage to military power, was the Big Science par excellence.

Leon Lederman and his colleague Sheldon Glashow played the patriotic card very strongly in their influential article of March 1985, "The SSC: A Machine for the Nineties." There they wrote: "Of course, as scientists, we must rejoice in the brilliant achievements of our colleagues overseas. Our concern is that if we forgo the opportunity that SSC offers for the 1990s, the loss will not only be to our science but also to the broader issue of national pride and technological self-confidence. When we were children, America did most things best. So it should again."

Lederman and Glashow also argued for the SSC on the grounds of potential spinoffs for American industry: energy storage, power transmission, new tunneling techniques, industrial demand-pull in superconductivity. In meeting "all but insuperable technical obstacles," they declared, American industries would learn better to compete. (There was no mention of what might happen to American "national pride and technological self-confidence" if American industries simply failed to meet those "insuperable obstacles" -- as had already happened with ISABELLE.)

Glashow and Lederman also declared, with perhaps pardonable professional pride, that it was simply a good idea for America to create and employ large armies of particle physicists, pretty much for their own sake. "(P)article physics yields highly trained scientists accustomed to solving the unsolvable. They often go on to play vital roles in the rest of the world.... Many of us have become important contributors in the world of energy resources, neurophysiology, arms control and disarmament, high finance, defense technology and molecular biology.... High energy physics continues to attract and recruit into science its share of the best and brightest. If we were deprived of all those who began their careers with the lure and the dream of participating in this intellectual adventure, the nation would be considerably worse off than it is. Without the SSC, this is exactly what would come to pass."

Funding a gigantic physics lab may seem a peculiarly roundabout way to create, say, molecular biologists, especially when America's actual molecular biologists, no slouches at "solving the unsolvable" themselves, were getting none of the funding for the Super Collider.

When it came to creating experts in "high finance," however, the SSC was on much firmer ground. Financiers worked overtime as the SSC's cost estimates rose again and again, in leaps of billions. The Japanese were quite interested in basic research in superconductive technology; but when they learned they were expected to pay a great deal, but enjoy little of the actual technical development in superconductivity, they naturally balked. So did the Taiwanese, when an increasingly desperate SSC finally got around to asking them to help. The Europeans, recognizing a direct attempt to trump their treasured CERN collider, were superconductively chilly about the idea of investing in any Yankee dream- machine. Estimated cost of the project to the American taxpayer -- or rather, the American deficit borrower -- quickly jumped from 3.9 billion dollars to 4.9 billion, then 6.6 billion, then 8.25 billion, then 10 billion. Then, finally and fatally, to twelve.

Time and again the physicists went to the Congressional crap table, shot the dice for higher stakes, and somehow survived. Scientists outside the high-energy-physics community were livid with envy, but the powerful charisma of physics -- that very well-advanced field that had given America the atomic bomb and a raft of Nobels -- held firm against the jealous, increasingly bitter gaggle of "little science" advocates.

At the start of the project, the Congress was highly enthusiastic. The lucky winner of the SSC had a great deal to gain: a nucleus of high-tech development, scientific prestige, and billions in federally-subsidized infrastructure investment. The Congressperson carrying the SSC home to the district would have a prize beyond mere water-project pork; that lucky politician would have trapped a mastodon.

At length the lucky winner of the elaborate site-selection process was announced: Waxahachie, Texas. Texas Congresspeople were, of course, ecstatic; but other competitors wondered what on earth Waxahachie had to offer that they couldn't.

Waxahachie's main appeal was simple: lots of Texas-sized room for a Texas-sized machine. The Super Collider would, in fact, entirely encircle the historic town of Waxahachie, some 18,000 easy-going folks in a rural county previously best known for desultory cotton-farming. The word "Waxahachie" originally meant "buffalo creek." Waxahachie was well-watered, wooded, farming country built on a bedrock of soft, chalky, easily-excavated limestone.

Lederman, author of the Desertron proposal, rudely referred to Waxahachie as being "in Texas, in the desert" in his SSC promotional pop-science book THE GOD PARTICLE. There was no desert anywhere near Waxahachie, and worse yet, Lederman had serious problems correctly pronouncing the town's name.

The town of Waxahachie, a minor railroad boomtown in the 1870s and 1880s, had changed little during the twentieth century. In later years, Waxahachie had made a virtue of its fossilization. Downtown Waxahachie had a striking Victorian granite county courthouse and a brick-and-gingerbread historical district of downtown shops, mostly frequented by antique-hunting yuppies on day-trips from the Dallas-Fort Worth Metroplex, twenty miles to the north. There was a certain amount of suburban sprawl on the north edge of town, at the edge of commuting range to south Dallas, but it hadn't affected the pace of local life much. Quiet, almost sepulchral Waxahachie was the most favored place in Texas for period moviemaking. Its lovely oak-shadowed graveyard was one of the most-photographed cemeteries in the entire USA.

This, then, was to become the new capital of the high-energy physics community, the home of a global scientific community better known for Mozart and chablis than catfish and C&W. It seemed unbelievable. And it was unbelievable. Scientifically, Waxahachie made sense. Politically, Waxahachie could be sold. Culturally, Waxahachie made no sense whatsoever. A gesture by the federal government and a giant machine could not, in fact, transform good ol' Waxahachie into Berkeley or Chicago or Long Island. A mass migration of physicists might have worked for Los Alamos when hundreds of A-Bomb scientists had been smuggled there in top secrecy at the height of World War II, but there was no atomic war on at the moment. A persistent sense of culture shock and unreality haunted the SSC project from the beginning.

In his 1993 popular-science book THE GOD PARTICLE, Lederman made many glowing comparisons for the SSC: the cathedrals of Europe, the Pyramids, Stonehenge. But those things could all be seen. They all made instant sense even to illiterates. The SSC, unlike the Pyramids, was almost entirely invisible -- a fifty-four-mile subterranean wormhole stuffed with deep-frozen magnets.

A trip out to the SSC revealed construction cranes, vast junkyards of wooden crating and metal piping, with a few drab, rectangular, hopelessly unromantic assembly buildings, buildings with all the architectural vibrancy of slab-sided machine-shops (which is what they were). Here and there were giant weedy talus-heaps of limestone drill-cuttings from the subterranean "TBM," or Tunnel Boring Machine. The Boring Machine was a state-of-the-art Boring Machine, but its workings were invisible to all but the hard-hats, and the machine itself was, well, boring.

Here and there along the SSC's fifty-four-mile circumference, inexplicable white vents rose from the middle of muddy cottonfields. These were the SSC's ventilation and access shafts, all of them neatly padlocked in case some mischievous soul should attempt to see what all the fuss was about. Nothing at the SSC was anything like the heart-lifting spires of Notre Dame, or even the neat-o high-tech blast of an overpriced and rickety Space Shuttle. The place didn't look big or mystical or uplifting; it just looked dirty and flat and rather woebegone.

As a popular attraction the SSC was a bust; and time was not on the side of its planners and builders. As the Cold War waned, the basic prestige of nuclear physics was also wearing rather thin. Hard times had hit America, and hard times had come for American science.

Lederman himself, onetime chairman of the board of the American Association for the Advancement of Science, was painfully aware of the sense of malaise and decline. In 1990 and 1991, Lederman, as chairman of AAAS, polled his colleagues in universities across America about the basic state of Science in America. He heard, and published, a great outpouring of discontent. There was a litany of complaint from American scholars. Pernickety government oversight. Endless paperwork for grants, consuming up to thirty percent of a scientist's valuable research time. A general aging of the academic populace, with graying American scientists more inclined to look back to vanished glories than to anticipate new breakthroughs. Meanspirited insistence by both government and industry that basic research show immediate and tangible economic benefits. A loss of zest and interest in the future, replaced by a smallminded struggle to keep making daily ends meet.

It was getting hard to make a living out there. The competition for money and advancement inside science was getting fierce, downright ungentlemanly. Big wild dreams that led to big wild breakthroughs were being nipped in the bud by a general societal malaise and a failure of imagination. The federal research effort was still vast in scope, and had been growing steadily despite the steadily growing federal deficits. But thanks to decades of generous higher education and the alluring prestige of a life in research, there were now far more mouths to feed in the world of Science. Vastly increased armies of grad students and postdocs found themselves waiting forever for tenure. They were forced to play careerist games over shrinking slices of the grantsmanship pie, rather than leaving money problems to the beancounters and getting mano-a-mano with the Big Questions.

"The 1950s and 1960s were great years for science in America," Lederman wrote nostalgically. "Compared to the much tougher 1990s, anyone with a good idea and a lot of determination, it seemed, could get his idea funded. Perhaps this is as good a criterion for healthy science as any." By this criterion, American science in the 90s was critically ill. The SSC seemed to offer a decisive way to break out of the cycle of decline, to return to those good old days. The Superconducting Super Collider would make Big Science really "super" again, not just once but twice.

The death of the project was slow, and agonizing, and painful. Again and again particle physicists went to Congress to put their hard-won prestige on the line, and their supporters used every tactic in the book. As SCIENCE magazine put it in a grim postmortem editorial: "The typical hide-and-seek game of 'it's not the science, it's the jobs' on Monday, Wednesday, and Friday and 'it's not about jobs, it is very good science' on Tuesday, Thursday and Saturday wears thin after a while."

The House killed the Collider in June 1992; the Senate resurrected it. The House killed it again in June 1993, the Senate once again puffed the breath of life into the corpse, but Reagan and Bush were out of power now. Reagan had supported SSC because he was, in his own strange way, a visionary; Bush, though usually more prudent, took care to protect his Texan political base. Bush did in fact win Texas in the presidential election of 1992, but winning Texas was not enough. The party was over. In October 1993 the Super Collider was killed yet again. And this time it stayed dead.

In January 1994 I went to Waxahachie to see the dead Collider.

To say that morale is low at the SSC Labs does not begin to capture the sentiment there. Morale is subterranean. There are still almost two thousand people employed at the dead project; not because they have anything much to do there, but because there is still a tad of funding left for them to consume -- a meager six hundred million or so. And they also stay because, despite their alleged facility at transforming themselves into neurophysiologists, arms control advocates, et al., there is simply not a whole lot of market demand anywhere for particle physicists, at the moment.

The Dallas offices of the SSC Lab are a giant maze of cubicles, every one of them without exception sporting a networked color Macintosh. Employees have pinned up xeroxed office art indicative of their mood. One was a chart called:

"THE SIX PHASES OF A PROJECT: I. Enthusiasm. II. Disillusionment. III. Panic. IV. Search for the Guilty. V. Punishment of the Innocent. VI. Praise & Honor for the Nonparticipants."

According to the chart, the SSC is now at Phase Five, and headed for Six.

SSC staffers have a lot of rather dark jokes now. "The Sour Grapes Alert" reads "This is a special announcement for Supercollider employees only!! Your job is a test. It is only a test!! Had your job been an actual job, you would have received raises, promotions, and other signs of appreciation!! We now return you to your miserable state of existence."

Outside the office building, one of the lab's monstrous brown trash dumpsters has been renamed "Superconductor." The giant steel trash-paper compactor does look oddly like one of the SSC's fifty-foot-long superconducting magnets; but the point, of course, is that the trash and the magnet are now roughly equivalent in worth.

The SSC project to date has cost about two billion dollars. Some $440,885,853 of that sum was spent by the State of Texas, and the Governor of the State of Texas, the volatile Ann Richards, is not at all happy about it.

The Governor's Advisory Committee on the Superconducting Super Collider held its first meeting at the SSC Laboratory in Dallas, on January 14, 1994. The basic assignment of this blue-ribbon panel of Texan scholars and politicians is to figure out how to recoup something for Texas from this massive failed investment.

Naturally I made it my business to attend, and sat in on a day's worth of presentations by such worthies as Bob White, President of the National Academy of Engineering; John Peoples, the SSC's current director; Roy Schwitters, the SSC's original Director, who resigned in anguish after the cancellation; the current, and former, Chancellors of the University of Texas System; the Governor's Chief of Staff; the Director of the Texas Office of State-Federal Relations; a pair of Texas Congressmen, and various other interested parties, including engineers, physicists, lawyers and one, other, lone journalist, from a Dallas newspaper. Forty-six people in all, counting the Advisory Committee of nine. Lunch was catered.

The mood was as dark as the fresh-drilled yet already-decaying SSC tunnels. "I hope we can make *something* positive out of all this," muttered US Congressman Joe Barton (R-Tex), Waxahachie's representative and a tireless champion of the original project. A Texas state lawyer told me bitterly that "the Department of Energy treats our wonderful asset like one of their hazardous waste sites!"

For his part, the DoE's official representative, a miserably unhappy flak-catcher from the Office of Energy Research, talked a lot under extensive grilling by the Committee, but said precisely nothing. "I honestly don't know how the Secretary is going to write her report," he mourned, wincing. "The policy is to close things down in as cheap a way as possible."

Nothing about the SSC can be cleared without the nod of the new Energy Secretary, the formidable Hazel O'Leary. At the moment, Ms. O'Leary is very busy, checking the DoE's back-files on decades of nuclear medical research on uninformed American citizens. Her representative conveyed the vague notion that Ms. O'Leary might be inclined to allow something to be done with the site of the SSC, if the State of Texas were willing to pay for everything, and if it weren't too much trouble for her agency. In the meantime she would like to cut the SSC's shut-down budget for 1994 by two-thirds, with no money at all for the SSC in 1995.

Hans Mark, former Chancellor of the University of Texas System, gamely declared that the SSC would in fact be built -- someday. Despite anything Congress may say, the scientific need is still there, he told the committee -- and Waxahachie is still the best site for such a project. Mr. Mark compared the cancelled SSC to the "cancelled" B-1 Bomber, a project that was built at last despite the best efforts of President Carter to kill it. "Five years down the road," he predicted, "or ten years." He urged the State of Texas not to sell the 16,747 acres it has purchased to house the site.

Federal engineering mandarin Bob White grimly called the cancellation "a watershed in American science," noting that never before had such a large project, of undisputed scientific worth, been simply killed outright by Congress. He noted that the physical assets of the SSC are worth essentially nothing -- pennies per pound -- without the trained staff, and that the staff is wasting away.

There remain some 1,983 people in the employ of the SSC (or rather in the employ of the Universities Research Association, a luckless academic bureaucracy that manages the SSC and has taken most of the political blame for the cost overruns). The dead Collider's technical staff alone numbers over a thousand people: 16 in senior management, 133 scientists, 56 applied physicists, 429 engineers, 159 computer specialists and network people, 159 guest scientists and research associates on grants from other countries and other facilities, and 191 "technical associates."

"Deadwood," scoffed one attendee, "three hundred and fifty people in physics research when we don't even have a machine!" But the truth is that without a brilliantly talented staff in place, all those one-of-a-kind cutting-edge machines are so much junk. Many of those who stay are staying in the forlorn hope of actually using some of the smaller machines they have spent years developing and building.

There have been, so far, about sixty more-or-less serious suggestions for alternate uses of the SSC, its facilities, its machineries, and its incomplete tunnel.

The SSC's Linear Accelerator was one of the smaller assets of the great machine, but it is almost finished and would be world-class anywhere else. It has been repeatedly suggested that it could be used for medical radiation treatments or for manufacturing medical isotopes. Unfortunately, the Linear Accelerator is in rural Ellis County, miles from Waxahachie and miles from any hospital, and it was designed and optimized for physics research, not for medical treatment or manufacturing.

The former "N-15" site of the Collider, despite its colorless name, is the most advanced manufacturing and testing facility in the world -- when it comes to giant superconducting magnets. The N-15 magnet facility is not only well-nigh complete, but was almost entirely financed by funds from the State of Texas. Unfortunately, the only real market remaining for its "products" -- Brobdingnagian frozen accelerator magnets -- is the European CERN accelerator.

CERN itself has been hurting for money lately, its German and Spanish government partners in particular complaining loudly about the dire expense of hunting top quarks and such.

Former SSC Director Roy Schwitters therefore declared that CERN would need SSC's valuable magnets, and that the US should use these assets as leverage for influence at CERN.

This suggestion, however, was too much for Texan Congressman Joe Barton. He described Schwitters's suggestion as "very altruistic" and pointed out that the Europeans had given the SSC "the back of their hand for eight years!"

One could only admire the moral grit of SSC's former Director in gamely proposing that the magnets, the very backbone of his dead Collider, should be shipped, for the good of science, to his triumphant European rivals. It would seem that American particle-physics research has suffered such a blow from the collapse of the SSC that the only reasonable course of action for the American physics community is to go cap in hand to the Europeans and try, somehow, to make things up.

At least, that proposal, galling as it may be, does make some sense for American physicists -- but for an American politician, to drop two billion dollars on the SSC just to ship its magnets to some cyclotron in Switzerland is quite another matter. When an attendee gently urged Congressman Barton to "take a longer view" -- perhaps, someday, the Europeans would reciprocate the scientific favor -- the Texan Congressman merely narrowed his eyes in a glare that would have scared Clint Eastwood, and vowed "I will 'reciprocate' the concern that the Europeans have shown for the SSC!"

It's been suggested that the numerous well-appointed SSC offices could become campuses of some new research institution: on magnets, or cryogenics, or controls, or computer simulation. The physics departments of many Texas colleges and universities like this idea. After all, there's a great deal of handy state-of-the-art clutter there, equipment any research lab in the world would envy. Six and a half million dollars' worth of machine tools and welding equipment. Three million in high-tech calibration equipment and measuring devices. Ten million dollars in trucks, vans, excavators, bulldozers and such. A million-dollar print shop.

And almost fifty million dollars' worth of state-of-the-art computing equipment circa 1991 or so, including a massively parallel Hypercube simulator, CAD/CAM engineering and design facilities with millions of man-hours of custom software, FDDI, OSI, and videoconferencing office computer networks, and 2,600 Macintosh IIvx personal computers. Plus a two-million dollar, fully-equipped physics library.

Unfortunately it's very difficult to propose a new physics facility just to make use of this, well, stuff, when there are long-established federal physics research facilities such as Los Alamos and Lawrence Livermore, now going begging because nobody wants their veteran personnel to build new nuclear weapons. If anyone builds such a place in Waxahachie, then the State of Texas will have to pay for it. And Texas is not inclined to shell out more money. Texas already feels that the rest of the United States owes Texas $440,885,853 for the dead Collider.

Besides the suggestions for medical uses, magnetic and superconductive studies, and the creation of some new research institute, there are the many suggestions collectively known as "Other." One is to privatize the SSC as the "American Institute for Superconductivity Competitiveness" and ask for corporate help. Unfortunately the hottest (or maybe "coolest") research area in superconductivity these days is not giant helium-frozen magnets for physicists, but the new ceramic superconductors.

Other and odder schemes include a compressed-air energy-storage research facility. An earth-wobble geophysics experiment. Natural gas storage.

And, perhaps inevitably, the suggestion of Committee member Martin Goland that the SSC tunnel be made into a high-level nuclear waste-storage site. A "temporary" waste site, he assured the Committee, that would store highly radioactive nuclear waste in specially designed "totally safe" steel shipping casks, until a "permanent" site opens somewhere in New Mexico.

"I'm gonna sell my house now," stage-whispered the physicist next to me in the audience. "Waxahachie will be a ghost town!"

This was an upshot worthy of Greek myth -- a tunnel built to steal the fiery secrets of the God Particle, which ends up constipated by thousands of radioactive steel coprolites, the Trojan Horse gift of Our Friend Mr. Atom. It's such a darkly poetic, Southern-Gothic example of hubris clobbered by nemesis that one almost wishes it would actually happen.

As far as safety goes, hiding nuclear waste in an incomplete 14.7-mile tunnel under Texas is certainly far safer than leaving the waste where it is at the moment (basically, all over America, from sea to shining sea). DoE's nuclear-waste chickens have come back to roost in major fashion lately, as time catches up with a generation of Cold War weapons scientists. "They were never given the money they needed to do it cleanly, but just told to do it right away in the name of National Security," a federal expert remarked glumly over the ham and turkey sandwiches at the lunch break. He went on to grimly mention "huge amounts of carbon tetrachloride seeping into the water table" and radioactive waste "storage tanks that burp hydrogen."

But the Texans were having none of that; the chairman of the Committee declared that they had heard Mr. Goland's suggestion, and that it would go no further. The room erupted into nervous laughter.

The Committee's first meeting broke up with the suggestion that sixty million dollars be found somewhere-or-other to maintain an unspecified "core staff" of SSC researchers, while further study is undertaken on what to actually do with the remains.

As the head of SMU's physics department has remarked, "The general impression was that it would be an embarrassment or a waste or sinful to say that, after $2 billion, you get nothing, zip, zero for it." However, zip and zero may well be exactly the result, despite the best intentions of the Texan clean-up crew. The dead Collider is a political untouchable now. The Texans would like to make something from the corpse, not for its own sake, really, but just so the people of Texas will not look quite so much like total hicks and chumps. The DoE, for its part, would like this relic of nutty Reagan Republicanism to vanish into the memory hole with all appropriate speed. The result is quite likely to be a lawsuit by the State of Texas against the DoE, where yet more millions are squandered in years of wrangling by lawyers, an American priesthood whose voracious appetite for public funds puts even physicists to shame.

But perhaps "squandered" is too harsh a word for the SSC. After all, it's not as if those two billion dollars were actually spent on the subatomic level. They were spent in perfectly normal ways, and went quite legally into the pockets of standard government contractors such as Sverdrup and EG&G (facilities construction), Lockheed (systems engineering), General Dynamics, Westinghouse, and Babcock and Wilcox (magnets), Obayashi & Dillingham (tunnel contractors), and Robbins Company (Tunnel Boring Machine). The money went to architects and engineers and designers and roadpavers and people who string Ethernet cable and sell UNIX boxes and Macintoshes. Those dollars also paid the salaries of 2,000 researchers for several years. Admittedly, the nation would have been far better off if those 2,000 talented people simply had been given a million dollars each and told to go turn themselves into anything except particle physicists, but that option wasn't presented.

The easy-going town of Waxahachie seems to have few real grudges over the experience. A public meeting, called so that sufferers in Waxahachie could air their economic complaints about the dead Collider, had almost no attendees. The entire bizarre enterprise seems scarcely to have impinged at all on everyday life in Waxahachie.

Besides, not five miles from the SSC's major campus, the Waxahachians still have their "Scarborough Fair," a huge mock-medieval "English Village" where drawling "lords and ladies" down on day-trips from Dallas can watch fake jousts and drink mead in a romantic heroic-fantasy atmosphere with ten times the popular appeal of that tiresome hard-science nonsense.

As boondoggles go, SSC wasn't small. However, SSC wasn't anywhere near so grotesque as the multiple billions spent, both openly and covertly, on American military science funding. Many of the SSC's contractors were in fact military-industrial contractors, and it may have done them some good to find (slightly) alternate employment. The same goes for the many Russian nuclear physicists employed by the SSC, who earned useful hard currency and were spared the grim career-choices in Russia's collapsing nuclear physics enterprise. It has been a cause of some concern lately that Russian nuclear physicists may, as Lederman and Glashow once put it, "go on to play vital roles in the rest of the world" -- i.e., in the nuclear enterprises of Libya, North Korea, Syria and Iraq. It's a pity those Russians can't be put to work salting the tails of quarks inside the SSC; though a cynic might say it's a greater pity that they were ever taught physics in the first place.

SCIENCE magazine, in its editorial post-mortem "The Lessons of the Super Collider," had its own morals to draw. Lesson One: "High energy physics has become too expensive to be defined by national boundaries." Lesson Two: "Just because particle physics asks questions about the fundamental structure of matter does not give it any greater claim on taxpayer dollars than solid-state physics or molecular biology. Proponents of any project must justify the costs in relation to the scientific and social return."

That may indeed be the New Reality for American science funding today, but it was never the justification of the Machine in the Desert. The Machine in the Desert was an absolute vision, about the absolute need to know.

And it was about pride. "Pride," wrote Lederman and Glashow in 1985, "is one of the seven deadly sins," yet they nevertheless declared their pride in the successes of their predecessors, and their unbounded determination to make America not merely the best in particle physics, but the best in everything, as America had been when they were children.

In his own 1993 post-mortem on the dead Collider, written for the New York Times, Lederman raised the rhetorical question, "Is the real problem the hubris of physicists to believe that society would continue to support this exploration no matter what the cost?" A rhetorical question because Lederman, having raised that cogent question, never bothered to address it. Instead, he ended his column by blaming the always-convenient spectre of American public ignorance of science. "Most important of all," he concluded, "scientists must rededicate themselves to a massive effort at raising the science literacy of the general public. Only when the citizens have a reasonable science savvy will their congressional servants vote correctly."

Alas, many of our congressional servants already possess plenty of science savvy; what they have is science savvy turned to their own ends. Not science for the sake of Galileo, Newton, Maxwell, Einstein or Leon Lederman, but science for the sake of the devil's bargain American science has made with its political sponsors: knowledge as power.

As for the supposedly ignorant general public, the American public was far more generous with scientists when scientists were very few in number, and regarded with a proper superstitious awe by a mainly agricultural and blue-collar populace. The more the American general public comes to understand science, the less respect it has for the whims of science's practitioners. Americans may not do a lot of calculus, but most American voters are "knowledge workers" of one sort or another nowadays, and they've seen Carl Sagan on TV often enough to know that, even though Carl's a nice guy, billions of stars and zillions of quarks won't put bread on their tables. Raising the general science literacy of the American public is probably a self-defeating effort when it comes to monster projects like the SSC. Teaching more American kids more math and science will only increase the already vast armies of scientists and federally funded researchers, drastically shrinking the pool of available funds tomorrow.

It's an open question whether a 40 TeV collider like the SSC will ever be built, by anyone, anywhere, ever. The Europeans, in their low-key, suave, yet subtly menacing fashion, seem confident that they can snag the Higgs scalar boson with their upgraded CERN collider at a mere tenth of the cost of Reagan's SSC. If so, corks will pop in Geneva and there will be gnashing of teeth in Brookhaven and Berkeley. American scientific competitors will taste some of the agony of intellectual defeat in the realm of physics that European scientists have been swallowing yearly since 1945. That won't mean the end of the world.

On the other hand, the collapse of SSC may well suck CERN down in the backdraft. It may be that the global prestige of particle physics has now collapsed so utterly that European governments will also stop signing the checks, and CERN itself will fail to build its upgrade.

Or even if they do build it, they may be simply unlucky, and at 10 TeV the CERN people may get little to show.

In which case, it may be that the entire pursuit of particle physics, stymied by energy limits, will simply go out of intellectual fashion. If the global revulsion against both nuclear weapons and nuclear power increases and intensifies, it is not hard to imagine nuclear research simply dwindling away entirely. The whole kit-and-caboodle of pions, mesons, gluinos, antineutrinos, that whole strange charm of quarkiness, may come to seem a very twentieth-century enthusiasm. Something like the medieval scholastic enthusiasm for numbering the angels that can dance on the head of a pin. Nowadays that's a byword for a silly waste of intellectual effort, but in medieval times that was actually the very same inquiry as modern particle physics: a question about the absolute limits of space and material being.

Or the SSC may never be built for entirely different reasons. It may be that accelerating particles in the next century will not require the massive Rube Goldberg apparatus of a fifty-four-mile tunnel and the twelve cryogenic plants with their entire tank farms of liquid helium. It is a bit hard to believe that scientific questions as basic as the primal nature of matter will be abandoned entirely, but there is more than one way to boost a particle. Giant *room-temperature* superconductors really would transform the industrial base, and they might make quarks jump hoops without the macho necessity of being "super" at all.

In the end, it is hard to wax wroth at the dead Collider, its authors, or those who pulled the plug. The SSC was both sleazy and noble: at one level a "quark-barrel" commercialized morass of contractors scrambling at the federal trough, while Congressmen eye-gouged one another in the cloakroom, scientists angled for the main chance and a steady paycheck, and supposedly dignified scholars ground their teeth in public and backbit like a bunch of jealous prima donnas. And yet at the same time, the SSC really was a Great Enterprise, a scheme to gladden the heart of Democritus and Newton and Tycho Brahe, and all those other guys who had no real job or a fat state sinecure.

The Machine in the Desert was a transcendent scheme to steal cosmic secrets, an enterprise whose unashamed raison d'etre was to enable wild and glorious flights of imagination and comprehension. It was sense-of-wonder and utter sleaze at one and the same time. Rather like science fiction, actually. Not that the SSC itself was science fictional, although it certainly was (and is). I mean, rather, that the SSC was very like the actual writing and publishing of science fiction, an enterprise where bright but surprisingly naive people smash galaxies for seven cents a word and a chance at a plastic brick.

It would take a hard-hearted science fiction writer indeed to stand at the massive lip of that 240-foot hole in the ground at N15 -- as I did late one evening in January, with the sun at my back and tons of hardware gently rusting all around me and not a human being in sight -- and not feel a deep sense of wonder and pity.

In another of his determined attempts to enlighten the ignorant public, in his book THE GOD PARTICLE, Leon Lederman may have said it best.

In a parody of the Bible called "The Very New Testament," he wrote:

"And it came to pass, as they journeyed from the east, that they found a plain in the land of Waxahachie, and they dwelt there. And they said to one another, Go to, let us build a Giant Collider, whose collisions may reach back to the beginning of time. And they had superconducting magnets for bending, and protons had they for smashing.

"And the Lord came down to see the accelerator, which the children of men builded. And the Lord said, Behold the people are unconfounding my confounding. And the Lord sighed and said, Go to, let us go down, and there give them the God Particle so that they may see how beautiful is the universe I have made."

A man who justifies his own dreams in terms of frustrating God and rebuilding the Tower of Babel -- only this time in Texas, and this time done right -- has got to be utterly tone-deaf to his own intellectual arrogance. Worse yet, the Biblical parody is openly blasphemous, unnecessarily alienating a large section of Lederman's potential audience of American voters. Small wonder that the scheme came to grief -- great wonder, in fact, that Lederman's Babel came as near to success as it did.

Nevertheless, I rather like the sound of that rhetoric; I admire its sheer cosmic chutzpah. I scarcely see what real harm has been done. (Especially compared to the harm attendant on the works of Lederman's colleagues such as Oppenheimer and Sakharov.) It's true that a man was crushed to death building the SSC, but he was a miner by profession, and mining is very hazardous work under any circumstances. Two billion dollars was, it's true, almost entirely wasted, but governments always waste money, and after all, it was only money.

Give it a decade or two, to erase the extreme humiliation naturally and healthfully attendant on this utter scientific debacle. Then, if the United States manages to work its way free of its fantastic burden of fiscal irresponsibility without destroying the entire global economy in the process, then I, for one, as an American and Texan citizen, despite everything, would be perfectly happy to see the next generation of particle physicists voted another three billion dollars, and told to get digging again.

Or even four billion dollars.

Okay, maybe five billion tops; but that's my final offer.

"Bitter Resistance"

Two hundred thousand bacteria could easily lurk under the top half of this semicolon; but for the sake of focussing on a subject that's too often out of sight and out of mind, let's pretend otherwise. Let's pretend that a bacterium is about the size of a railway tank car.

Now that our fellow creature the bacterium is no longer three microns long, but big enough to crush us, we can get a firmer mental grip on the problem at hand. The first thing we notice is that the bacterium is wielding long, powerful whips that are corkscrewing at a blistering 12,000 RPM. When it's got room and a reason to move, the bacterium can swim ten body-lengths every second. The human equivalent would be sprinting at forty miles an hour.

The butt-ends of these spinning whips are firmly socketed inside rotating, proton-powered, motor-hubs. It seems very unnatural for a living creature to use rotating wheels as organs, but bacteria are serenely untroubled by our parochial ideas of what is natural.

The bacterium, constantly chugging away with powerful interior metabolic factories, is surrounded by a cloud of its own greasy spew. The rotating spines, known as flagella, are firmly embedded in the bacterium's outer hide, a slimy, lumpy, armored bark. Studying it closely (we evade the whips and the cloud of mucus), we find the outer cell wall to be a double-sided network of interlocking polymers, two regular, almost crystalline layers of macromolecular chainmail, something like a tough plastic wiffleball.

The netted armor, wrinkled into warps and bumps, is studded with hundreds of busily sucking and spewing orifices. These are the bacterium's "porins," pores made from wrapped-up protein membrane, something like damp rolled-up newspapers that protrude through the armor into the world outside.

On our scale of existence, it would be very hard to drink through a waterlogged rolled-up newspaper, but in the tiny world of a bacterium, osmosis is a powerful force. The osmotic pressure inside our bacterium can reach 70 pounds per square inch, five times atmospheric pressure. Under those circumstances, it makes a lot of sense to be shaped like a tank car.

Our bacterium boasts strong, highly sophisticated electrochemical pumps working through specialized fauceted porins that can slurp up and spew out just the proper mix of materials. When it's running its osmotic pumps in some nutritious broth of tasty filth, our tank car can pump enough juice to double in size in a mere twenty minutes. And there's more: in that same twenty minutes, our bacterial tank car can build an entire duplicate tank car from scratch.

Inside the outer wall of protective bark is a greasy space full of chemically reactive goo. It's the periplasm. Periplasm is a treacherous mess of bonding proteins and digestive enzymes, which can yank tasty fragments of gunk right through the exterior hide, and break them up for further assimilation, rather like chemical teeth. The periplasm also features chemoreceptors, the bacterial equivalent of nostrils or taste-buds.

Beneath the periplasmic goo is the interior cell membrane, a tender and very lively place full of elaborate chemical scaffolding, where pumping and assembly work go on.

Inside the interior membrane is the cytoplasm, a rich ointment of salts, sugars, vitamins, proteins, and fats, the tank car's refinery treasure-house.

If our bacterium is lucky, it has some handy plasmids in its custody. A plasmid is an alien DNA ring, a kind of fly-by-night genetic franchise which sets up shop in the midst of somebody else's sheltering cytoplasm. If the bacterium is unlucky, it's afflicted with a bacteriophage, a virus with the modus operandi of a plasmid but its own predatory agenda.

And the bacterium has its own native genetic material. Eukaryotic cells -- we humans are made from eukaryotic cells -- possess a neatly defined nucleus of DNA, firmly coated in a membrane shell. But bacteria are prokaryotic cells, the oldest known form of life, and they have an attitude toward their DNA that is, by our standards, shockingly promiscuous. Bacterial DNA simply sprawls out amid the cytoplasmic goo like a circular double-helix of snarled and knotted Slinkies.

Any plasmid or transposon wandering by with a pair of genetic shears and a zipper is welcome to snip some data off or zip some data in, and if the mutation doesn't work, well, that's just life. A bacterium usually has 200,000 or so clone bacterial sisters around within the space of a pencil dot, who are more than willing to take up the slack from any failed experiment in genetic recombination. When you can clone yourself every twenty minutes, shattering the expected laws of Mendelian heredity merely adds spice to life.

Bacteria live anywhere damp. In water. In mud. In the air, as spores and on dust specks. In melting snow, in boiling volcanic springs. In the soil, in fantastic numbers. All over this planet's ecosystem, any liquid with organic matter, or any solid foodstuff with a trace of damp in it, anything not salted, mummified, pickled, poisoned, scorching hot or frozen solid, will swarm with bacteria if exposed to air. Unprotected food always spoils if it's left in the open. That's such a truism of our lives that it may seem like a law of physics, something like gravity or entropy; but it's no such thing, it's the relentless entrepreneurism of invisible organisms, who don't have our best interests at heart.

Bacteria live on and inside human beings. They always have; bacteria were already living on us long, long before our species became human. They creep onto us in the first instants in which we are held to our mother's breast. They live on us, and especially inside us, for as long as we live. And when we die, then other bacteria do their living best to recycle us.

An adult human being carries about a solid pound of commensal bacteria in his or her body; about a hundred trillion of them. Humans have a whole garden of specialized human-dwelling bacteria -- tank-car E. coli, balloon-shaped staphylococcus, streptococcus, corynebacteria, micrococcus, and so on. Normally, these lurkers do us little harm. On the contrary, our normal human-dwelling bacteria run a kind of protection racket, monopolizing the available nutrients and muscling out other rival bacteria that might want to flourish at our expense in a ruder way.

But bacteria, even the bacteria that flourish inside us all our lives, are not our friends. Bacteria are creatures of an order vastly different from our own, a world far, far older than the world of multicellular mammals. Bacteria are vast in numbers, and small, and fetid, and profoundly unsympathetic.

So our tank car is whipping through its native ooze, shuddering from the jerky molecular impacts of Brownian motion, hunting for a chemotactic trail to some richer and filthier hunting ground, and periodically peeling off copies of itself. It's an enormously fast-paced and frenetic existence. Bacteria spend most of their time starving, because if they are well fed, then they double in number every twenty minutes, and this practice usually ensures a return to starvation in pretty short order. There are not a lot of frills in the existence of bacteria. Bacteria are extremely focussed on the job at hand. Bacteria make ants look like slackers.
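The arithmetic behind that feast-and-famine cycle is simple but merciless. A minimal sketch of it, assuming the idealized twenty-minute doubling time described above and ignoring every real-world limit on food and space:

```python
# Back-of-the-envelope: why a well-fed bacterium soon starves again.
# Assumes the (idealized) 20-minute doubling time; real populations
# hit limits of food, space, and waste long before these numbers.

DOUBLING_MINUTES = 20

def population(start_cells, hours):
    """Cells descended from `start_cells` after `hours` of unchecked doubling."""
    doublings = (hours * 60) // DOUBLING_MINUTES
    return start_cells * 2 ** doublings

# One cell, one day of uninterrupted feasting: 72 doublings.
cells = population(1, 24)
print(f"{cells:.3e}")  # on the order of 10**21 cells
```

A single cell left to double unchecked for one day would have sextillions of descendants, which is exactly why unchecked doubling never lasts long: the food runs out first, and starvation resumes.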

And so it went in the peculiar world of our acquaintance the tank car, a world both primitive and highly sophisticated, both frenetic and utterly primeval. Until an astonishing miracle occurred. The miracle of "miracle drugs," antibiotics.

Sir Alexander Fleming discovered penicillin in 1928, and the power of the sulfonamides was recognized by drug company researchers in 1935, but antibiotics first came into general medical use in the 1940s and 50s. The effects on the hidden world of bacteria were catastrophic. Bacteria which had spent many contented millennia decimating the human race were suddenly and swiftly decimated in return. The entire structure of human mortality shifted radically, in a terrific attack on bacteria from the world of organized intelligence.

At the beginning of this century, back in the pre-antibiotic year of 1900, four of the top ten leading causes of death in the United States were bacterial. The most prominent were tuberculosis ("the white plague," *Mycobacterium tuberculosis*) and pneumonia (*Streptococcus pneumoniae,* *Pneumococcus*). The death rate in 1900 from gastroenteritis (*Escherichia coli,* various *Campylobacter* species, etc.) was higher than that for heart disease. The nation's number ten cause of death was diphtheria (*Corynebacterium diphtheriae*). Bringing up the bacterial van were gonorrhea, meningitis, septicemia, dysentery, typhoid fever, whooping cough, and many more.

At the end of the century, all of these festering bacterial afflictions (except pneumonia) had vanished from the top ten. They'd been replaced by heart disease, cancer, stroke, and even relative luxuries of postindustrial mortality, such as accidents, homicide and suicide. All thanks to the miracle of antibiotics.

Penicillin in particular was a chemical superweapon of devastating power. In the early heyday of penicillin, the merest trace of this substance entering a cell would make the hapless bacterium literally burst. This effect is known as "lysis."

Penicillin makes bacteria lyse because of a chemical structure called "beta-lactam." Beta-lactam is a four-membered cyclic amide ring, a molecular ring which bears a fatal resemblance to the chemical mechanisms a bacterium uses to build its cell wall.

Bacterial cell walls are mostly made from peptidoglycan, a plastic-like molecule chained together to form a tough, resilient network. A bacterium is almost always growing, repairing damage, or reproducing, so there are almost always raw spots in its cell wall that require construction work.

It's a sophisticated process. First, fragments of not-yet-peptided glycan are assembled inside the cytoplasm. Then the glycan chunks are hauled out to the cell wall by a chemical scaffolding of lipid carrier molecules, and they are fitted in place. Lastly, the peptidoglycan is busily knitted together by catalyzing enzymes and set to cure.

But beta-lactam is a spanner in the knitting-works: it attacks the enzyme that links chunks of peptidoglycan together. The result is like building a wall of bricks without mortar; the unlinked chunks of glycan break open under osmotic pressure, and the cell spews out its innards catastrophically, and dies.

Gram-negative bacteria, of the tank-car sort we have been describing, have a double cell wall, with an outer armor plus the inner cell membrane, rather like a rubber tire with an inner tube. They can sometimes survive a beta-lactam attack, if they don't leak to death. But gram-positive bacteria are more lightly built and rely on a single wall only, and for them a beta-lactam puncture is a swift kiss of death.

Beta-lactam can not only mimic, subvert and destroy the assembly enzymes, but it can even eat away peptide-chain mortar already in place. And since mammalian cells never use any peptidoglycans, they are never ruptured by penicillin (although penicillin does sometimes provoke serious allergic reactions in certain susceptible patients).

Pharmaceutical chemists rejoiced at this world-transforming discovery, and they began busily tinkering with beta-lactam products, discovering or producing all kinds of patentable, marketable, beta-lactam variants. Today there are more than fifty different penicillins and seventy-five cephalosporins, all of which use beta-lactam rings in one form or another.

The enthusiastic search for new medical miracles turned up substances that attack bacteria through even more clever methods. Antibiotics were discovered that could break up or jam up a cell's protein synthesis: drugs such as tetracycline, streptomycin, gentamicin, and chloramphenicol. These drugs creep through the porins, deep into the cytoplasm, and lock onto the various vulnerable sites in the RNA protein factories. This RNA sabotage brings the cell's basic metabolism to a seething halt, and the bacterium chokes and dies.

The final major method of antibiotic attack was an assault on bacterial DNA. These compounds, such as the sulfonamides, the quinolones, and the diaminopyrimidines, would gum up bacterial DNA itself, or break its strands, or destroy the template mechanism that reads from the DNA and helps to replicate it. Or, they could ruin the DNA's nucleotide raw materials before those nucleotides could be plugged into the genetic code. Attacking bacterial DNA itself was the most sophisticated attack yet on bacteria, but unfortunately these DNA attackers often tended to be toxic to mammalian cells as well. So they saw less use. Besides, they were expensive.

In the war between species, humanity had found a full and varied arsenal. Antibiotics could break open cell walls, choke off the life-giving flow of proteins, and even smash or poison bacterial DNA, the central command and control center. Victory was swift, its permanence seemed assured, and the command of human intellect over the realm of brainless germs was taken for granted. The world of bacteria had become a commercial empire for exploitation by the clever mammals.

Antibiotic production, marketing and consumption soared steadily. Nowadays, about a hundred thousand tons of antibiotics are manufactured globally every year. It's a five billion dollar market. Antibiotics are cheap, far cheaper than time-consuming, labor-intensive hygienic cleanliness. In many countries, these miracle drugs are routinely retailed in job-lots as over-the-counter megadosage nostrums.

Nor have humans been the only mammals to benefit. For decades, antibiotics have been routinely fed to American livestock. Antibiotics are routinely added to the chow in vast cattle feedlots, and antibiotics are fed to pigs, even chickens. This practice goes on because a meat animal on antibiotics will put on poundage faster, and stay healthier, and supply the market with cheaper meat. Repeated protests at this practice by American health authorities have been successfully evaded in courts and in Congress by drug manufacturers and agro-business interests.

The runoff of tainted feedlot manure, containing millions of pounds of diluted antibiotics, enters rivers and watersheds where the world's free bacteria dwell.

In cities, municipal sewage systems are giant petri-dishes of diluted antibiotics and human-dwelling bacteria.

Bacteria are restless. They will try again, every twenty minutes. And they never sleep.

Experts were aware in the 1940s and 1950s that bacteria could, and would, mutate in response to selection pressure, just like other organisms. And they knew that bacteria went through many generations very rapidly, and that bacteria were very fecund. But it seemed that any bacterium would be very lucky to mutate to successfully resist even one antibiotic. Compounding that luck by evolving to resist two antibiotics at once seemed well-nigh impossible. Bacteria were at our mercy. They didn't seem any more likely to resist penicillin and tetracycline than a rainforest could resist bulldozers and chainsaws.

However, thanks to convenience and the profit motive, once-miraculous antibiotics had become a daily commonplace. A general chemical haze of antibiotic pollution spread across the planet. Titanic numbers of bacteria, in all niches of bacterial life, were being given an enormous number of chances to survive antibiotics.

Worse yet, bacteriologists were simply wrong about the way that bacteria respond to a challenge.

Bacteria will try anything. Bacteria don't draw hard and fast intellectual distinctions between their own DNA, a partner's DNA, DNA from another species, virus DNA, plasmid DNA, and food.

This property of bacteria is very alien to the human experience. If your lungs were damaged from smoking, and you asked your dog for a spare lung, and your dog, in friendly fashion, coughed up a lung and gave it to you, that would be quite an unlikely event. It would be even more miraculous if you could swallow a dog's lung and then breathe with it just fine, while your dog calmly grew himself a new one. But in the world of bacteria this kind of miracle is a commonplace.

Bacteria share enormous amounts of DNA. They not only share DNA among members of their own species, through conjugation and transduction, but they will encode DNA in plasmids and transposons and packet-mail it to other species. They can even find loose DNA lying around from the burst bodies of other bacteria, and they can eat that DNA like food and then make it work like information. Pieces of stray DNA can be swept all willy-nilly into the molecular syringes of viruses, and injected randomly into other bacteria. This fetid orgy isn't what Gregor Mendel had in mind when he was discovering the roots of classical genetic inheritance in peas, but bacteria aren't peas, and don't work like peas, and never have. Bacteria do extremely strange and highly inventive things with DNA, and if we don't understand or sympathize, that's not their problem, it's ours.

Some of the best and cleverest information-traders are some of the worst and most noxious bacteria. Such as *Staphylococcus* (boils). *Haemophilus* (ear infections). *Neisseria* (gonorrhea). *Pseudomonas* (abscesses, surgical infections). Even *Escherichia,* a very common human commensal bacterium.

When it comes to resisting antibiotics, bacteria are all in the effort together. That's because antibiotics make no distinctions in the world of bacteria. They kill, or try to kill, every bacterium they touch.

If you swallow an antibiotic for an ear infection, the effects are not confined to the tiny minority of toxic bacteria that happen to be inside your ear. Every bacterium in your body is assaulted, all hundred trillion of them. The toughest will not only survive, but they will carefully store, and sometimes widely distribute, the genetic information that allowed them to live.

The resistance from bacteria, like the attack of antibiotics, is a multi-pronged and sophisticated effort. It begins outside the cell, where certain bacteria have learned to spew defensive enzymes into the cloud of slime that surrounds them -- enzymes called beta-lactamases, specifically adapted to destroy beta-lactam, and render penicillin useless. At the cell-wall itself, bacteria have evolved walls that are tougher and thicker, less likely to soak up drugs. Other bacteria have lost certain vulnerable porins, or have changed the shape of their porins so that antibiotics will be excluded instead of inhaled.

Inside the wall of the tank car, the resistance continues. Bacteria make permanent stores of beta-lactamases in the outer goo of periplasm, which will chew the drugs up and digest them before they ever reach the vulnerable core of the cell. Other enzymes have evolved that will crack or chemically smother other antibiotics.

In the pump-factories of the inner cell membrane, new pumps have evolved that specifically latch on to antibiotics and spew them back out of the cell before they can kill. Other bacteria have mutated their interior protein factories so that the assembly-line no longer offers any sabotage-sites for site-specific protein-busting antibiotics. Yet another strategy is to build excess production capacity, so that instead of two or three assembly lines for protein, a mutant cell will have ten or fifty, requiring ten or fifty times as much drug for the same effect. Other bacteria have come up with immunity proteins that will lock on to antibiotics and make them useless inert lumps.

Sometimes -- rarely -- a cell will come up with a useful mutation entirely on its own. The theorists of forty years ago were right when they thought that defensive mutations would be uncommon. But spontaneous mutation is not the core of the resistance at all. Far more often, a bacterium is simply let in on the secret through the exchange of genetic data.

Beta-lactam is produced in nature by certain molds and fungi; it was not invented from scratch by humans, but discovered in a petri dish. Beta-lactam is old, and it would seem likely that beta-lactamases are also very old.

Bacteriologists have studied only a few percent of the many microbes in nature. Even those bacteria that have been studied are by no means well understood. Antibiotic resistance genes may well be present in any number of different species, waiting only for selection pressure to manifest themselves and spread through the gene-pool.

If penicillin is sprayed across the biosphere, then mass death of bacteria will result. But any bug that is resistant to penicillin will swiftly multiply by millions of times, thriving enormously in the power-vacuum caused by the slaughter. The genes that gave the lucky winner its resistance will also increase by millions of times, becoming far more generally available. And there's worse: the resistance is often carried by plasmids, and one single bacterium can contain as many as a thousand plasmids, and produce them and spread them at will.

That genetic knowledge, once spread, will likely stay around a while. Bacteria don't die of old age. Bacteria aren't mortal in the sense that we understand mortality. Unless they are killed, bacteria just keep splitting and doubling. The same bacterial "individual" can spew copies of itself every twenty minutes, basically forever. After billions of generations, and trillions of variants, there are still likely to be a few random oldtimers around identical to ancestors from some much earlier epoch. Furthermore, spores of bacteria can remain dormant for centuries, then sprout in seconds and carry on as if nothing had happened. This gives the bacterial gene-pool -- better described as an entire gene-ocean -- an enormous depth and range. It's as if Eohippus could suddenly show up at the Kentucky Derby -- and win.

It seems likely that many of the mechanisms of bacterial resistance were borrowed or kidnapped from bacteria that themselves produce antibiotics. The bacteria of the genus Streptomyces, which are filamentous and Gram-positive, are ubiquitous in the soil; in fact the characteristic "earthy" smell of fresh soil comes from Streptomyces' metabolic products. And Streptomyces bacteria produce a host of antibiotics, including streptomycin, tetracycline, neomycin, chloramphenicol, and erythromycin.

Human beings have been using Streptomyces' antibiotic poisons against tuberculosis, gonorrhea, rickettsia, chlamydia, and candida yeast infection, with marked success. But in doing so, we have turned a small-scale natural process into a massive industrial one.

Streptomyces already has the secret of surviving its own poisons. So, presumably, do at least some of Streptomyces' neighbors. If the poison is suddenly broadcast everywhere, through every niche in the biosphere, then word of how to survive it will also get around.

And when the gospel of resistance gets around, it doesn't come just one chapter at a time. Scarily, it tends to come in entire libraries. A resistance plasmid (familiarly known to researchers as "R-plasmids," because they've become so common) doesn't have to specialize in just one antibiotic. There's plenty of room inside a ring of plasmid DNA for handy info on a lot of different products and processes. Moving data on and off the plasmid is not particularly difficult. Bacterial scissors-and-zippers units known as "transposons" can knit plasmid DNA right into the central cell DNA -- or they can transpose new knowledge onto a plasmid. These segments of loose DNA are aptly known as "cassettes."

So when a bacterium is under assault by an antibiotic, and it acquires a resistance plasmid from who-knows-where, it can suddenly find an entire arsenal of cassettes in its possession. Not just resistance to the one antibiotic that provoked the response, but a whole Bible of resistance to all the antibiotics lately seen in the local microworld.

Even more unsettling news has turned up in a lab report in the Journal of Bacteriology from 1993. Tetracycline-resistant strains in the bacterium Bacteroides have developed a kind of tetracycline reflex. Whenever tetracycline appears in the neighborhood, a Bacteroides transposon goes into overdrive, manufacturing R-plasmids at a frantic rate and then passing them to other bacteria in an orgy of sexual encounters a hundred times more frequent than normal. In other words, tetracycline itself now directly causes the organized transfer of resistance to tetracycline. As Canadian microbiologist Julian Davies commented in Science magazine (15 April 1994), "The extent and biochemical nature of this phenomenon is not well understood. A number of different antibiotics have been shown to promote plasmid transfer between different bacteria, and it might even be considered that some antibiotics are bacterial pheromones."

If this is the case, then our most potent chemical weapons have been changed by our lethal enemies into sexual aphrodisiacs.

The greatest battlegrounds of antibiotic warfare today are hospitals. The human race is no longer winning. Increasingly, to enter a hospital can make people sick. This is known as "nosocomial infection," from the Greek for hospital. About five percent of patients who enter hospitals nowadays pick up an infection from inside the hospital itself.

An epidemic of acquired immune deficiency has come at a particularly bad time, since patients without natural immunity are forced to rely heavily on megadosages of antibiotics. These patients come to serve as reservoirs for various highly resistant infections. So do patients whose immune systems have been artificially repressed for organ transplantation. The patients are just one aspect of the problem, though; healthy doctors and nurses show no symptoms, but they can carry strains of hospital superbug from bed to bed on their hands, deep in the pores of their skin, and in their nasal passages. Superbugs show up in food, fruit juices, bedsheets, even in bottles and buckets of antiseptics.

The advent of antibiotics made elaborate surgical procedures safe and cheap; but nowadays half of nosocomial infections are either surgical infections, or urinary tract infections from contaminated catheters. Bacteria are attacking us where we are weakest and most vulnerable, and where their own populations are the toughest and most battle-hardened. From hospitals, resistant superbugs travel to old-age homes and day-care centers, preying on the old and the very young.

*Staphylococcus aureus,* a common hospital superbug which causes boils and ear infections, is now present in super-strains highly resistant to every known antibiotic except vancomycin. Enterococcus is resistant to vancomycin, and it has been known to swap genes with staphylococcus. If staphylococcus gets hold of this resistance information, then staph could become the first bacterial superhero of the post-antibiotic era, and human physicians of the twenty-first century would be every bit as helpless before it as were the physicians of the nineteenth, who dealt with septic infection by cutting away the diseased flesh and hoping for the best.

Staphylococcus often lurks harmlessly in the nose and throat. *Staphylococcus epidermis,* a species which lives naturally on human skin, rarely causes any harm, but it too must battle for its life when confronted with antibiotics. This harmless species may serve as a reservoir of DNA data for the bacterial resistance of other, truly lethal bacteria. Certain species of staph cause boils, others impetigo. Staph attacking a weakened immune system can kill, attacking the lungs (pneumonia) and brain (meningitis). Staph is thought to cause toxic shock syndrome in women, and toxic shock in post-surgical patients.

A 1994 outbreak of "necrotizing fasciitis," caused by an especially virulent strain of the common bacterium Streptococcus, produced panic headlines in Britain about "flesh-eating germs" and "killer bugs." Of the fifteen reported victims so far, thirteen have died.

A great deal has changed since the 1940s and 1950s. Strains of bacteria can cross the planet with the speed of jet travel, and populations of humans -- each with their hundred trillion bacterial passengers -- mingle as never before. Old-fashioned public-health surveillance programs, which used to closely study any outbreak of bacterial disease, have been dismantled, put in abeyance, or starved of funding. The seeming triumph of antibiotics has made us careless about the restive conquered population of bacteria.

Drug companies treat the standard antibiotics as cash cows, while their best-funded research efforts currently go into antiviral and antifungal compounds. Drug companies follow the logic of the market; with hundreds of antibiotics already cheaply available, it makes little commercial sense to spend millions developing yet another one. And the market is not yet demanding entirely new antibiotics, because the resistance has not quite broken out into full-scale biological warfare. And drug research is expensive and risky. A hundred million dollars of investment in antibiotics can be wiped out by a single point-mutation in a resistant bacterium.

We did manage to kill off the smallpox virus, but none of humanity's ancient bacterial enemies are extinct. They are all still out there, and they all still kill people. Drug companies mind their cash flow, health agencies become complacent, people mind what they think is their own business, but bacteria never give up. Bacteria have learned to chew up, spit out, or shield themselves from any and every drug we can throw at them. They can now defeat every technique we have. The only reason true disaster hasn't broken out is that all bacteria can't defeat all the techniques all at once. Yet.

There have been no major conceptual breakthroughs lately in the antibiotic field. There has been some encouraging technical news, with new techniques such as rational drug design and computer-assisted combinatorial chemistry. There may be entirely new miracle drugs just over the horizon that will fling the enemy back once again, with enormous losses. But on the other hand, there may well not be. We may already have discovered all the best antibiotic tricks available, and squandered them in a mere fifty years.

Anyway, now that the nature of their resistance is better understood, no bacteriologist is betting that any new drug can foil our ancient enemies for very long. Bacteria are better chemists than we are and they don't get distracted.

If the resistance triumphs, it does not mean the outbreak of universally lethal plagues or the end of the human race. It is not an apocalyptic problem. What it would really mean -- probably -- is a slow return, over decades, to the pre-antibiotic bacterial status quo. A return to the bacterial status quo of the nineteenth century.

For us, the children of the miracle, this would mean a truly shocking decline in life expectancy. Infant mortality would become very high; it would once again be common for parents to have five children and lose three. It would mean a return to epidemic flags, quarantine camps, tubercular sanatoriums, and leprosariums.

Cities without good sanitation -- mostly Third World cities -- would suffer from water-borne plagues such as cholera and dysentery. Tuberculosis would lay waste the underclass around the world. If you cut yourself at all badly, or ate spoiled food, there would be quite a good chance that you would die. Childbirth would be a grave septic risk for the mother.

The practice of medicine would be profoundly altered. Elaborate, high-tech surgical procedures, such as transplants and prosthetic implants, would become extremely risky. The expense of any kind of surgery would soar, since preventing infection would be utterly necessary but very tedious and difficult. A bad heart would be a bad heart for life, and a shattered hip would be permanently disabling. Health-care budgets would be consumed by antiseptic and hygienic programs.

Life without contagion and infection would seem as quaintly exotic as free love in the age of AIDS. The decline in life expectancy would become just another aspect of broadly diminishing cultural expectations in society, economics, and the environment. Life in the developed world would become rather pinched, wary, and nasty, while life in the overcrowded human warrens of the megalopolitan Third World would become an abattoir.

If this all seems gruesomely plausible, it's because that's the way our ancestors used to live all the time. It's not a dystopian fantasy; it was the miracle of antibiotics that was truly fantastic. If that miracle died away, it would merely mean an entirely natural return to the normal balance of power between humanity and our invisible predators.

At the close of this century, antibiotic resistance is one of the gravest threats that confronts the human race. It ranks in scope with overpopulation, nuclear disaster, destruction of the ozone layer, global warming, species extinction and massive habitat destruction. Although it gains very little attention in comparison to those other horrors, there is nothing theoretical or speculative about antibiotic resistance. The mere fact that we can't see it happening doesn't mean that it's not taking place. It is occurring, stealthily and steadily, in a world which we polluted drastically before we ever took the trouble to understand it.

We have spent billions to kill bacteria but mere millions to truly comprehend them. In our arrogance, we have gravely underestimated our enemy's power and resourcefulness. Antibiotic resistance is a very real threat which is well documented and increasing at considerable speed. In its scope and its depth and the potential pain and horror of its implications, it may be the greatest single menace that we human beings confront -- besides, of course, the steady increase in our own numbers. And if we don't somehow resolve our grave problems with bacteria, then bacteria may well resolve that population problem for us.
