It was the summer of 1936 when Ernest Lawrence, the inventor of the atom-smashing cyclotron, received a visit from Emilio Segrè, a scientific colleague from Italy. Segrè explained that he had come all the way to America to ask a very small favor: He wondered whether Lawrence would part with a few strips of thin metal from an old cyclotron unit. Dr Lawrence was happy to oblige; as far as he was concerned the stuff Segrè sought was mere radioactive trash. He sealed some scraps of the foil in an envelope and mailed it to Segrè’s lab in Sicily. Unbeknownst to Lawrence, Segrè was on a surreptitious scientific errand.
At that time the majority of chemical elements had been isolated and added to the periodic table, yet there was an unsightly hole where an element with 43 protons ought to be. Elements with 42 and 44 protons—molybdenum and ruthenium respectively—had been isolated decades earlier, but element 43 was yet to be seen. Considerable accolades awaited whichever scientist could isolate the elusive element, so chemists worldwide were scanning through tons of ores with their spectroscopes, watching for the anticipated pattern.
Upon receiving Dr Lawrence’s radioactive mail back in Italy, Segrè and his colleague Carlo Perrier subjected the strips of molybdenum foil to a carefully choreographed succession of bunsen burners, salts, chemicals, and acids. The resulting precipitate confirmed their hypothesis: the radiation in Lawrence’s cyclotron had converted a few molybdenum atoms into element 43, and one ten-billionth of a gram of the stuff now sat in the bottom of their beaker. They dubbed their plundered discovery “technetium” for the Greek word technetos, meaning “artificial.” It was the first element made by man rather than nature, and its “short” half-life—anywhere from a few seconds to a few million years depending on the isotope—is the reason there’s negligible naturally-occurring technetium left on modern Earth.
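The scale of that disappearance follows from the standard exponential-decay relation: after time t, the fraction of an isotope remaining is 2^(-t/half-life). A minimal sketch, assuming (for illustration only) a four-million-year half-life, roughly that of technetium's longest-lived isotopes, over Earth's ~4.5-billion-year history:

```python
# Fraction of a radioactive isotope remaining after time t:
#   N(t) / N0 = 2 ** (-t / half_life)
def remaining_fraction(t_years: float, half_life_years: float) -> float:
    return 2.0 ** (-t_years / half_life_years)

# Assumed values for illustration: a 4-million-year half-life (about the
# longest among technetium isotopes) over Earth's ~4.5-billion-year history.
earth_age_years = 4.5e9
half_life_years = 4.0e6

# That is over a thousand halvings, so the surviving fraction is so small
# it underflows ordinary floating point to 0.0.
print(remaining_fraction(earth_age_years, half_life_years))
```

Any primordial technetium would have halved more than a thousand times over, which is why the element must be manufactured today.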
In the years since this discovery scientists have employed increasingly sophisticated apparatuses to bang particles together to create and isolate increasingly heavy never-before-seen elements, an effort which continues even today. Most of the obese nuclei beyond uranium—element 92—are too unstable to stay assembled for more than a moment, to the extent that it makes one wonder why researchers expend such time, effort, and expense to fabricate these fickle fragments of matter. But according to our current understanding of quantum mechanics, if we can pack enough protons and neutrons into these husky nuclei we may encounter something astonishing.
On 10 January 1709, pioneering weather observer William Derham recorded an historic event outside his home near London. He examined his thermometer in the frigid morning air and jotted an entry into his meticulous meteorological log. The prior weeks had been typical for an English winter, but overnight an oppressive cold had lodged itself over the Kingdom. As far as Derham was aware, London had never been so cold as it was that morning: the mercury in his thermometer read -12°C.
The remarkable cold lingered in Europe for weeks. Lakes, rivers, and the sea froze over, and the soil solidified a meter deep. The cold cracked open trees, crushed the life out of livestock huddling in stables, and made travel a treacherous undertaking. It was the coldest winter of the past 500 years, and one of the coldest moments in a larger global phenomenon known as the Little Ice Age. Likely causes include volcanic activity, oceanic currents, and/or reforestation due to Black-Death-induced population decline. It almost certainly had something to do, however, with the unusually low number of sunspots that appeared at that time, a shortage now referred to as the Maunder Minimum.
We now know that such solar minima correlate quite closely with colder-than-normal temperatures on Earth, but science has yet to ascertain exactly why. Solar maxima, on the other hand, have historically had little noteworthy impact on the Earth apart from extra-splendid auroral displays. But thanks to our modern, electrified, interconnected society, these previously innocuous events could cause catastrophic economic and social damage in the coming decades.
On the 5th of February 1974, NASA’s plucky Mariner 10 space probe zipped past the planet Venus at over 18,000 miles per hour. Mission scientists took advantage of the opportunity to snap some revealing photos of our sister planet, but the primary purpose of the Venus flyby was to accelerate the probe towards the enigmatic Mercury, a body which had yet to be visited by any Earthly device. The event constituted the first ever gravitational slingshot, successfully sending Mariner 10 to grope the surface of Mercury using its array of sensitive instruments. This validation of the gravity-assist technique put the entire solar system within the practical reach of humanity’s probes, and it was used with spectacular success a few years later as Voyagers 1 and 2 toured the outer planets at a brisk 34,000 miles per hour.
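The mechanics of the slingshot can be sketched with simple frame-changing arithmetic. Relative to the planet, the encounter leaves the probe's speed unchanged; relative to the Sun, an idealized head-on flyby adds up to twice the planet's orbital speed. The numbers below are purely illustrative, not Mariner 10's actual trajectory:

```python
# Idealized one-dimensional gravity assist: in the planet's reference frame
# the encounter merely reverses the probe's velocity; transforming back to
# the Sun's frame adds twice the planet's orbital speed.
def slingshot_exit_speed(v_probe: float, v_planet: float) -> float:
    v_rel = v_probe + v_planet  # approach speed relative to the planet (head-on)
    return v_rel + v_planet     # reversed flyby, converted back to the Sun's frame

# Illustrative values only (km/s): a probe at 10 km/s meeting a planet
# orbiting at 35 km/s head-on exits at 10 + 2 * 35 = 80 km/s.
print(slingshot_exit_speed(10.0, 35.0))  # 80.0
```

The planet loses a correspondingly minuscule amount of orbital energy, so momentum is conserved; the "free" speed boost is borrowed from the planet itself.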
One of the more intriguing theories to fall out of the early gravity-assist research was a hypothetical spacecraft called the Cycler, a vehicle which could utilize gravity to cycle between two bodies indefinitely—Earth and Mars, for instance—with little or no fuel consumption. Even before the complex orbital mathematics were within the grasp of science, tinkerers speculated that a small fleet of Cyclers might one day provide regular bus service to Mars, toting men and equipment to and from the Red Planet every few months. Though this interplanetary ferry may sound a bit like perpetual-motion poppycock, one of the concept’s chief designers and proponents is a man who is intimately familiar with aggressive-yet-successful outer-space endeavors: scientist/astronaut Dr. Buzz Aldrin.
Humanity’s home is far from factory-fresh these days. Frankly, the Earth has received its share of scratches and dents, including large asteroid impacts, megavolcanoes, earthquakes, ice ages, and heat waves. It’s to be expected. There are over four billion years on the clock, after all.
Though it has long been clear that Earth 1.0 is in need of an upgrade, it was not until a few years ago that someone began to take the notion seriously. In 2004, at a respected international design exhibition called the Venice Architecture Biennale, a young artist and architect named Christian Waldvogel displayed his plans for total global annihilation and the creation of Earth 2.0.
In a world where everything from our automobiles to our underwear may soon run on electricity, more efficient portable power is a major concern. After a century of stagnation, chemical and ultracapacitor batteries have recently made some strides forward, and more are on the horizon. But the most promising way of storing energy for the future might come from a more unlikely source, and one that far predates any battery: the flywheel.
In principle, a flywheel is nothing more than a wheel on an axle which stores and regulates energy by spinning continuously. The device is one of humanity’s oldest and most familiar technologies: it was in the potter’s wheel six thousand years ago, as a stone disk with enough mass to rotate smoothly between kicks of a foot pedal; it was an essential component in the great machines that brought on the industrial revolution; and today it’s under the hood of every automobile on the road, performing the same function it has for millennia—now regulating the strokes of pistons rather than the strokes of a potter’s foot.
Ongoing research, however, suggests that humanity has yet to seize the true potential of the flywheel. When spun up to very high speeds, a flywheel becomes a reservoir for a massive amount of kinetic energy, which can be stored or drawn back out at will. It becomes, in effect, an electromechanical battery.
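The quantity of energy at stake follows from the rotational kinetic energy formula E = ½Iω², where a uniform disk has moment of inertia I = ½mr². A rough sketch, with the mass, radius, and spin rate below assumed purely for illustration:

```python
import math

# Kinetic energy stored in a spinning uniform disk:
#   I = 1/2 * m * r^2       (moment of inertia of a solid disk)
#   E = 1/2 * I * omega^2   (rotational kinetic energy)
def flywheel_energy_joules(mass_kg: float, radius_m: float, rpm: float) -> float:
    moment_of_inertia = 0.5 * mass_kg * radius_m ** 2
    omega = rpm * 2.0 * math.pi / 60.0  # convert rev/min to rad/s
    return 0.5 * moment_of_inertia * omega ** 2

# Assumed example: a 100 kg disk, 0.5 m in radius, spun to 20,000 rpm.
energy = flywheel_energy_joules(100.0, 0.5, 20_000.0)
print(energy / 3.6e6)  # joules to kilowatt-hours: ~7.6 kWh
```

Because the stored energy grows with the square of the rotation rate, practical flywheel batteries chase ever-higher rim speeds, limited mainly by the strength of their materials.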
There’s a caustic substance common to our environment whose very presence turns iron into brittle rust, dramatically increases the risk of fire and explosion, and sometimes destroys the cells of the very organisms that depend on it for survival. This substance, which makes up 21% of our atmosphere, is diatomic oxygen (O2), more widely known simply as oxygen.
Of course, oxygen has its good points. Besides being necessary for respiration and the reliable combustion engine, it can be liquefied and used as rocket fuel. Oxygen is also widely used in the world of medicine as a means of supplying the body with more of the needed gas. But recent studies indicate that administering oxygen might be doing less good than hoped—and may in fact be causing harm. No one is immune to the dangers of oxygen, but the people who might most suffer the ill effects are infants newly introduced to breathing, and those who are clinically dead.
In the latter half of 1998, a small clutch of researchers and students at the University of Texas embarked upon a groundbreaking experiment. Within a large outbuilding marked with a slapdash sign reading “Center for Quantum Electronics”, the team powered up a makeshift x-ray emitter and directed its radiation beam at an overturned disposable coffee cup. Atop the improvised styrofoam platform was a tiny smear of one of the most expensive materials on Earth: a variation of the chemical element hafnium known as Hf-178-m2.
The researchers’ contraption—cobbled together from a scavenged dental x-ray machine and an audio amplifier—bombarded the sample with radiation for several days as monitoring equipment quietly collected data. When the experiment ended and the measurements were scrutinized, the project leader, Dr. Carl B. Collins, declared unambiguous success. If his conclusions are accurate, Collins and his colleagues may have found the key to developing fist-sized bombs which can deliver destruction equivalent to a dozen tons of conventional explosives. Despite considerable skepticism from the scientific community, the US Department of Defense has since spent millions of dollars probing the physicist’s findings.
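The appeal of the isomer is easy to see in a back-of-envelope calculation. Hf-178-m2 stores roughly 2.45 MeV of excitation energy per nucleus (the commonly cited figure, treated here as an assumption), so a single gram holds on the order of a gigajoule if every nucleus could be triggered at once:

```python
# Back-of-envelope energy density of the Hf-178-m2 isomer, assuming
# complete, triggered release of the stored excitation energy.
AVOGADRO = 6.022e23        # atoms per mole
MEV_TO_J = 1.602e-13       # joules per MeV
MOLAR_MASS = 178.0         # grams per mole of hafnium-178
ISOMER_ENERGY_MEV = 2.45   # assumed excitation energy per nucleus
TNT_TON_J = 4.184e9        # joules per ton of TNT equivalent

atoms_per_gram = AVOGADRO / MOLAR_MASS
joules_per_gram = atoms_per_gram * ISOMER_ENERGY_MEV * MEV_TO_J

print(joules_per_gram)             # ~1.3e9 J: about a gigajoule per gram
print(joules_per_gram / TNT_TON_J) # ~0.3 tons of TNT equivalent per gram
```

By this estimate, a dozen tons of TNT equivalent corresponds to a few tens of grams of the isomer, which is the arithmetic behind the fist-sized-bomb claims.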
In June 2006 at the ATR Intelligent Robotics and Communication Laboratories in Keihanna, Japan, reporters and scientists gathered for the unveiling of a major new project by Dr. Hiroshi Ishiguro. Once everyone had arrived, an assistant pulled back a curtain to reveal…another Dr. Ishiguro? Certainly the second figure had a very strong resemblance to Dr. Ishiguro, wearing the same glasses and dressed in the same clothing. Seated in a chair, the duplicate was rocking one foot back and forth, blinking and adjusting itself. It looked around and then, in ordinary Japanese, introduced itself; it was named Geminoid HI-1.
For the reporters, up to that point virtually the only clue that Geminoid was an android had come from knowing that Ishiguro is a prominent roboticist. Ishiguro’s creation is more a puppet than an android, strictly speaking; Ishiguro speaks and acts through it via the Internet. As well as transmitting his voice, a motion-capture system allows Ishiguro to project the movements of his mouth and upper body onto Geminoid. The android itself is built of silicone and steel, and based on casts taken from Ishiguro’s body. Regular, small actions such as blinking are controlled by autonomous programs.
The strikingly realistic robot has since been met largely with wonder and admiration, which could mark success for Ishiguro in more ways than the obvious. Although Ishiguro’s earlier android projects were only a little less realistic, they tended to disturb viewers. This is consistent with a 1970 hypothesis by Dr. Masahiro Mori, another Japanese roboticist. Although not yet well-investigated by science, Mori’s “Uncanny Valley” theory holds that as a simulation of a human being’s appearance and/or motion becomes increasingly accurate, there comes a point at which observers’ interest quite suddenly turns into utter repulsion.