Matt Williams is a space journalist and science communicator for Universe Today and Interesting Engineering. He's also a science fiction author, podcaster (Stories from Space), and Taekwon-Do instructor who lives on Vancouver Island with his wife and family.
For decades, the human race has been deploying satellites into orbit. And in all that time, the method has remained the same: a satellite is placed aboard a booster rocket, which is then launched from a small number of fixed ground facilities, each with limited launch slots available. This process not only requires a month or more of preparation, it requires years of planning and costs millions of dollars.
On top of all that, fixed launch sites are limited in terms of the timing and direction of the orbits they can establish, and launches can be delayed by things as simple as bad weather. As such, DARPA has been working towards a new method of satellite deployment, one that eliminates ground-launched rockets altogether. Known as Airborne Launch Assist Space Access (ALASA), the concept could turn any airstrip into a spaceport and significantly reduce the cost of deploying satellites.
What ALASA comes down to is a cheap, expendable launch vehicle that can be mounted on the underside of an aircraft, flown to high altitude, and then released to carry its payload into low Earth orbit. By using the aircraft as a first stage, satellite deployment would become not only much cheaper, but much more flexible.
DARPA’s aim in creating ALASA was not only to ensure a three-fold decrease in launch costs, but also to create a system that could carry payloads of up to 45 kg (100 lbs) into orbit with as little as 24 hours’ notice. Currently, small satellite payloads cost roughly $66,000 a kilogram ($30,000 per pound) to launch, and payloads often must share a launcher. ALASA seeks to bring that down to a total of $1 million per launch, and to ensure that satellites can be deployed more precisely.
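As a quick sanity check, here is how those numbers fit together – a back-of-the-envelope sketch in Python, using only the figures quoted above:

```python
# Back-of-the-envelope check of the launch-cost figures quoted above.

payload_kg = 45           # ALASA's target payload mass
cost_per_kg = 66_000      # current cost per kilogram for small satellites ($)

current_cost = payload_kg * cost_per_kg
alasa_target = 1_000_000  # ALASA's per-launch cost goal ($)

print(f"Current cost for a {payload_kg} kg payload: ${current_cost:,}")
print(f"Reduction vs. ALASA target: {current_cost / alasa_target:.1f}x")
# Current cost for a 45 kg payload: $2,970,000
# Reduction vs. ALASA target: 3.0x
```

Which is where the "three-fold decrease" figure comes from.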
News of the agency’s progress was presented at the 18th Annual Commercial Space Transportation Conference (Feb 4th and 5th) in Washington, DC. Bradford Tousley, the director of DARPA’s Tactical Technology Office, reported on the program’s progress, stating that the agency had successfully completed phase one, which resulted in three viable system designs.
Phase two – which began in March of 2014, when DARPA awarded Boeing the prime contract for development – will see DARPA incorporate commercial-grade avionics and advanced composites into the design. Once this is complete, launch tests will gauge the vehicle’s ability to deploy satellites to their desired orbits.
“We’ve made good progress so far toward ALASA’s ambitious goal of propelling 100-pound satellites into low earth orbit (LEO) within 24 hours of call-up, all for less than $1 million per launch,” said Tousley in an official statement. “We’re moving ahead with rigorous testing of new technologies that we hope one day could enable revolutionary satellite launch systems that provide more affordable, routine and reliable access to space.”
These technologies include the use of a high-energy monopropellant, in which fuel and oxidizer are combined into a single liquid. This technology, which is still largely experimental, should further cut launch costs by simplifying engine design and reducing the cost of engine manufacture and operation.
Also, the ability to launch satellites from runways instead of fixed launch sites presents all kinds of advantages. At present, the Department of Defense (DoD) and other government agencies must schedule launches years in advance, because the number of slots and locations is very limited. This slow, expensive process is creating a bottleneck in the deployment of essential space assets, and is also inhibiting the pace of scientific research and commercial activity in space.
“ALASA seeks to overcome the limitations of current launch systems by streamlining design and manufacturing and leveraging the flexibility and re-usability of an air-launched system,” said Mitchell Burnside Clapp, DARPA program manager for ALASA. “We envision an alternative to ride-sharing for satellites that enables satellite owners to launch payloads from any location into orbits of their choosing, on schedules of their choosing, on a launch vehicle designed specifically for small payloads.”
The program began in earnest in 2011, with the agency conducting initial trade studies and market/business case analyses. In November of that same year, development began on both the system designs and the engine and propellant technologies. Phase two is planned to last late into 2015, with the agency conducting tests of both the vehicle and the monopropellant.
Pending a successful run, the program plan includes 12 orbital launches to test the integrated ALASA prototype system, the first of which is slated to take place in the first half of 2016. Depending on test results, the program would conduct up to 11 further demonstration launches through the summer of 2016. If all goes as planned, ALASA would provide convenient, cost-effective launch capabilities for the growing government and commercial markets for small satellites, currently the fastest-growing segment of the space launch industry.
The spring is a marvel of human engineering and creativity. For one, it comes in so many varieties – the compression spring, the extension spring, the torsion spring, the coil spring, etc. – all of which serve different and specific functions. These functions in turn allow for the creation of many man-made objects, most of which emerged as part of the Scientific Revolution during the late 17th and 18th centuries.
As an elastic object used to store mechanical energy, the spring has extensive applications, making possible such things as automotive suspension systems, pendulum clocks, hand shears, wind-up toys, watches, rat traps, digital micromirror devices, and of course, the Slinky.
Like so many other devices invented over the centuries, a basic understanding of the mechanics was required before the spring could be so widely used. In terms of springs, this means understanding the laws of elasticity, torsion and force that come into play – which together are known as Hooke’s Law.
Hooke’s Law is a principle of physics which states that the force needed to extend or compress a spring by some distance is proportional to that distance. The law is named after the 17th-century British physicist Robert Hooke, who sought to demonstrate the relationship between the forces applied to a spring and its elasticity.
He first stated the law in 1660 as a Latin anagram, and then published the solution in 1678 as ut tensio, sic vis – which, translated, means “as the extension, so the force” or “the extension is proportional to the force”.
This can be expressed mathematically as F = -kX, where F is the restoring force exerted by the spring (in response to strain or stress); X is the displacement of the spring from its equilibrium position, with the negative sign indicating that the force opposes the displacement; and k is the spring constant, which details just how stiff the spring is.
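For readers who prefer numbers to symbols, here is a minimal worked example in Python; the spring constant and displacement are arbitrary illustrative values, not taken from any particular spring:

```python
# A minimal numerical illustration of Hooke's Law, F = -kX.
# The spring constant and displacement below are arbitrary example values.

k = 200.0   # spring constant in newtons per meter (N/m)
x = 0.05    # displacement from equilibrium in meters (stretched 5 cm)

F = -k * x          # restoring force (N); negative = opposes the displacement
U = 0.5 * k * x**2  # elastic potential energy stored in the spring (J)

print(f"Restoring force: {F:.1f} N")   # -10.0 N
print(f"Stored energy: {U:.3f} J")     # 0.250 J
```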
Hooke’s law is the first classical example of an explanation of elasticity – which is the property of an object or material which causes it to be restored to its original shape after distortion. This ability to return to a normal shape after experiencing distortion can be referred to as a “restoring force”. Understood in terms of Hooke’s Law, this restoring force is generally proportional to the amount of “stretch” experienced.
In addition to governing the behavior of springs, Hooke’s Law also applies in many other situations where an elastic body is deformed. These can include anything from inflating a balloon and pulling on a rubber band to measuring how much wind force is needed to make a tall building bend and sway.
This law has had many important practical applications, one being the creation of the balance wheel, which made possible the mechanical clock, the portable timepiece, the spring scale, and the manometer (aka. the pressure gauge). Also, because it is a close approximation of the behavior of all solid bodies (as long as the forces of deformation are small enough), numerous branches of science and engineering are also indebted to Hooke for coming up with this law. These include the disciplines of seismology, molecular mechanics and acoustics.
However, like much of classical mechanics, Hooke’s Law only works within a limited frame of reference. Because no material can be compressed beyond a certain minimum size (or stretched beyond a maximum size) without some permanent deformation or change of state, it only applies so long as a limited amount of force or deformation is involved. In fact, many materials will noticeably deviate from Hooke’s law well before those elastic limits are reached.
Still, in its general form, Hooke’s Law is compatible with Newton’s laws of static equilibrium. Together, they make it possible to deduce the relationship between strain and stress for complex objects in terms of the intrinsic properties of the materials they are made of. For example, one can deduce that a homogeneous rod with a uniform cross section will behave like a simple spring when stretched, with a stiffness (k) directly proportional to its cross-sectional area and inversely proportional to its length.
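Here is that rod-as-spring result as a short numerical sketch; the steel properties below are typical textbook values, used purely for illustration:

```python
# Sketch of the rod-as-spring result described above: a uniform rod of
# cross-section A, length L and Young's modulus E behaves like a spring
# with stiffness k = E * A / L. The steel values here are illustrative.

E = 200e9   # Young's modulus of steel, ~200 GPa (Pa)
A = 1e-4    # cross-sectional area, 1 cm^2 (m^2)
L = 1.0     # length of the rod (m)

k = E * A / L
print(f"Effective spring constant: {k:.2e} N/m")   # 2.00e+07 N/m

# Doubling the cross-section doubles k; doubling the length halves it,
# exactly as the proportionalities above state.
```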
Another interesting thing about Hooke’s law is that it is a perfect example of the First Law of Thermodynamics. Any spring, when compressed or extended, almost perfectly conserves the energy applied to it. The only energy lost is due to natural friction.
In addition, Hooke’s law contains within it a wave-like periodic function. A spring released from a deformed position will return to its original position with proportional force, repeatedly, in a periodic function. The wavelength and frequency of this motion can also be observed and calculated.
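This periodic motion is what physicists call simple harmonic motion, and its frequency follows directly from the spring constant and the attached mass. A brief sketch, again with arbitrary example values:

```python
# The periodic motion described above is simple harmonic motion. For a
# mass m on a spring of constant k, the frequency follows directly from
# Hooke's Law: f = (1 / 2*pi) * sqrt(k / m). Example values are arbitrary.
import math

k = 200.0   # spring constant (N/m)
m = 0.5     # attached mass (kg)

f = math.sqrt(k / m) / (2 * math.pi)   # oscillation frequency (Hz)
T = 1 / f                              # period of one oscillation (s)

print(f"Frequency: {f:.2f} Hz, period: {T:.3f} s")   # 3.18 Hz, 0.314 s
```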
The modern theory of elasticity is a generalized variation on Hooke’s law, which states that the strain/deformation of an elastic object or material is proportional to the stress applied to it. However, since general stresses and strains may have multiple independent components, the “proportionality factor” may no longer be just a single real number.
A good example of this would be when dealing with wind, where the stress applied varies in intensity and direction. In cases like these, it is best to employ a linear map (aka. a tensor) that can be represented by a matrix of real numbers instead of a single value.
Remember a few weeks ago, when the weather on Mars was making the news? At the time, parts of the Red Planet were experiencing temperatures that were actually warmer than parts of the US. Naturally, there were quite a few skeptics. How could a planet with barely any atmosphere, one that is farther from the Sun, actually be warmer than Earth?
Well, according to recent data obtained by the Curiosity rover, temperatures in the Gale Crater reached a daytime high of -8 °C (17.6 °F) while cities like Chicago and Buffalo were experiencing lows of -16 to -20 °C (2 to -4 °F). As it turns out, this is due to a number of interesting quirks that allow for significant temperature variability on Mars, which at times allow some regions to get warmer than places here on Earth.
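Those comparisons rest on nothing more than the standard Celsius-to-Fahrenheit conversion, which is easy to verify (the Fahrenheit figures quoted in this article are rounded):

```python
# Quick check of the temperature conversions quoted above for Gale Crater
# and for the US cities mentioned.

def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

for c in (-8, -16, -20):
    print(f"{c} degC = {c_to_f(c):.1f} degF")
# -8 degC = 17.6 degF
# -16 degC = 3.2 degF
# -20 degC = -4.0 degF
```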
It’s no secret that this past winter, we here in North America experienced a bit of a record-breaking cold front. This was due to surges of cold air pushing in from Siberia and the North Pole into Canada, the Northern Plains, and the Midwest. The result was many cities experiencing January-like weather conditions in November, with several hitting record lows not seen in decades or longer.
For instance, the morning of November 18th, 2014, was the coldest since 1976, with a national average temperature of -7 °C (19.4 °F). That same day, Detroit tied a low it had set in 1880: -12 °C (11 °F).
Five days earlier, the city of Denver, Colorado experienced temperatures as cold as -26 °C (-14 °F), while Casper, Wyoming hit a record low of -33 °C (-27 °F). And then on November 20th, the city of Jacksonville, Florida broke a record it had set in 1873, with an uncharacteristic low of -4 °C (25 °F).
Hard to believe, isn’t it? Were it not for the constant need for bottled oxygen, more people might consider volunteering for Mars One‘s colonizing mission – which, btw, is still scheduled to depart in 2023, so there’s still plenty of time to register! However, these comparative figures conceal a few interesting facts about Mars.
For starters, Mars experiences an average surface temperature of about -55 °C (-67 °F), with temperatures at the poles reaching as low as a frigid -153 °C (-243.4 °F). Meanwhile, Earth’s average surface temperature is 7.2 °C (45 °F), a figure that likewise conceals a great deal of seasonal and geographic variability.
In desert regions near the equator, air temperatures can get as high as 57.7 °C (135.9 °F), with the hottest surface temperature ever recorded being 70.7 °C (159.3 °F), measured in the summertime in the desert region of Iran. At the south pole in Antarctica, temperatures can reach as low as -89.2 °C (-128.6 °F). Pretty darn cold, but still balmy compared to Mars’ polar ice caps!
Also, since its arrival in 2012, the Curiosity rover has been rolling around inside Gale Crater, which is located near the planet’s equator. Here, temperatures experience the most variability, and can reach as high as 20 °C (68 °F) during midday.
And last, but not least, Mars has a greater orbital eccentricity than any other planet in the Solar System, save for Mercury. This means that when the planet is at perihelion (closest to the Sun), it is roughly 0.28 AU (42.5 million km) closer than when it is at aphelion (farthest from the Sun). Mars has just recently passed perihelion, and over the course of its orbit, average surface temperatures can vary by up to an additional 20 °C.
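That 0.28 AU figure falls straight out of Mars’ published orbital elements (semi-major axis and eccentricity), as a quick sketch shows:

```python
# How the 0.28 AU figure above falls out of Mars' orbital elements.
# The semi-major axis and eccentricity are standard published values.

a = 1.524       # Mars' semi-major axis in AU
e = 0.0934      # Mars' orbital eccentricity
AU_KM = 149.6e6 # kilometers per astronomical unit

perihelion = a * (1 - e)           # closest approach to the Sun
aphelion = a * (1 + e)             # farthest distance from the Sun
delta = aphelion - perihelion      # equals 2 * a * e

print(f"Perihelion: {perihelion:.3f} AU, aphelion: {aphelion:.3f} AU")
print(f"Difference: {delta:.2f} AU ({delta * AU_KM / 1e6:.1f} million km)")
# Perihelion: 1.382 AU, aphelion: 1.666 AU
# Difference: 0.28 AU (42.6 million km)
```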
In short, Mars is still, and by far, the colder of the two planets. Not that it’s a competition or anything…
Ever since its discovery was announced earlier this year, the 3 km-wide ring structure discovered on the ice of Antarctica has been a source of significant interest and speculation. Initially, the discovery was seen as little more than a happy accident that occurred during a survey of East Antarctica by a WEGAS (West-East Gondwana Amalgamation and its Separation) team from the Alfred Wegener Institute.
However, after the team was interviewed by the Brussels-based International Polar Foundation, news of the find and its possible implications spread like wildfire. Initial theories for the possible origin of the ring suggested that it could be the result of a large meteorite impact. However, since the news broke, team leader Olaf Eisen has offered an alternative explanation: that the ring structure is in fact the result of other ice-shelf processes.
As Eisen indicated in a new entry on the AWI Ice Blog: “Doug MacAyeal, glaciologist from the University of Chicago, put forward the suggestion that the ring structure could be an ice doline.” Ice dolines are round sinkholes that are caused by a pool of melt water formed within the shelf ice. They are formed by the caving in of ice sheets or glaciers, much in the same way that sinkholes form over caves.
“If the melt water drains suddenly,” he wrote, “like it often does, the surface of the glacier is destabilised and does collapse, forming a round crater. Ice depressions like this have been observed in Greenland and on ice shelves of the Antarctic Peninsula since the 1930s.”
However, in glaciers, these cavities form much more rapidly, as meltwater created by temperature variations causes englacial lakes or water pockets to form, which then drain through the ice sheet. Such dolines have been observed for decades, particularly in Greenland and on the Antarctic Peninsula, where the ice melts during the summertime.
Initial analysis of satellite images appears to confirm this, indicating that the feature may have been present before the supposed impact took place around 25 years ago. In addition, relying on data from Google Maps and Google Earth, the WEGAS team observed that the 3 km ring is accompanied by other, smaller rings.
Such formations are inconsistent with meteorite impacts, which generally leave a single crater with a raised center. And as a general rule, a crater measures between ten and twenty times the size of the meteorite itself – in this case, that would imply a meteorite roughly 200 meters in diameter. This would mean that, had the ring structure been caused by a meteorite, it would have been the largest Antarctic meteorite impact on record.
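Applying that rule of thumb to a 3 km-wide ring gives the figure quoted above:

```python
# The crater-sizing rule of thumb applied to the 3 km ring: a crater is
# roughly 10 to 20 times wider than the impactor that made it.

crater_diameter_m = 3000
low = crater_diameter_m / 20   # smallest implied impactor
high = crater_diameter_m / 10  # largest implied impactor

print(f"Implied meteorite diameter: {low:.0f}-{high:.0f} m")
# Implied meteorite diameter: 150-300 m  (~200 m as a round mid-range figure)
```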
It is therefore understandable why the announcement of this ring structure triggered such speculation and interest. Meteorite impacts, especially record-breaking ones, are nothing if not a hot news item. Too bad this does not appear to be the case.
However, the possibility that the ring structure is the result of an ice doline raises a new host of interesting questions. For one, it would indicate that dolines are much more common in East Antarctica than previously thought. Ice dolines were first noticed in the regions of West Antarctica and the Antarctic Peninsula, where rapid warming is known to take place.
East Antarctica, by contrast, has long been understood to be the coldest, windiest and driest landmass on the planet. Knowing that such a place could produce rapid warming that would lead to the creation of a significant englacial lake would certainly force scientists to rethink what they know about this continent.
“To form an ice doline this size, it would need a considerable reservoir of melt water,” Eisen said. “Therefore we would need to ask, where did all this melt water come from? Which melting processes have caused such an amount of water and how does the melting fit into the climate pattern of East Antarctica?”
In the coming months, Eisen and the AWI scientists plan to thoroughly analyze the measurement data from the Polar 6 (Eisen’s mission), in the hopes of getting all the facts straight. Also, Jan Lenaerts – a Belgian glaciologist with AWI – is planning a land-based expedition to the site, which, due to the short Antarctic summer season and the preparation time needed, unfortunately won’t be taking place until the end of 2015.
But what is especially interesting, according to Eisen, is the rapid pace at which the debate surrounding the ring structure occurred. Within days of their announcement, the WEGAS team was astounded by the nature of the debate taking place in the media and on the internet (particularly Facebook), bringing together glaciologists from all around the world.
As Eisen put it in his blog entry, “For the WEGAS team, however, our experience of the last few days has shown that modern scientific discussion is not confined to the ivory towers of learned meetings, technical papers, and lecture halls, but that the public and social media play a tremendous role. For us, cut off from the modern world amongst the eternal ice, this new science seems to have happened at an almost breathtaking pace.”
This activity brought the discussion about the nature of the ring structure forward by several weeks, he claims, focusing attention on the true causes of the surprise discovery itself and comparing and contrasting possible theories.
Planetary rings are an interesting phenomenon. The mere mention of these two words tends to conjure up images of Saturn, with its large and colorful system of rings that form an orbiting disk. But in fact, several other planets in our Solar System have rings. It’s just that, unlike Saturn’s, their systems are less visible, and perhaps less beautiful to behold.
Thanks to exploration efforts mounted in the past few decades, which have seen space probes dispatched to the outer Solar System, we have come to understand that all the gas giants – Jupiter, Saturn, Uranus and Neptune – have their own ring systems. And that’s not all! In fact, ring systems may be more common than previously thought…
Jupiter’s Rings:
It was not until 1979 that the rings of Jupiter were discovered, when the Voyager 1 space probe conducted a flyby of the planet. They were also thoroughly investigated in the 1990s by the Galileo orbiter. Because it is composed mainly of dust, the ring system is faint and can only be observed by the most powerful telescopes, or up close by orbiting spacecraft. However, during the past twenty-three years, it has been observed from Earth numerous times, as well as by the Hubble Space Telescope.
The ring system has four main components: a thick inner torus of particles known as the “halo ring”; a relatively bright, but extremely thin “main ring”; and two wide, thick, and faint outer “gossamer rings”. These outer rings are composed of material from the moons Amalthea and Thebe and are named after these moons (i.e. the “Amalthea Ring” and “Thebe Ring”).
The main and halo rings consist of dust ejected from the moons Metis, Adrastea, and other unobserved parent bodies as the result of high-velocity impacts. Scientists believe that a ring could even exist along the orbit of the moon Himalia, which could have been created when another small moon crashed into it and ejected material from its surface.
Saturn’s Rings:
The rings of Saturn, meanwhile, have been known for centuries. Although Galileo Galilei became the first person to observe the rings of Saturn in 1610, he did not have a powerful enough telescope to discern their true nature. It was not until 1655 that Christiaan Huygens, the Dutch mathematician and scientist, became the first person to describe them as a disk surrounding the planet.
Subsequent observations, which included spectroscopic studies by the late 19th century, confirmed that they are composed of smaller rings, each one made up of tiny particles orbiting Saturn. These particles range in size from micrometers to meters, forming clumps that orbit the planet, and are composed almost entirely of water ice contaminated with dust and chemicals.
In total, Saturn has a system of 12 rings with 2 divisions – the most extensive ring system of any planet in our Solar System. The rings have numerous gaps where particle density drops sharply. In some cases, this is due to Saturn’s moons being embedded within them, which causes destabilizing orbital resonances to occur.
However, within the Titan Ringlet and the G Ring, orbital resonance with Saturn’s moons has a stabilizing influence. Well beyond the main rings is the Phoebe ring, which is tilted at an angle of 27 degrees to the other rings and, like Phoebe, orbits in retrograde fashion.
Uranus’ Rings:
The rings of Uranus are thought to be relatively young, at not more than 600 million years old. They are believed to have originated from the collisional fragmentation of a number of moons that once existed around the planet. After colliding, the moons probably broke up into numerous particles, which survived as narrow and optically dense rings only in strictly confined zones of maximum stability.
Uranus has 13 rings that have been observed so far. They are all very faint, the majority being opaque and only a few kilometers wide. The ring system consists mostly of large bodies 0.2 to 20 m in diameter. A few rings are optically thin and made of small dust particles, which makes them difficult to observe using Earth-based telescopes.
Neptune’s Rings:
The rings of Neptune were not discovered until 1989, when the Voyager 2 space probe conducted a flyby of the planet. Six rings have been observed in the system, and they are best described as faint and tenuous. The rings are very dark, and are likely composed of organic compounds processed by radiation, similar to those found in the rings of Uranus. Much like those of Uranus and Saturn, four of Neptune’s moons orbit within the ring system.
Other Bodies:
Back in 2008, it was suggested that magnetic effects around the Saturnian moon Rhea may indicate that it has its own ring system. However, a subsequent study of observations obtained by the Cassini mission indicated that some other mechanism was responsible for the magnetic effects.
Years before the New Horizons probe visited the system, astronomers speculated that Pluto might also have a ring system. However, after conducting its historic flyby of the system in July of 2015, the New Horizons probe did not find any evidence of one. While the dwarf planet has many satellites aside from its largest (Charon), debris around the planet has not coalesced into rings, as was theorized.
The minor planet Chariklo – an asteroid that orbits the Sun between Saturn and Uranus – also has two rings of its own, perhaps due to a collision that left a chain of debris in orbit around it. The announcement of these rings was made on March 26th of 2014, and was based on observations made during a stellar occultation on June 3rd, 2013.
This was followed by findings made in 2015 indicating that 2060 Chiron – another major Centaur – could have a ring of its own. This led to further speculation that there might be many minor planets in our Solar System that have a system of rings.
In short, four planets in our Solar System have intricate ring systems, as well as the minor planet Chariklo, and perhaps even many other smaller objects. In this sense, ring systems appear to be a lot more common in our Solar System than previously thought.
Planet Earth boasts some very long rivers, all of which have long and honored histories. The Amazon, Mississippi, Euphrates, Yangtze, and Nile have all played huge roles in the rise and evolution of human societies. Rivers like the Danube, Seine, Volga and Thames are intrinsic to the character of some of our most major cities.
But when it comes to the title of longest river, the Nile takes top billing. At 6,853 km (4,258 miles) long, and draining an area of 3,349,000 square kilometers, it is the longest river in the world – and even the longest river in the Solar System. It crosses international boundaries, its water is shared by 11 African nations, and it is responsible for one of the greatest and longest-lasting civilizations in the world.
Officially, the Nile begins at Lake Victoria – Africa’s largest Great Lake, which occupies the border region between Tanzania, Uganda and Kenya – and ends in a large delta, emptying into the Mediterranean Sea. However, the great river also has many tributaries, the greatest of which are the Blue Nile and White Nile rivers.
The White Nile, the longer of the two tributaries, originates in the Great Lakes region of Central Africa (a group that includes Lakes Victoria, Edward, and Tanganyika). The Blue Nile, which supplies the majority of the Nile’s water and fertile silt, starts at Lake Tana in Ethiopia, and flows north-west to where it meets the White Nile near Khartoum, Sudan.
The northern section of the Nile flows entirely through the Sudanese desert to Egypt. Historically speaking, most of the population and cities of these two countries were built along the river valley, a tradition which continues into the modern age. In addition to the capital cities of Juba, Khartoum, and Cairo, nearly all the cultural and historical sites of Ancient Egypt are to be found along the riverbanks.
The Nile was a much longer river in ancient times. Prior to the Miocene era (ca. 23 to 5 million years ago), Lake Tanganyika drained northwards into the Albert Nile, making the Nile about 1,400 km longer. That course was blocked when volcanic activity formed the bulk of the Virunga Mountains.
Between 8000 and 1000 B.C.E., there was also a third tributary called the Yellow Nile that connected the highlands of eastern Chad to the Nile River Valley. Its remains are known as the Wadi Howar, a riverbed that passes through the northern border of Chad and meets the Nile near the southern point of the Great Bend – the region that lies between Khartoum and Aswan in southern Egypt where the river protrudes east and west before traveling north again.
The Nile, as it exists today, is thought to be the fifth river that has flowed from the Ethiopian Highlands. Some form of the Nile is believed to have existed for 25 million years. Satellite images have been used to confirm this, identifying dry watercourses to the west of the Nile that are believed to have been the Eonile.
This “ancestral Nile” is believed to be what flowed in the region during the later Miocene, transporting sedimentary deposits to the Mediterranean Sea. During the late-Miocene Era, the Mediterranean Sea became a closed basin and evaporated to the point of being empty or nearly so. At this point, the Nile cut a new course down to a base level that was several hundred meters below sea level.
This created a very long and deep canyon – the bed of the Eonile – which was gradually filled with sediment, at some point raising the riverbed sufficiently for the river to overflow westward into a depression, creating Lake Moeris southwest of Cairo.
Because Greek and Roman explorers were unable to penetrate the wetlands of South Sudan, the headwaters of the Nile remained unknown to them. Hence, it was not until 1858, when John Speke sighted Lake Victoria, that the source of the Nile became known to Europeans. He reached its southern shore while traveling with Richard Burton on an expedition to explore central Africa and locate the African Great Lakes.
Believing he had found the source of the Nile, he named the lake after Queen Victoria, the then-monarch of the United Kingdom. Upon learning of this, Burton was outraged that Speke claimed to have found the true source of the Nile and a scientific dispute ensued.
This in turn triggered new waves of exploration that sent David Livingstone into the area. However, he pushed too far to the west and encountered the Congo River instead. It was not until the Welsh-American explorer Henry Morton Stanley circumnavigated Lake Victoria during an expedition that ran from 1874 to 1877 that Speke’s claim to have found the source of the Nile was confirmed.
The Nile became a major transportation route during the European colonial period. Many steamers used the waterway to travel through Egypt and south to the Sudan during the 19th century. With the completion of the Suez Canal and the British takeover of Egypt in the 1870s, steamer navigation of the river became a regular occurrence and continued well into the 1960s and the independence of both nations.
Today, the Nile River remains a central feature of life in Egypt and Sudan. Its waters are used for irrigation and farming by all the nations it passes through, and its importance to the rise and endurance of civilization in the region cannot be overstated. In fact, the sheer longevity of Egypt’s many ruling dynasties is often attributed by historians to the river’s periodic floods, which delivered fresh sediment and nutrients to the fields along its banks and in the delta. Thanks to these flows, it is believed, communities along the Nile never experienced the collapse and disintegration that other cultures did.
In the past four decades, NASA and other space agencies from around the world have accomplished some amazing feats. Together, they have sent manned missions to the Moon, explored Mars, mapped Venus and Mercury, conducted surveys, and captured breathtaking images of the Outer Solar System. However, looking ahead to the next generation of exploration and the more-distant frontiers that remain to be explored, it is clear that new ideas need to be put forward on how to quickly and efficiently reach those destinations.
Basically, this means finding ways to power rockets that are more fuel- and cost-effective while still providing the necessary power to get crews, rovers, and orbiters to their far-flung destinations. In this respect, NASA has been taking a good look at nuclear fission as a possible means of propulsion.
In fact, according to a presentation made by Doctor Michael G. Houts of the NASA Marshall Space Flight Center back in October of 2014, nuclear power and propulsion have the potential to be “game-changing technologies for space exploration.”
As the Marshall Space Flight Center’s manager of nuclear thermal research, Dr. Houts is well-versed in the benefits it has to offer space exploration. According to the presentation he and fellow staffers made, a fission reactor can be used in a rocket design to create Nuclear Thermal Propulsion (NTP). In an NTP rocket, uranium fission reactions are used to heat liquid hydrogen inside a reactor, turning it into ionized hydrogen gas (plasma), which is then channeled through a rocket nozzle to generate thrust.
A second possible method, known as Nuclear Electric Propulsion (NEP), involves the same basic reactor converting its heat and energy into electrical energy which then powers an electrical engine. In both cases, the rocket relies on nuclear fission to generate propulsion rather than chemical propellants, which has been the mainstay of NASA and all other space agencies to date.
Compared to this traditional form of propulsion, both NTP and NEP offer a number of advantages. The first and most obvious is the virtually unlimited energy density fission fuel offers compared to chemical rocket fuel. At steady state, a fission reactor produces an average of 2.5 neutrons per reaction, but only a single neutron is needed to trigger a subsequent fission, sustaining the chain reaction that provides constant power.
In fact, according to the report, an NTP rocket could generate 200 kWt of power using a single kilogram of uranium for a period of 13 years – which works out to a fuel efficiency rating of about 45 grams per 1000 MW-hr.
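Those two numbers are consistent with each other, as a little arithmetic shows:

```python
# Checking the fuel-efficiency figure quoted above: 1 kg of uranium
# producing 200 kWt continuously for 13 years.

power_mw = 0.2                 # 200 kWt expressed in megawatts
hours = 13 * 365.25 * 24       # 13 years in hours
energy_mwh = power_mw * hours  # total thermal energy produced, in MW-hr

grams_per_1000_mwh = 1000 / (energy_mwh / 1000)  # 1000 g of fuel consumed
print(f"Total energy: {energy_mwh:,.0f} MW-hr")
print(f"Fuel burned: {grams_per_1000_mwh:.0f} g per 1000 MW-hr")
# Total energy: 22,792 MW-hr
# Fuel burned: 44 g per 1000 MW-hr  (close to the ~45 g figure quoted)
```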
In addition, a nuclear-powered engine could also provide superior thrust relative to the amount of propellant used. This is what is known as specific impulse, which is measured either in kilonewton-seconds per kilogram (kN·s/kg) or in seconds – the length of time for which a kilogram of propellant could sustain a kilogram-force of thrust. A higher specific impulse would cut the total amount of propellant needed, thus cutting launch weight and the cost of individual missions. And a more powerful nuclear engine would mean reduced trip times, another cost-cutting measure.
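The payoff of higher specific impulse follows from the Tsiolkovsky rocket equation, which relates specific impulse to the ratio of a rocket's fueled and empty mass. The delta-v budget and engine values below are illustrative assumptions, not figures from the report, with the NTP value taken from the solid-core range discussed below:

```python
# Why higher specific impulse cuts propellant mass: the Tsiolkovsky rocket
# equation, m0/mf = exp(dv / (Isp * g0)). The delta-v and Isp values here
# are illustrative, not mission figures from the article.
import math

g0 = 9.80665   # standard gravity (m/s^2)
dv = 4000.0    # example delta-v budget for a deep-space burn (m/s)

for label, isp in (("chemical LH2/LOX", 450), ("solid-core NTP", 900)):
    mass_ratio = math.exp(dv / (isp * g0))  # fueled mass / empty mass
    print(f"{label}: Isp = {isp} s, mass ratio m0/mf = {mass_ratio:.2f}")
# chemical LH2/LOX: Isp = 450 s, mass ratio m0/mf = 2.47
# solid-core NTP: Isp = 900 s, mass ratio m0/mf = 1.57
```

Doubling the specific impulse roughly halves the propellant fraction needed for the same burn, which is where the launch-weight savings come from.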
Although no nuclear-thermal engines have ever flown, several design concepts have been built and tested over the past few decades, and numerous concepts have been proposed. These have ranged from the traditional solid-core design to more advanced and efficient concepts that rely on either a liquid or a gas core.
In the case of a solid-core design – the only type that has ever been built – a reactor made from materials with a very high melting point houses a collection of solid uranium rods that undergo controlled fission. The hydrogen propellant is stored in a separate tank and then passed through tubes around the reactor, where it gains heat before being channeled through the nozzles to achieve thrust.
Using hydrogen propellant, a solid-core design typically delivers specific impulses on the order of 850 to 1000 seconds, which is about twice that of liquid hydrogen-oxygen designs – i.e. the Space Shuttle’s main engine.
However, a significant drawback arises from the fact that nuclear reactions in a solid-core model can create much higher temperatures than conventional materials can withstand. The cracking of fuel coatings can also result from large temperature variations along the length of the rods, which taken together, sacrifices much of the engine’s potential for performance.
Many of these problems were addressed with the liquid-core design, where nuclear fuel is mixed into the liquid hydrogen and the fission reaction takes place in the liquid mixture itself. This design can operate at temperatures above the melting point of the nuclear fuel, thanks to the fact that the container wall is actively cooled by the liquid hydrogen. It is also expected to deliver a specific impulse of 1300 to 1500 seconds (13 to 15 kN·s/kg).
However, compared to the solid-core design, engines of this type are much more complicated, and therefore more expensive and difficult to build. Part of the problem has to do with the time it takes to achieve a fission reaction, which is significantly longer than the time it takes to heat the hydrogen fuel. Therefore, engines of this kind require methods to trap the fuel inside the engine while allowing the heated plasma to exit through the nozzle.
The final classification is the gas-core engine, a modification of the liquid-core design that uses rapid circulation to create a ring-shaped pocket of gaseous uranium fuel in the middle of the reactor that is surrounded by liquid hydrogen. In this case, the hydrogen fuel does not touch the reactor wall, so temperatures can be kept below the melting point of the materials used.
An engine of this kind could allow for specific impulses of 3000 to 5000 seconds (30 to 50 kN·s/kg). But in an “open-cycle” design of this kind, the losses of nuclear fuel would be difficult to control. An attempt to remedy this was drafted with the “closed cycle design” – aka. the “nuclear lightbulb” engine – where the gaseous nuclear fuel is contained in a series of super-high-temperature quartz containers.
Although this design is less efficient than the open-cycle design and has more in common with the solid-core concept, the limiting factor here is the critical temperature of quartz and not that of the fuel stack. What’s more, the closed-cycle design is expected to still deliver a respectable specific impulse of about 1500–2000 seconds (15–20 kN·s/kg).
However, as Houts indicated, one of the greatest assets nuclear fission has going for it is the long history of service it has enjoyed here on Earth. In addition to commercial reactors providing electricity all over the world, naval vessels (such as aircraft carriers and submarines) have made good use of slow-fission reactors for decades.
Also, NASA has been relying on nuclear reactors to power unmanned craft and rovers for over four decades, mainly in the form of Radioisotope Thermoelectric Generators (RTGs) and Radioisotope Heater Units (RHUs). In the case of the former, heat is generated by the slow decay of plutonium-238 (Pu-238) and then converted into electricity. In the case of the latter, the heat itself is used to keep components and ship systems warm and running.
These types of generators have been used to power and maintain everything from the Apollo rockets to the Curiosity rover, as well as countless satellites, orbiters and robots in between. Since its inception, NASA has launched a total of 44 missions that used either RTGs or RHUs, while the Soviet space program launched a comparatively solid 33.
Nuclear engines were also considered for a time as a replacement for the J-2, a liquid-fuel cryogenic rocket engine used on the S-II and S-IVB stages on the Saturn V and Saturn I rockets. But despite there being numerous versions of solid-core reactors produced and tested in the past, none were ever put into service for an actual space flight.
Between 1959 and 1972, the United States tested twenty different sizes and designs during Project Rover and NASA’s Nuclear Engine for Rocket Vehicle Application (NERVA) program. The most powerful engine ever tested was the Phoebus 2a, which operated for a total of 32 minutes and maintained power levels of more than 4.0 million kilowatts for 12 minutes.
But looking to the future, Houts and the Marshall Space Flight Center see great potential and many possible applications for this technology. Examples cited in the report include long-range satellites that could explore the Outer Solar System and Kuiper Belt; fast, efficient transportation for manned missions throughout the Solar System; and even the provision of power for settlements on the Moon and Mars someday.
One possibility is to equip NASA’s latest flagship – the Space Launch System (SLS) – with chemically-powered lower-stage engines and a nuclear-thermal engine on its upper stage. The nuclear engine would remain “cold” until the rocket had achieved orbit, at which point the upper stage would be deployed and the reactor would be activated to generate thrust.
This concept for a “bimodal” rocket – one which relies on chemical propellants to achieve orbit and a nuclear-thermal engine for propulsion in space – could become the mainstay of NASA and other space agencies in the coming years. According to Houts and others at Marshall, the dramatic increase in efficiency offered by such rockets could also facilitate NASA’s plans to explore Mars by allowing for the reliable delivery of high-mass automated payloads in advance of manned missions.
These same rockets could then be retooled for speed (instead of mass) and used to transport the astronauts themselves to Mars in roughly half the time it would take for a conventional rocket to make the trip. This would not only save time and cut mission costs but also ensure that the astronauts were exposed to less harmful solar radiation during the course of their flight.
To see this vision become reality, Dr. Houts and other researchers from the Marshall Space Center’s Propulsion Research and Development Laboratory are currently conducting NTP-related tests at the Nuclear Thermal Rocket Element Environmental Simulator (or “NTREES”) in Huntsville, Alabama.
Here, they have spent the past few years analyzing the properties of various nuclear fuels in a simulated thermal environment, hoping to learn more about how they might affect engine performance and longevity when it comes to a nuclear-thermal rocket engine.
These tests are slated to run until June 2015 and are expected to lay the groundwork for large-scale ground tests and eventual full-scale testing in flight. The ultimate goal of all of this is to ensure that a manned mission to Mars takes place by the 2030s and to provide NASA flight engineers and mission planners with all the information they need to see it through.
But of course, it is also likely to have its share of applications when it comes to future Lunar missions, sending crews to study Near-Earth Objects (NEOs), and sending craft to the Jovian moons and other locations in the outer Solar System. As the report shows, NTP craft can be easily modified using modular components to perform everything from Lunar cargo landings to crewed missions to surveying Near-Earth Asteroids (NEAs).
The Universe is a big place, and space exploration is still very much in its infancy. But if we intend to keep exploring it and reaping the rewards that such endeavors have to offer, our methods will have to mature. NTP is merely one proposed possibility. But unlike Nuclear Pulse Propulsion, the Daedalus concept, anti-matter engines, or the Alcubierre Warp Drive, a rocket that runs on nuclear fission is feasible, practical, and possible within the near future.
Nuclear thermal research at the Marshall Center is part of NASA’s Advanced Exploration Systems (AES) Division, managed by the Human Exploration and Operations Mission Directorate and including participation by the U.S. Department of Energy.
Our Solar System is a pretty picturesque place. Between the Sun, the Moon, and the Inner and Outer Solar System, there is no shortage of wondrous things to behold. But arguably, it is the eight planets that make up our Solar System that are the most interesting and photogenic. With their spherical shapes, surface patterns and curious geological formations, Earth’s neighbors have been a subject of immense fascination for astronomers and scientists for millennia.
And in the age of modern astronomy, which goes beyond terrestrial telescopes to space telescopes, orbiters and satellites, there is no shortage of pictures of the planets. But here are a few of the better ones, taken with high-resolution cameras on board spacecraft that managed to capture their intricate, picturesque, and rugged beauty.
Named after the winged messenger of the gods, Mercury is the closest planet to our Sun. It’s also the smallest (now that Pluto is no longer considered a planet). At 4,879 km in diameter, it is actually smaller than the Jovian moon Ganymede and Saturn’s largest moon, Titan.
Because of its slow rotation and tenuous atmosphere, the planet experiences extreme variations in temperature – ranging from -184 °C on the night side to 465 °C on the side facing the Sun. Because of this, its surface is barren and sun-scorched, as seen in the image above provided by the MESSENGER spacecraft.
Venus is the second planet from our Sun, and Earth’s closest neighboring planet. It also has the dubious honor of being the hottest planet in the Solar System. While farther away from the Sun than Mercury, it has a thick atmosphere made up primarily of carbon dioxide, sulfur dioxide and nitrogen gas. This causes the Sun’s heat to become trapped, pushing average temperatures up to as high as 460 °C. Due to the presence of sulfuric and carbonic compounds, the atmosphere also produces rainstorms of sulfuric acid.
Because of its thick atmosphere, scientists were unable to examine the surface of the planet until the 1970s and the development of radar imaging. Since that time, numerous ground-based and orbital imaging surveys have produced information on the surface, particularly by the Magellan spacecraft (1990-94). The pictures sent back by Magellan revealed a harsh landscape dominated by lava flows and volcanoes, further adding to Venus’ inhospitable reputation.
Earth is the third planet from the Sun, the densest planet in our Solar System, and the fifth largest. Not only is 70% of the Earth’s surface covered with water, but the planet is also in the perfect spot – in the center of the hypothetical habitable zone – to support life. Its atmosphere is primarily composed of nitrogen and oxygen, and its average surface temperature is 7.2 °C. Hence why we call it home.
Being that it is our home, observing the planet as a whole was impossible prior to the space age. However, images taken by numerous satellites and spacecraft – such as the Apollo 11 mission, shown above – have been some of the most breathtaking and iconic in history.
Mars is the fourth planet from our Sun and Earth’s second closest neighbor. Roughly half the size of Earth, Mars is much colder than our planet, but experiences quite a bit of variability, with temperatures ranging from 20 °C at the equator during midday to as low as -153 °C at the poles. This is due in part to Mars’ distance from the Sun, but also to its thin atmosphere, which is not able to retain heat.
Mars is famous for its red color and the speculation it has sparked about life on other planets. This red color is caused by iron oxide – rust – which is plentiful on the planet’s surface. Its surface features, which include long “canals”, have fueled speculation that the planet was once home to a civilization.
Observations made by satellite flybys in the 1960s (by the Mariner 3 and 4 spacecraft) dispelled this notion, but scientists still believe that warm, flowing water once existed on the surface, as well as organic molecules. Since that time, a small army of spacecraft and rovers have taken to the Martian surface, producing some of the most detailed and beautiful photos of the planet to date.
Jupiter, the closest gas giant to our Sun, is also the largest planet in the Solar System. Measuring nearly 70,000 km in radius, it is 317 times more massive than Earth and 2.5 times more massive than all the other planets in our Solar System combined. It also has the most moons of any planet in the Solar System, with 67 confirmed satellites as of 2012.
Despite its size, Jupiter is not very dense. The planet is composed almost entirely of gas, with what astronomers believe is a core of metallic hydrogen. Yet the sheer amount of pressure, radiation, gravitational pull and storm activity of this planet make it the undisputed titan of our Solar System.
Jupiter has been imaged by ground-based telescopes, space telescopes, and orbiter spacecraft. The best ground-based picture was taken in 2008 by the ESO’s Very Large Telescope (VLT) using its Multi-Conjugate Adaptive Optics Demonstrator (MAD) instrument. However, the greatest images captured of the Jovian giant were taken during flybys, in this case by the Galileo and Cassini missions.
Saturn, the second gas giant closest to our Sun, is best known for its ring system – which is composed of rocks, dust, and other materials. All gas giants have their own system of rings, but Saturn’s system is the most visible and photogenic. The planet is also the second largest in our Solar System, and is second only to Jupiter in terms of moons (62 confirmed).
Much like Jupiter, numerous pictures have been taken of the planet by a combination of ground-based telescopes, space telescopes and orbital spacecraft. These include the Pioneer, Voyager, and most recently, Cassini spacecraft.
Another gas giant, Uranus is the seventh planet from our Sun and the third largest planet in our Solar System. The planet contains roughly 14.5 times the mass of the Earth, but it has a low density. Scientists believe it is composed of a rocky core that is surrounded by an icy mantle made up of water, ammonia and methane ice, which is itself surrounded by an outer gaseous atmosphere of hydrogen and helium.
It is for this reason that Uranus is often referred to as an “ice giant”. The concentrations of methane are also what give Uranus its blue color. Though telescopes have captured images of the planet, only one spacecraft has ever taken pictures of Uranus up close. This was the Voyager 2 craft, which performed a flyby of the planet in 1986.
Neptune is the eighth planet of our Solar System, and the farthest from the Sun. Like Uranus, it is both a gas giant and an ice giant, composed of a solid core surrounded by methane and ammonia ices, all beneath an atmosphere containing large amounts of methane gas. Once again, this methane is what gives the planet its blue color. It is also the smallest gas giant in the outer Solar System, and the fourth largest planet.
All of the gas giants have intense storms, but Neptune has the fastest winds of any planet in our Solar System, reaching speeds of up to 2,100 kilometers per hour. Its strongest recorded storms were the Great Dark Spot and the Small Dark Spot, both seen in 1989. In both cases, these storms and the planet itself were observed by the Voyager 2 spacecraft, the only probe to capture images of the planet.
Given that our Solar System sits inside the Milky Way Galaxy, getting a clear picture of what it looks like as a whole can be quite tricky. In fact, it was not until 1852 that astronomer Stephen Alexander first postulated that the galaxy was spiral in shape. And since that time, numerous discoveries have come along that have altered how we picture it.
For decades astronomers have thought the Milky Way consists of four arms — made up of stars and clouds of star-forming gas — that extend outwards in a spiral fashion. Then in 2008, data from the Spitzer Space Telescope seemed to indicate that our Milky Way has just two arms, but a larger central bar. But now, according to a team of astronomers from China, one of our galaxy’s arms may stretch farther than previously thought, reaching all the way around the galaxy.
This arm is known as Scutum–Centaurus, which emanates from one end of the Milky Way’s central bar, passes between us and the Galactic Center, and extends to the other side of the galaxy. For many decades, it was believed that this was where the arm terminated.
However, back in 2011, astronomers Thomas Dame and Patrick Thaddeus from the Harvard–Smithsonian Center for Astrophysics spotted what appeared to be an extension of this arm on the other side of the galaxy.
But according to astronomer Yan Sun and colleagues from the Purple Mountain Observatory in Nanjing, China, the Scutum–Centaurus Arm may extend even farther than that. Using a novel approach to study gas clouds located between 46,000 and 67,000 light-years from the center of our galaxy, they detected 48 new clouds of interstellar gas, as well as 24 previously-observed ones.
For the sake of their study, Sun and his colleagues relied on radio telescope data provided by the Milky Way Imaging Scroll Painting project, which scans interstellar dust clouds for radio waves emitted by carbon monoxide gas. Next to molecular hydrogen, this gas is the most abundant molecule to be found in interstellar space – and it is much easier for radio telescopes to detect.
Combining this information with data obtained by the Canadian Galactic Plane Survey (which looks for hydrogen gas), they concluded that these 72 clouds line up along a spiral-arm segment that is 30,000 light-years in length. What’s more, they claim in their report that: “The new arm appears to be the extension of the distant arm recently discovered by Dame & Thaddeus (2011) as well as the Scutum-Centaurus Arm into the outer second quadrant.”
This would mean the arm is not only the single largest in our galaxy, but is also the only one to effectively reach 360° around the Milky Way. Such a find would be unprecedented given the fact that nothing of the sort has been observed with other spiral galaxies in our local universe.
Thomas Dame, one of the astronomers who discovered the possible extension of the Scutum-Centaurus Arm in 2011, was quoted by Scientific American as saying: “It’s rare. I bet that you would have to look through dozens of face-on spiral galaxy images to find one where you could convince yourself you could track one arm 360 degrees around.”
Naturally, the prospect presents some problems. For one, there is an apparent gap between the segment that Dame and Thaddeus discovered in 2011 and the start of the one discovered by the Chinese team – a 40,000 light-year gap to be exact. This could mean that the clouds that Sun and his colleagues discovered may not be part of the Scutum-Centaurus Arm after all, but an entirely new spiral-arm segment.
If this is true, then it would mean that our galaxy has several “outer” arm segments. On the other hand, additional research may close that gap (so to speak) and prove that the Milky Way is as beautiful seen from afar as any of the spirals we so often observe from the comfort of our own Solar System.
It’s a cornerstone of modern physics that nothing in the Universe is faster than the speed of light (c). However, Einstein’s theory of special relativity does allow for instances where certain influences appear to travel faster than light without violating causality. These are what are known as “photonic booms,” a concept similar to a sonic boom, where spots of light are made to move faster than c.
And according to a new study by Robert Nemiroff, a physics professor at Michigan Technological University (and co-creator of Astronomy Picture of the Day), this phenomenon may help shine a light (no pun intended!) on the cosmos, helping us to map it with greater efficiency.
Consider the following scenario: if a laser is swept across a distant object – in this case, the Moon – the spot of laser light will move across the object at a speed greater than c. Basically, the spot traverses both the surface and depth of the object faster than light itself can travel, even though the individual photons that form it never exceed c.
The resulting “photonic boom” occurs in the form of a flash, which is seen by the observer when the speed of the spot drops from superluminal to below the speed of light. It is made possible by the fact that the spots contain no mass, and thereby do not violate the fundamental laws of Special Relativity.
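It is easy to estimate how fast such a sweep has to be: the spot’s speed across the target is the angular sweep rate times the distance, so for the Moon the threshold works out as follows (using the mean Earth-Moon distance):

```python
# How fast must a laser be swept for its spot on the Moon to exceed c?
# The spot's speed is (angular sweep rate) x (distance), so the threshold
# sweep rate is c / d, where d is the mean Earth-Moon distance.
import math

c = 2.998e8   # speed of light (m/s)
d = 3.844e8   # mean Earth-Moon distance (m)

omega_min = c / d   # threshold angular sweep rate (rad/s)
print(f"Threshold: {omega_min:.2f} rad/s "
      f"(~{math.degrees(omega_min):.0f} degrees per second)")
# Threshold: 0.78 rad/s (~45 degrees per second)
```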
Another example occurs regularly in nature, where beams of light from a pulsar sweep across clouds of space-borne dust, creating a spherical shell of light and radiation that expands faster than c when it intersects a surface. Much the same is true of fast-moving shadows, whose apparent speed can be much faster still and is not restricted to the speed of light, particularly if the surface they fall on is angled.
At a meeting of the American Astronomical Society in Seattle, Washington earlier this month, Nemiroff shared how these effects could be used to study the universe.
“Photonic booms happen around us quite frequently,” said Nemiroff in a press release, “but they are always too brief to notice. Out in the cosmos they last long enough to notice — but nobody has thought to look for them!”
Superluminal sweeps, he claims, could be used to reveal information on the 3-dimensional geometry and distance of stellar bodies like nearby planets, passing asteroids, and distant objects illuminated by pulsars. The key is finding ways to generate them or observe them accurately.
For the purposes of his study, Nemiroff considered two example scenarios. The first involved a beam being swept across a scattering spherical object – i.e. spots of light moving across the Moon and pulsar companions. In the second, the beam is swept across a “scattering planar wall or linear filament” – in this case, Hubble’s Variable Nebula.
In the former case, asteroids could be mapped out in detail using a laser beam and a telescope equipped with a high-speed camera. The laser could be swept across the surface thousands of times a second and the flashes recorded. In the latter, shadows are observed passing between the bright star R Monocerotis and reflecting dust, at speeds so great that they create photonic booms that are visible for days or weeks.
This sort of imaging technique is fundamentally different from direct observations (which rely on lens photography), radar, and conventional lidar. It is also distinct from Cherenkov radiation – electromagnetic radiation emitted when charged particles pass through a medium at a speed greater than the speed of light in that medium. A case in point is the blue glow emitted by an underwater nuclear reactor.
Combined with the other approaches, it could allow scientists to gain a more complete picture of objects in our Solar System, and even distant cosmological bodies.
Nemiroff’s study has been accepted for publication by the Publications of the Astronomical Society of Australia, with a preliminary version available online at arXiv Astrophysics.