Meteoric Evidence Suggests Mars May Have a Subsurface Reservoir

Scientists were able to gauge the rate of water loss on Mars by comparing the ratio of semi-heavy water (HDO) to ordinary water today with the ratio from 4.3 billion years ago. Credit: Kevin Gill

It is a scientific fact that water exists on Mars. Though most of it today takes the form of water ice in the polar regions or in subsurface deposits near the temperate zones, the presence of H₂O has been confirmed many times over. It is evidenced by the sculpted channels and outflows that still mark the surface, as well as by clay and mineral deposits that could only have formed in the presence of water. Recent geological surveys provide further evidence that Mars’ surface was once home to warm, flowing water billions of years ago.

But where did the water go? And how and when, exactly, did it disappear? As it turns out, the answers may lie here on Earth, thanks to meteorites from Mars which indicate that the planet may harbor a global reservoir of ice beneath its surface.

Together, researchers from the Tokyo Institute of Technology, the Lunar and Planetary Institute in Houston, the Carnegie Institution for Science in Washington, and NASA’s Astromaterials Research and Exploration Science Division examined three Martian meteorites. What they found were samples of water whose hydrogen isotope ratio is distinct from that of water in Mars’ mantle and atmosphere.

Mudstone formations in the Gale Crater show the flat bedding of sediments deposited at the bottom of a lakebed. Credit: NASA/JPL-Caltech/MSSS

This new study examined meteorites that sample different periods in Mars’ past. What the researchers found seemed to indicate that water ice may have persisted intact beneath the crust over long periods of time.

As Professor Tomohiro told Universe Today via email, the significance of this find is that “the new hydrogen reservoir (ground ice and/or hydrated crust) potentially accounts for the ‘missing’ surface water on Mars.”

Basically, there is a gap between what is thought to have existed in the past, and what is observed today in the form of water ice. The findings made by Tomohiro and the international research team help to account for this.

“The total inventory of ‘observable’ current surface water (that mostly occurs as polar ice, ~10⁶ km³) is more than one order of magnitude smaller than the estimated volume of ancient surface water (~10⁷ to 10⁸ km³) that is thought to have covered the northern lowlands,” said Tomohiro. “The lack of water at the surface today was problematic for advocates of such large paleo-ocean and -lake volume.”
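
To get a feel for the size of that gap, here is a quick back-of-envelope sketch in Python, using only the volumes quoted above (the figures themselves are rough estimates):

```python
# Rough sense of scale for the "missing water" problem, using the volumes quoted above.
observable_today_km3 = 1e6                      # current observable surface water, mostly polar ice (~10^6 km^3)
ancient_low_km3, ancient_high_km3 = 1e7, 1e8    # estimated ancient surface water (~10^7 to 10^8 km^3)

print(f"Low estimate:  {ancient_low_km3 / observable_today_km3:.0f}x today's observable water")
print(f"High estimate: {ancient_high_km3 / observable_today_km3:.0f}x today's observable water")
# -> a factor of roughly 10 to 100, i.e. one to two orders of magnitude unaccounted for
```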

Meteorites from Mars, like NWA 7034 (shown here), contain evidence of Mars’ watery past. Credit: NASA

In their investigation, the researchers compared the water, hydrogen isotopes, and other volatile elements within the meteorites. The results forced them to consider two possibilities: in one, the newly identified hydrogen reservoir is evidence of near-surface ice interbedded with sediment. The second possibility, which seemed far more likely, was that the samples came from hydrated rock near the top of the Martian crust.

“The evidence is the ‘non-atmospheric’ hydrogen isotope composition of this reservoir,” Tomohiro said. “If this reservoir occurs near the surface, it should easily interact with the atmosphere, resulting in “isotopic equilibrium”.  The non-atmospheric signature indicates that this reservoir must be sequestered elsewhere of this red planet, i.e. ground-ice.”

While the issue of the “missing Martian water” remains controversial, this study may help to bridge the gap between Mars’ supposed warm, wet past and its cold, icy present. This work, along with other studies performed here on Earth and the massive amounts of data being sent back by the many rovers and orbiters operating on and around the planet, is helping to pave the way toward a manned mission, which NASA hopes to mount in the 2030s.

The team’s findings are reported in the journal Earth and Planetary Science Letters.

Further Reading: NASA

Compromises Lead to Climate Change Deal

Secretary-General Addresses Lima Climate Action High-level Meeting. Credit: UN Photo/Mark Garten

Earlier this month, delegates from the various states that make up the UN met in Lima, Peru, to agree on a framework for the Climate Change Conference that is scheduled to take place in Paris next year. For over two weeks, representatives debated and discussed the issue, which at times became hotly contested and divisive.

In the end, a compromise was reached between rich and developing nations, which found themselves on opposite sides for much of the proceedings.

And while few member states walked away feeling they had received all they wanted, many expressed that the meeting was an important step on the road to the 2015 Climate Change Conference. It is hoped that this conference will, after 20 years of negotiations, create the first binding and universal agreement on climate change.

The 2015 Paris Conference will be the 21st session of the Conference of the Parties to the 1992 United Nations Framework Convention on Climate Change (UNFCCC) and the 11th session of the Meeting of the Parties to the 1997 Kyoto Protocol.

The objective of the conference is to achieve a legally binding and universal agreement on Climate Change specifically aimed at curbing greenhouse gas emissions to limit global temperature increases to an average of 2 degrees Celsius above pre-industrial levels.

This map represents global temperature anomalies averaged from 2008 through 2012. Credit: NASA Goddard Institute for Space Studies/NASA Goddard’s Scientific Visualization Studio.

This temperature increase is being driven by carbon emissions that have been building steadily since the late 18th century, and rapidly over the course of the 20th. According to NASA, atmospheric CO₂ concentrations had not exceeded 300 ppm at any point in the past 400,000 years, a span that covers the whole of human history.

However, in May of last year, the National Oceanic and Atmospheric Administration (NOAA) announced that these concentrations had reached 400 ppm, based on ongoing observations from the Mauna Loa Observatory in Hawaii.

Meanwhile, research conducted by the U.S. Global Change Research Program indicates that by the year 2100, carbon dioxide concentrations could either level off at about 550 ppm or rise to as high as 800 ppm. This could mean the difference between a temperature increase of 2.5 °C (4.5 °F), which is considered manageable, and an increase of 4.5 °C (8 °F), which would make life untenable in many regions of the planet.

Hence the importance of reaching, for the first time in over 20 years of UN negotiations, a binding and universal agreement on the climate that will involve all the nations of the world. And with the conclusion of the Lima Conference, the delegates have what they believe will be a sufficient framework for achieving that next year.

While many environmental groups see the framework as an ineffectual compromise, members of the EU hailed it as a step towards the long-awaited global climate deal, a process that began with the UNFCCC in 1992.

“The decisions adopted in Lima pave the way for the adoption of a universal and meaningful agreement in 2015,” said UN Secretary-General Ban Ki-moon in a statement issued at the conclusion of the two-week meeting. In addition, Peru’s environment minister – Manuel Pulgar-Vidal, who chaired the summit – was quoted by the BBC as saying: “As a text it’s not perfect, but it includes the positions of the parties.”

Al Gore and UNEP Executive Director Achim Steiner at the China Pavilion at the Lima Conference. Credit: UNEP

Amongst the criticisms leveled by environmental groups is the fact that many important decisions were postponed, and that the draft agreement contained watered-down language.

For instance, on national pledges, it says that countries “may” include quantifiable information showing how they intend to meet their emissions targets, rather than “shall”. By making this optional, environmentalists believe that signatories will be entering into an agreement that is not binding and therefore has no teeth.

However, on the plus side, the agreement kept the 194 members together and on track for next year. Concerns over responsibilities between developed and developing nations were alleviated by changing the language in the agreement, stating that countries have “common but differentiated responsibilities”.

Other meaningful agreements were reached as well, which included boosted commitments to a Green Climate Fund (GCF), financial aid for “vulnerable nations”, new targets to be set for carbon emission reductions, a new process of Multilateral Assessment to achieve new levels of transparency for carbon-cutting initiatives, and new calls to raise awareness by putting climate change into school curricula.

In addition, the Lima Conference led to the creation of the 1 Gigaton Coalition, a UN-coordinated group dedicated to promoting renewable energy. As stated by the UNEP, this group was created “to boost efforts to save billions of dollars and billions of tonnes of CO₂ emissions each year by measuring and reporting reductions of greenhouse gas emissions resulting from projects and programs that promote renewable energy and energy efficiency in developing countries.”

A massive, over 7-metre-high balloon, representing one tonne of carbon dioxide (CO2). Credit: UN Photo/Mark Garten

Coordinated by the United Nations Environment Programme (UNEP) with the support of the Government of Norway, the coalition will be responsible for measuring CO₂ reductions achieved through renewable energy projects. It was formed in light of the fact that while many nations have such initiatives in place, they are not measuring or reporting the resulting drop in greenhouse gas emissions.

They believe that, if accurately measured, these drops in emissions would equal 1 Gigaton by the year 2020. This would not only be beneficial to the environment, but would result in a reduced financial burden for governments all across the world.

As UNEP Executive Director Achim Steiner stated in a press release: “Our global economy could be $18 trillion better off by 2035 if we adopted energy efficiency as a first choice, while various estimates put the potential from energy efficient improvements anywhere between 2.5 and 6.8 gigatons of carbon per year by 2030.”

Ultimately, the 1 Gigaton Coalition hopes to provide information demonstrating unequivocally that energy efficiency and renewables are helping to close the gap between current emissions levels and where they will need to be if we hope to hold the temperature increase to just 2 °C. This, as already stated, could mean the difference between life and death for many people, and ultimately for the environment as a whole.

The location of the UNFCCC talks rotates among regions of the United Nations’ member countries. The 2015 conference will be held at Le Bourget, near Paris, from November 30th to December 11th, 2015.

Further Reading: UN, UNEP, UNFCCC

SpaceX Continues to Expand Facilities, Workforce in Quest for Space

A SpaceX Falcon 9 Grasshopper reusable rocket undergoing testing. Credit: SpaceX

SpaceX was founded by Elon Musk in 2002 with a dream of making commercial space exploration a reality. Since that time, Musk has seen his company become a major player in the aerospace industry, landing contracts with various governments, NASA, and other private space companies to put satellites in orbit and ferry supplies to the International Space Station.

But 2014 was undoubtedly the company’s most lucrative year to date. In September, SpaceX and Boeing were awarded NASA contracts worth a combined $6.8 billion to develop space vehicles that will carry astronauts to and from the ISS by 2017 and end the nation’s reliance on Russia for crew transport.

And this past week, the company announced a plan to expand operations at its Rocket Development and Test Facility in McGregor, Texas. This move, which is costing the company a cool $46 million, is expected to create 300 new full-time jobs in the community and expand testing and development even further.

According to Mike Copeland of the Waco Tribune-Herald, an additional $1.5 million in funding could be allocated from McLennan County. This would give SpaceX a total of $3 million in funds from the Waco-McLennan County Economic Development Corporation, a fund used to attract and keep industry in the region.

A SuperDraco thruster being tested at the Rocket Development and Test Facility in McGregor, Texas. Credit: SpaceX

Copeland also indicates that a report prepared by the Waco City Council specified what types of jobs would be created. Apparently, SpaceX is in need of additional engineers, technicians, and industry professionals. No doubt, this planned expansion has much to do with the company meeting its new contractual obligations with NASA.

Originally built in 2003, the Rocket Development and Test Facility has been the site of some exciting events over the years. Using rocket test stands, the company has conducted several low-altitude Vertical Takeoff and Vertical Landing (VTVL) test flights with the Falcon 9 Grasshopper rocket. In addition, the McGregor facility is used for post-flight disassembly and defueling of the Dragon spacecraft.

In the past ten years, SpaceX has also made numerous expansions and improvements to the site, effectively doubling its size by purchasing several pieces of adjacent farmland. As of September 2013, the facility measured 900 acres (360 hectares). But by early 2014, the company had more than quadrupled its lease in McGregor, to a total of 4,280 acres (roughly 1,730 hectares).

Though far removed from the company’s rocket building facilities at their headquarters in Hawthorne, California, the facility plays an important role in the development of their space capsule and reusable rocket systems. According to SpaceX’s company website, “Every Merlin engine that powers the Falcon 9 rocket and every Draco thruster that controls the Dragon spacecraft is tested on one of 11 test stands.”

A Falcon 9 Grasshopper conducting VTVL testing. Credit: SpaceX

In short, the facility is the key testing ground for all SpaceX technology. And now that the company is actively collaborating with NASA to restore domestic crew-launch capability to the US, even more testing will be needed. Much has been made of the company’s efforts with VTVL rocket systems – such as the Falcon 9 Grasshopper (pictured above) – but the Dragon V2 takes things to another level.

As revealed by SpaceX in May of this year, the Dragon V2 capsule is designed to ferry crew members and supplies into orbit, and then land propulsively (i.e. under its own power) back on Earth before refueling and flying again. This is made possible thanks to the addition of eight side-mounted SuperDraco engines.

Compared to the standard Draco Engine, which is designed to give the Dragon Capsule (and the upper stages of the Falcon 9 rocket) attitude control in space, the SuperDraco is 100 times more powerful.

According to SpaceX, each SuperDraco is capable of producing 16,000 pounds of thrust and can be restarted multiple times if necessary. In addition, the engines have the ability to deep throttle, providing astronauts with precise control and enormous power.

With eight engines in total, that would provide a Dragon V2 with 120,000 pounds of axial thrust, giving it the ability to land anywhere without the need of a parachute (though they do come equipped with a backup chute).
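
The arithmetic here is worth a quick look: eight engines at 16,000 pounds each comes to 128,000 pounds of total thrust, so the lower 120,000-pound axial figure is consistent with the engines being mounted at a slight outward angle, leaving only part of each engine's thrust pointed along the capsule's axis. The sketch below works backwards from those two numbers; the implied cant angle is purely an illustration, not a published SpaceX specification.

```python
import math

# Figures quoted above
thrust_per_engine_lbf = 16_000
n_engines = 8
axial_thrust_lbf = 120_000

total_thrust_lbf = n_engines * thrust_per_engine_lbf   # 128,000 lbf if every engine fired straight down the axis

# If the difference comes entirely from a uniform outward cant of the engines,
# the implied angle follows from the ratio of axial to total thrust.
implied_cant_deg = math.degrees(math.acos(axial_thrust_lbf / total_thrust_lbf))
print(f"Total thrust: {total_thrust_lbf:,} lbf, implied cant angle: ~{implied_cant_deg:.0f} degrees")  # ~20 degrees
```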

Between this and ongoing developments with the Falcon 9 reusable rocket system, employees in McGregor are likely to have their hands full in the coming years. The expansion is expected to be complete by 2018.

Further Reading: NASA, SpaceX, Waco Tribune-Herald

What is the Average Surface Temperature of the Planets in our Solar System?

Artist's impression of the planets in our solar system, along with the Sun (at bottom). Credit: NASA

It is no secret that Earth is the only inhabited planet in our Solar System. Not only do all the other planets lack a breathable atmosphere, many of them are also too hot or too cold to sustain life. Part of the reason is position: every system of planets orbiting a star has a “habitable zone”, and worlds that lie too close to their sun are molten and toxic, while those that are too far outside it are icy and frozen.

But at the same time, forces other than position relative to our Sun can affect surface temperatures. For example, some planets are tidally locked, which means that they have one of their sides constantly facing towards the Sun. Others are warmed by internal geological forces and achieve some warmth that does not depend on exposure to the Sun’s rays. So just how hot and cold are the worlds in our Solar System? What exactly are the surface temperatures on these rocky worlds and gas giants that make them inhospitable to life as we know it?
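
Before running through the planets one by one, it helps to see how much of the story distance alone can tell. The short Python sketch below computes the standard blackbody equilibrium temperature for a planet, given its distance from the Sun and an approximate Bond albedo (reflectivity); it deliberately ignores greenhouse warming and internal heat, which is exactly why its answer for Venus falls so far below the actual surface temperature discussed below. The albedo values are rough textbook figures, used here only for illustration.

```python
import math

SIGMA = 5.670e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
L_SUN = 3.828e26      # solar luminosity, W
AU = 1.496e11         # astronomical unit, m

def equilibrium_temp_c(distance_au, bond_albedo):
    """Blackbody equilibrium temperature (deg C) of a rapidly rotating planet,
    ignoring greenhouse effects and internal heat."""
    d = distance_au * AU
    absorbed_flux = L_SUN * (1.0 - bond_albedo) / (16.0 * math.pi * d ** 2)
    return (absorbed_flux / SIGMA) ** 0.25 - 273.15

# Approximate distances (AU) and Bond albedos, for illustration only
for name, dist, albedo in [("Mercury", 0.39, 0.07), ("Venus", 0.72, 0.76),
                           ("Earth", 1.00, 0.31), ("Mars", 1.52, 0.25)]:
    print(f"{name:8s} ~{equilibrium_temp_c(dist, albedo):6.0f} °C")
# Earth comes out around -19 °C and Venus around -43 °C: atmospheres make up the difference.
```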

Mercury:

Of our eight planets, Mercury is the closest to the Sun. As such, one might expect it to experience the hottest temperatures in our Solar System. However, since Mercury has virtually no atmosphere and spins very slowly compared to the other planets, its surface temperature varies quite widely.

What this means is that the side exposed to the Sun remains exposed for some time, allowing surface temperatures to reach up to a molten 465 °C. Meanwhile, on the dark side, temperatures can drop off to a frigid -184°C. Hence, Mercury varies between extreme heat and extreme cold and is not the hottest planet in our Solar System.

Venus, imaged here by the Magellan spacecraft, is an incredibly hot and hostile world, due to a combination of its thick atmosphere and proximity to the Sun. Image Credit: NASA/JPL

Venus:

That honor goes to Venus, the second closest planet to the Sun, which also has the highest average surface temperature – regularly reaching up to 460 °C. This is due in part to Venus’ proximity to the Sun, lying near the inner edge of the habitable zone, but mainly to Venus’ thick atmosphere, which is composed chiefly of carbon dioxide along with heavy clouds of sulfur dioxide.

These gases create a strong greenhouse effect which traps a significant portion of the Sun’s heat in the atmosphere and turns the planet’s surface into a barren, scorched landscape. The surface is also marked by extensive volcanoes and lava flows, and rained on by clouds of sulfuric acid. Not a hospitable place by any measure!

Earth:

Earth is the third planet from the Sun, and so far is the only planet that we know of that is capable of supporting life. The average surface temperature here is about 14 °C, but it varies due to a number of factors. For one, our world’s axis is tilted, which means that one hemisphere is slanted towards the Sun during certain times of the year while the other is slanted away.

This not only causes seasonal changes, but ensures that places located closer to the equator are hotter, while those located at the poles are colder. It’s little wonder, then, that the hottest surface temperature ever recorded on Earth was measured by satellite in the Lut Desert of Iran (70.7 °C), while the lowest was recorded in Antarctica (-89.2 °C).

Mars’ thin atmosphere, visible on the horizon, is too weak to retain heat. Credit: NASA

Mars:

Mars’ average surface temperature is -55 °C, but the Red Planet also experiences considerable variability, with temperatures reaching as high as 20 °C at the equator during midday and dropping as low as -153 °C at the poles. On average, though, it is much colder than Earth, both because it sits near the outer edge of the habitable zone and because its thin atmosphere is not sufficient to retain heat.

In addition, its surface temperature can vary by as much as 20 °C due to Mars’ eccentric orbit around the Sun (meaning that it is closer to the Sun at certain points in its orbit than at others).

Jupiter:

Since Jupiter is a gas giant, it has no solid surface, so it has no surface temperature. But measurements taken from the top of Jupiter’s clouds indicate a temperature of approximately -145 °C. Closer to the center, the temperature increases steadily as the atmospheric pressure rises.

At the point where atmospheric pressure is ten times what it is on Earth, the temperature reaches 21°C, what we Earthlings consider a comfortable “room temperature”. At the core of the planet, the temperature is much higher, reaching as much as 35,700°C – hotter than even the surface of the Sun.

Saturn and its rings, as seen from above the planet by the Cassini spacecraft. Credit: NASA/JPL/Space Science Institute/Gordan Ugarkovic

Saturn:

Due to its distance from the Sun, Saturn is a rather cold gas giant, with an average temperature of -178 °C. But because of Saturn’s tilt, the southern and northern hemispheres are heated differently, causing seasonal temperature variation.

And much like Jupiter, the temperature in the upper atmosphere of Saturn is cold, but increases closer to the center of the planet. At the core of the planet, temperatures are believed to reach as high as 11,700 °C.

Uranus:

Uranus is the coldest planet in our Solar System, with a lowest recorded temperature of -224°C. Despite its distance from the Sun, the largest contributing factor to its frigid nature has to do with its core.

Unlike the other gas giants in our Solar System, the core of Uranus gives off very little heat beyond what the planet absorbs from the Sun. With a core temperature of approximately 4,737 °C, Uranus’ interior gives off only about one-fifth the heat that Jupiter’s does and less than half that of Saturn.

Neptune photographed by Voyager 2. Image credit: NASA/JPL

Neptune:

With temperatures dropping to -218°C in Neptune’s upper atmosphere, the planet is one of the coldest in our Solar System. And like all of the gas giants, Neptune has a much hotter core, which is around 7,000°C.

In short, the Solar System runs the gamut from extreme cold to extreme heat, with plenty of variance and only a few places that are temperate enough to sustain life. And of all of those, it is only planet Earth that seems to strike the careful balance required to sustain it perpetually.

Universe Today has many articles on the temperature of each planet, including the temperature of Mars and the temperature of Earth.

You may also want to check out these articles on facts about the planets and an overview of the planets.

NASA has a great graphic here that compares the temperatures of all the planets in our Solar System.

Astronomy Cast has episodes on all planets including Mercury.

Just in Time for the Holidays – Galactic Encounter Puts on Stunning Display

The colliding galaxies NGC 2207 and IC 2163, located about 130 million light-years from Earth in the constellation of Canis Major. Image credit: NASA/CXC/SAO/STScI/JPL-Caltech

At this time of year, festive displays of light are to be expected. This tradition has clearly not been lost on the galaxies NGC 2207 and IC 2163. Just in time for the holidays, these colliding galaxies, located within the Canis Major constellation (some 130 million light-years from Earth), were seen putting on a spectacular light display for us folks here on Earth!

And while this galactic pair has been known to produce a lot of intense light over the years, the image above is especially luminous. A composite using data from the Chandra X-ray Observatory and the Hubble and Spitzer Space Telescopes, it shows the combination of visible, X-ray, and infrared light coming from the pair.

In the past fifteen years, NGC 2207 and IC 2163 have hosted three supernova explosions and produced one of the largest known collections of super-bright X-ray sources. These special objects – known as “ultraluminous X-ray sources” (ULXs) – were found using data from NASA’s Chandra X-ray Observatory.

While the true nature of ULXs is still being debated, they are believed to be a peculiar type of X-ray binary. These consist of a star in a tight orbit around either a neutron star or a black hole. The strong gravity of the compact object pulls matter from the companion star, and as this matter falls inward, it is heated to millions of degrees and generates X-rays.

The core of galaxy Messier 82 (M82), where two ultraluminous X-ray sources, or ULXs, reside (X-1 and X-2). Credit: NASA

Data obtained by Chandra show that – much like the Milky Way Galaxy – NGC 2207 and IC 2163 are sprinkled with many such X-ray binaries. In the new Chandra image, this X-ray data appears in pink, highlighting the sheer prevalence of X-ray sources within both galaxies.

Meanwhile, optical light data from the Hubble Space Telescope is rendered in red, green, and blue (also appearing as blue, white, orange, and brown due to color combinations), and infrared data from the Spitzer Space Telescope is shown in red.

The Chandra observatory spent far more time observing these galaxies than any previous ULX study – roughly five times as much. As a result, the study team – which consisted of researchers from Harvard University, MIT, and Sam Houston State University – was able to confirm the existence of 28 ULXs between NGC 2207 and IC 2163, seven of which had never before been seen.

In addition, the Chandra data allowed the team of scientists to observe the correlation between X-ray sources in different regions of the galaxy and the rate at which stars are forming in those same regions.

The Mice galaxies, seen here well into the process of merging. Credit: Hubble Space Telescope

As the new Chandra image shows, the spiral arms of the galaxies – where large amounts of star formation are known to be occurring – show the heaviest concentrations of ULXs, optical light, and infrared emission. This correlation also suggests that the companion stars in these X-ray binaries are young and massive.

This in turn presents another possibility which has to do with star formation during galactic mergers. When galaxies come together, they produce shock waves that cause clouds of gas within them to collapse, leading to periods of intense star formation and the creation of star clusters.

The fact that the ULXs and their companion stars are young (the researchers estimate they are only about 10 million years old) would seem to confirm that they are the result of NGC 2207 and IC 2163 coming together. This seems a likely explanation, since the merger between these two galaxies is still in its infancy – attested to by the fact that the galaxies remain separate.

They are expected to continue colliding, a process which will make them look more like the Mice Galaxies (pictured above). In about one billion years’ time, the merger should be complete, forming a single spiral galaxy that would no doubt resemble our own.

A paper describing the study was recently published online in The Astrophysical Journal.

Further Reading: NASA/JPL, Chandra, arXiv Astrophysics

What Causes Day and Night?

Image of the Sunrise Solstice captured over Stonehenge. Image Credit: Max Alexander/STFC/SPL

For most of us here on planet Earth, sunrise, sunset, and the cycle of day and night (aka. the diurnal cycle) are simple facts of life. As a result of seasonal changes that come with every passing year, the length of day and night can vary – becoming either longer or shorter – by a few hours. But in some regions of the world (i.e. the poles), the Sun does not set at all during certain times of the year. And there are also seasonal periods where a single night can last for many days.

Naturally, this gives rise to certain questions. Namely, what causes the cycle of day and night, and why don’t all places on the planet experience the same patterns? As with many other seasonal experiences, the answer has to do with two facts: One, the Earth rotates on its axis as it orbits the Sun. And two, the fact that Earth’s axis is tilted.

Earth’s Rotation:

Earth’s rotation occurs from west to east, which is why the Sun always appears to be rising on the eastern horizon and setting on the western. If you could view the Earth from above, looking down at the northern polar region, the planet would appear to be rotating counter-clockwise. However, viewed from the southern polar region, it appears to be rotating clockwise.

Earth’s axial tilt (or obliquity) and its relation to the rotation axis and plane of orbit, as viewed from the Sun during the northward equinox. Credit: NASA

The Earth rotates once in about 24 hours with respect to the Sun, and once every 23 hours, 56 minutes and 4 seconds with respect to the stars. What’s more, its rotational axis points roughly toward two stars: the northern axis points outward to Polaris – hence why it is called “the North Star” – while its southern axis points toward Sigma Octantis.
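
A quick way to see where that roughly four-minute difference comes from: each day the Earth also advances about one degree along its orbit, so it has to turn a little extra before the Sun returns to the same spot in the sky. A minimal sketch, with values rounded for illustration:

```python
# Why the solar day is about 4 minutes longer than the sidereal (star-to-star) day.
sidereal_day_min = 23 * 60 + 56 + 4 / 60      # ~1436.07 minutes per full rotation
orbit_deg_per_day = 360 / 365.25              # Earth's daily progress along its orbit, ~0.99 degrees

# Extra rotation needed each day to bring the Sun back overhead, in minutes
extra_min = sidereal_day_min * orbit_deg_per_day / 360
print(f"Extra rotation needed: ~{extra_min:.1f} minutes per day")   # ~3.9 minutes
```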

Axial Tilt:

As already noted, due to the Earth’s axial tilt (or obliquity), day and night are not evenly divided. If the Earth’s axis were perpendicular to its orbital plane around the Sun, all places on Earth would experience equal amounts of day and night (i.e. 12 hours of day and night, respectively) every day during the year and there would be no seasonal variability.

Instead, at any given time of the year, one hemisphere is pointed slightly more towards the Sun, leaving the other pointed away. During this time, one hemisphere will be experiencing warmer temperatures and longer days while the other will experience colder temperatures and longer nights.

Seasonal Changes:

Of course, since the Earth is rotating around the Sun and not just on its axis, this process is reversed during the course of a year. Every six months, the Earth undergoes a half orbit and changes positions to the other side of the Sun, allowing the other hemisphere to experience longer days and warmer temperatures.

Precession of the Equinoxes. Image credit: NASA
Artist’s rendition of the Earth’s rotation and the precession of the Equinoxes. Credit: NASA

Consequently, in extreme places like the North and South Poles, daylight or darkness can last for months. Those times of the year when the northern and southern hemispheres experience their longest days and nights are called solstices, which occur twice a year – once for each hemisphere.

The Summer Solstice takes place between June 20th and 22nd in the northern hemisphere and between December 20th and 23rd each year in the southern hemisphere. The Winter Solstice occurs at the same time but in reverse – between Dec. 20th and 23rd for the northern hemisphere and June 20th and 22nd for the southern hemisphere.

According to NOAA, around the Winter Solstice at the North Pole there will be no sunlight or even twilight beginning in early October, and the darkness lasts until the beginning of dawn in early March. Conversely, around the Summer Solstice, the North Pole stays in full sunlight all day long throughout the entire summer (unless there are clouds). After the Summer Solstice, the sun starts to sink towards the horizon.

Another common feature of the cycle of day and night is the visibility of the Moon, the stars, and other celestial bodies. Technically, the Moon is not only visible at night: on days when it is far enough from the Sun in the sky but still above the horizon, it can be seen during the daytime as well. However, the stars and the other planets of our Solar System are only visible at night, after the Sun has fully set.

“Night Sky”. On a clear night, the stars and the glowing band of the Milky Way Galaxy are generally visible. Credit: Sam Crimmin

The reason for this is that the light from these objects is too faint to be seen during daylight hours. The Sun, being the closest star to us and the most radiant object visible from Earth, naturally drowns them out when it is overhead. Once our side of the Earth has turned away from the Sun, however, we are able to see the Moon reflecting the Sun’s light more clearly, and the light of the stars becomes detectable.

On an especially clear night, and assuming light pollution is not a major factor, the glowing band of the Milky Way and other clouds of dust and gas may also be visible in the night sky. These objects are more distant than the stars in our own neighborhood of the Galaxy, and therefore appear fainter and are more difficult to see.

Another interesting thing about the cycle of day and night is that it is getting slower with time. This is due to the tidal effects the Moon has on Earth’s rotation, which is making days longer (but only marginally). According to atomic clocks around the world, the modern day is about 1.7 milliseconds longer than it was a century ago – a change which may require the addition of more leap seconds in the future.
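
As a rough, purely illustrative calculation of why those extra milliseconds eventually call for leap seconds: suppose the mean solar day currently runs about 1.5 milliseconds longer than 86,400 SI seconds (the true excess varies from year to year). The drift then accumulates as follows:

```python
# Illustrative only: how a small daily excess over 86,400 SI seconds accumulates.
excess_ms_per_day = 1.5          # assumed value; the real excess varies from year to year
days_per_year = 365.25

drift_per_year = excess_ms_per_day / 1000 * days_per_year
print(f"Clock drift: ~{drift_per_year:.2f} seconds per year")            # ~0.55 s/year
print(f"Roughly one leap second needed every ~{1 / drift_per_year:.1f} years")
```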

We have many interesting articles on Earth’s Rotation here at Universe Today. To learn more about solstices here in Universe Today, be sure to check out our articles on the Shortest Day of the Year and the Summer Solstice.

More information can be found at NASA, Seasons of the Year, The Sun at Solstice

Check out this podcast at Astronomy Cast: The Life of the Sun

NASA’s RoboSimian And Surrogate Robots

RoboSimian and Surrogate are robots that were designed and built at NASA's Jet Propulsion Laboratory in Pasadena, California. Credit: JPL-Caltech

Ever since the DARPA Robotics Challenge (DRC) was first announced in 2012, NASA has been a major contender in it. This competition – which involves robots navigating obstacle courses and using tools and vehicles – was conceived by DARPA to see just how capable robots could become at handling disaster response.

The Finals for this challenge will be taking place on June 5th and 6th, 2015, at Fairplex in Pomona, California. And after making it this far with their RoboSimian design, NASA was faced with a difficult question. Should their robotic primate continue to represent them, or should that honor go to their recently unveiled Surrogate robot?

As the saying goes “you dance with the one who brung ya.” In short, NASA has decided to stick with RoboSimian as they advance into the final round of obstacles and tests in their bid to win the DRC and the $2 million prize.

Surrogate’s unveiling took place this past October 24th at NASA’s Jet Propulsion Laboratory in Pasadena, California. The robot’s appearance on stage, to the theme music of 2001: A Space Odyssey, came on the same day that Thomas Rosenbaum was inaugurated as the new president of the California Institute of Technology.

Robotics researchers at NASA’s Jet Propulsion Laboratory stand with robots RoboSimian and Surrogate, both built at JPL. Credit: JPL-Caltech

In honor of the occasion, Surrogate (aka “Surge”) strutted its way across the stage to present a digital tablet to Rosenbaum, which he used to push a button that initiated commands for NASA’s Mars rover Curiosity. Despite the festive nature of the occasion, this scene was quite calm compared to what the robot was designed for.

“Surge and its predecessor, RoboSimian, were designed to extend humanity’s reach, going into dangerous places such as a nuclear power plant during a disaster scenario such as we saw at Fukushima. They could take simple actions such as turning valves or flipping switches to stabilize the situation or mitigate further damage,” said Brett Kennedy, principal investigator for the robots at JPL.

RoboSimian was originally created for the DARPA Robotics Challenge, and during the trial round last December, the JPL team’s robot won a spot to compete in the finals, which will be held in Pomona, California, in June 2015.

With the support of the Defense Threat Reduction Agency and the Robotics Collaborative Technology Alliance, the Surrogate robot began construction in 2014. Its designers began by incorporating some of RoboSimian’s extra limbs, and then added a wheeled base, twisty spine, an upper torso, and a head for holding sensors.

Surrogate, nicknamed “Surge,” is a robot designed and built at NASA’s Jet Propulsion Laboratory in Pasadena, California. Credit: JPL-Caltech

Additional components include the hat-like appendage on top, which is in fact a LiDAR (Light Detection and Ranging) device. This device spins and shoots out laser beams in a 360-degree field to map the surrounding environment in 3-D.

Choosing between them was a tough call, and took the better part of the last six months. On the one hand, Surrogate was designed to be more like a human. It has an upright spine, two arms and a head, standing about 1.4 meters (4.5 feet) tall and weighing about  91 kilograms (200 pounds). Its major strength is in how it handles objects, and its flexible spine allows for extra manipulation capabilities. But the robot moves on tracks, which doesn’t allow it to move over tall objects, such as flights of stairs, ladders, rocks, and rubble.

RoboSimian, by contrast, is more ape-like, moving around on four limbs. It is better suited to travel over complicated terrain and is an adept climber. In addition, Surrogate has only one set of “eyes” – two cameras that allow for stereo vision – mounted to its head, whereas RoboSimian has up to seven sets of eyes mounted all over its body.

The robots also run on almost identical computer code, and the software that plans their motion is very similar. As in a video game, each robot has an “inventory” of objects with which it can interact. Engineers have to program the robots to recognize these objects and perform pre-set actions on them, such as turning a valve or climbing over blocks.
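
As a purely hypothetical sketch of that “inventory” idea (the object names and actions below are invented for illustration and are not taken from JPL’s software), the mapping from a recognized object to its pre-set actions could look something like this:

```python
# Hypothetical illustration of an object "inventory": recognized items map to pre-set action sequences.
INVENTORY = {
    "valve":       ["grasp_handle", "rotate_clockwise_90"],
    "cinderblock": ["plan_footholds", "climb_over"],
    "door_handle": ["extend_arm", "grasp", "turn_and_push"],
}

def actions_for(detected_object):
    """Return the pre-programmed action sequence for a recognized object."""
    return INVENTORY.get(detected_object, ["request_operator_input"])

print(actions_for("valve"))           # ['grasp_handle', 'rotate_clockwise_90']
print(actions_for("unknown_debris"))  # ['request_operator_input'] -- fall back to the human operator
```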

RoboSimian is an ape-like robot that moves around on four limbs. It will be representing the Jet Propulsion Laboratory at the DARPA Robotics Challenge Finals in June, 2015. Credit: JPL-Caltech

In the end, they came to a decision. RoboSimian will represent the team in Pomona.

“It comes down to the fact that Surrogate is a better manipulation platform and faster on benign surfaces, but RoboSimian is an all-around solution, and we expect that the all-around solution is going to be more competitive in this case,” Kennedy said.

The RoboSimian team at JPL is collaborating with partners at the University of California, Santa Barbara, and Caltech to get the robot to walk more quickly. JPL researchers also plan to put a LiDAR on top of RoboSimian in the future. These efforts seek to improve the robot in the long-run, but are also aimed at getting it ready to face the challenges of the DARPA Robot Challenge Finals.

Specifically, it will be faced with such tasks as driving a vehicle and getting out of it, negotiating debris blocking a doorway, cutting a hole in a wall, opening a valve, and crossing a field with cinderblocks or other debris. There will also be a surprise task.

Although RoboSimian is now the focus of Kennedy’s team, Surrogate won’t be forgotten.

“We’ll continue to use it as an example of how we can take RoboSimian limbs and reconfigure them into other platforms,” Kennedy said.

For details about the DARPA Robotics Challenge, visit: http://www.theroboticschallenge.org/

Further Reading: NASA

A Universe of 10 Dimensions

Superstrings may exist in 11 dimensions at once. Via National Institute of Technology Tiruchirappalli.

When someone mentions “different dimensions,” we tend to think of things like parallel universes – alternate realities that exist parallel to our own but where things work differently. However, the reality of dimensions and how they play a role in the ordering of our Universe is really quite different from this popular characterization.

To break it down, dimensions are simply the different facets of what we perceive to be reality. We are immediately aware of the three dimensions that surround us – those that define the length, width, and depth of all objects in our universe (the x, y, and z axes, respectively).

Beyond these three visible dimensions, scientists believe that there may be many more. In fact, the theoretical framework of Superstring Theory posits that the Universe exists in ten different dimensions. These different aspects govern the Universe, the fundamental forces of nature, and all the elementary particles contained within.

The first dimension, as already noted, is the one that gives an object its length (aka. the x-axis). A good description of a one-dimensional object is a straight line, which exists only in terms of length and has no other discernible qualities. Add to that a second dimension, the y-axis (or height), and you get a 2-dimensional shape (like a square).

The third dimension involves depth (the z-axis) and gives all objects a sense of area and a cross-section. The perfect example of this is a cube, which exists in three dimensions and has a length, width, depth, and hence volume. Beyond these three dimensions reside the seven that are not immediately apparent to us but can still be perceived as having a direct effect on the Universe and reality as we know it.

The timeline of the universe, beginning with the Big Bang. Credit: NASA
The timeline of the Universe, beginning with the Big Bang. According to String Theory, this is just one of many possible worlds. Credit: NASA

Scientists believe that the fourth dimension is time, which governs the properties of all known matter at any given point. Along with the three other dimensions, knowing an object’s position in time is essential to plotting its position in the Universe. The other dimensions are where the deeper possibilities come into play, and explaining their interaction with the others is where things get particularly tricky for physicists.

According to Superstring Theory, the fifth and sixth dimensions are where the notion of possible worlds arises. If we could see through to the fifth dimension, we would see a world slightly different from our own, giving us a means of measuring the similarity and differences between our world and other possible ones.

In the sixth, we would see a plane of possible worlds, where we could compare and position all the possible universes that start with the same initial conditions as this one (i.e., the Big Bang). In theory, if you could master the fifth and sixth dimensions, you could travel back in time or go to different futures.

In the seventh dimension, you have access to the possible worlds that start with different initial conditions. Whereas in the fifth and sixth, the initial conditions were the same, and subsequent actions were different, everything is different from the very beginning of time. The eighth dimension again gives us a plane of such possible universe histories. Each begins with different initial conditions and branches out infinitely (hence why they are called infinities).

In the ninth dimension, we can compare all the possible universe histories, starting with all the different possible laws of physics and initial conditions. In the tenth and final dimension, we arrive at the point where everything possible and imaginable is covered. Beyond this, nothing can be imagined by us lowly mortals, which makes it the natural limitation of what we can conceive in terms of dimensions.

The existence of extra dimensions is explained using the Calabi-Yau manifold, in which all the intrinsic properties of elementary particles are hidden. Credit: A Hanson.

The existence of these additional six dimensions, which we cannot perceive, is necessary for String Theory to be consistent with nature. The fact that we perceive only four dimensions (three of space plus time) can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world lives on a 3-dimensional submanifold corresponding to a brane, to which all known particles and forces except gravity are restricted (aka. brane theory).

If the extra dimensions are compactified, then the extra six dimensions must be in the form of a Calabi–Yau manifold (shown above). While imperceptible as far as our senses are concerned, they would have governed the formation of the Universe from the very beginning. Hence why scientists believe that by peering back through time and using telescopes to observe light from the early Universe (i.e., billions of years ago), they might be able to see how the existence of these additional dimensions could have influenced the evolution of the cosmos.

Much like other candidates for a grand unifying theory – aka the Theory of Everything (TOE) – the belief that the Universe is made up of ten dimensions (or more, depending on which model of string theory you use) is an attempt to reconcile the standard model of particle physics with the existence of gravity. In short, it is an attempt to explain how all known forces within our Universe interact and how other possible universes themselves might work.

For additional information, here’s an article on Universe Today about parallel Universes and another on a parallel Universe that scientists thought they’d found, but doesn’t actually exist.

There are also some other great resources online. There is a great video that explains the ten dimensions in detail. You can also look at the PBS website for the TV show Elegant Universe. It has a great page on the ten dimensions.

You can also listen to Astronomy Cast. You might find Episode 137: Large Scale Structure of the Universe very interesting.

Source: PBS

Solar System History: How Was the Earth Formed?

Earth as viewed from the cabin of the Apollo 11 spacecraft. Credit: NASA

Just how did the Earth — our home and the place where life as we know it evolved — come to be created in the first place? In some fiery furnace atop a great mountain? On some divine forge with the hammer of the gods shaping it out of pure ether? How about from a great ocean known as Chaos, where something was created out of nothing and then filled with all living creatures?

If any of those accounts sound familiar, they are some of the ancient legends that have been handed down through the years that attempt to describe how our world came to be. And interestingly enough, some of these ancient creation stories contain an element of scientific fact to them.

Continue reading “Solar System History: How Was the Earth Formed?”

The Science of Heat Transfer: What Is Conduction?

Diagram showing the transfer of thermal energy via conduction. Credit: Boundless

Heat is an interesting form of energy. Not only does it sustain life, make us comfortable and help us prepare our food, but understanding its properties is key to many fields of scientific research. For example, knowing how heat is transferred and the degree to which different materials can exchange thermal energy governs everything from building heaters and understanding seasonal change to sending ships into space.

Heat can only be transferred through three means: conduction, convection and radiation. Of these, conduction is perhaps the most common, and occurs regularly in nature. In short, it is the transfer of heat through physical contact. It occurs when you press your hand onto a window pane, when you place a pot of water on an active element, and when you place an iron in the fire.

This transfer occurs at the molecular level — from one body to another — when heat energy is absorbed by a surface and causes the molecules of that surface to move more quickly. In the process, they bump into their neighbors and transfer the energy to them, a process which continues as long as heat is still being added.

Heat conduction occurs through any material, represented here by a rectangular bar. The temperature of the material is T2 on the left and T1 on the right, where T2 is greater than T1. The rate of heat transfer by conduction is directly proportional to the surface area A, the temperature difference T2 − T1, and the substance's conductivity k, and inversely proportional to the thickness d. Credit: Boundless

The process of heat conduction depends on four basic factors: the temperature gradient, the cross section of the materials involved, their path length, and the properties of those materials.

A temperature gradient is a physical quantity that describes in which direction and at what rate the temperature changes at a specific location. Heat always flows from the hotter body to the colder one, since cold is nothing but the absence of heat energy. This transfer between bodies continues until the temperature difference decays and a state known as thermal equilibrium is reached.

Cross-section and path length are also important factors. The greater the cross-sectional area involved in the transfer, the more heat flows across it, and the more surface area that is exposed to open air, the greater the likelihood of heat loss. The rate of transfer also falls as the path length grows, so a small cross-section and a long path are the best means of minimizing the loss of heat energy.

Last, but certainly not least, is the physical properties of the materials involved. Basically, when it comes to conducting heat, not all substances are created equal. Metals and stone are considered good conductors since they can speedily transfer heat, whereas materials like wood, paper, air, and cloth are poor conductors of heat.
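
Putting those four factors together gives the textbook conduction formula shown in the figure caption above: the rate of heat flow equals k × A × (T2 − T1) / d. Here is a minimal Python sketch, using a standard SI conductivity value for window glass (roughly 0.8 W/m·K) rather than the silver-relative coefficients quoted below:

```python
def heat_flow_watts(k, area_m2, t_hot_c, t_cold_c, thickness_m):
    """Fourier's law for steady-state conduction: Q/t = k * A * (T_hot - T_cold) / d."""
    return k * area_m2 * (t_hot_c - t_cold_c) / thickness_m

# Example: a 1 m^2 single-pane window, 4 mm thick, with 20 °C inside and 0 °C outside.
print(heat_flow_watts(k=0.8, area_m2=1.0, t_hot_c=20.0, t_cold_c=0.0, thickness_m=0.004))  # ~4000 W
```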

Conduction, as demonstrated by heating a metal rod with a flame. Credit: Thomson Higher Education

These conductive properties are rated based on a “coefficient” which is measured relative to silver. In this respect, silver has a coefficient of heat conduction of 100, whereas other materials are ranked lower. These include copper (92), iron (11), water (0.12), and wood (0.03). At the opposite end of the spectrum is a perfect vacuum, which is incapable of conducting heat, and is therefore ranked at zero.

Materials that are poor conductors of heat are called insulators. Air, which has a conduction coefficient of 0.006, is an exceptional insulator when it can be trapped within an enclosed space. This is why artificial insulators make use of air compartments, such as the double-pane glass windows used to cut heating bills. Basically, they act as buffers against heat loss.

Feathers, fur, and natural fibers are all examples of natural insulators. These are the materials that allow birds, mammals, and human beings to stay warm. Sea otters, for example, live in ocean waters that are often very cold, and their luxuriously thick fur keeps them warm. Other marine animals, like sea lions, whales, and penguins, rely on thick layers of fat (aka. blubber) – a very poor conductor – to prevent heat loss through their skin.

This view of the nose section of the space shuttle Discovery, built of heat-resistant carbon composites, was captured by an Expedition 26 crew member during a survey of the approaching STS-133 vehicle prior to docking with the International Space Station. Credit: NASA

This same logic is applied to insulating homes, buildings, and even spacecraft. In these cases, methods involve either trapped air pockets between walls, fiber-glass (which traps air within it) or high-density foam. Spacecraft are a special case, and use insulation in the form of foam, reinforced carbon composite material, and silica fiber tiles. All of these are poor conductors of heat, and therefore prevent heat from being lost in space and also prevent the extreme temperatures caused by atmospheric reentry from entering the crew cabin.

See this video demonstration of the heat tiles on the Space Shuttle:

The laws governing conduction of heat are very similar to Ohm’s Law, which governs electrical conduction. In this case, a good conductor is a material that allows electrical current (i.e. electrons) to pass through it without much trouble. An electric insulator, by contrast, is any material whose internal electric charges do not flow freely, making it very hard to conduct an electric current under the influence of an electric field.
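
The analogy can even be made quantitative. Just as electrical resistances in series add, thermal resistances (thickness divided by conductivity times area) add too, which is a tidy way to see why the double-pane windows mentioned earlier work so well. The sketch below uses approximate textbook conductivities for glass and still air, and ignores convection and radiation, so the numbers are only illustrative:

```python
def r_thermal(thickness_m, k, area_m2=1.0):
    """Thermal resistance in K/W, analogous to electrical resistance R = rho * L / A."""
    return thickness_m / (k * area_m2)

GLASS_K, STILL_AIR_K = 0.8, 0.026     # approximate conductivities, W/(m*K)

single_pane = r_thermal(0.004, GLASS_K)
double_pane = 2 * r_thermal(0.004, GLASS_K) + r_thermal(0.012, STILL_AIR_K)   # glass + air gap + glass, in series

delta_t = 20.0                         # indoor/outdoor temperature difference, K
print(f"Single pane: ~{delta_t / single_pane:.0f} W lost")   # ~4000 W
print(f"Double pane: ~{delta_t / double_pane:.0f} W lost")   # ~42 W (convection and radiation ignored)
```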

In most cases, materials that are poor conductors of heat are also poor conductors of electricity. For instance, copper is good at conducting both heat and electricity, which is why copper wires are so widely used in the manufacture of electronics. Silver is better still, and gold, while slightly less conductive than copper, resists corrosion; where price is not an issue, these metals are used in the construction of electrical circuits as well.

And when one is looking to “ground” a charge (i.e. neutralize it), they send it through a physical connection to the Earth, where the charge is lost. This is common with electrical circuits where exposed metal is a factor, ensuring that people who accidentally come into contact are not electrocuted.

Insulating materials, such as the rubber on the soles of shoes, are worn to ensure that people working with sensitive materials or around electrical sources are protected from electrical charges. Other insulating materials like glass, polymers, or porcelain are commonly used on power lines and high-voltage power transmitters to keep power flowing to the circuits (and nothing else!)

In short, conduction comes down to the transfer of heat or the transfer of an electrical charge. Both happen as a result of a substance’s ability to pass energy along from particle to particle within it.

We have written many articles about conduction for Universe Today. Check out this article on the first law of thermodynamics, or this one on static electricity.

If you’d like more info on the conduction, check out BBC’s article about Heat Transfer, and here’s a link to The Physics Hypertextbook.

We’ve also recorded an entire episode of Astronomy Cast about Magnetism – Episode 42: Magnetism Everywhere.