Matt Williams is a space journalist and science communicator for Universe Today and Interesting Engineering. He's also a science fiction author, podcaster (Stories from Space), and Taekwon-Do instructor who lives on Vancouver Island with his wife and family.
SpaceX was founded by Elon Musk in 2002 with a dream of making commercial space exploration a reality. Since that time, Musk has seen his company become a major player in the aerospace industry, landing contracts with various governments, NASA, and other private space companies to put satellites in orbit and ferry supplies to the International Space Station.
But 2014 was undoubtedly their most lucrative year to date. In September, the company (along with Boeing) signed a contract with NASA for $6.8 billion to develop space vehicles that would bring astronauts to and from the ISS by 2017 and end the nation’s reliance on Russia.
And this past week, the company announced a plan to expand operations at its Rocket Development and Test Facility in McGregor, Texas. This move, which is costing the company a cool $46 million, is expected to create 300 new full-time jobs in the community and expand testing and development even further.
According to Mike Copeland of the Waco Tribune-Herald, an additional $1.5 million in funding could be allocated from McLennan County. This would give SpaceX a total of $3 million in funds from the Waco-McLennan County Economic Development Corporation, a fund used to attract and keep industry in the region.
Copeland also indicates that a report prepared by the Waco City Council specified what types of jobs would be created. Apparently, SpaceX is in need of additional engineers, technicians, and industry professionals. No doubt, this planned expansion has much to do with the company meeting its new contractual obligations with NASA.
Originally built in 2003, the Rocket Development and Test Facility has been the site of some exciting events over the years. Using rocket test stands, the company has conducted several low-altitude Vertical Takeoff and Vertical Landing (VTVL) test flights with the Falcon 9 Grasshopper rocket. In addition, the McGregor facility is used for post-flight disassembly and defueling of the Dragon spacecraft.
In the past ten years, SpaceX has also made numerous expansions and improvements to the facility, effectively doubling its size through the purchase of several pieces of adjacent farmland. As of September 2013, the facility measured 900 acres (360 hectares). But by early 2014, the company had more than quadrupled its lease in McGregor, to a total of 4,280 acres.
Though far removed from the company’s rocket building facilities at their headquarters in Hawthorne, California, the facility plays an important role in the development of their space capsule and reusable rocket systems. According to SpaceX’s company website, “Every Merlin engine that powers the Falcon 9 rocket and every Draco thruster that controls the Dragon spacecraft is tested on one of 11 test stands.”
In short, the facility is the key testing ground for all SpaceX technology. And now that the company is actively collaborating with NASA to restore domestic space-launch capability to the US, more testing will be needed. Much has been made of the company’s efforts with VTVL rocket systems – such as the Falcon 9 Grasshopper (pictured above) – but the Dragon V2 takes things to another level.
As revealed by SpaceX in May of this year, the Dragon V2 capsule is designed to ferry crew members and supplies into orbit, and then land propulsively (i.e. under its own power) back to Earth before refueling and flying again. This is made possible thanks to the addition of eight side-mounted SuperDraco engines.
Compared to the standard Draco Engine, which is designed to give the Dragon Capsule (and the upper stages of the Falcon 9 rocket) attitude control in space, the SuperDraco is 100 times more powerful.
According to SpaceX, each SuperDraco is capable of producing 16,000 pounds of thrust and can be restarted multiple times if necessary. In addition, the engines have the ability to deep throttle, providing astronauts with precise control and enormous power.
With eight engines in total, producing a combined 128,000 pounds of thrust, the Dragon V2 has roughly 120,000 pounds of axial thrust (the engines are side-mounted and angled outward), giving it the ability to land anywhere without the need for a parachute (though it does come equipped with a backup chute).
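Note that eight engines producing 16,000 pounds of thrust each actually comes to 128,000 pounds of raw thrust; the lower axial figure reflects the fact that only part of each canted engine’s thrust points along the capsule’s axis. Here is a minimal sketch of that arithmetic, assuming an illustrative cant angle of about 20 degrees (SpaceX has not published the exact mounting geometry):

```python
import math

# Each SuperDraco produces about 16,000 lbf; the Dragon V2 mounts eight.
THRUST_PER_ENGINE_LBF = 16_000
NUM_ENGINES = 8

raw_thrust = THRUST_PER_ENGINE_LBF * NUM_ENGINES  # 128,000 lbf in total

# The engines are side-mounted and angled outward, so only the axial
# component of their thrust lifts the capsule. A ~20 degree cant angle
# (an assumed figure, for illustration only) reproduces the quoted value.
cant_angle_deg = 20
axial_thrust = raw_thrust * math.cos(math.radians(cant_angle_deg))

print(f"Raw thrust:   {raw_thrust:,.0f} lbf")
print(f"Axial thrust: {axial_thrust:,.0f} lbf")  # ~120,000 lbf
```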
Between this and ongoing developments with the Falcon 9 reusable rocket system, employees in McGregor are likely to have their hands full in the coming years. The expansion is expected to be complete by 2018.
It is no secret that Earth is the only inhabited planet in our Solar System. Not only do all the other planets lack a breathable atmosphere, but many of them are also too hot or too cold to sustain life. Every system of planets orbiting a star has a “habitable zone”: planets that orbit too close to their sun are molten and toxic, while those that lie too far beyond it are icy and frozen.
But at the same time, forces other than position relative to the Sun can affect surface temperatures. For example, some planets are tidally locked, meaning that one side constantly faces their star. Others are warmed by internal geological forces, achieving some heat that does not depend on exposure to the Sun’s rays. So just how hot and cold are the worlds in our Solar System? What exactly are the surface temperatures on these rocky worlds and gas giants that make them inhospitable to life as we know it?
Mercury:
Of our eight planets, Mercury is the closest to the Sun. As such, one would expect it to experience the hottest temperatures in our Solar System. However, since Mercury has no atmosphere and spins very slowly compared to the other planets, its surface temperature varies quite widely.
What this means is that the side facing the Sun remains exposed for some time, allowing surface temperatures to reach up to a scorching 465 °C. Meanwhile, on the dark side, temperatures can drop to a frigid -184 °C. Hence, Mercury varies between extreme heat and extreme cold, and is not the hottest planet in our Solar System.
Venus:
That honor goes to Venus, the second closest planet to the Sun, which has the highest average surface temperature – reaching up to 460 °C on a regular basis. This is due in part to Venus’ proximity to the Sun, sitting just on the inner edge of the habitable zone, but also to Venus’ thick atmosphere, which is composed of heavy clouds of carbon dioxide and sulfur dioxide.
These gases create a strong greenhouse effect which traps a significant portion of the Sun’s heat in the atmosphere and turns the planet surface into a barren, molten landscape. The surface is also marked by extensive volcanoes and lava flows, and rained on by clouds of sulfuric acid. Not a hospitable place by any measure!
Earth:
Earth is the third planet from the Sun, and so far is the only planet that we know of that is capable of supporting life. The average surface temperature here is about 14 °C, but it varies due to a number of factors. For one, our world’s axis is tilted, which means that one hemisphere is slanted towards the Sun during certain times of the year while the other is slanted away.
This not only causes seasonal changes, but ensures that places located closer to the equator are hotter, while those located at the poles are colder. It’s little wonder, then, that the hottest temperature ever recorded on Earth was in the deserts of Iran (70.7 °C), while the lowest was recorded in Antarctica (-89.2 °C).
Mars:
Mars’ average surface temperature is -55 °C, but the Red Planet also experiences some variability, with temperatures ranging from as high as 20 °C at the equator during midday to as low as -153 °C at the poles. On average, though, it is much colder than Earth – due in part to sitting just on the outer edge of the habitable zone, and in part to its thin atmosphere, which is not sufficient to retain heat.
In addition, its surface temperature can vary by as much as 20 °C due to Mars’ eccentric orbit around the Sun (meaning that it is closer to the Sun at certain points in its orbit than at others).
Jupiter:
Since Jupiter is a gas giant, it has no solid surface, so it has no surface temperature. But measurements taken from the top of Jupiter’s clouds indicate a temperature of approximately -145°C. Closer to the center, the planet’s temperature increases due to atmospheric pressure.
At the point where atmospheric pressure is ten times what it is on Earth, the temperature reaches 21°C, what we Earthlings consider a comfortable “room temperature”. At the core of the planet, the temperature is much higher, reaching as much as 35,700°C – hotter than even the surface of the Sun.
Saturn:
Due to its distance from the Sun, Saturn is a rather cold gas giant, with an average temperature of -178 °C. But because of Saturn’s tilt, the southern and northern hemispheres are heated differently, causing seasonal temperature variation.
And much like Jupiter, the temperature in the upper atmosphere of Saturn is cold, but increases closer to the center of the planet. At the core of the planet, temperatures are believed to reach as high as 11,700 °C.
Uranus:
Uranus is the coldest planet in our Solar System, with a lowest recorded temperature of -224 °C. And despite its distance from the Sun, the largest contributing factor to its frigid nature is actually its core.
Much like the other gas giants in our Solar System, the core of Uranus gives off more heat than it absorbs from the Sun. However, with a core temperature of approximately 4,737 °C, Uranus’ interior gives off only one-fifth the heat that Jupiter’s does, and less than half that of Saturn’s.
Neptune:
With temperatures dropping to -218°C in Neptune’s upper atmosphere, the planet is one of the coldest in our Solar System. And like all of the gas giants, Neptune has a much hotter core, which is around 7,000°C.
In short, the Solar System runs the gamut from extreme cold to extreme heat, with plenty of variance and only a few places that are temperate enough to sustain life. And of all of those, it is only planet Earth that seems to strike the careful balance required to sustain it perpetually.
At this time of year, festive displays of light are to be expected. This tradition has clearly not been lost on the galaxies NGC 2207 and IC 2163. Just in time for the holidays, these colliding galaxies, which are located within the Canis Major constellation (some 130 million light-years from Earth), were seen putting on a spectacular light display for us folks here on Earth!
And while this galactic pair has been known to produce a lot of intense light over the years, the image above is especially luminous. A composite using data from the Chandra Observatory and the Hubble and Spitzer Space Telescopes, it shows the combination of visible, X-ray, and infrared light coming from the pair.
In the past fifteen years, NGC 2207 and IC 2163 have hosted three supernova explosions and produced one of the largest collections of super-bright X-ray sources in the known universe. These special objects – known as “ultraluminous X-ray sources” (ULXs) – have been found using data from NASA’s Chandra X-ray Observatory.
While the true nature of ULXs is still being debated, it is believed that they are a peculiar type of X-ray binary. These consist of a star in a tight orbit around either a neutron star or a black hole. The strong gravity of the compact object pulls matter from the companion star, and as this matter falls inward, it is heated to millions of degrees and generates X-rays.
Data obtained from Chandra has shown that – much like the Milky Way Galaxy – NGC 2207 and IC 2163 are sprinkled with many X-ray binaries. In the new Chandra image, this X-ray data is rendered in pink, revealing the sheer prevalence of X-ray sources within both galaxies.
Meanwhile, optical light data from the Hubble Space Telescope is rendered in red, green, and blue (also appearing as blue, white, orange, and brown due to color combinations), and infrared data from the Spitzer Space Telescope is shown in red.
The Chandra Observatory spent far more time observing these galaxies than any previous ULX study – roughly five times as much. As a result, the study team – which consisted of researchers from Harvard University, MIT, and Sam Houston State University – was able to confirm the existence of 28 ULXs between NGC 2207 and IC 2163, seven of which had never been seen before.
In addition, the Chandra data allowed the team of scientists to observe the correlation between X-ray sources in different regions of the galaxy and the rate at which stars are forming in those same regions.
As the new Chandra image shows, the spiral arms of the galaxies – where large amounts of star formation are known to be occurring – show the heaviest concentrations of ULXs, optical light, and infrared emission. This correlation suggests that the companion stars in the X-ray binaries are young and massive.
This in turn presents another possibility which has to do with star formation during galactic mergers. When galaxies come together, they produce shock waves that cause clouds of gas within them to collapse, leading to periods of intense star formation and the creation of star clusters.
The fact that the ULXs and their companion stars are young (the researchers estimate that they are only 10 million years old) would seem to confirm that they are the result of NGC 2207 and IC 2163 coming together. This seems a likely explanation, since the merger between these two galaxies is still in its infancy – as attested to by the fact that the galaxies are still separate.
They are expected to collide soon (astronomically speaking), a process which will make them look more like the Mice Galaxies (pictured above). In about one billion years’ time, they are expected to finish the process, forming a spiral galaxy that would no doubt resemble our own.
For most of us here on planet Earth, sunrise, sunset, and the cycle of day and night (aka. the diurnal cycle) are just simple facts of life. As a result of seasonal changes that happen with every passing year, the length of day and night can vary – and be either longer or shorter – by just a few hours. But in some regions of the world (i.e. the poles), the Sun does not set during certain times of the year. And there are also seasonal periods when a single night can last for many days.
Naturally, this gives rise to certain questions. Namely, what causes the cycle of day and night, and why don’t all places on the planet experience the same patterns? As with many other seasonal experiences, the answer comes down to two facts: one, the Earth rotates on its axis as it orbits the Sun; and two, Earth’s axis is tilted.
Earth’s Rotation:
Earth’s rotation occurs from west to east, which is why the Sun always appears to be rising on the eastern horizon and setting on the western. If you could view the Earth from above, looking down at the northern polar region, the planet would appear to be rotating counter-clockwise. However, viewed from the southern polar region, it appears to be rotating clockwise.
The Earth rotates once in about 24 hours with respect to the Sun and once every 23 hours 56 minutes and 4 seconds with respect to the stars. What’s more, its central axis is aligned with two stars. The northern axis points outward to Polaris, hence why it is called “the North Star”, while its southern axis points to Sigma Octantis.
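That four-minute difference between the solar and sidereal day follows directly from Earth’s motion along its orbit: each day, the planet advances about one degree around the Sun, so it must rotate a little more than 360° before the Sun returns to the same spot in the sky. A quick back-of-the-envelope check, using standard values for the sidereal day and year:

```python
# Over one year, Earth "uses up" exactly one rotation catching up with the
# Sun, which relates the solar day to the sidereal day and the year:
#   1/T_solar = 1/T_sidereal - 1/T_year
SIDEREAL_DAY_S = 86_164.1            # 23 h 56 min 4 s, relative to the stars
SIDEREAL_YEAR_S = 365.256 * 86_400   # one orbit, relative to the stars

solar_day_s = 1 / (1 / SIDEREAL_DAY_S - 1 / SIDEREAL_YEAR_S)

print(f"Solar day:  {solar_day_s:.0f} s")                   # ~86,400 s (24 hours)
print(f"Difference: {solar_day_s - SIDEREAL_DAY_S:.0f} s")  # ~236 s, about 4 minutes
```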
Axial Tilt:
As already noted, due to the Earth’s axial tilt (or obliquity), day and night are not evenly divided. If the Earth’s axis were perpendicular to its orbital plane around the Sun, all places on Earth would experience equal amounts of day and night (i.e. 12 hours of day and night, respectively) every day during the year and there would be no seasonal variability.
Instead, at any given time of the year, one hemisphere is pointed slightly more towards the Sun, leaving the other pointed away. During this time, one hemisphere will be experiencing warmer temperatures and longer days while the other will experience colder temperatures and longer nights.
Seasonal Changes:
Of course, since the Earth is orbiting the Sun and not just rotating on its axis, this process is reversed over the course of a year. Every six months, the Earth completes half an orbit and moves to the other side of the Sun, allowing the other hemisphere to experience longer days and warmer temperatures.
Consequently, in extreme places like the North and South Poles, daylight or darkness can last for months on end. Those times of the year when the northern and southern hemispheres experience their longest days and nights are called solstices, and these occur twice a year – once for each hemisphere.
The Summer Solstice takes place between June 20th and 22nd in the northern hemisphere and between December 20th and 23rd each year in the southern hemisphere. The Winter Solstice occurs at the same time but in reverse – between Dec. 20th and 23rd for the northern hemisphere and June 20th and 22nd for the southern hemisphere.
According to NOAA, around the Winter Solstice at the North Pole there will be no sunlight or even twilight beginning in early October, and the darkness lasts until the beginning of dawn in early March. Conversely, around the Summer Solstice, the North Pole stays in full sunlight all day long throughout the entire summer (unless there are clouds). After the Summer Solstice, the sun starts to sink towards the horizon.
Another common feature of the cycle of day and night is the visibility of the Moon, the stars, and other celestial bodies. Technically, the Moon is not only visible at night. On certain days, when it is favorably positioned in its orbit relative to the Sun, the Moon can be seen during the daytime. The stars and the other planets of our Solar System, however, are generally only visible at night, once the Sun has fully set.
The reason for this is that the light of these objects is too faint to be seen during daylight hours. The Sun, being the closest star to us and the most radiant object visible from Earth, naturally drowns them out when it is overhead. At night, however, with our side of the Earth turned away from the Sun, we are able to see the Moon reflecting the Sun’s light more clearly, and the light of the stars becomes detectable.
On an especially clear night, and assuming light pollution is not a major factor, the glowing band of the Milky Way and other clouds of dust and gas may also be visible in the night sky. These objects are more distant than the stars in our vicinity of the Galaxy, and therefore appear fainter and are more difficult to see.
Another interesting thing about the cycle of day and night is that it is gradually getting longer. This is due to the tidal effects the Moon has on Earth’s rotation, which are slowly lengthening our days (but only marginally). According to atomic clocks around the world, the modern day is about 1.7 milliseconds longer than it was a century ago – a change which may require the addition of more leap seconds in the future.
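To see why a change of 1.7 milliseconds per century matters, consider how the daily surplus accumulates: every day that runs slightly longer than 86,400 seconds adds its excess to the growing gap between Earth-rotation time and atomic time, which is the gap leap seconds are meant to absorb. A rough sketch, assuming the day length grows linearly over the century:

```python
# Accumulate the excess day length over one century, assuming it grows
# linearly from 0 to +1.7 ms. Each day's surplus adds to the running gap
# between Earth-rotation time and atomic time.
MS_PER_CENTURY = 1.7
DAYS_PER_CENTURY = 36_525

total_lag_ms = 0.0
for day in range(DAYS_PER_CENTURY):
    excess_ms = MS_PER_CENTURY * day / DAYS_PER_CENTURY
    total_lag_ms += excess_ms

print(f"Accumulated lag after one century: {total_lag_ms / 1000:.0f} s")  # ~31 s
```

A tiny daily change, in other words, adds up to a clock discrepancy of roughly half a minute per century.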
We have many interesting articles on Earth’s rotation here at Universe Today. To learn more about the solstices, be sure to check out our articles on the Shortest Day of the Year and the Summer Solstice.
Since the DARPA Robotics Challenge (DRC) was first announced in 2012, NASA has been a major contender in the competition. The challenge – which involves robots navigating obstacle courses and using tools and vehicles – was conceived by DARPA to see just how capable robots could be at handling disaster response.
The Finals for this challenge will be taking place on June 5th and 6th, 2015, at Fairplex in Pomona, California. And after making it this far with their RoboSimian design, NASA was faced with a difficult question. Should their robotic primate continue to represent them, or should that honor go to their recently unveiled Surrogate robot?
As the saying goes “you dance with the one who brung ya.” In short, NASA has decided to stick with RoboSimian as they advance into the final round of obstacles and tests in their bid to win the DRC and the $2 million prize.
Surrogate’s unveiling took place this past October 24th at NASA’s Jet Propulsion Laboratory in Pasadena, California. The appearance of this robot on stage, to the theme song of 2001: A Space Odyssey, was held on the same day that Thomas Rosenbaum was inaugurated as the new president of the California Institute of Technology.
In honor of the occasion, Surrogate (aka “Surge”) strutted its way across the stage to present a digital tablet to Rosenbaum, which he used to push a button that initiated commands for NASA’s Mars rover Curiosity. Despite the festive nature of the occasion, this scene was quite calm compared to what the robot was designed for.
“Surge and its predecessor, RoboSimian, were designed to extend humanity’s reach, going into dangerous places such as a nuclear power plant during a disaster scenario such as we saw at Fukushima. They could take simple actions such as turning valves or flipping switches to stabilize the situation or mitigate further damage,” said Brett Kennedy, principal investigator for the robots at JPL.
RoboSimian was originally created for the DARPA Robotics Challenge, and during the trial round last December, the JPL team’s robot won a spot to compete in the finals, which will be held in Pomona, California, in June 2015.
With the support of the Defense Threat Reduction Agency and the Robotics Collaborative Technology Alliance, construction of the Surrogate robot began in 2014. Its designers started by incorporating some of RoboSimian’s extra limbs, and then added a wheeled base, a twisty spine, an upper torso, and a head for holding sensors.
Additional components include the hat-like appendage on top, which is in fact a LiDAR (Light Detection and Ranging) device. This device spins and shoots out laser beams in a 360-degree field to map the surrounding environment in 3-D.
Choosing between them was a tough call, and took the better part of the last six months. On the one hand, Surrogate was designed to be more like a human. It has an upright spine, two arms and a head, standing about 1.4 meters (4.5 feet) tall and weighing about 91 kilograms (200 pounds). Its major strength is in how it handles objects, and its flexible spine allows for extra manipulation capabilities. But the robot moves on tracks, which doesn’t allow it to move over tall objects, such as flights of stairs, ladders, rocks, and rubble.
RoboSimian, by contrast, is more ape-like, moving around on four limbs. It is better suited to travel over complicated terrain and is an adept climber. In addition, Surrogate has only one set of “eyes” – two cameras that allow for stereo vision – mounted to its head, whereas RoboSimian has up to seven sets of eyes mounted all over its body.
The robots also run on almost identical computer code, and the software that plans their motion is very similar. As in a video game, each robot has an “inventory” of objects with which it can interact. Engineers have to program the robots to recognize these objects and perform pre-set actions on them, such as turning a valve or climbing over blocks.
In the end, they came to a decision. RoboSimian will represent the team in Pomona.
“It comes down to the fact that Surrogate is a better manipulation platform and faster on benign surfaces, but RoboSimian is an all-around solution, and we expect that the all-around solution is going to be more competitive in this case,” Kennedy said.
The RoboSimian team at JPL is collaborating with partners at the University of California, Santa Barbara, and Caltech to get the robot to walk more quickly. JPL researchers also plan to put a LiDAR on top of RoboSimian in the future. These efforts seek to improve the robot in the long run, but are also aimed at getting it ready to face the challenges of the DARPA Robotics Challenge Finals.
Specifically, it will be faced with such tasks as driving a vehicle and getting out of it, negotiating debris blocking a doorway, cutting a hole in a wall, opening a valve, and crossing a field with cinderblocks or other debris. There will also be a surprise task.
Although RoboSimian is now the focus of Kennedy’s team, Surrogate won’t be forgotten.
“We’ll continue to use it as an example of how we can take RoboSimian limbs and reconfigure them into other platforms,” Kennedy said.
When someone mentions “different dimensions,” we tend to think of things like parallel universes – alternate realities that exist parallel to our own but where things work differently. However, the reality of dimensions and how they play a role in the ordering of our Universe is really quite different from this popular characterization.
To break it down, dimensions are simply the different facets of what we perceive to be reality. We are immediately aware of the three dimensions that surround us – those that define the length, width, and depth of all objects in our universe (the x, y, and z axes, respectively).
Beyond these three visible dimensions, scientists believe that there may be many more. In fact, the theoretical framework of Superstring Theory posits that the Universe exists in ten different dimensions. These different aspects govern the Universe, the fundamental forces of nature, and all the elementary particles contained within.
The first dimension, as already noted, is that which gives an object its length (aka. the x-axis). A good description of a one-dimensional object is a straight line, which exists only in terms of length and has no other discernible qualities. Add to that a second dimension, the y-axis (or height), and you get a 2-dimensional shape (like a square).
The third dimension involves depth (the z-axis), and gives all objects a sense of volume and a cross-section. The perfect example of this is a cube, which exists in three dimensions and has a length, width, depth, and hence volume. Beyond these three dimensions reside the seven that are not immediately apparent to us, but which can still be perceived as having a direct effect on the Universe and reality as we know it.
Scientists believe that the fourth dimension is time, which governs the properties of all known matter at any given point. Along with the three other dimensions, knowing an object’s position in time is essential to plotting its position in the Universe. The other dimensions are where the deeper possibilities come into play, and explaining their interaction with the others is where things get particularly tricky for physicists.
According to Superstring Theory, the fifth and sixth dimensions are where the notion of possible worlds arises. If we could see on through to the fifth dimension, we would see a world slightly different from our own, giving us a means of measuring the similarity and differences between our world and other possible ones.
In the sixth, we would see a plane of possible worlds, where we could compare and position all the possible universes that start with the same initial conditions as this one (i.e., the Big Bang). In theory, if you could master the fifth and sixth dimensions, you could travel back in time or go to different futures.
In the seventh dimension, you have access to the possible worlds that start with different initial conditions. Whereas in the fifth and sixth, the initial conditions were the same, and subsequent actions were different, everything is different from the very beginning of time. The eighth dimension again gives us a plane of such possible universe histories. Each begins with different initial conditions and branches out infinitely (hence why they are called infinities).
In the ninth dimension, we can compare all the possible universe histories, starting with all the different possible laws of physics and initial conditions. In the tenth and final dimension, we arrive at the point where everything possible and imaginable is covered. Beyond this, nothing can be imagined by us lowly mortals, which makes it the natural limitation of what we can conceive in terms of dimensions.
The existence of these additional six dimensions, which we cannot perceive, is necessary for String Theory to be consistent with nature. The fact that we perceive only four dimensions (three of space plus one of time) can be explained by one of two mechanisms: either the extra dimensions are compactified on a very small scale, or else our world lives on a 3-dimensional submanifold corresponding to a brane, to which all known particles and forces apart from gravity are restricted (aka. brane theory).
If the extra dimensions are compactified, then the extra six dimensions must be in the form of a Calabi–Yau manifold (shown above). While imperceptible as far as our senses are concerned, they would have governed the formation of the Universe from the very beginning. Hence why scientists believe that by peering back through time and using telescopes to observe light from the early Universe (i.e., billions of years ago), they might be able to see how the existence of these additional dimensions could have influenced the evolution of the cosmos.
Much like other candidates for a grand unifying theory – aka the Theory of Everything (TOE) – the belief that the Universe is made up of ten dimensions (or more, depending on which model of string theory you use) is an attempt to reconcile the standard model of particle physics with the existence of gravity. In short, it is an attempt to explain how all known forces within our Universe interact and how other possible universes themselves might work.
There are also some other great resources online, including a video that explains the ten dimensions in detail and the PBS website for the TV show Elegant Universe, which has a page devoted to the ten dimensions.
Just how did the Earth — our home and the place where life as we know it evolved — come to be created in the first place? In some fiery furnace atop a great mountain? On some divine forge with the hammer of the gods shaping it out of pure ether? How about from a great ocean known as Chaos, where something was created out of nothing and then filled with all living creatures?
If any of those accounts sound familiar, they are some of the ancient legends that have been handed down through the years that attempt to describe how our world came to be. And interestingly enough, some of these ancient creation stories contain an element of scientific fact to them.
Heat is an interesting form of energy. Not only does it sustain life, make us comfortable and help us prepare our food, but understanding its properties is key to many fields of scientific research. For example, knowing how heat is transferred and the degree to which different materials can exchange thermal energy governs everything from building heaters and understanding seasonal change to sending ships into space.
Heat can only be transferred through three means: conduction, convection and radiation. Of these, conduction is perhaps the most common, and occurs regularly in nature. In short, it is the transfer of heat through physical contact. It occurs when you press your hand onto a window pane, when you place a pot of water on an active element, and when you place an iron in the fire.
This transfer occurs at the molecular level — from one body to another — when heat energy is absorbed by a surface and causes the molecules of that surface to move more quickly. In the process, they bump into their neighbors and transfer the energy to them, a process which continues as long as heat is still being added.
The process of heat conduction depends on four basic factors: the temperature gradient, the cross section of the materials involved, their path length, and the properties of those materials.
A temperature gradient is a physical quantity that describes in which direction and at what rate the temperature changes at a specific location. Heat always flows from the hottest source to the coldest, since cold is nothing but the absence of heat energy. This transfer between bodies continues until the temperature difference decays and a state known as thermal equilibrium is reached.
Cross-section and path length are also important factors. The greater the cross-section through which heat is transferred, the more heat flows; and the longer the path it must travel, the less heat makes it through. Also, the more surface area that is exposed to open air, the greater the likelihood of heat loss. So long objects with a small cross-section are the best means of minimizing the loss of heat energy.
Last, but certainly not least, are the physical properties of the materials involved. Basically, when it comes to conducting heat, not all substances are created equal. Metals and stone are considered good conductors, since they can speedily transfer heat, whereas materials like wood, paper, air, and cloth are poor conductors.
These conductive properties are rated based on a “coefficient” which is measured relative to silver. In this respect, silver has a coefficient of heat conduction of 100, whereas other materials are ranked lower. These include copper (92), iron (11), water (0.12), and wood (0.03). At the opposite end of the spectrum is a perfect vacuum, which is incapable of conducting heat, and is therefore ranked at zero.
Materials that are poor conductors of heat are called insulators. Air, which has a conduction coefficient of 0.006, is an exceptional insulator when it can be trapped within an enclosed space. This is why artificial insulators make use of air compartments, such as the double-pane glass windows used for cutting heating bills. Basically, they act as buffers against heat loss.
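These four factors combine into a single formula – Fourier’s law of heat conduction – which states that the rate of heat flow equals the conductivity, times the cross-section, times the temperature difference, divided by the path length. A minimal sketch, using typical handbook conductivity values in SI units (rather than the silver-relative scale above) to compare a single pane of glass with a trapped-air gap:

```python
# Fourier's law of heat conduction: Q = k * A * dT / L
def conduction_rate_watts(k_w_per_mk: float, area_m2: float,
                          delta_t_c: float, thickness_m: float) -> float:
    """Steady-state heat flow through a slab, in watts."""
    return k_w_per_mk * area_m2 * delta_t_c / thickness_m

# A 1 m^2 single-pane window, 4 mm of glass (k ~ 0.8 W/m-K is a typical
# handbook value), with a 15 C difference between inside and outside:
single_pane = conduction_rate_watts(0.8, 1.0, 15.0, 0.004)

# The same window with a 12 mm gap of trapped air (k ~ 0.026 W/m-K):
air_gap = conduction_rate_watts(0.026, 1.0, 15.0, 0.012)

print(f"Single pane: {single_pane:.0f} W")  # ~3000 W
print(f"Air gap:     {air_gap:.1f} W")      # ~32 W
```

The roughly hundredfold difference is why double-pane windows – and trapped-air insulators in general – are so effective.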
Feathers, fur, and natural fibers are all examples of natural insulators. These are materials that allow birds, mammals, and human beings to stay warm. Sea otters, for example, live in ocean waters that are often very cold, and their luxuriously thick fur keeps them warm. Other marine animals, like sea lions, whales, and penguins, rely on thick layers of fat (aka. blubber) – a very poor conductor – to prevent heat loss through their skin.
This same logic is applied to insulating homes, buildings, and even spacecraft. In these cases, the methods involve either trapped air pockets between walls, fiberglass (which traps air within it), or high-density foam. Spacecraft are a special case, using insulation in the form of foam, reinforced carbon composite material, and silica fiber tiles. All of these are poor conductors of heat, and therefore prevent heat from being lost to space while also keeping the extreme temperatures caused by atmospheric reentry out of the crew cabin.
See this video demonstration of the heat tiles on the Space Shuttle:
The laws governing the conduction of heat are very similar to Ohm’s Law, which governs electrical conduction. In this case, a good conductor is a material that allows electrical current (i.e. electrons) to pass through it without much trouble. An electrical insulator, by contrast, is any material whose internal electric charges do not flow freely, making it very hard to conduct an electric current under the influence of an electric field.
In most cases, materials that are poor conductors of heat are also poor conductors of electricity, and good thermal conductors conduct electricity well. Copper, for instance, is good at conducting both heat and electricity, hence why copper wires are used so widely in the manufacture of electronics. Gold and silver are better still, and where price is not an issue, these materials are used in the construction of electrical circuits as well.
And when one is looking to “ground” a charge (i.e. neutralize it), it is sent through a physical connection to the Earth, where the charge is dissipated. This is common with electrical circuits where exposed metal is a factor, ensuring that people who accidentally come into contact are not electrocuted.
Insulating materials, such as rubber on the soles of shoes, are worn to ensure that people working with sensitive materials or around electrical sources are protected from electrical charges. Other insulating materials, like glass, polymers, or porcelain, are commonly used on power lines and high-voltage power transmitters to keep power flowing to the circuits (and nothing else!)
In short, conduction comes down to the transfer of heat or of an electrical charge. Both happen as a result of a substance’s ability to allow energy to be passed along through it, from particle to particle.
During the Hadean Eon, some 4.5 billion years ago, the world was a much different place than it is today. As the name – derived from Hades, Greek for “underworld” – would suggest, it was a hellish period for Earth, marked by intense volcanism and frequent meteoric impacts. It was also during this time that outgassing and volcanic activity produced the primordial atmosphere, composed of carbon dioxide, hydrogen, and water vapor.
Little of this primordial atmosphere remains, and geothermal evidence suggests that the Earth’s atmosphere may have been completely obliterated at least twice since its formation more than 4 billion years ago. Until recently, scientists were uncertain as to what could have caused this loss.
But a new study from MIT, Hebrew University, and Caltech indicates that the intense bombardment of meteorites in this period may have been responsible.
This meteoric bombardment would have taken place at around the same time that the Moon was formed. The intense pounding of space rocks would have kicked up clouds of gas with enough force to permanently eject the atmosphere into space. Such impacts may have also blasted other planets, and even peeled away the atmospheres of Venus and Mars.
In fact, the researchers found that small planetesimals may be much more effective than large impactors – such as Theia, whose collision with Earth is believed to have formed the Moon – in driving atmospheric loss. Based on their calculations, it would take a giant impact to disperse most of the atmosphere; but taken together, many small impacts would have the same effect.
Hilke Schlichting, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences, says understanding the drivers of Earth’s ancient atmosphere may help scientists to identify the early planetary conditions that encouraged life to form.
“[This finding] sets a very different initial condition for what the early Earth’s atmosphere was most likely like,” Schlichting says. “It gives us a new starting point for trying to understand what was the composition of the atmosphere, and what were the conditions for developing life.”
What’s more, the group examined how much atmosphere was retained and lost following impacts with giant, Mars-sized and larger bodies and with smaller impactors measuring 25 kilometers or less.
What they found was that a collision with an impactor as massive as Mars would have the necessary effect of generating a massive shockwave through the Earth’s interior, potentially ejecting a significant fraction of the planet’s atmosphere.
However, the researchers determined that such an impact was not likely to have occurred, since it would have turned Earth’s interior into a homogeneous slurry. Given the diversity of elements observed within the Earth’s interior, such an event does not appear to have happened.
A series of smaller impactors, by contrast, would generate an explosion of sorts, releasing a plume of debris and gas. The largest of these impactors would be forceful enough to eject all gas from the atmosphere immediately above the impact zone. Only a fraction of this atmosphere would be lost following smaller impacts, but the team estimates that tens of thousands of small impactors could have pulled it off.
Such a scenario likely did occur 4.5 billion years ago, during the Hadean Eon. This was a chaotic period for the early Solar System, as hundreds of thousands of space rocks whirled about, and many are believed to have collided with Earth.
“For sure, we did have all these smaller impactors back then,” Schlichting says. “One small impact cannot get rid of most of the atmosphere, but collectively, they’re much more efficient than giant impacts, and could easily eject all the Earth’s atmosphere.”
However, Schlichting and her team realized that the sum effect of small impacts may be too efficient at driving atmospheric loss. Other scientists have measured the atmospheric composition of Earth compared with Venus and Mars; and compared to Venus, Earth’s noble gases have been depleted 100-fold. If these planets had been exposed to the same blitz of small impactors in their early history, then Venus would have no atmosphere today.
She and her colleagues went back over the small-impactor scenario to try and account for this difference in planetary atmospheres. Based on further calculations, the team identified an interesting effect: Once half a planet’s atmosphere has been lost, it becomes much easier for small impactors to eject the rest of the gas.
The researchers calculated that Venus’ atmosphere would only have to start out slightly more massive than Earth’s in order for small impactors to erode the first half of the Earth’s atmosphere, while keeping Venus’ intact. From that point, Schlichting describes the phenomenon as a “runaway process — once you manage to get rid of the first half, the second half is even easier.”
This gave rise to another important question: What eventually replaced Earth’s atmosphere? Upon further calculations, Schlichting and her team found the same impactors that ejected gas also may have introduced new gases, or volatiles.
“When an impact happens, it melts the planetesimal, and its volatiles can go into the atmosphere,” Schlichting says. “They not only can deplete, but replenish part of the atmosphere.”
The group calculated the amount of volatiles that may be released by a rock of a given composition and mass, and found that a significant portion of the atmosphere may have been replenished by the impact of tens of thousands of space rocks.
“Our numbers are realistic, given what we know about the volatile content of the different rocks we have,” Schlichting notes.
Jay Melosh, a professor of earth, atmospheric, and planetary sciences at Purdue University, says Schlichting’s conclusion is a surprising one, as most scientists have assumed the Earth’s atmosphere was obliterated by a single, giant impact. Other theories, he says, invoke a strong flux of ultraviolet radiation from the sun, as well as an “unusually active solar wind.”
“How the Earth lost its primordial atmosphere has been a longstanding problem, and this paper goes a long way toward solving this enigma,” says Melosh, who did not contribute to the research. “Life got started on Earth about this time, and so answering the question about how the atmosphere was lost tells us about what might have kicked off the origin of life.”
Going forward, Schlichting hopes to examine more closely the conditions underlying Earth’s early formation, including the interplay between the release of volatiles from small impactors and from Earth’s ancient magma ocean.
“We want to connect these geophysical processes to determine what was the most likely composition of the atmosphere at time zero, when the Earth just formed, and hopefully identify conditions for the evolution of life,” Schlichting says.
Schlichting and her colleagues have published their results in the February edition of the journal Icarus.
The Milky Way Galaxy is an immense and very interesting place. Not only does it measure some 120,000–180,000 light-years in diameter, it is home to planet Earth, the birthplace of humanity. Our Solar System resides roughly 27,000 light-years away from the Galactic Center, on the inner edge of one of the spiral-shaped concentrations of gas and dust particles called the Orion Arm.
But within these facts about the Milky Way lie some additional tidbits of information, all of which are sure to impress and inspire. Here are ten such facts, listed in no particular order:
1. It’s Warped:
For starters, the Milky Way is a disk about 120,000 light-years across, with a central bulge that has a diameter of 12,000 light-years (see the Guide to Space article for more information). The disk is far from perfectly flat, though, as can be seen in the picture below. In fact, it is warped in shape – a fact which astronomers attribute to our galaxy’s two neighbors, the Large and Small Magellanic Clouds.
These two dwarf galaxies – which are part of our “Local Group” of galaxies and may be orbiting the Milky Way – are believed to have been pulling on the dark matter in our galaxy, like in a game of galactic tug-of-war. The tugging creates a sort of oscillating frequency that pulls on the galaxy’s hydrogen gas, of which the Milky Way has lots (for more information, check out How the Milky Way got its Warp).
2. It Has a Halo, but You Can’t Directly See It:
Scientists believe that 90% of our galaxy’s mass consists of dark matter, which gives it a mysterious halo. That means that all of the “luminous matter” – i.e. that which we can see with the naked eye or a telescope – makes up less than 10% of the mass of the Milky Way. Its halo is not the conventional glowing sort we tend to think of when picturing angels or observing comets.
In this case, the halo is actually invisible, but its existence has been demonstrated by running simulations of how the Milky Way would appear without this invisible mass, and how fast the stars inside our galaxy’s disk orbit the center.
The heavier the galaxy, the faster its stars should be orbiting. If one were to assume that the galaxy is made up only of the matter that we can see, then the rotation rate would be significantly less than what we observe. Hence, the rest of that mass must be made up of an elusive, invisible substance – aka. “dark matter” – which interacts with “normal matter” only through gravity.
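The reasoning can be made concrete with simple circular-orbit dynamics: for a star orbiting at speed v at radius r, gravity supplies the centripetal force, so the mass enclosed within the orbit is M = v²r/G. A minimal sketch, using commonly quoted values for the Sun’s own orbit (both figures are assumptions for illustration):

```python
# For a circular orbit, v**2 / r = G * M_enclosed / r**2, so:
#   M_enclosed = v**2 * r / G
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
LY = 9.461e15      # one light-year in meters

v_sun = 220e3          # the Sun's orbital speed, ~220 km/s
r_sun = 27_000 * LY    # the Sun's distance from the Galactic Center

m_enclosed = v_sun**2 * r_sun / G
print(f"Mass inside the Sun's orbit: ~{m_enclosed / M_SUN:.1e} solar masses")
# ~9e10 solar masses -- yet stars farther out orbit just as fast, implying
# far more mass at large radii than the visible stars and gas can supply.
```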
To see some images of the probable distribution and density of dark matter in our galaxy, check out The Via Lactea Project.
3. It has Over 200 Billion Stars:
As galaxies go, the Milky Way is a middleweight. The largest galaxy we know of, which is designated IC 1101, has over 100 trillion stars, and other large galaxies can have as many as a trillion. Dwarf galaxies such as the aforementioned Large Magellanic Cloud have about 10 billion stars. The Milky Way has between 100 and 400 billion stars; but when you look up into the night sky, the most you can see from any one point on the globe is about 2,500. This number is not fixed, however, because the Milky Way is constantly losing stars through supernovae and producing new ones all the time (about seven per year).
4. It’s Really Dusty and Gassy:
Though it may not look like it to the casual observer, the Milky Way is full of dust and gas. This matter makes up a whopping 10-15% of the luminous/visible matter in our galaxy, with the remainder being the stars. Our galaxy is roughly 100,000 light years across, and we can only see about 6,000 light years into the disk in the visible spectrum. Still, when light pollution is not significant, the dusty ring of the Milky Way can be discerned in the night sky.
The thickness of the dust deflects visible light (as is explained here) but infrared light can pass through the dust, which makes infrared telescopes like the Spitzer Space Telescope extremely valuable tools in mapping and studying the galaxy. Spitzer can peer through the dust to give us extraordinarily clear views of what is going on at the heart of the galaxy and in star-forming regions.
5. It was Made From Other Galaxies:
The Milky Way wasn’t always as it is today – a beautiful, warped spiral. It became its current size and shape by eating up other galaxies, and is still doing so today. In fact, the Canis Major Dwarf Galaxy – the closest galaxy to the Milky Way – is currently having its stars added to the Milky Way’s disk. And our galaxy has consumed others in its long history, such as the Sagittarius Dwarf Galaxy.
6. Every Picture You’ve Seen of the Milky Way Isn’t It:
Currently, we can’t take a picture of the Milky Way from above. This is due to the fact that we are inside the galactic disk, about 26,000 light years from the galactic center. It would be like trying to take a picture of your own house from the inside. This means that any of the beautiful pictures you’ve ever seen of a spiral galaxy that is supposedly the Milky Way is either a picture of another spiral galaxy, or the rendering of a talented artist.
Imaging the Milky Way from above is a long, long way off. However, this doesn’t mean that we can’t take breathtaking images of the Milky Way from our vantage point!
7. There is a Black Hole at the Center:
Most larger galaxies have a supermassive black hole (SMBH) at the center, and the Milky Way is no exception. The center of our galaxy is marked by Sagittarius A*, a massive source of radio waves that is believed to be a black hole measuring some 22.5 million kilometers (14 million miles) across – a significant fraction of the size of Mercury’s orbit. But this is just the black hole itself.
All of the mass trying to get into the black hole forms what is called an accretion disk around it. The black hole and its surroundings contain some 4.6 million times the mass of our Sun, and would fit inside the orbit of the Earth. Though, like other black holes, Sgr A* tries to consume anything that happens to be nearby, star formation has been detected near this behemoth astronomical phenomenon.
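For a sense of scale, the size of a black hole’s event horizon follows directly from its mass via the Schwarzschild radius, r_s = 2GM/c². A quick sketch, taking the 4.6-million-solar-mass figure above at face value:

```python
# Schwarzschild radius of a non-rotating black hole: r_s = 2 * G * M / c**2
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

m_sgr_a = 4.6e6 * M_SUN
r_s = 2 * G * m_sgr_a / C**2

print(f"Event-horizon diameter: ~{2 * r_s / 1e9:.0f} million km")
# ~27 million km -- the same order as the 22.5-million-km figure quoted above.
```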
8. It’s Almost as Old as the Universe Itself:
The most recent estimates place the age of the Universe at about 13.7 billion years. Our Milky Way has been around for about 13.6 billion of those years, give or take another 800 million. The oldest stars in the Milky Way are found in globular clusters, and the age of our galaxy is determined by measuring the age of these stars and then extrapolating the age of what preceded them.
Though some of the constituents of the Milky Way have been around for a long time, the disk and bulge themselves didn’t form until about 10-12 billion years ago. And that bulge may have formed earlier than the rest of the galaxy.
9. It’s Part of the Virgo Supercluster:
As big as it is, the Milky Way is part of an even larger galactic structure. Our closest neighbors include the Large and Small Magellanic Clouds, and the Andromeda Galaxy – the closest spiral galaxy to the Milky Way. Along with some 50 other galaxies, the Milky Way and its immediate surroundings make up a cluster known as the Local Group.
And yet, this is still just a small fraction of our stellar neighborhood. Farther out, we find that the Milky Way is part of an even larger grouping of galaxies known as the Virgo Supercluster. Superclusters are groupings of galaxies on very large scales that measure in the hundreds of millions of light years in diameter. In between these superclusters are large stretches of open space where intrepid explorers or space probes would encounter very little in the way of galaxies or matter.
In the case of the Virgo Supercluster, at least 100 galaxy groups and clusters are located within its massive 33-megaparsec (110 million light-year) diameter. And a 2014 study indicates that the Virgo Supercluster is only a lobe of a still greater supercluster, Laniakea, which is centered on the Great Attractor.
10. It’s on the move:
The Milky Way, along with everything else in the Universe, is moving through space. The Earth moves around the Sun, the Sun around the Milky Way, and the Milky Way as part of the Local Group, which is moving relative to the Cosmic Microwave Background (CMB) radiation – the radiation left over from the Big Bang.
The CMB is a convenient reference point to use when determining the velocity of things in the universe. Relative to the CMB, the Local Group is calculated to be moving at a speed of about 600 km/s, which works out to about 2.2 million km/h. Such speeds stagger the mind and squash any notions of moving fast within our humble, terrestrial frame of reference!
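For the curious, the conversion in that figure is straightforward seconds-per-hour arithmetic:

```python
# 600 km/s expressed in km/h:
speed_km_s = 600
speed_km_h = speed_km_s * 3_600   # 3,600 seconds in an hour
print(f"{speed_km_h:,} km/h")     # 2,160,000 -- about 2.2 million km/h
```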
For many more facts about the Milky Way, visit the Guide to Space, listen to the Astronomy Cast episode on the Milky Way, or visit the Students for the Exploration and Development of Space at seds.org.