Searching for Life in the Multiverse

Artist concept of the multiverse. Credit: Florida State University

Other intelligent and technologically capable alien civilizations may exist in our Universe, but the problem with finding and communicating with them is that they are simply too far away for any meaningful two-way conversation. But what about the prospect of finding out whether life exists in other universes outside our own?

Theoretical physics has brought us the notion that our single universe is not necessarily all there is. The “multiverse” idea is a hypothetical mega-universe full of numerous smaller universes, including our own.

In this month’s Scientific American, Alejandro Jenkins from Florida State University and Gilad Perez, a theorist at the Weizmann Institute of Science in Israel, discuss how multiple other universes—each with its own laws of physics—may have emerged from the same primordial vacuum that gave rise to ours. Assuming they exist, many of those universes may contain intricate structures and perhaps even some forms of life. But the latest theoretical research suggests that our own universe may not be as “finely tuned” for the emergence of life as previously thought.

Jenkins and Perez write about a provocative hypothesis known as the anthropic principle, which states that the existence of intelligent life (capable of studying physical processes) imposes constraints on the possible form of the laws of physics.

Alejandro Jenkins. Credit: Florida State University

“Our lives here on Earth — in fact, everything we see and know about the universe around us — depend on a precise set of conditions that makes us possible,” Jenkins said. “For example, if the fundamental forces that shape matter in our universe were altered even slightly, it’s conceivable that atoms never would have formed, or that the element carbon, which is considered a basic building block of life as we know it, wouldn’t exist. So how is it that such a perfect balance exists? Some would attribute it to God, but of course, that is outside the realm of physics.”

The theory of “cosmic inflation,” which was developed in the 1980s in order to solve certain puzzles about the structure of our universe, predicts that ours is just one of countless universes to emerge from the same primordial vacuum. We have no way of seeing those other universes, although many of the other predictions of cosmic inflation have recently been corroborated by astrophysical measurements.

Given some of science’s current ideas about high-energy physics, it is plausible that those other universes might each have different physical interactions. So perhaps it’s no mystery that we would happen to occupy the rare universe in which conditions are just right to make life possible. This is analogous to how, out of the many planets in our universe, we occupy the rare one where conditions are right for organic evolution.

“What theorists like Dr. Perez and I do is tweak the calculations of the fundamental forces in order to predict the resulting effects on possible, alternative universes,” Jenkins said. “Some of these results are easy to predict; for example, if there was no electromagnetic force, there would be no atoms and no chemical bonds. And without gravity, matter wouldn’t coalesce into planets, stars and galaxies.

“What is surprising about our results is that we found conditions that, while very different from those of our own universe, nevertheless might allow — again, at least hypothetically — for the existence of life. (What that life would look like is another story entirely.) This actually brings into question the usefulness of the anthropic principle when applied to particle physics, and might force us to think more carefully about what the multiverse would actually contain.”

A brief overview of the article is available for free on Scientific American’s website.

Source: Florida State University

Megaparsec

velocity vs distance, from Hubble's 1929 paper

A megaparsec is a million parsecs (mega- is a prefix meaning million; think of megabyte, or megapixel), and as there are about 3.3 light-years to a parsec, a megaparsec is rather a long way. The standard abbreviation is Mpc.

Why do astronomers need to have such a large unit? When discussing distances like the size of a galaxy cluster, or a supercluster, or a void, the megaparsec is handy … just as it’s handy to use the astronomical unit (au) for solar system distances (for single galaxies, 1,000 parsecs – a kiloparsec, kpc – is a more natural scale; for cosmological distances, a gigaparsec (Gpc) is sometimes used).

Reminder: a parsec (a parallax of one arc-second, or arcsec) is a natural distance unit (for astronomers at least) because the astronomical unit (the length of the semi-major axis of the Earth’s orbit around the Sun, sorta) and arcsec are everyday units (again, for astronomers at least). Fun fact: even though the first stellar parallax distance was published in 1838, it wasn’t until 1913 that the word ‘parsec’ appeared in print!

As a parsec is approximately 3.09 x 10^16 meters, a megaparsec is about 3.09 x 10^22 meters.
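
Here is a quick back-of-the-envelope check of those conversions in Python (the rounded constants are just the values quoted above):

```python
# Quick check of the unit conversions quoted above (rounded values).
PARSEC_M = 3.09e16        # meters in one parsec
LIGHT_YEAR_M = 9.46e15    # meters in one light-year

megaparsec_m = 1e6 * PARSEC_M
print(f"1 Mpc ~ {megaparsec_m:.2e} m")                       # ~3.09e+22 m
print(f"1 pc  ~ {PARSEC_M / LIGHT_YEAR_M:.2f} light-years")   # ~3.27
```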

You’ll most likely come across the megaparsec first, and most often, in regard to the Hubble constant, which is the value of the slope of the straight line in a graph of the Hubble relationship (or Hubble’s Law) – redshift (converted to a recession velocity) vs distance. As that velocity is in units of kilometers per second (km/s), and as distance is in units of megaparsecs (for the sorts of distances used in the Hubble relationship), the Hubble constant is nearly always stated in units of km/s/Mpc (e.g. 72 +/- 8 km/s/Mpc, or 72 +/- 8 km s^-1 Mpc^-1 – that’s its estimated value from the Hubble Key Project).

John Huchra’s page on the Hubble constant is great for seeing megaparsecs in action.

Given the ubiquity of megaparsecs in extragalactic astronomy, hardly any Universe Today article on this topic is without its mention! Some examples: Chandra Confirms the Hubble Constant, Radio Astronomy Will Get a Boost With the Square Kilometer Array, and Astronomers Find New Way to Measure Cosmic Distances.

Questions Show #7, an Astronomy Cast episode, has megaparsecs in action, as does this other Questions Show.

Quintessence

Quintessence is one idea – hypothesis – of what dark energy is (remember that dark energy is the shorthand expression for the apparent acceleration of the expansion of the universe … or for the form of mass-energy which causes this observed acceleration, in cosmological models built with Einstein’s theory of general relativity).

The word quintessence means fifth essence, and is kinda cute … remember Earth, Water, Fire, and Air, the ‘four essences’ of the Ancient Greeks? Well, in modern cosmology, there are also four essences: normal matter, radiation (photons), cold dark matter, and neutrinos (which are hot dark matter!).

Quintessence covers a range of hypotheses (or models); the main difference between quintessence as a (possible) explanation for dark energy and the cosmological constant Λ (which harks back to Einstein and the early years of the 20th century) is that quintessence varies with time (albeit slooowly), and can also vary with location (space). One version of quintessence is phantom energy, in which the energy density increases with time, and leads to a Big Rip end of the universe.

Quintessence, as a scalar field, is not the least bit unusual in physics (the Newtonian gravitational potential field is one example of a real scalar field; the Higgs field of the Standard Model of particle physics is an example of a complex scalar field); however, it has some difficulties in common with the cosmological constant (in a nutshell, how can it be so small?).

Can quintessence be observed? Or, rather, can quintessence be distinguished from a cosmological constant? In astronomy, yes … by finding a way to observe (and measure) the acceleration of the universe at widely different times (quintessence and Λ predict different results). Another way might be to observe variations in the fundamental constants (e.g. the fine structure constant) or violations of Einstein’s equivalence principle.
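
To get a feel for the difference, here is a minimal sketch of how the dark energy density would evolve with the cosmic scale factor for a constant equation of state w, using the standard scaling rho ∝ a^(-3(1+w)); the w = -0.9 value is purely illustrative (real quintessence models let w itself evolve), while w = -1 corresponds to a cosmological constant:

```python
import numpy as np

def dark_energy_density(a, w):
    """Dark energy density relative to today (a = 1), for a constant equation of state w.
    Standard scaling: rho(a) = a**(-3 * (1 + w))."""
    return a ** (-3.0 * (1.0 + w))

a = np.linspace(0.5, 1.0, 6)            # scale factor, from half its present value to today
print(dark_energy_density(a, -1.0))     # cosmological constant: density stays exactly 1
print(dark_energy_density(a, -0.9))     # illustrative quintessence-like w: density drifts slowly
```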

One project seeking to measure the acceleration of the universe more accurately was ESSENCE (“Equation of State: SupErNovae trace Cosmic Expansion”).

In 1999, a year after the discovery of dark energy, CERN Courier published a nice summary of cosmology as it was understood then, The quintessence of cosmology (it’s well worth a read, though a lot has happened in the past decade).

Universe Today articles? Yep! For example Will the Universe Expand Forever?, More Evidence for Dark Energy, and Hubble Helps Measure the Pace of Dark Energy.

Astronomy Cast episodes relevant to quintessence include What is the universe expanding into?, and A Universe of Dark Energy.

Source: NASA

Baryon

Today, CERN announced that the LHCb experiment had revealed the existence of two new baryon subatomic particles. Credit: CERN/LHC/GridPP

Particles made up of three quarks are called baryons; the two best known baryons are the proton (made up of two up quarks and one down) and the neutron (two down quarks and one up). Together with the mesons – particles comprised of a quark and an antiquark – baryons form the hadrons (you’ve heard of hadrons, they’re part of the name of the world’s most powerful particle collider, the Large Hadron Collider, the LHC).
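
As a quick illustration of how a baryon’s properties follow from its quark content, here is a minimal sketch that simply adds up the standard quark charges (+2/3 for an up quark, -1/3 for a down quark):

```python
from fractions import Fraction

# Electric charges of the light quarks, in units of the elementary charge.
QUARK_CHARGE = {"u": Fraction(2, 3), "d": Fraction(-1, 3)}

def baryon_charge(quarks):
    """Total electric charge of a three-quark combination, e.g. 'uud'."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(baryon_charge("uud"))   # proton:  1
print(baryon_charge("udd"))   # neutron: 0
```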

Because they’re made up of quarks, baryons ‘feel’ the strong force (or strong nuclear force as it is also called), which is mediated by gluons. The other kind of particle which makes up ordinary matter is the lepton; leptons are not – as far as we know – made up of anything (and as they do not contain quarks, they do not participate in the strong interaction … which is another way of saying they do not experience the strong force); the electron is one kind of lepton. Baryons and leptons are fermions, so they obey the Pauli exclusion principle (which, among other things, says that there can be no more than one fermion in a particular quantum state at any time … and is ultimately why you do not fall through your chair).

In the kinds of environments we are familiar with in everyday life, the only stable baryon is the proton; in the environment of the nuclei of most atoms, the neutron is also stable (and in the extreme environment of a neutron star too); there are, however, hundreds of different kinds of unstable baryons.

One big, open question in cosmology is how baryons were formed – baryogenesis – and why there are essentially no anti-baryons in the universe. For every baryon, there is a corresponding anti-baryon … there is, for example, the anti-proton, the anti-baryon counterpart to the proton, made up of two up anti-quarks and one down anti-quark. So if there were equal numbers of baryons and anti-baryons to start with, how come there are almost none of the latter today?

Astronomers often use the term ‘baryonic matter’ to refer to ordinary matter; it’s a bit of a misnomer, because it includes electrons (which are leptons) … and it generally excludes neutrinos (and anti-neutrinos), which are also leptons! Perhaps a better term might be ‘matter which interacts via electromagnetism’ (i.e. feels the electromagnetic force), but that’s a bit of a mouthful. Non-baryonic matter is what (cold) dark matter (CDM) is composed of; CDM does not interact electromagnetically.

The Particle Data Group maintains summary tables of the properties of all known baryons. A relatively new area of research in astrophysics (and cosmology) is baryon acoustic oscillations (BAO); read more about it at this Los Alamos National Laboratory website …

… and in the Universe Today article New Search for Dark Energy Goes Back in Time. Other Universe Today stories featuring baryons explicitly include Is Dark Matter Made Up of Sterile Neutrinos?, and Astronomers on Supernova High Alert.

Sources:
Wikipedia
Hyperphysics

Early Galaxy Pinpoints Reionization Era

This is a composite of false color images of the galaxies found at the early epoch around 800 million years after the Big Bang. The upper left panel presents the galaxy confirmed in the 787 million year old universe. These galaxies are in the Subaru Deep Field. Credit: M. Ouchi et al.

Astronomers looking to pinpoint when the reionization of the Universe took place have found some of the earliest galaxies, dating from about 800 million years after the Big Bang. The 22 early galaxies were found using a method that looks for faraway, redshifted sources that disappear, or “drop out,” of images at a specific wavelength. The age of one galaxy was confirmed by a characteristic neutral hydrogen signature, placing it at 787 million years after the Big Bang. The finding is the first age confirmation of a so-called dropout galaxy at that distant time and pinpoints when the reionization epoch likely began.

The reionization period is about the farthest back in time that astronomers can observe. The Big Bang, 13.7 billion years ago, created a hot, murky universe. Some 400,000 years later, temperatures cooled, electrons and protons joined to form neutral hydrogen, and the murk cleared. Some time before 1 billion years after the Big Bang, neutral hydrogen began to form stars in the first galaxies, which radiated energy and changed the hydrogen back to being ionized. Although not the thick plasma soup of the earlier period just after the Big Bang, this star formation started the reionization epoch.

Astronomers know that this era ended about 1 billion years after the Big Bang, but when it began has eluded them.

“We look for ‘dropout’ galaxies,” said Masami Ouchi, who led a US and Japanese team of astronomers looking back at the reionization epoch. “We use progressively redder filters that reveal increasing wavelengths of light and watch which galaxies disappear from or ‘drop out’ of images made using those filters. Older, more distant galaxies ‘drop out’ of progressively redder filters and the specific wavelengths can tell us the galaxies’ distance and age. What makes this study different is that we surveyed an area that is over 100 times larger than previous ones and, as a result, had a larger sample of early galaxies (22) than past surveys. Plus, we were able to confirm one galaxy’s age,” he continued. “Since all the galaxies were found using the same dropout technique, they are likely to be the same age.”
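
The physics behind the dropout is simple redshifting: the rest-frame Lyman-alpha line at 121.6 nm gets stretched far into the red, so a sufficiently distant galaxy simply vanishes from bluer filters. Here is a minimal sketch, where z ≈ 7 is used as a round, illustrative value for the ~800-million-year epoch:

```python
LYMAN_ALPHA_NM = 121.6   # rest-frame Lyman-alpha wavelength, in nanometers

def observed_wavelength(rest_nm, z):
    """Wavelength at which a rest-frame spectral feature is observed, for redshift z."""
    return rest_nm * (1.0 + z)

# At z ~ 7, Lyman-alpha lands near 970 nm, so the galaxy disappears from bluer
# filters and shows up only in the reddest bands: the "dropout" signature.
print(observed_wavelength(LYMAN_ALPHA_NM, 7.0))   # ~972.8 nm
```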

Ouchi’s team was able to conduct such a large survey because they used a custom-made, super-red filter and other unique technological advancements in red sensitivity on the wide-field camera of the 8.3-meter Subaru Telescope. They made their observations from 2006 to 2009 in the Subaru Deep Field and Great Observatories Origins Deep Survey North field. They then compared their observations with data gathered in other studies.

Astronomers have wondered whether the universe underwent reionization instantaneously or gradually over time, but more importantly, they have tried to isolate when the universe began reionization. Galaxy density and brightness measurements are key to calculating star-formation rates, which tell a lot about what happened when. The astronomers looked at star-formation rates and the rate at which hydrogen was ionized.

Using data from their study and others, they determined that star-formation rates were dramatically lower from 800 million years to about one billion years after the Big Bang than they were thereafter. Accordingly, they calculated that the rate of ionization would have been very slow during this early time, because of this low star-formation rate.

“We were really surprised that the rate of ionization seems so low, which would constitute a contradiction with the claim of NASA’s WMAP satellite. It concluded that reionization started no later than 600 million years after the Big Bang,” remarked Ouchi. “We think this riddle might be explained by more efficient ionizing photon production rates in early galaxies. The formation of massive stars may have been much more vigorous then than in today’s galaxies. Fewer, massive stars produce more ionizing photons than many smaller stars,” he explained.

The research will be published in a December issue of the Astrophysical Journal.

Source: EurekAlert

New CMB Measurements Support Standard Model

The measure of polarized light from the early Universe allowed researchers to better plot the location of matter - the left image - which later became the stars and galaxies we have today. Image Credit: Sarah Church/Walter Gear

New measurements of the cosmic microwave background (CMB) – the leftover light from the Big Bang – lend further support to the Standard Cosmological Model and the existence of dark matter and dark energy, limiting the possibility of alternative models of the Universe. Researchers from Stanford University and Cardiff University produced a detailed map of the composition and structure of matter as it would have looked shortly after the Big Bang, which shows that the Universe would not look as it does today if it were made up solely of ‘normal matter’.

By measuring the way the light of the CMB is polarized, a team led by Sarah Church of the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University and by Walter Gear, head of the School of Physics and Astronomy at Cardiff University in the United Kingdom, was able to construct a map of the way the Universe would have looked shortly after matter came into existence after the Big Bang. Their findings lend evidence to the predictions of the Standard Model, in which the Universe is composed of 95% dark matter and dark energy, and only 5% ordinary matter.

Polarization is a feature of light in which the oscillation of the light wave lies at right angles to the direction in which the light is traveling. Though most light is unpolarized, light that has interacted with matter can become polarized. The leftover light from the Big Bang – the CMB – has now cooled to just a few degrees above absolute zero, but it still retains the polarization imprinted on it in the early Universe, once the Universe had cooled enough to become transparent to light. By measuring this polarization, the researchers were able to extrapolate the location, structure, and velocity of matter in the early Universe with unprecedented precision. The gravitational collapse of large clumps of matter in the early universe created certain resonances in the polarization, which allowed the researchers to create a map of the matter composition.

Dr. Gear said, “The pattern of oscillations in the power spectra allows us to discriminate, as ‘real’ and ‘dark’ matter affect the positions and amplitudes of the peaks in different ways. The results are also consistent with many other pieces of evidence for dark matter, such as the rotation rate of galaxies, and the distribution of galaxies in clusters.”

The QUaD experiment, located at the South Pole, allowed researchers to measure the polarization of the CMB with very high precision. Image Credit: Sarah Church

The measurements made by the QUaD experiment further constrain those made by previous experiments to measure properties of the CMB, such as WMAP and ACBAR. In comparison to these previous experiments, the measurements come closer to fitting what is predicted by the Standard Cosmological Model by more than an order of magnitude, said Dr. Gear. This is a very important step on the path to verifying whether our model of the Universe is correct.

The researchers used the QUaD experiment at the South Pole to make their observations. The QUaD telescope’s detectors are bolometers, essentially thermometers that measure how certain types of radiation increase the temperature of the metals in the detector. The detector itself has to be kept near 1 Kelvin to eliminate noise radiation from the surrounding environment, which is why it is located at the frigid South Pole and placed inside a cryostat.

Paper co-author Walter Gear said in an email interview:

“The polarization is imprinted at the time the Universe becomes transparent to light, about 400,000 years after the big bang, rather than right after the big bang before matter existed. There are major efforts now to try to find what is called the ‘B-mode’ signal, which is a more complicated polarization pattern that IS imprinted right after the big bang. QUaD places the best current upper limit on this but is still more than an order of magnitude away in sensitivity from even optimistic predictions of what that signal might be. That is the next generation of experiments’ goal.”

The results, published in a paper titled Improved Measurements of the Temperature and Polarization of the Cosmic Microwave Background from QUaD in the November 1st Astrophysical Journal, fit the predictions of the Standard Model remarkably well, providing further evidence for the existence of dark matter and energy, and constraining alternative models of the Universe.

Source: SLAC, email interview with Dr. Walter Gear

If We Live in a Multiverse, How Many Are There?

Artist concept of the cyclic universe.

Theoretical physics has brought us the notion that our single universe is not necessarily the only game in town. Satellite data from WMAP, along with string theory and its 11-dimensional hyperspace idea, has produced the concept of the multiverse, where the Big Bang could have produced many different universes instead of a single uniform universe. The idea has gained popularity recently, so it was only a matter of time until someone asked how many universes could possibly exist. The number, according to two physicists, could be “humongous.”

Andrei Linde and Vitaly Vanchurin at Stanford University in California did a few back-of-the-envelope calculations, starting with the idea that the Big Bang was essentially a quantum process which generated quantum fluctuations in the state of the early universe. The universe then underwent a period of rapid growth called inflation, during which these perturbations were “frozen,” creating different initial classical conditions in different parts of the cosmos. Since each of these regions would have a different set of laws of low-energy physics, they can be thought of as different universes.

Linde and Vanchurin then estimated how many different universes could have appeared as a result of this effect. Their answer is that this number must be proportional to the effect that caused the perturbations in the first place, a process called slow-roll inflation – the solution Linde came up with previously to answer the problem of bubbles of universes colliding in the early inflation period. In this model, inflation occurred from a scalar field rolling down a potential energy hill. When the field rolls very slowly compared to the expansion of the universe, inflation occurs and collisions end up being rare.

Using all of this (and more – see their paper here), Linde and Vanchurin calculate that the number of universes in the multiverse could be at least 10^10^10^7, a number which is definitely “humongous,” as they described it.

The next question, then, is how many universes could we actually see? Linde and Vanchurin say they had to invoke the Bekenstein limit, where the properties of the observer become an important factor because of a limit to the amount of information that can be contained within any given volume of space, and by the limits of the human brain.

The total amount of information that can be absorbed by one individual during a lifetime is about 10^16 bits. So a typical human brain can have 10^10^16 configurations and so could never distinguish more than that number of different universes.
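
These double-exponential numbers are far too large to handle directly, so the comparison has to be made at the level of logarithms of logarithms; here is a minimal sketch of that bookkeeping, using the two figures quoted above:

```python
import math

# The two numbers are far too large to represent directly, so compare log10(log10(N)).
loglog_universes = 1e7                 # N_universes ~ 10**(10**(10**7))
loglog_brain = math.log10(1e16)        # N_brain     ~ 10**(10**16)  ->  16

# The observer-side number is vastly smaller, so it (not the multiverse itself)
# sets the limit on how many universes could ever be told apart.
print(loglog_brain < loglog_universes)   # True
```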

The number of multiverses the human brain could distinguish. Credit: Linde and Vanchurin

“So, the total number of possibilities accessible to any given observer is limited not only by the entropy of perturbations of metric produced by inflation and by the size of the cosmological horizon, but also by the number of degrees of freedom of an observer,” the physicists write.

“We have found that the strongest limit on the number of different locally distinguishable geometries is determined mostly by our abilities to distinguish between different universes and to remember our results,” wrote Linde and Vanchurin. “Potentially it may become very important that when we analyze the probability of existence of a universe of a given type, we should be talking about a consistent pair: the universe and an observer who makes the rest of the universe “alive” and the wave function of the rest of the universe time-dependent.”

So their conclusion is that the limit does not depend on the properties of the multiverse itself, but on the properties of the observer.

They hope to study this concept further to see if this probability is proportional to the observable entropy of inflation.

Sources: ArXiv, Technology Review Blog

What! No Parallel Universe? Cosmic Cold Spot Just Data Artifact

Region in space detected by WMAP cooler than its surroundings. But not really. Rudnick/NRAO/AUI/NSF, NASA.

Rats! Another perplexing space mystery solved by science. New analysis of the famous “cold spot” in the cosmic microwave background reveals, and in fact confirms, that the spot is just an artifact of the statistical methods used to find it. That means there is no supervoid lurking in the CMB, and no parallel universe lying just beyond the edge of our own. What fun is that?

Back in 2004, astronomers studying data from the Wilkinson Microwave Anisotropy Probe (WMAP) found a region of the cosmic microwave background in the southern hemisphere in the direction of the constellation of Eridanus that was significantly colder than the rest by about 70 microkelvin. The probability of finding something like that was extremely low. If the Universe really is homogeneous and isotropic, then all points in space ought to experience the same physical development, and appear the same. This just wasn’t supposed to be there.

Some astronomers suggested the spot could be a supervoid, a remnant of an early phase transition in the universe. Others theorized it was a window into a parallel universe.

Well, it turns out, it wasn’t there.

Ray Zhang and Dragan Huterer at the University of Michigan in Ann Arbor say that the cold spot is simply an artifact of the statistical method–called Spherical Mexican Hat Wavelets–used to analyze the WMAP data. Use a different method of analysis and the cold spot disappears (or at least is no colder than expected).

“We trace this apparent discrepancy to the fact that WMAP cold spot’s temperature profile just happens to favor the particular profile given by the wavelet,” the duo says in their paper. “We find no compelling evidence for the anomalously cold spot in WMAP at scales between 2 and 8 degrees.”

This confirms another paper from 2008, also by Huterer, along with colleague Kendrick Smith from the University of Cambridge, in which they showed that the huge void could be considered a statistical fluke because it had stars both in front of and behind it.

And in fact, one of the earlier papers suggesting the cold spot by Lawrence Rudnick from the University of Minnesota does indeed say that statistical uncertainties have not been accounted for.

Oh well. Now, on to the next cosmological mysteries like dark matter and dark energy!

Zhang and Huterer’s paper.

Huterer and Smith’s paper (2008)

Rudnick’s paper 2007

Original paper “finding” the cold spot

Sources: Technology Review Blog, Science

Hubble’s Law

velocity vs distance, from Hubble's 1929 paper

“The distance to objects beyond the Local Group is closely related to how fast they seem to be receding from us,” that’s Hubble’s law in a nutshell.

Edwin Hubble, the astronomer the Hubble Space Telescope is named after, first described the relationship which later bore his name in a paper in 1929; here is one of the ways he described it, in that paper: “The data in the table [of “nebulae”, i.e. galaxies] indicate a linear correlation between distances and velocities”; in numerical form, v = Hd (v is the speed at which a distant object is receding from us, d is its distance, and H is the Hubble constant).

Today the Hubble law is usually expressed as a relationship between redshift and distance, partly because redshift is what astronomers can measure directly.

Hubble’s Law, which is an empirical relationship, was the first concrete evidence that Einstein’s theory of General Relativity applied to the universe as a whole, as proposed only two years earlier by Georges Lemaître (interestingly, Lemaître’s paper also includes an estimate of the Hubble constant!); the universal applicability of General Relativity is the heart of the Big Bang theory, and the way we see the predicted expansion of space is as the speed at which things seem to be receding being proportional to their distance, i.e. Hubble’s Law.

Although other astronomers, such as Vesto Slipher, did much of the work needed to measure the galaxy redshifts, Hubble was the one who developed techniques for estimating the distance to the galaxies, and who pulled it all together to show how distance and speed were related.

Hubble’s Law is not exact; the measured redshift of some galaxies is different from what Hubble’s Law says it should be, given their distances. This is particularly noticeable for galaxy clusters, and is explained as the motion of galaxies within their local groups or clusters, due to their mutual gravitation.

Because the exact value of the Hubble constant, H, is so important in extragalactic astronomy and cosmology – it leads to an estimate of the age of the universe, helps test theories of Dark Matter and Dark Energy, and much more – a great deal of effort has gone into working it out. Today it is estimated to be 71 kilometers per second per megaparsec, plus or minus 7; this is about 21 km/sec per million light-years. What does this mean? An object a million light-years away would be receding from us at 21 km/sec; an object 10 million light-years away, 210 km/sec, etc.
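
Here is a quick Python check of that conversion, plus Hubble’s law itself, using the 71 km/s/Mpc figure and the ~3.26 million light-years per megaparsec quoted above:

```python
H0 = 71.0               # Hubble constant, km/s per megaparsec
LY_PER_MPC = 3.26e6     # light-years in one megaparsec

# Convert H0 to km/s per million light-years (~21, as quoted above).
print(round(H0 / (LY_PER_MPC / 1e6), 1))   # ~21.8

# Hubble's law, v = H0 * d, for a galaxy 10 million light-years away.
d_mpc = 10e6 / LY_PER_MPC                  # distance in megaparsecs
print(round(H0 * d_mpc))                   # ~218 km/s (consistent with the ~210 km/s above)
```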

Perhaps the most dramatic revision to the Hubble Law came in 1998, when two teams independently announced that they’d discovered that the rate of expansion of the universe is accelerating; the shorthand name for this observation is Dark Energy.

Harvard University’s Professor of Cosmology John Huchra maintains a webpage on the history of the Hubble constant, and this page from Ned Wright’s Cosmology Tutorial explains how the Hubble law and cosmology are related.

There are several Universe Today stories about the Hubble relationship and the Hubble constant; for example Astronomers Closing in on Dark Energy with Refined Hubble Constant, and Cosmologists Improve on Standard Candles Measurement.

And we have done some Astronomy Casts on it too, How Old is the Universe? and, How Big is the Universe?

Sources:
UT-Knoxville
NASA
Cornell Astronomy

What is Entropy?

After some time, this cold glass will reach thermal equilibrium

Perhaps there’s no better way to understand entropy than to grasp the second law of thermodynamics, and vice versa. This law states that the entropy of an isolated system that is not in equilibrium will increase as time progresses until equilibrium is finally achieved.

Let’s try to elaborate a little on this equilibrium thing. Note that in both of the following examples, we’ll assume the systems are isolated.

First example. Imagine putting a hot body and a cold body side by side. What happens after some time? That’s right. They both end up at the same temperature, one that is lower than the original temperature of the hotter one and higher than the original temperature of the colder one.

Second example. Ever heard of a low pressure area? It’s what weather reporters call a particular region that’s characterized by strong winds and perhaps some rain. This happens because fluids flow from a region of high pressure to a region of low pressure. Thus, when the fluid, air in this case, comes rushing in, it does so in the form of strong winds. This goes on until the pressures in the adjacent regions even out.

In both cases, the physical quantities which started to be uneven between the two bodies/regions even out in the end, i.e., when equilibrium is achieved. The measurement of the extent of this evening-out process is called entropy.
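
For the first example, the entropy bookkeeping can be done explicitly. Here is a minimal sketch, assuming two identical bodies with the same constant heat capacity (so the final temperature is simply the average of the two starting temperatures); the heat capacity and temperatures are illustrative, and the point is that the total entropy change comes out positive:

```python
import math

def entropy_change(heat_capacity, t_start, t_end):
    """Entropy change (J/K) of a body with constant heat capacity; temperatures in kelvin."""
    return heat_capacity * math.log(t_end / t_start)

heat_capacity = 4200.0            # illustrative: roughly 1 kg of water, in J/K
t_hot, t_cold = 350.0, 290.0      # illustrative starting temperatures, in K
t_final = (t_hot + t_cold) / 2    # identical bodies -> they meet at the average

total = (entropy_change(heat_capacity, t_hot, t_final)
         + entropy_change(heat_capacity, t_cold, t_final))
print(round(total, 1), "J/K")     # ~37.1 J/K: positive, as the second law demands
```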

During the process of attaining equilibrium, it is possible to tap into the system to perform work, as in a heat engine. Notice, however, that work can only be done for as long as there is a difference in temperature. Without it, like when maximum entropy has already been achieved, there is no way that work can be performed.
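
The Carnot limit makes this concrete: the maximum fraction of heat that can be turned into work depends only on the temperature difference between the hot and cold reservoirs, and falls to zero once the temperatures are equal. A minimal sketch:

```python
def carnot_efficiency(t_hot, t_cold):
    """Maximum fraction of heat convertible to work between two reservoirs (kelvin)."""
    return 1.0 - t_cold / t_hot

print(carnot_efficiency(600.0, 300.0))   # 0.5 -> at best, half the heat becomes work
print(carnot_efficiency(300.0, 300.0))   # 0.0 -> equal temperatures (equilibrium): no work possible
```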

Since the concept of entropy applies to all isolated systems, it has been studied not only in physics but also in information theory, mathematics, as well as other branches of science and applied science.

Because the accepted view is that the universe is finite, it can very well be considered a closed system. As such, it should also be governed by the second law of thermodynamics. Thus, like in all isolated systems, the entropy of the universe is expected to be increasing.

So what? Well, also just like all isolated systems, the universe is therefore expected to end up in a useless heap in equilibrium, a.k.a. a heat death, wherein energy can no longer be extracted to do work. To give you some relief, not everyone involved in the study of cosmology is totally in agreement with entropy’s so-called role in the grand scheme of things, though.

You can read more about entropy here in Universe Today. Want to know why time might flow in one direction? Have you ever thought about the time before the Big Bang? The entire entropy concept plays an important role in understanding them.

There’s more about entropy at NASA and Physics World too.

There are also two episodes at Astronomy Cast that you might want to check out as well.

Source:
Hyperphysics