Magnetic Levitation

Overcoming the pull of gravity and fighting acceleration are major challenges for scientists looking to achieve flight and/or high-speed transportation. One way to overcome them is the modern and growing technology known as magnetic levitation. Relying on rare earth magnets, superconductors, electromagnets and diamagnets, magnetic levitation is now used for maglev trains, magnetic bearings and product display purposes. Today, maglev transportation is one of the fastest growing means of transportation in industrialized countries. It has the potential to be faster, quieter and smoother than wheeled mass transit systems, and the power needed for levitation is usually not a large percentage of overall consumption; most of the power goes into overcoming air drag. The technology has even made its way into fiction: in William Gibson’s novel Spook Country, it appears in the form of a “maglev bed”, a bed which uses magnets to stay suspended in midair.

Magnetic levitation (aka. maglev or magnetic suspension) is the method by which an object is suspended with no support other than magnetic fields. According to Earnshaw’s theorem (a result often applied to magnetic fields), it is impossible to stably levitate an object against gravity using static ferromagnetism alone. Maglev technology gets around this limitation through a number of means. These include, but are not limited to, mechanical constraint (or pseudo-levitation), diamagnetic levitation, superconductors, rotational stabilization, servomechanisms, induced currents and strong focusing.

Pseudo-levitation relies on two magnets that are mechanically arranged to repel each other strongly, or that attract each other but are constrained from touching by a tensile member, such as a string or cable. Another example is the Zippe-type centrifuge, where a cylinder is suspended under an attractive magnet and stabilized by a needle bearing from below. Diamagnetic levitation occurs when a diamagnetic material is placed in close proximity to a material that produces a magnetic field, which repels the diamagnetic material. Superconductor levitation is achieved in much the same way, a superconductor being a perfect diamagnet. Due to the Meissner effect, superconductors also expel magnetic fields from their interior entirely, allowing for further stability.

The first commercial maglev people mover was simply called “MAGLEV” and officially opened in 1984 near Birmingham, England. It operated on an elevated 600-metre (2,000 ft) section of monorail track between Birmingham International Airport and Birmingham International railway station, running at speeds up to 42 km/h (26 mph). Perhaps the most well-known implementation of high-speed maglev technology currently in operation is the Shanghai Maglev Train, a working model of the German-built Transrapid train that transports people 30 km (19 mi) to the airport in just 7 minutes 20 seconds, achieving a top speed of 431 km/h and averaging 250 km/h.
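
As a quick sanity check on those figures, a few lines of Python (using only the distance and trip time quoted above) reproduce the stated average speed:

```python
# Quick arithmetic check of the Shanghai Maglev figures quoted above.
distance_km = 30.0                       # length of the airport line
trip_time_h = (7 * 60 + 20) / 3600.0     # 7 minutes 20 seconds, converted to hours

average_speed_kmh = distance_km / trip_time_h
print(f"Average speed: {average_speed_kmh:.0f} km/h")   # ~245 km/h, consistent with the quoted ~250 km/h
```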

We have written many articles about magnetic levitation for Universe Today. Here’s an article about the uses of electromagnets, and here’s an article about how magnets work.

If you’d like more info on magnetic levitation, check out these articles from How Stuff Works and Hyperphysics.

We’ve also recorded an entire episode of Astronomy Cast all about Magnetism. Listen here, Episode 42: Magnetism Everywhere.

Sources:
http://en.wikipedia.org/wiki/Magnetic_levitation
http://hyperphysics.phy-astr.gsu.edu/hbase/solids/maglev.html
http://www.rare-earth-magnets.com/t-magnetic-levitation.aspx
http://en.wikipedia.org/wiki/Earnshaw%27s_theorem
http://en.wikipedia.org/wiki/Maglev_train
http://en.wikipedia.org/wiki/Meissner_effect

Helmholtz Coil

A magnetic field is a pretty awesome thing. Magnetism is part of electromagnetism, one of the fundamental forces of the universe, without which moving electrical charges and even elementary particles would not behave as they do. It is therefore intrinsic to scientific research that we be able to generate magnetic fields ourselves for the purpose of studying electromagnetism and its fundamental characteristics. One way to do this is with a device known as the Helmholtz coil, an instrument named in honor of German physicist Hermann von Helmholtz (1821-1894), a scientist and philosopher who made fundamental contributions to the fields of physiology, optics, mathematics, and meteorology in addition to electrodynamics.

A Helmholtz coil is a device for producing a region of nearly uniform magnetic field. It consists of two identical circular magnetic coils placed symmetrically, one on each side of the experimental area along a common axis, and separated by a distance (h) equal to the radius (R) of the coils. Each coil carries an equal electrical current flowing in the same direction. A number of variations exist, including the use of rectangular coils and numbers of coils other than two. However, the two-coil Helmholtz pair is the standard model, with coils that are circular in shape and flat on the sides. In such a device, electric current is passed through the coils for the purpose of creating a very uniform magnetic field.

Helmholtz coils are used for a variety of purposes. In one instance, they were used in an argon tube experiment to measure the charge-to-mass ratio (e/m) of electrons. In addition, they are often used to measure the strength and fields of permanent magnets. In order to do this, the coil pair is connected to a fluxmeter, a device containing measuring coils and electronics that evaluate the change of voltage in the measuring coils to calculate the overall magnetic flux. In some applications, a Helmholtz coil is used to cancel out Earth’s magnetic field, producing a region with a magnetic field intensity much closer to zero. This can be used to see how electrical charges and magnetic materials behave when not influenced by Earth’s own magnetic field.

In a Helmholtz coil, the magnetic flux density of the field generated (represented by B) can be expressed mathematically by the equation:

B = (4/5)^(3/2) μ0 n I / R

Where R is the radius of the coils, n is the number of turns in each coil, I is the current flowing through the coils, and μ0 is the permeability of free space (1.26 x 10^-6 T • m/A).
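
For readers who like to plug in numbers, here is a minimal Python sketch of that relation. The coil radius, turn count, and current below are made-up example values, not figures from any particular instrument:

```python
MU_0 = 1.26e-6  # permeability of free space, T*m/A (value quoted above)

def helmholtz_field(n_turns, current_a, radius_m):
    """Flux density at the midpoint of a Helmholtz pair: B = (4/5)^(3/2) * mu0 * n * I / R."""
    return (4.0 / 5.0) ** 1.5 * MU_0 * n_turns * current_a / radius_m

# Hypothetical pair: 100 turns per coil, 1 A of current, 0.15 m coil radius
print(f"B = {helmholtz_field(100, 1.0, 0.15):.2e} T")   # roughly 6e-4 T
```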

We have written many articles about the Helmholtz Coil for Universe Today. Here’s an article about the right hand rule for magnetic fields, and here’s an article about magnetic fields.

If you’d like more info on the Helmholtz Coil, check out an article from Hyperphysics. Also, here’s another article about the Helmholtz Coil.

We’ve also recorded an entire episode of Astronomy Cast all about Magnetism. Listen here, Episode 42: Magnetism Everywhere.

Sources:
http://en.wikipedia.org/wiki/Helmholtz_coil
http://www.oersted.com/helmholtz_coils_1.shtml
http://hyperphysics.phy-astr.gsu.edu/hbase/magnetic/helmholtz.html
http://physicsx.pr.erau.edu/HelmholtzCoils/index.html
http://www.youtube.com/watch?v=nu5kwkmj870
http://www.circuitcellar.com/library/print/0606/Wotiz191/5.htm

Dipole Moment

It has long been known that many molecules carry two equal and opposite charges which are separated by a certain distance. This separation of positive and negative charges is what is referred to as an electric dipole, meaning that the molecule essentially has two poles. In such polar molecules, the center of negative charge does not coincide with the center of positive charge. The extent of polarity in such covalent molecules can be described by the term dipole moment, which is essentially the measure of polarity in a polar covalent bond.

The simplest example of a dipole is a water molecule. A molecule of water is polar because of the unequal sharing of its electrons in a “bent” structure. The water molecule forms an angle, with the hydrogen atoms at the tips and the oxygen atom at the vertex. Since oxygen has a higher electronegativity than hydrogen, the side of the molecule with the oxygen atom carries a partial negative charge while the hydrogen ends carry partial positive charges. Because of this, in the convention used by chemists the dipole moment arrow is drawn pointing towards the oxygen.

In the language of physics, the electric dipole moment is a measure of the separation of positive and negative electrical charges in a system of charges, that is, a measure of the charge system’s overall polarity – i.e. the separation of the molecule’s electric charge, which leads to a dipole. Mathematically, and in the simple case of two point charges, one with charge +q and one with charge -q, the electric dipole moment p can be expressed as p = qd, where d is the displacement vector pointing from the negative charge to the positive charge. Thus, in this convention, the electric dipole moment vector p points from the negative charge to the positive charge.

Another way to look at it is to represent the dipole moment by the Greek letter μ, with μ = ed, where e is the electrical charge and d is the distance of separation. It is expressed in units of debye, written D (where 1 debye = 1 x 10^-18 e.s.u. cm). A dipole moment is a vector quantity and is therefore represented by a small arrow with its tail at the positive center and its head pointing towards the negative center. The dipole moment of a water molecule is 1.85 D, whereas that of a molecule of hydrogen chloride (HCl) is 1.03 D.
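
To make the units concrete, here is a minimal Python sketch that evaluates p = qd for a hypothetical pair of point charges and converts the result to debye. The charge and separation are illustrative values, not data from the article:

```python
ELEMENTARY_CHARGE = 1.602e-19   # coulombs
DEBYE_IN_SI = 3.336e-30         # 1 debye = 3.336e-30 C*m

def dipole_moment_debye(charge_c, separation_m):
    """Dipole moment p = q*d, converted from C*m to debye."""
    return charge_c * separation_m / DEBYE_IN_SI

# Hypothetical example: one full elementary charge separated by 0.1 nm (about one bond length)
print(f"p = {dipole_moment_debye(ELEMENTARY_CHARGE, 1.0e-10):.2f} D")   # about 4.8 D
```

Real molecules come in well below this idealized value (water at 1.85 D, HCl at 1.03 D) because their charge separation involves only partial charges.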

We have written many articles about dipole moment for Universe Today. Here’s an article about what water is made of, and here’s an article about molecules.

If you’d like more info on dipole moment, check out these articles from Hyperphysics and Science Daily.

We’ve also recorded an entire episode of Astronomy Cast all about Molecules in Space. Listen here, Episode 116: Molecules in Space.

Sources:
http://en.wikipedia.org/wiki/Electric_dipole_moment
http://en.wikipedia.org/wiki/Dipole
http://www.tutorvista.com/content/chemistry/chemistry-iii/chemical-bonding/degree-polarity.php
http://hyperphysics.phy-astr.gsu.edu/hbase/electric/dipole.html#c1
http://en.wikipedia.org/wiki/Water_molecule

What is Fermi Energy?

When it comes to physics, the concept of energy is a tricky thing, subject to many different meanings and dependent on many possible contexts. For example, when speaking of atoms and particles, energy comes in several forms, such as electrical energy, heat energy, and light energy.

But when one gets into the field of quantum mechanics, a far more complex and treacherous realm, things get even trickier. In this realm, scientists rely on concepts such as Fermi Energy, a concept that usually refers to the energy of the highest occupied quantum state in a system of fermions at absolute zero temperature.

Fermions:

Fermions take their name from famed 20th century Italian physicist Enrico Fermi. These are subatomic particles that are usually associated with matter, whereas subatomic particles like bosons are force carriers (associated with gravity, nuclear forces, electromagnetism, etc.). These particles (which can take the form of electrons, protons and neutrons) obey the Pauli Exclusion Principle, which states that no two fermions can occupy the same (one-particle) quantum state.

Niels Bohr’s model of a nitrogen atom. Credit: britannica.com

In a system containing many fermions (like electrons in a metal), each fermion will have a different set of quantum numbers. Fermi energy, as a concept, is important in determining the electrical and thermal properties of solids. The value of the Fermi level at absolute zero (-273.15 °C) is called the Fermi energy and is a constant for each solid. The Fermi level changes as the solid is warmed and as electrons are added to or withdrawn from the solid.

Calculating Fermi Energy:

To determine the lowest total energy a system of fermions can have (and with it the Fermi energy), we first group the states into sets with equal energy and order these sets by increasing energy. Starting with an empty system, we then add particles one at a time, consecutively filling up the unoccupied quantum states with the lowest energy.

When all the particles have been put in, the Fermi energy is the energy of the highest occupied state. What this means is that even if we have extracted all possible energy from a metal by cooling it to near absolute zero temperature (0 kelvin), the electrons in the metal are still moving around. The fastest ones are moving at a velocity corresponding to a kinetic energy equal to the Fermi energy.
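
As a worked example, the standard free-electron-gas result E_F = (ħ^2/2m)(3π^2 n)^(2/3) (a textbook formula, not derived in this article) lets us estimate the Fermi energy of a metal from its electron density. A short Python sketch, using a copper-like electron density:

```python
import math

HBAR = 1.0546e-34            # reduced Planck constant, J*s
ELECTRON_MASS = 9.109e-31    # kg
EV = 1.602e-19               # joules per electron-volt

def fermi_energy_ev(electron_density_per_m3):
    """Fermi energy of a free electron gas: E_F = (hbar^2 / 2m) * (3 * pi^2 * n)^(2/3)."""
    e_f_joules = (HBAR ** 2 / (2 * ELECTRON_MASS)) * (3 * math.pi ** 2 * electron_density_per_m3) ** (2.0 / 3.0)
    return e_f_joules / EV

# Conduction electron density of copper, roughly 8.5e28 electrons per cubic metre
print(f"E_F ~ {fermi_energy_ev(8.5e28):.1f} eV")   # about 7 eV
```

A few electron-volts corresponds to electron speeds of order a thousand kilometres per second, which is why the “fastest” electrons in a cold metal are anything but slow.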

Bosons, fermions and other particles created by a high-energy collision. Credit: CERN

Applications:

The Fermi energy is one of the important concepts of condensed matter physics. It is used, for example, to describe metals, insulators, and semiconductors. It is a very important quantity in the physics of superconductors and of quantum liquids like low-temperature helium (both normal and superfluid ³He), and it is quite important to nuclear physics and to understanding the stability of white dwarf stars against gravitational collapse.

Confusingly, the term “Fermi energy” is often used to describe a different but closely-related concept, the Fermi level (also called chemical potential). The Fermi energy and chemical potential are the same at absolute zero, but differ at other temperatures.

We have written many interesting articles about quantum physics here at Universe Today. Here’s What is the Bohr Atomic Model?, Quantum Entanglement Explained, What is the Electron Cloud Model, What is the Double Slit Experiment?, What is Loop Quantum Gravity? and Unifying the Quantum Principle – Flowing Along in Four Dimensions.

If you’d like more info on Fermi Energy, check out these articles from Hyperphysics and Science World.

We’ve also recorded an entire episode of Astronomy Cast all about Quantum Mechanics. Listen here, Episode 138: Quantum Mechanics.


Magnetic Energy

Magnetic Energy Flow. Image Credit: dhost.info

During the 19th century, one of the greatest discoveries in the history of physics was made by a Scottish physicist named James Clerk Maxwell. It was at this time, while studying the perplexing nature of magnetism and electricity, that he proposed a radical new theory. Electricity and magnetism, long thought to be separate forces, were in actuality closely associated with each other. That is, every electrical current has a magnetic field associated with it, and every changing magnetic field creates its own electrical current. Maxwell went on to express this in a set of partial differential equations, known as Maxwell’s Equations, which form the basis for our understanding of both electrical and magnetic energy.

In fact, thanks to Maxwell’s work, magnetic and electric energy are more appropriately considered aspects of a single force. Together, they are what is known as electromagnetic energy – i.e. a form of energy that has both electrical and magnetic components. Magnetic energy is created when one runs an electric current through a wire or any other conductive material, creating a magnetic field. The magnetic energy generated can be used to attract other metal parts (as is the case in many modern machines that have moving parts) or can be used to generate electricity and store power (hydroelectric dams and batteries).

Since the 19th century, scientists have gone on to understand that many types of radiation are in fact forms of electromagnetic energy. These include X-rays, gamma rays, visible light (i.e. photons), ultraviolet light, infrared radiation, radio waves, and microwaves. These forms of electromagnetic energy differ from each other only in terms of wavelength and frequency. Those with shorter wavelengths and higher frequencies tend to be the more harmful varieties, such as X-rays and gamma rays, while those with longer wavelengths and lower frequencies, such as radio waves, tend to be more benign.
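
The link between the two is the familiar relation c = λf, where c is the speed of light. A short Python sketch, with round illustrative wavelengths, shows how frequency climbs as wavelength shrinks:

```python
SPEED_OF_LIGHT = 3.0e8   # metres per second

def frequency_hz(wavelength_m):
    """Frequency of an electromagnetic wave from its wavelength: f = c / wavelength."""
    return SPEED_OF_LIGHT / wavelength_m

# Illustrative wavelengths: a radio wave (1 m), green visible light (500 nm), an X-ray (1 nm)
for name, wavelength_m in [("radio", 1.0), ("visible", 500e-9), ("X-ray", 1e-9)]:
    print(f"{name:8s} {frequency_hz(wavelength_m):.2e} Hz")
```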

In mathematical terms, the voltage associated with a changing magnetic field in a circuit can be expressed as V = L dI/dt + RI, where V is the voltage, L is the inductance, R is the resistance, I is the current, and dI/dt is the rate of change of the current with time. The energy stored in the magnetic field of an inductor carrying a current I is then E = ½ L I².
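
A minimal Python sketch of those two relations follows; the inductance, resistance, current, and ramp rate are illustrative assumptions rather than values from any real device:

```python
def circuit_voltage(inductance_h, resistance_ohm, current_a, di_dt_a_per_s):
    """Voltage across a series resistor-inductor circuit: V = L*dI/dt + R*I."""
    return inductance_h * di_dt_a_per_s + resistance_ohm * current_a

def stored_magnetic_energy_j(inductance_h, current_a):
    """Energy stored in the magnetic field of an inductor: E = 0.5 * L * I^2."""
    return 0.5 * inductance_h * current_a ** 2

# Hypothetical circuit: 10 mH inductor, 5 ohm resistance, 2 A flowing and ramping at 100 A/s
print(f"V = {circuit_voltage(0.010, 5.0, 2.0, 100.0):.1f} V")         # 11.0 V
print(f"E = {stored_magnetic_energy_j(0.010, 2.0) * 1000:.1f} mJ")    # 20.0 mJ
```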

Here are some articles about Magnetic Energy written for Universe Today.
Behind the Power and Beauty of Northern Lights
Magnetic Fields in Inter-cluster Space: Measured at Last

If you’d like more info on Magnetic Energy, check out these articles:
Wikipedia Entry on Magnetic Energy
More info about magnetic energy

We’ve also recorded an entire episode of Astronomy Cast all about Magnetism. Listen here, Episode 42: Magnetism Everywhere.

Sources:
http://en.wikipedia.org/wiki/Magnetic_energy
http://en.wikipedia.org/wiki/James_Clerk_Maxwell
http://en.wikipedia.org/wiki/Maxwell%27s_equations
http://fi.edu/guide/hughes/10types/typesmagnetic.html
http://farside.ph.utexas.edu/teaching/em/lectures/node84.html
http://science.jrank.org/pages/2489/Energy-Magnetic-energy.html

How Satellites Work

A GPS satellite in orbit. Credit: NASA

In 1957, the Soviet Union launched the world’s first artificial satellite, known as Sputnik. This changed the course of world history and led the United States, its chief rival in the Space Race, to mount a massive effort of its own to put manned craft in orbit and land a man on the Moon. Since then, the presence of satellites in orbit around our planet has become commonplace, which has muted the sense of awe and wonder involved. However, for many, especially students in engineering and aerospace programs, the question of How Satellites Work is still one of vital importance.

Satellites perform a wide array of functions. Some are observational, such as the Hubble Space Telescope – providing scientists with images of distant stars, nebulae, galaxies, and other deep space phenomena. Others are dedicated to scientific research, particularly the behavior of organisms in low-gravity environments. Then there are communications satellites, which relay telecommunications signals back and forth across the globe. GPS satellites offer navigational and tracking aids to people looking to transport goods or navigate their way across land and oceans. And military satellites are used to observe and monitor enemy installations and formations on the ground, while also helping the air force and navy guide their ordnance to enemy targets.

Satellites are deployed by attaching them to rockets which then ferry them into orbit around the planet. Once deployed, they are typically powered by rechargeable batteries which are recharged through solar panels. Other satellites have internal fuel cells that convert chemical energy to electrical energy, while a few rely on nuclear power. Small thrusters provide attitude, altitude, and propulsion control to modify and stabilize the satellite’s position in space.

When it comes to classifying the orbit of a satellite, scientists use a number of categories to describe its particular nature. For example, centric classifications refer to the object which the satellite orbits (i.e. planet Earth, the Moon, etc.). Altitude classifications determine how far the satellite is from Earth, whether it is in low, medium or high orbit. Inclination refers to whether the satellite orbits near the equatorial plane, over the poles, or in a Sun-synchronous orbit that passes over the equator at the same local time on every pass so as to stay in sunlight. Eccentricity classifications describe whether the orbit is circular or elliptical, while synchronous classifications describe whether or not the satellite’s orbital period matches the rotational period of the object it orbits (i.e. a standard day, in Earth’s case).

Depending on the nature of their purpose, satellites also carry a wide range of components inside their housing. This can include radio equipment, storage containers, camera equipment, and even weaponry. In addition, satellites typically have an on-board computer to send and receive information from their controllers on the ground, as well as compute their positions and calculate course corrections.

We have written many articles about satellites for Universe Today. Here’s an article about the satellites in space, and here’s an article about exploring satellites with Google Earth.

If you’d like more info on satellites, check out these articles:
National Geographics article about Orbital Objects
Satellites and Space Weather

We’ve also recorded an episode of Astronomy Cast about the space shuttle. Listen here, Episode 127: The US Space Shuttle.

Sources:
http://en.wikipedia.org/wiki/Satellite
http://en.wikipedia.org/wiki/List_of_orbits
http://www.gma.org/surfing/sats.html
http://science.howstuffworks.com/satellite5.htm
http://www.howstuffworks.com/satellite.htm

How Satellites Stay in Orbit

A GPS satellite in orbit. Credit: NASA

An artificial satellite is a marvel of technology and engineering. The only thing comparable to the feat in technological terms is the scientific know-how that goes into placing, and keeping, one in orbit around the Earth. Just consider what scientists need to understand in order to make this happen: first, there’s gravity, then a comprehensive knowledge of physics, and of course the nature of orbits themselves. So really, the question of How Satellites Stay in Orbit is a multidisciplinary one that involves a great deal of technical and academic knowledge.

First, to understand how a satellite orbits the Earth, it is important to understand what orbit entails. Johannes Kepler was the first to accurately describe the mathematical shape of the orbits of planets. Whereas the orbits of planets about the Sun and of the Moon about the Earth were thought to be perfectly circular, Kepler found that they are in fact elliptical. In order for an object to stay in orbit around the Earth, it must have enough speed to retrace its path. This is as true of a natural satellite as it is of an artificial one. From Kepler’s discovery, scientists were also able to infer that the closer a satellite is to the object it orbits, the stronger the force of attraction, and hence the faster it must travel in order to maintain orbit.

Next comes an understanding of gravity itself. All objects possess a gravitational field, but it is only in the case of particularly massive objects (i.e. planets) that this force is readily felt. In Earth’s case, the gravitational acceleration at the surface is 9.8 m/s². For an object in a circular orbit about the Earth, the formula v = (GM/R)^(1/2) applies, where v is the velocity of the satellite, G is the gravitational constant, M is the mass of the planet, and R is the distance from the center of the Earth. Put another way, the orbital velocity is the square root of the gravitational acceleration at that distance multiplied by the distance from the center of the Earth. So if we wanted to put a satellite in a circular orbit at 500 km above the surface (what scientists call Low Earth Orbit, or LEO), it would need a speed of ((6.67 x 10^-11 x 6.0 x 10^24)/6,900,000)^(1/2), or about 7,615.77 m/s. The greater the altitude, the less velocity is needed to maintain the orbit.
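
The same calculation can be done in a few lines of Python, using the rounded constants from the paragraph above (Earth’s radius is taken as 6,400 km so that radius plus altitude matches the 6,900 km figure in the text):

```python
import math

G = 6.67e-11           # gravitational constant, m^3 kg^-1 s^-2
EARTH_MASS = 6.0e24    # kg (rounded, as in the text)
EARTH_RADIUS = 6.4e6   # m (rounded)

def circular_orbit_speed(altitude_m):
    """Speed needed for a circular orbit: v = sqrt(G*M / R), with R measured from Earth's centre."""
    r = EARTH_RADIUS + altitude_m
    return math.sqrt(G * EARTH_MASS / r)

# The 500 km low Earth orbit used as an example in the text
print(f"v = {circular_orbit_speed(500e3):.0f} m/s")   # roughly 7,616 m/s
```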

So really, a satellite’s ability to maintain its orbit comes down to a balance between two factors: its velocity (the speed at which it would otherwise travel in a straight line) and the gravitational pull between the satellite and the planet it orbits. The higher the orbit, the less velocity is required. The lower the orbit, the faster the satellite must move to ensure that it does not fall back to Earth.

We have written many articles about satellites for Universe Today. Here’s an article about artificial satellites, and here’s an article about geosynchronous orbit.

If you’d like more info on satellites, check out these articles:
Orbital Objects
List of satellites in geostationary orbit

We’ve also recorded an episode of Astronomy Cast about the space shuttle. Listen here, Episode 127: The US Space Shuttle.

Sources:
http://en.wikipedia.org/wiki/Satellite
http://science.howstuffworks.com/satellite6.htm
http://www.bu.edu/satellite/classroom/lesson05-2.html
http://library.thinkquest.org/C007258/Keep_Orbit.htm#

What Is The Double Slit Experiment?


Light… is it a particle or a wave? What fundamental mechanics govern its behavior? And most importantly, does the mere act of observation alter this behavior? This is the conundrum physicists have been puzzling over for more than two centuries, ever since the Double Slit experiment was first conducted and wave-particle mechanics was later theorized.

Also known as Young’s experiment, this involved particle beams or coherent waves passing through two closely-spaced slits, the purpose of which was to measure the resulting impacts on a screen behind them. In quantum mechanics the double-slit experiment demonstrated the inseparability of the wave and particle natures of light and other quantum particles.

The Double Slit Experiment was first conducted by Thomas Young back in 1803, although Sir Isaac Newton is said to have performed a similar experiment in his own time. In his version, Newton shone light on a small hair, whereas Young used a slip of card with a slit cut into it. More recently, scientists have used a point light source to illuminate a thin plate with two parallel slits, with the light passing through the slits striking a screen behind them.

According to classical particle theory, the results of the experiment should have corresponded to the slits, with the impacts on the screen appearing as two vertical lines. However, this was not the case. The results showed, in many circumstances, a pattern of interference – something which could only occur if wave behavior were involved.

Classical particles do not interfere with each other; they merely collide. If classical particles are fired in a straight line through a slit they will all strike the screen in a pattern the same size and shape as the slit. Where there are two open slits, the resulting pattern will simply be the sum of the two single-slit patterns (two vertical lines). But again and again, the experiment demonstrated that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen.
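
To see where those bright and dark bands come from, here is a minimal Python sketch of the classical far-field two-slit intensity, which is proportional to cos^2(π d sinθ / λ) for slit separation d and wavelength λ. This formula is standard wave optics rather than something stated in the article, and the slit spacing and wavelength below are illustrative values:

```python
import math

def two_slit_intensity(angle_rad, slit_separation_m, wavelength_m):
    """Relative two-slit interference intensity (single-slit envelope ignored)."""
    phase = math.pi * slit_separation_m * math.sin(angle_rad) / wavelength_m
    return math.cos(phase) ** 2

# Illustrative setup: 0.1 mm slit separation, 600 nm (red) light
for angle_mrad in range(0, 21, 2):
    intensity = two_slit_intensity(angle_mrad * 1e-3, 0.1e-3, 600e-9)
    print(f"{angle_mrad:2d} mrad  {'#' * int(20 * intensity)}")
```

The printout shows the intensity rising and falling as the angle increases – the alternating bright and dark fringes that a single stream of classical particles could never produce.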

However, the light arriving at the screen was always found to be absorbed as though it were composed of discrete particles (aka. photons). To make matters even more confusing, measuring devices were put in place to observe the photons as they passed through the slits. When this was done, the photons behaved as particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines.

By placing an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more! This could only be resolved by concluding that light behaves as both a particle and a wave, and that observing it narrows the range of possibilities to the point where its behavior becomes predictable once more.

The Double Slit experiment not only gave rise to the particle-wave theory of photons, it also made scientists aware of the incredible, confounding world of quantum mechanics, where nothing is certain, outcomes are probabilistic, and the observer is no longer a passive subject, but an active participant with the power to change the outcome. For an animated demonstration of the Double Slit experiment, click here.

We have written many articles about the Double Slit Experiment for Universe Today. Here’s an a forum discussion about a home-made double slit experiment, and here’s an article about the wave-particle duality.

If you’d like more info on the double slit experiment, check out these articles from Physorg.com and Space.com.

We’ve also recorded an entire episode of Astronomy Cast all about Quantum Mechanics. Listen here, Episode 138: Quantum Mechanics.

Dielectric Constant

Dielectric Constant. Image Credit: doitpoms.ac.uk

Take a look at that ceramic toilet or sink in your bathroom. Ever think to yourself that it has something in common with glass, mica, plastic, or even dry air? Ever consider that it might be useful in the construction of capacitors? Probably not, but that’s because this material has a property that is often overlooked. It is a dielectric, meaning a substance that is a poor conductor of electricity, but a good medium for storing electrical energy. Whether we are talking about ceramic, glass, air, or even vacuum (another good dielectric), scientists use what is called the Dielectric Constant, which is the ratio of the permittivity of a substance to the permittivity of free space. Or, in layman’s terms, the ratio of the amount of electrical energy stored in a material by an applied voltage, relative to that stored in a vacuum.

Confused? Well, perhaps a little explanation is necessary to dispel some of the technical roadblocks to understanding. First of all, a dielectric is defined as an insulating material or a very poor conductor of electric current. When dielectrics are placed in an electric field, practically no current flows in them because, unlike metals, they have no loosely bound, or free, electrons that may drift through the material. Instead, electric polarization occurs, where the positive charges within the dielectric are displaced minutely in the direction of the electric field, and the negative charges are displaced minutely in the direction opposite to the electric field. This slight separation of charge, or polarization, reduces the electric field within the dielectric itself. This property, as already mentioned, makes it a poor conductor, but a good storage medium.

In practice, most dielectric materials are solid. But, as already mentioned, dry air is also a dielectric, as are most pure, dry gases such as helium and nitrogen. These have a low dielectric constant, whereas materials like metal oxides have a high constant. Materials with moderate dielectric constants include ceramics, distilled water, paper, mica, polyethylene, and glass. As the dielectric constant increases, the electric flux density (the total amount of electrical charge per unit area) increases, provided all other factors remain unchanged. This in turn enables objects of a given size, such as sets of metal plates, to hold their electric charge for long periods of time, and/or to hold large quantities of charge.

Because they are good insulating materials (or dielectrics), metal oxides, dry air and vacuum are often used in the construction of high-energy capacitors as well as radio-frequency transmission lines, where electrical energy is stored at radio frequencies.
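
As a rough illustration of why a high dielectric constant matters, here is a minimal Python sketch of the standard parallel-plate capacitor formula C = κ ε0 A / d. The plate area, gap, and the κ values below are illustrative assumptions, not measured properties:

```python
EPSILON_0 = 8.854e-12   # permittivity of free space, F/m

def parallel_plate_capacitance(dielectric_constant, plate_area_m2, gap_m):
    """Capacitance of a dielectric-filled parallel-plate capacitor: C = k * eps0 * A / d."""
    return dielectric_constant * EPSILON_0 * plate_area_m2 / gap_m

# Same geometry (1 cm^2 plates, 0.1 mm gap), three different fills
for material, k in [("vacuum", 1.0), ("paper (approx.)", 3.5), ("metal-oxide ceramic (approx.)", 1000.0)]:
    c_farads = parallel_plate_capacitance(k, plate_area_m2=1e-4, gap_m=1e-4)
    print(f"{material}: {c_farads * 1e12:.1f} pF")
```

Swapping the vacuum gap for a high-κ ceramic multiplies the charge stored at a given voltage by the same factor, which is exactly why such materials show up in high-energy capacitors.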

We have written many articles about the dielectric constant for Universe Today. Here’s an article about how microwaves work, and here’s an article about the table-top test of general relativity.

If you’d like more info on dielectric constant, check out these articles from Hyperphysics and Web Physics.

We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.

Sources:
http://en.wikipedia.org/wiki/Dielectric
http://en.wikipedia.org/wiki/Relative_permittivity
http://en.wikipedia.org/wiki/Flux
http://en.wikipedia.org/wiki/Electrostatic
http://www.britannica.com/EBchecked/topic/162637/dielectric-constant
http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci546287,00.html
http://www.britannica.com/EBchecked/topic/162630/dielectric

What is an Enhanced Greenhouse Effect?

Greenhouse Effect vs. Enhanced Greenhouse Effect. Image Credit: environment.act.gov.au

Every day, solar radiation from the Sun reaches the surface of our planet. There it is absorbed and re-emitted as thermal radiation, which is absorbed by atmospheric greenhouse gases (such as carbon dioxide) and re-radiated in all directions. Known as the Greenhouse Effect, this process is essential to life as we know it. Without it, Earth’s surface temperature would be significantly lower and many life forms would cease to exist. However, where human activity is involved, this effect has been shown to have a downside. Indeed, when excess amounts of greenhouse gases are put into the atmosphere, this natural warming effect is boosted to the point where it can have damaging, even disastrous consequences for life here on Earth. This is known as the Enhanced Greenhouse Effect, where the natural warming caused by solar radiation and greenhouse gases is heightened by anthropogenic (i.e. human) factors.

The effect of CO2 and other greenhouse gases on the global climate was first publicized in 1896 by Swedish scientist Svante Arrhenius. It was he who first developed a theory to explain the ice ages, and he was also the first scientist to speculate that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature of the Earth. This work was expanded upon in the mid-20th century by Guy Stewart Callendar, an English steam engineer and inventor who was also interested in the link between increased CO2 levels in the atmosphere and rising global temperatures. Thanks to his research in the field, the link between the two came to be known for a time as the “Callendar effect”.

As the 20th century rolled on, a scientific consensus emerged that recognized this phenomenon as a real and increasingly urgent problem. Relying on ice core data and atmospheric surveys performed by NASA, the Mauna Loa Observatory and countless other research institutes all over the planet, scientists now believe there is a direct link between human activity and the rise in global mean temperatures over the past fifty and even two hundred years. This is due largely to increased production of CO2 through fossil fuel burning and other activities such as cement production and tropical deforestation. In addition, methane production has also been linked to the increase in global temperatures, a result of the growing consumption of meat and the clearing of large areas of tropical rainforest to make room for pasture land.

According to the latest Assessment Report from the Intergovernmental Panel on Climate Change, released in 2007, “most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”. If left unchecked, it is unclear what the exact consequences would be, but most scenarios predict a steep drop in worldwide food production, widespread drought, glacial depletion, the near-total loss of the polar ice caps, and the possibility that the process could become irreversible.

Getting toasty in here!

We have written many articles about enhanced greenhouse effect for Universe Today. Here’s an article about greenhouse effect, and here’s an article about atmospheric gases.

If you’d like more info on Enhanced Greenhouse Effect, check out these articles from USA Today and Earth Observatory.

We’ve also recorded an episode of Astronomy Cast all about planet Earth. Listen here, Episode 51: Earth.

Sources:
http://en.wikipedia.org/wiki/Greenhouse_effect
http://www.science.org.au/nova/016/016key.htm
http://en.wikipedia.org/wiki/Radiative_forcing
http://en.wikipedia.org/wiki/Svante_Arrhenius
http://en.wikipedia.org/wiki/Callendar_effect
http://en.wikipedia.org/wiki/History_of_climate_change_science