How Satellites Stay in Orbit

GPS Satellite
A GPS satellite in orbit. Credit: NASA

An artificial satellite is a marvel of technology and engineering. The only thing comparable to the feat in technological terms is the scientific know-how that goes into placing, and keeping, one in orbit around the Earth. Just consider what scientists need to understand in order to make this happen: first there’s gravity, then a comprehensive knowledge of physics, and of course the nature of orbits themselves. So really, the question of how satellites stay in orbit is a multidisciplinary one that involves a great deal of technical and academic knowledge.

First, to understand how a satellite orbits the Earth, it is important to understand what orbit entails. Johannes Kepler was the first to accurately describe the mathematical shape of the orbits of planets. Whereas the orbits of planets about the Sun and of the Moon about the Earth were long thought to be perfectly circular, Kepler showed that they are elliptical. In order for an object to stay in orbit around the Earth, it must have enough speed to retrace its path. This is as true of a natural satellite as it is of an artificial one. From Kepler’s discovery, scientists were also able to infer that the closer a satellite is to an object, the stronger the force of attraction, and hence the faster it must travel in order to maintain its orbit.

Next comes an understanding of gravity itself. All objects possess a gravitational field, but it is only in the case of particularly large objects (i.e. planets) that this force is readily felt. In Earth’s case, the gravitational acceleration at the surface is 9.8 m/s^2. For an object in circular orbit about the Earth, however, the formula v = (GM/R)^1/2 applies, where v is the velocity of the satellite, G is the gravitational constant, M is the mass of the planet, and R is the distance from the satellite to the center of the Earth. Relying on this formula, we are able to see that the velocity required for orbit is equal to the square root of the distance from the object to the center of the Earth times the acceleration due to gravity at that distance. So if we wanted to put a satellite in a circular orbit at 500 km above the surface (what scientists would call a Low Earth Orbit, or LEO), it would need a speed of ((6.67 x 10^-11 * 6.0 x 10^24)/(6,900,000))^1/2, or about 7,615.77 m/s. The greater the altitude, the less velocity is needed to maintain the orbit.
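To see the formula in action, here is a minimal Python sketch that reproduces the LEO example above; the constants are the same rounded values used in the text (an Earth radius of 6,400 km is assumed, so that the 500 km orbit gives R = 6,900,000 m):

```python
import math

# Rounded constants, matching the figures used in the text above
G = 6.67e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 6.0e24   # mass of the Earth, kg
R_EARTH = 6.4e6    # radius of the Earth, m (assumed rounded value)

def orbital_velocity(altitude_m):
    """Circular orbital velocity v = (GM/R)^(1/2) at a given altitude."""
    r = R_EARTH + altitude_m  # distance from the Earth's center, m
    return math.sqrt(G * M_EARTH / r)

# The article's example: a circular Low Earth Orbit 500 km up
print(f"{orbital_velocity(500e3):.2f} m/s")  # ~7615.77 m/s
```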

So really, a satellite’s ability to maintain its orbit comes down to a balance between two factors: its velocity (the speed at which it would travel in a straight line), and the gravitational pull between the satellite and the planet it orbits. The higher the orbit, the less velocity is required; the nearer the orbit, the faster the satellite must move to ensure that it does not fall back to Earth.

We have written many articles about satellites for Universe Today. Here’s an article about artificial satellites, and here’s an article about geosynchronous orbit.

If you’d like more info on satellites, check out these articles:
Orbital Objects
List of satellites in geostationary orbit

We’ve also recorded an episode of Astronomy Cast about the space shuttle. Listen here, Episode 127: The US Space Shuttle.

Sources:
http://en.wikipedia.org/wiki/Satellite
http://science.howstuffworks.com/satellite6.htm
http://www.bu.edu/satellite/classroom/lesson05-2.html
http://library.thinkquest.org/C007258/Keep_Orbit.htm#

What Is The Double Slit Experiment?

Double Slit Experiment

Light… is it a particle or a wave? What fundamental mechanics govern its behavior? And most importantly, does the mere act of observation alter this behavior? This is the conundrum physicists have been puzzling over for the past two centuries, ever since the Double Slit experiment was first conducted and wave-particle mechanics began to be theorized.

Also known as Young’s experiment, this involved particle beams or coherent waves passing through two closely-spaced slits, the purpose of which was to measure the resulting impacts on a screen behind them. In quantum mechanics the double-slit experiment demonstrated the inseparability of the wave and particle natures of light and other quantum particles.

The Double Slit Experiment was first conducted by Thomas Young back in 1803, although Sir Isaac Newton is said to have performed a similar experiment in his own time. During the original experiments, Newton shone light on a small hair, whereas Young used a slip of card with a slit cut into it. More recently, scientists have used a point light source to illuminate a thin plate with two parallel slits, with the light passing through the slits striking a screen behind them.

According to classical particle theory, the results of the experiment should have corresponded to the slits, with the impacts on the screen appearing as two vertical lines. However, this was not the case. In many circumstances, the results showed a pattern of interference, something which could only occur if waves were involved.

Classical particles do not interfere with each other; they merely collide. If classical particles are fired in a straight line through a slit they will all strike the screen in a pattern the same size and shape as the slit. Where there are two open slits, the resulting pattern will simply be the sum of the two single-slit patterns (two vertical lines). But again and again, the experiment demonstrated that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen.
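For readers who want to see where the bands come from, here is a minimal sketch of the idealized two-slit intensity pattern, I ∝ cos²(π·d·sin θ/λ); the wavelength and slit separation are made-up illustrative values, and the single-slit diffraction envelope is ignored:

```python
import math

WAVELENGTH = 500e-9       # illustrative: green-blue light, m
SLIT_SEPARATION = 100e-6  # illustrative: distance between the slits, m

def relative_intensity(theta_rad):
    """Idealized two-slit pattern: 1.0 at a bright band, 0.0 at a dark band."""
    phase = math.pi * SLIT_SEPARATION * math.sin(theta_rad) / WAVELENGTH
    return math.cos(phase) ** 2

# Step across the screen in quarter-fringe increments: bright and dark
# bands alternate, instead of the two plain stripes classical particles
# would leave behind the slits.
for i in range(-4, 5):
    theta = i * 1.25e-3  # small angle, radians
    print(f"theta = {theta:+.5f} rad -> intensity {relative_intensity(theta):.2f}")
```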

However, the light arriving at the screen was always found to be absorbed as though it were composed of discrete particles (aka. photons). To make matters even more confusing, measuring devices were put in place to observe the photons as they passed through the slits. When this was done, the photons behaved as particles and their impacts on the screen corresponded to the slits: tiny particle-sized spots distributed in straight vertical lines.

With an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more! This could only be resolved by concluding that light behaves as both a particle and a wave, and that observing it narrows the range of behavioral possibilities to the point where its behavior becomes predictable once more.

The Double Slit experiment not only gave rise to the particle-wave theory of photons, it also made scientists aware of the incredible, confounding world of quantum mechanics, where outcomes are probabilistic rather than certain, and the observer is no longer a passive subject, but an active participant with the power to change the result. For an animated demonstration of the Double Slit experiment, click here.

We have written many articles about the Double Slit Experiment for Universe Today. Here’s an a forum discussion about a home-made double slit experiment, and here’s an article about the wave-particle duality.

If you’d like more info on the double slit experiment, check out these articles from Physorg.com and Space.com.

We’ve also recorded an entire episode of Astronomy Cast all about Quantum Mechanics. Listen here, Episode 138: Quantum Mechanics.

Dielectric Constant

Dielectric Constant
Dielectric Constant. Image Credit: doitpoms.ac.uk

Take a look at that ceramic toilet or sink in your bathroom. Ever think to yourself that it has something in common with glass, mica, plastic, or even dry air? Ever consider that it might be useful in the construction of capacitors? Probably not, but that’s because this material has a property that is often overlooked. It is a dielectric, meaning a substance that is a poor conductor of electricity but a good means of electrical storage. Whether we are talking about ceramic, glass, air, or even vacuum (another good dielectric), scientists use what is called the Dielectric Constant, which is the ratio of the permittivity of a substance to the permittivity of free space. Or, in layman’s terms, the ratio of the amount of electrical energy stored in a material by an applied voltage, relative to that stored in a vacuum.

Confused? Well, perhaps a little explanation is necessary to dispel some of the technical roadblocks to understanding. First of all, a dielectric is defined as an insulating material or a very poor conductor of electric current. When dielectrics are placed in an electric field, practically no current flows in them because, unlike metals, they have no loosely bound, or free, electrons that may drift through the material. Instead, electric polarization occurs, where the positive charges within the dielectric are displaced minutely in the direction of the electric field, and the negative charges are displaced minutely in the direction opposite to the electric field. This slight separation of charge, or polarization, reduces the electric field within the dielectric itself. This property, as already mentioned, makes it a poor conductor, but a good storage medium.
In practice, most dielectric materials are solid. But, as already mentioned, dry air is also dielectric, as are most pure, dry gases such as helium and nitrogen. These have a low dielectric constant, whereas things like metal oxides have a high constant. Materials with moderate dielectric constants include ceramics, distilled water, paper, mica, polyethylene, and glass. As the dielectric constant increases, the electric flux density increases (the total amount of electrical charge per area), but only if all other factors remain unchanged. This in turn enables objects of a given size, such as sets of metal plates, to hold their electric charge for long periods of time, and/or to hold large quantities of charge.
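As a concrete illustration, here is a minimal Python sketch of the standard parallel-plate relation C = κε₀A/d; the dielectric constants are rough, room-temperature values chosen only to show the trend the paragraph describes:

```python
EPSILON_0 = 8.854e-12  # permittivity of free space, F/m

def capacitance(kappa, area_m2, separation_m):
    """Parallel-plate capacitance C = kappa * epsilon_0 * A / d."""
    return kappa * EPSILON_0 * area_m2 / separation_m

# Rough, illustrative dielectric constants
MATERIALS = {"vacuum": 1.0, "dry air": 1.0006, "paper": 3.7, "glass": 5.0}

# Identical plates (10 cm x 10 cm, 1 mm apart); only the dielectric changes,
# and the stored charge per volt rises with the dielectric constant.
for name, kappa in MATERIALS.items():
    print(f"{name:8s}: {capacitance(kappa, 0.01, 1e-3) * 1e12:6.1f} pF")
```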
Because they constitute good insulating materials (or dielectrics), metal oxides, dry air and vacuum are often used in the construction of high-energy capacitors, as well as radio-frequency transmission lines, where electrical energy is stored at radio frequencies.

We have written many articles about the dielectric constant for Universe Today. Here’s an article about how microwaves work, and here’s an article about the table-top test of general relativity.

If you’d like more info on dielectric constant, check out these articles from Hyperphysics and Web Physics.

We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.

Sources:
http://en.wikipedia.org/wiki/Dielectric
http://en.wikipedia.org/wiki/Relative_permittivity
http://en.wikipedia.org/wiki/Flux
http://en.wikipedia.org/wiki/Electrostatic
http://www.britannica.com/EBchecked/topic/162637/dielectric-constant
http://searchcio-midmarket.techtarget.com/sDefinition/0,,sid183_gci546287,00.html
http://www.britannica.com/EBchecked/topic/162630/dielectric

What is an Enhanced Greenhouse Effect?

Enhanced Greenhouse Effect
Greenhouse Effect vs. Enhanced Greenhouse Effect. Image Credit: environment.act.gov.au

Every day, solar radiation reaches the surface of our planet from the sun. It is then converted into thermal radiation which is then absorbed by atmospheric greenhouse gases (such as carbon dioxide) and is re-radiated in all directions. Known as the Greenhouse Effect, this process is essential to life as we know it. Without it, Earth’s surface temperature would be significantly lower and many life forms would cease to exist. However, where human agency is involved, this effect has been shown to have a downside. Indeed, when excess amounts of greenhouse gases are put into the atmosphere, this natural warming effect is boosted to the point where it can have damaging, even disastrous consequences for life here on Earth. This process is known as the Enhanced Greenhouse Effect, where the natural process of warming caused by solar radiation and greenhouse gases is heightened by anthropogenic (i.e. human) factors.

The effect of CO2 and other greenhouse gases on the global climate was first publicized in 1896 by Swedish scientist Svante Arrhenius. It was he who first developed a theory to explain the ice ages, and he was also the first scientist to speculate that changes in the levels of carbon dioxide in the atmosphere could substantially alter the surface temperature of the Earth. This work was expanded upon in the mid-20th century by Guy Stewart Callendar, an English steam engineer and inventor who was also interested in the link between increased CO2 levels in the atmosphere and rising global temperatures. Thanks to his research in the field, the link between the two came to be known for a time as the “Callendar effect”.
As the 20th century rolled on, a scientific consensus emerged that recognized this phenomenon as a real and increasingly urgent problem. Relying on ice core data and atmospheric surveys performed by NASA, the Mauna Loa Observatory and countless other research institutes all over the planet, scientists now believe there is a direct link between human agency and the rise in global mean temperatures over the past fifty and even two hundred years. This is due largely to increased production of CO2 through fossil fuel burning and other activities such as cement production and tropical deforestation. In addition, methane production has also been linked to an increase in global temperatures; it is the result of growing consumption of meat and the need to clear large areas of tropical rainforest in order to make room for pasture land.

According to the latest Assessment Report from the Intergovernmental Panel on Climate Change, which was released in 2007, “most of the observed increase in globally averaged temperatures since the mid-20th century is very likely due to the observed increase in anthropogenic greenhouse gas concentrations”. If left unchecked, it is unclear what the exact consequences would be, but most scenarios predict a steep drop in worldwide food production, widespread drought, glacial depletion, the near-total depletion of the polar ice caps, and the possibility that the process could become irreversible.
Getting toasty in here!

We have written many articles about enhanced greenhouse effect for Universe Today. Here’s an article about greenhouse effect, and here’s an article about atmospheric gases.

If you’d like more info on Enhanced Greenhouse Effect, check out these articles from USA Today and Earth Observatory.

We’ve also recorded an episode of Astronomy Cast all about planet Earth. Listen here, Episode 51: Earth.

Sources:
http://en.wikipedia.org/wiki/Greenhouse_effect
http://www.science.org.au/nova/016/016key.htm
http://en.wikipedia.org/wiki/Radiative_forcing
http://en.wikipedia.org/wiki/Svante_Arrhenius
http://en.wikipedia.org/wiki/Callendar_effect
http://en.wikipedia.org/wiki/History_of_climate_change_science

What is Electromagnetic Induction?

Electromagnetic Induction
Electromagnetic Induction. Image Credit: ionaphysics.org

It is hard to imagine a world without electricity. At one time, electricity was a humble offering, providing humanity with unnatural light that did not depend on gas lamps or kerosene lanterns. Today, it has grown to become the basis of our comfort, providing our heat, lighting and climate control, and powering all of our appliances, be they for cooking, cleaning, or entertainment. And beneath most of the machines that make it possible is a simple law known as Electromagnetic Induction, a law which describes the operation of generators, electric motors, transformers, induction motors, synchronous motors, solenoids, and most other electrical machines. Scientifically speaking it refers to the production of voltage across a conductor (a wire or similar piece of conducting material) that is moving through a magnetic field.

Though many people are thought to have contributed to the discovery of this phenomenon, it is Michael Faraday who is credited with first making the discovery in 1831. Known as Faraday’s law, it states that “the induced electromotive force (EMF) in any closed circuit is equal to the time rate of change of the magnetic flux through the circuit”. In practice, this means that an electric current will be induced in any closed circuit when the magnetic flux (i.e. the amount of magnetic field) passing through a surface bounded by the conductor changes. This applies whether the field itself changes in strength or the conductor is moved through it.
Whereas it was already known at this time that an electric current produced a magnetic field, Faraday demonstrated that the reverse was also true. In short, he proved that one could generate an electric current by passing a wire through a magnetic field. To test this hypothesis, Faraday wrapped a piece of metal wire around a paper cylinder and then connected the coil to a galvanometer (a device used to measure electric current). He then moved a magnet back and forth inside the cylinder and recorded through the galvanometer that an electrical current was being induced in the wire. He confirmed from this that a changing magnetic field was necessary to induce a current, because when the magnet stopped moving, the current also ceased.
Today, electromagnetic induction is used to power many electrical devices. One of the most widely known uses is in electrical generators (such as hydroelectric dams) where mechanical power is used to move a magnetic field past coils of wire to generate voltage.
In mathematical form, Faraday’s law states that EMF = −dΦB/dt, where EMF is the electromotive force (often written ε), ΦB is the magnetic flux, and dΦB/dt is the rate at which that flux changes with time.
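As a rough numerical sketch of the law (the turn count and flux amplitude below are made-up illustrative values), the derivative can be approximated with a finite difference:

```python
import math

def magnetic_flux(t):
    """Illustrative flux through one turn, in Wb: e.g. a magnet spun at 60 Hz."""
    return 2e-3 * math.sin(2 * math.pi * 60 * t)

def induced_emf(t, turns=100, dt=1e-7):
    """Faraday's law, EMF = -N * dPhiB/dt, via a finite-difference derivative."""
    dphi = magnetic_flux(t + dt) - magnetic_flux(t)
    return -turns * dphi / dt

# At t = 0 the flux is changing fastest, so the EMF magnitude peaks:
# N * (2*pi*60 Hz) * 2e-3 Wb, about 75.4 V.
print(f"{induced_emf(0.0):.1f} V")
```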

We have written many articles about electromagnetic induction for Universe Today. Here’s an article about electromagnets, and here’s an article about generators.

If you’d like more info on electromagnetic induction, check out these articles from All About Circuits and Physics 24/7.

We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.

Sources:
http://en.wikipedia.org/wiki/Electromagnetic_induction
http://en.wikipedia.org/wiki/Faraday%27s_law_of_induction
http://en.wikipedia.org/wiki/Magnetic_flux
http://micro.magnet.fsu.edu/electromag/java/faraday2/
http://www.scienceclarified.com/El-Ex/Electromagnetic-Induction.html
http://en.wikipedia.org/wiki/Galvanometer

Convex Lens

Convex Lens

As every child is sure to find out at some point in their life, lenses can be an endless source of fun. They can be used for everything from examining small objects and type to focusing the sun’s rays. In the latter case, hopefully they choose to be humanitarian and burn things like paper and grass rather than ants! But the fact remains, a Convex Lens is the source of this scientific marvel. Typically made of glass or transparent plastic, a convex lens has at least one surface that curves outward like the exterior of a sphere. Of all lenses, it is the most common given its many uses.

A convex lens is also known as a converging lens: a lens that converges rays of light traveling parallel to its principal axis. Such lenses can be identified by their shape, which is relatively thick across the middle and thin at the upper and lower edges; the edges are curved outward rather than inward. As light from a distant source approaches the lens, the rays are parallel. As each ray reaches the glass surface, it refracts according to the effective angle of incidence at that point of the lens. Since the surface is curved, different rays of light refract to different degrees; the outermost rays refract the most. This runs contrary to what occurs when a divergent lens (otherwise known as concave, biconcave or plano-concave) is employed. In that case, light is refracted away from the axis and outward.

Lenses are classified by the curvature of their two optical surfaces. If the lens is biconvex or plano-convex, the lens is called positive or converging; most convex lenses fall into this category. A lens is biconvex (or double convex, or just convex) if both surfaces are convex. These types of lenses are used in the manufacture of magnifying glasses. If both surfaces have the same radius of curvature, the lens is known as equiconvex. If one of the surfaces is flat, the lens is plano-convex (or plano-concave, depending on the curvature of the other surface). A lens with one convex and one concave side is convex-concave or meniscus. These lenses are used in the manufacture of corrective lenses.
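To make the converging behavior concrete, here is a minimal sketch using the thin-lens relation 1/u + 1/v = 1/f (the same form quoted in the Concave Lens entry below); the focal length and object distances are illustrative:

```python
def image_distance(f_m, u_m):
    """Solve 1/u + 1/v = 1/f for v. In this sign convention a positive v
    is a real image beyond the lens; a negative v is a virtual image on
    the object's side."""
    return 1.0 / (1.0 / f_m - 1.0 / u_m)

F = 0.10  # converging (convex) lens, 10 cm focal length

# Object outside the focal length: a real, inverted image (camera, projector)
print(image_distance(F, 0.30))  # 0.15 m

# Object inside the focal length: negative v, a virtual, magnified image
# (this is the magnifying-glass case)
print(image_distance(F, 0.05))  # -0.10 m
```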

For an illustrated example of how images are formed with a convex lens, click here.

We have written many articles about lenses for Universe Today. Here’s an article about the concave lens, and here’s an article about telescope lens.

If you’d like more info on convex lens, check out these articles from The Physics Classroom and Wikipedia.

We’ve also recorded an episode of Astronomy Cast all about the Telescope. Listen here, Episode 33: Choosing and Using a Telescope.

Sources:
http://en.wikipedia.org/wiki/Lens_(optics)
http://homepage.mac.com/cbakken/obookshelf/cvreal.html
http://www.play-hookey.com/optics/lens_convex.html
http://www.answers.com/topic/convex-lens-1
http://www.physicsclassroom.com/class/refrn/u14l5a.cfm
http://www.tutorvista.com/content/science/science-ii/refraction-light/formation-convex.php

Conservation of Mass

Conservation of Mass
Conservation of Mass. Image Credit: www.efm.leeds.ac.uk

While it may offend anyone currently trying to lose that holiday weight, it is a classic physical law that in a closed system, mass can neither be created nor destroyed. Feeling discouraged yet? Well, don’t! Strictly speaking, this law does NOT mean you can’t drop pounds, just that within an isolated system (which your body is not) mass cannot be created or destroyed, although it may be rearranged in space and changed into different types of particles. This law is known as the Conservation of Mass, otherwise known as the principle of mass/matter conservation. More specifically, the law states that the mass of an isolated system cannot be changed as a result of processes acting inside the system. This implies that for any chemical process in a closed system, the mass of the reactants must equal the mass of the products. The law is considered “classical” in that it does not take into consideration more recent physical laws, such as special relativity or quantum mechanics, but it still applies in many contexts.
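A quick illustration of that closed-system bookkeeping: for a balanced chemical reaction, tallying the molar masses on each side gives the same total. The sketch below does this for methane combustion, using rounded standard atomic masses:

```python
# Rounded standard atomic masses, g/mol
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula):
    """Molar mass of a compound given as a {element: atom count} dict."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

# Methane combustion: CH4 + 2 O2 -> CO2 + 2 H2O
reactants = molar_mass({"C": 1, "H": 4}) + 2 * molar_mass({"O": 2})
products = molar_mass({"C": 1, "O": 2}) + 2 * molar_mass({"H": 2, "O": 1})

print(f"reactants: {reactants:.3f} g/mol, products: {products:.3f} g/mol")
assert abs(reactants - products) < 1e-9  # mass in equals mass out
```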

This law is rooted in classical Greek philosophy, which holds that “nothing can come from nothing”, often stated in its Latin form: ex nihilo nihil fit. The basic premise here, first espoused by Empedocles (ca. 490–430 BCE), is that no new matter can come into existence where none was present before. It was further elaborated on by Epicurus, Parmenides, and a number of Indian and Arab philosophers. However, it was not until the 18th century, with Antoine Lavoisier, that it graduated from the field of cosmology and became a scientific law. Lavoisier was the first to clearly outline it, in his seminal work Traité Élémentaire de Chimie (Elementary Treatise on Chemistry) in 1789.

Historically, the conservation of mass and weight was obscure for millennia because of the buoyant effect of the Earth’s atmosphere on the weight of gases. In addition, when a substance burns, mass appears to be lost, since ashes weigh less than the original substance. These effects were not understood until careful experiments in which chemical reactions such as rusting were performed in sealed glass ampules, whereby it was found that the chemical reaction did not change the weight of the sealed container. Once understood, the conservation of mass was of great importance in the transformation of alchemy into modern chemistry. When chemists realized that substances never disappeared from measurement with the scales (once buoyancy effects were held constant, or had otherwise been accounted for), they could for the first time embark on quantitative studies of the transformations of substances.

The historical concept of both matter and mass conservation is widely used in many fields such as chemistry, mechanics, and fluid dynamics. In relativity, the mass-energy equivalence theorem states that mass conservation is equivalent to energy conservation, which is the first law of thermodynamics.

We have written many articles about the conservation of mass for Universe Today. Here’s an article about nuclear fusion, and here’s an article about the atom.

If you’d like more info on the law of conservation of mass, check out these articles from NASA Glenn Research Center and Engineering Toolbox.

We’ve also recorded an entire episode of Astronomy Cast all about the Atom. Listen here, Episode 164: Inside the Atom.

Sources:
http://en.wikipedia.org/wiki/Conservation_of_mass
http://www.grc.nasa.gov/WWW/K-12/airplane/mass.html
http://en.wikipedia.org/wiki/Nothing_comes_from_nothing
http://en.wikipedia.org/wiki/Antoine_Lavoisier
http://en.wikipedia.org/wiki/Jain_philosophy

What is Conductance?

Conductance
Electricity. Image Source: juniorcitizen.org.uk

Electricity is an amazing, and potentially very dangerous, thing. In addition to powering our appliances, heating our homes, starting our cars and providing us with unnatural lighting during the evenings, it is also one of the fundamental forces upon which the Universe is based. Knowing what governs it is crucial to using it for our benefit, as well as understanding how the Universe works.

For those of us looking to understand it – perhaps for the sake of becoming an electrical engineer, a skilled do-it-yourselfer, or just satisfying scientific curiosity – some basic concepts need to be kept in mind. For example, we need to understand a little thing known as conductance, a quantity that is related to resistance; taken together, the two govern the flow of electrical current.

Definition:

Conductance is the measure of how easily electricity flows along a certain path through an electrical element, and since electricity is so often explained in terms of opposites, conductance is considered the opposite of resistance. The reciprocal relationship between the two can be expressed through the following equations: R = 1/G and G = 1/R, where R equals resistance and G equals conductance.

Another way to represent this is: Ω = 1/S and S = 1/Ω, where Ω (the Greek letter omega) stands for the ohm, the unit of resistance, and S stands for the siemens, the unit of conductance. In addition, one siemens is equivalent to one ampere (A) per volt (V).

In other words, when a current of one ampere (1 A) passes through a component across which a voltage of one volt (1 V) exists, then the conductance of that component is one siemens (1 S). This can be expressed through the equation G = I/E, where G represents conductance, I is the current through the component (in amperes), and E is the voltage across it (in volts).

The temperature of the material is definitely a factor, but assuming a constant temperature, the conductance of a material can be calculated.
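Put together, those relationships are easy to sanity-check in code; here is a minimal sketch (the resistor value and measurement figures are made up):

```python
def conductance_from_resistance(r_ohms):
    """G = 1/R: conductance in siemens from resistance in ohms."""
    return 1.0 / r_ohms

def conductance_from_measurement(current_a, voltage_v):
    """G = I/E: conductance from a measured current and voltage."""
    return current_a / voltage_v

# A 50-ohm component...
print(conductance_from_resistance(50.0))       # 0.02 S
# ...and the same value recovered from a measurement of 0.1 A across 5 V
print(conductance_from_measurement(0.1, 5.0))  # 0.02 S
```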

Measurement:

The SI (International System) derived unit of conductance is known as the siemens, named after the German inventor and industrialist Ernst Werner von Siemens. Since conductance is the opposite of resistance, it is usually expressed as the reciprocal of one ohm – a unit of electrical resistance named after Georg Simon Ohm – also known as one mho (ohm spelt backwards).

The mho was later re-designated the siemens, expressed by the notational symbol S. The factors that affect the magnitude of resistance are exactly the same for conductance, but they affect conductance in the opposite manner. Therefore, conductance is directly proportional to the cross-sectional area of a material, and inversely proportional to its length.

We have written many articles about conductance for Universe Today. Here’s What are Electrons?, Who Discovered Electricity?, What is Static Electricity?, What is Electromagnetic Induction?, and What are the Uses of Electromagnets?

If you’d like more info on Conductance, check out All About Circuits for another article about conductance.

We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.

Concave Lens

Concave Lens

For centuries, human beings have been able to do some pretty remarkable things with lenses. Although we can’t be sure when or how the first person stumbled onto the concept, it is clear that at some point in the past, ancient people (probably from the Near East) realized that they could manipulate light using a shaped piece of glass. Over the centuries, the ways in which lenses were used began to increase, as people discovered that they could accomplish different things using differently shaped lenses. In addition to making distant objects appear nearer (i.e. the telescope), they could also be used to make small objects appear larger and blurry objects appear clear (i.e. magnifying glasses and corrective lenses). The lenses used to accomplish these tasks fall into two categories of simple lenses: Convex and Concave Lenses.

A concave lens is a lens that possesses at least one surface that curves inwards. It is a diverging lens, meaning that it spreads out light rays that have been refracted through it. A concave lens is thinner at its centre than at its edges, and is used to correct short-sightedness (myopia). The writings of Pliny the Elder (23–79) make mention of what is arguably the earliest use of a corrective lens. According to Pliny, Emperor Nero was said to watch gladiatorial games using an emerald, presumably shaped concave to correct for myopia.

After light rays have passed through the lens, they appear to diverge from a point called the principal focus: the point from which collimated light moving parallel to the axis of the lens appears to spread after refraction. The image formed by a concave lens is virtual, meaning that the rays only appear to come from it and it cannot be projected onto a screen; it is smaller than the object, which makes the object seem farther away than it actually is. Curved mirrors often have the same effect, which is why many (especially on cars) come with a warning: objects in mirror are closer than they appear. The image is also upright, meaning not inverted, unlike the images formed by some curved reflective surfaces and lenses.

The lens formula that is used to work out the position and nature of an image formed by a lens can be expressed as follows: 1/u + 1/v = 1/f, where u and v are the distances of the object and image from the lens, respectively, and f is the focal length of the lens.
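Here is a minimal sketch of that formula applied to a diverging lens, where the focal length is taken as negative; the numbers are illustrative:

```python
def lens_image(f_m, u_m):
    """Solve 1/u + 1/v = 1/f for the image distance v, and return it
    together with the magnification |v/u|. A negative f models a
    diverging (concave) lens; a negative v marks a virtual image."""
    v = 1.0 / (1.0 / f_m - 1.0 / u_m)
    return v, abs(v / u_m)

# Concave lens, 10 cm focal length, object 30 cm away
v, mag = lens_image(-0.10, 0.30)
print(f"v = {v:.3f} m, magnification = {mag:.2f}")
# v = -0.075 m: an upright, virtual image at 0.25x the object's size
```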

We have written many articles about concave lens for Universe Today. Here’s an article about the telescope mirror, and here’s an article about the astronomical telescope.

If you’d like more info on the Concave Lens, check out NASA’s The Most Dreadful Weapon, and here’s a link to Build a Telescope Page.

We’ve also recorded an entire episode of Astronomy Cast all about the Telescope. Listen here, Episode 150: Telescopes, The Next Level.

Sources:
http://en.wiktionary.org/wiki/concave
http://www.physics.mun.ca/~jjerrett/lenses/concave.html
http://encyclopedia.farlex.com/concave+lens
http://en.wikipedia.org/wiki/Collimated_light
http://en.wikipedia.org/wiki/Virtual_image

What is the Coefficient of Friction?

Friction
Friction. Image Source: Wikipedia

Ever watch a car spin its wheels and notice all the smoke and tire marks it leaves behind? How about going down a slide? You might have noticed that if it was wet, you travelled farther than if the surface was dry. Ever wonder just how far you’d get if you tried to slide on wet concrete (don’t try this, by the way!)? Why is it that some surfaces are easy to slide across while others are destined to stop you short? It comes down to a little thing known as friction, which is essentially the force that resists surfaces sliding against each other. When it comes to measuring friction, the value scientists use is called the Coefficient of Friction, or COF.

The COF is the value which describes the ratio of the force of friction between two bodies to the force pressing them together. Values range from near zero to greater than one, depending on the types of materials involved. For example, ice on steel has a low coefficient of friction, while rubber on pavement (i.e. car tires on the road) has a comparatively high one. In short, rougher surfaces tend to have higher effective values, whereas smoother surfaces have lower ones, owing to the friction they generate when pressed together.

There are essentially two kinds of coefficients: static and kinetic. The static coefficient of friction applies to objects that are motionless, while the kinetic (or sliding) coefficient of friction applies to objects that are in motion. The two are not always the same: motionless objects often experience more friction than moving ones, requiring more force to put them in motion than to sustain them in motion.

Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but Teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, which is elusive at best, whereas a value above 1 means that the force required to slide an object along a surface is greater than the normal force of the surface on the object.

Mathematically, frictional force can be expressed as Ff = μN, where Ff is the frictional force (N, lb), μ is the static (μs) or kinetic (μk) coefficient of friction, and N is the normal force (N, lb).
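And in code, the relation is a one-liner; the crate mass and coefficients below are illustrative values in the typical dry-material range mentioned above:

```python
def friction_force(mu, normal_force_n):
    """Ff = mu * N: frictional force from a coefficient and a normal force."""
    return mu * normal_force_n

# A 10 kg crate on a level floor: the normal force is m * g
NORMAL = 10.0 * 9.8  # newtons

MU_STATIC, MU_KINETIC = 0.6, 0.4  # illustrative: static grip exceeds kinetic

print(f"force to start it moving: > {friction_force(MU_STATIC, NORMAL):.0f} N")
print(f"force to keep it moving:    {friction_force(MU_KINETIC, NORMAL):.0f} N")
```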

We have written many articles about the coefficient of friction for Universe Today. Here’s an article about friction, and here’s an article about aerobraking.

If you’d like more info on the Friction, check out Hyperphysics, and here’s a link to Friction Games for Kids by Science Kids.

We’ve also recorded an entire episode of Astronomy Cast all about Gravity. Listen here, Episode 102: Gravity.

Sources:
http://en.wikipedia.org/wiki/Friction
http://www.engineeringtoolbox.com/friction-coefficients-d_778.html
http://www.thefreedictionary.com/coefficient+of+friction