Under Mount Ikeno, Japan, in an old mine that sits one-thousand meters (3,300 feet) beneath the surface, lies the Super-Kamiokande Observatory (SKO). Since 1996, when it began conducting observations, researchers have been using this facility’s Cherenkov detector to look for signs of proton decay and neutrinos in our galaxy. This is no easy task, since neutrinos are very difficult to detect.
But thanks to a new computer system that will be able to monitor neutrinos in real-time, the researchers at the SKO will be able to study these mysterious particles more closely in the near future. In so doing, they hope to understand how stars form and eventually collapse into black holes, and sneak a peek at how matter was created in the early Universe.
Neutrinos, put simply, are one of the fundamental particles that make up the Universe. Compared to other fundamental particles, they have very little mass, no charge, and only interact with other types of particles via the weak nuclear force and gravity. They are created in a number of ways, most notably through radioactive decay, the nuclear reactions that power a star, and in supernovae.
In accordance with the standard Big Bang model, the neutrinos left over from the creation of the Universe are the most abundant particles in existence. At any given moment, trillions of these particles are believed to be moving around us and through us. But because of the way they interact with matter (i.e. only weakly) they are extremely difficult to detect.
For this reason, neutrino observatories are built deep underground to avoid interference from cosmic rays. They also rely on Cherenkov detectors, which are essentially massive water tanks that have thousands of sensors lining their walls. These sensors look for charged particles moving faster than the local speed of light (i.e. the speed of light in water), which betray themselves through a faint glow – known as Cherenkov radiation.
The detector at the SKO is currently the largest in the world. It consists of a cylindrical stainless steel tank that is 41.4 m (136 ft) tall and 39.3 m (129 ft) in diameter, and holds over 45,000 metric tons (50,000 US tons) of ultra-pure water. In the interior, 11,146 photomultiplier tubes are mounted, which detect light in the ultraviolet, visible, and near-infrared ranges of the electromagnetic spectrum with extreme sensitivity.
For years, researchers at the SKO have used the facility to examine solar neutrinos, atmospheric neutrinos and man-made neutrinos. However, those that are created by supernovae are very difficult to detect, since they appear suddenly and are difficult to distinguish from other kinds. But with the newly-added computer system, the Super-Kamiokande researchers are hoping that will change.
“Supernova explosions are one of the most energetic phenomena in the universe and most of this energy is released in the form of neutrinos. This is why detecting and analyzing neutrinos emitted in these cases, other than those from the Sun or other sources, is very important for understanding the mechanisms in the formation of neutron stars – a type of stellar remnant – and black holes,” said Labarga.
Basically, the new computer system is designed to analyze the events recorded in the depths of the observatory in real-time. If it detects an abnormally large flow of neutrinos, it will quickly alert the experts manning the controls. They will then be able to assess the significance of the signal within minutes and see if it is actually coming from a nearby supernova.
“During supernova explosions an enormous number of neutrinos is generated in an extremely small space of time – a few seconds – and this why we need to be ready,” Labarga added. “This allows us to research the fundamental properties of these fascinating particles, such as their interactions, their hierarchy and the absolute value of their mass, their half-life, and surely other properties that we still cannot even imagine.”
Equally important is the fact that this system will give the SKO the ability to issue early warnings to research centers around the world. Ground-based observatories, where astronomers are keen to watch the creation of cosmic neutrinos by supernovae, will then be able to point all of their optical instruments towards the source in advance (since the electromagnetic signal will take longer to arrive).
Through this collaborative effort, astrophysicists may be able to better understand some of the most elusive neutrinos of all. Discerning how these fundamental particles interact with others could bring us one step closer to a Grand Unified Theory – one of the major goals of the Super-Kamiokande Observatory.
To date, only a few neutrino detectors exist in the world. These include the Irvine-Michigan-Brookhaven (IMB) detector in Ohio, the Sudbury Neutrino Observatory (SNOLAB) in Ontario, Canada, and the Super-Kamiokande Observatory in Japan.
Everyone knows just how fun magnets can be. As a child, who among us didn’t love to see if we could make our silverware stick together? And how about those little magnetic rocks that we could arrange to form just about any shape because they stuck together? Well, magnetism is not just an endless source of fun or good for scientific experiments; it’s also one of the basic physical forces upon which the universe is based.
The attraction known as magnetism occurs when a magnetic field is present, which is a field of force produced by a magnetic object or particle. It can also be produced by a changing electric field, and is detected by the force it exerts on other magnetic materials. This is why the area of study dealing with magnets is known as electromagnetism.
Definition:
Magnetic fields can be defined in a number of ways, depending on the context. However, in general terms, a magnetic field is an invisible field that exerts a magnetic force on substances which are sensitive to magnetism. Magnets also exert forces and torques on each other through the magnetic fields they create.
They can be generated within the vicinity of a magnet, by an electric current, or by a changing electrical field. They are dipolar in nature, which means that they have both a north and a south magnetic pole. The International System (SI) unit used to measure magnetic fields is the Tesla, while smaller magnetic fields are measured in terms of Gauss (1 Tesla = 10,000 Gauss).
Mathematically, a magnetic field is defined in terms of the amount of force it exerts on a moving charge. The measurement of this force is consistent with the Lorentz Force Law, which can be expressed as F = qv × B, where F is the magnetic force, q is the charge, v is the velocity, and B is the magnetic field. This relationship is a vector (cross) product, meaning that F is perpendicular to both v and B.
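To make that cross-product relationship concrete, here is a minimal Python sketch of the magnetic part of the Lorentz force. The charge, velocity, and field values are arbitrary example numbers (not drawn from the article), chosen only to show that the resulting force is perpendicular to both v and B.

```python
import numpy as np

# Magnetic part of the Lorentz force, F = q (v x B).
# The values below are illustrative assumptions, not measured data.
q = 1.602e-19                      # charge of a proton, in coulombs
v = np.array([1.0e5, 0.0, 0.0])    # velocity in m/s, along x
B = np.array([0.0, 0.0, 1.5])      # magnetic field in teslas, along z

F = q * np.cross(v, B)             # force in newtons

print(F)   # ~[0.0, -2.4e-14, 0.0]: perpendicular to both v and B
```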
Field Lines:
Magnetic fields may be represented by continuous lines of force (or magnetic flux) that emerge from north-seeking magnetic poles and enter south-seeking poles. The density of the lines indicates the magnitude of the field, being more concentrated at the poles (where the field is strong) and fanning out and weakening the farther they get from the poles.
A uniform magnetic field is represented by equally-spaced, parallel straight lines. These lines are continuous, forming closed loops that run from north to south, and looping around again. The direction of the magnetic field at any point is parallel to the direction of nearby field lines, and the local density of field lines can be made proportional to its strength.
Magnetic field lines resemble a fluid flow, in that they are streamlined and continuous, and more (or fewer) lines appear depending on how closely a field is observed. Field lines are useful as a representation of magnetic fields, allowing many laws of magnetism (and electromagnetism) to be simplified and expressed in mathematical terms.
A simple way to observe a magnetic field is to place iron filings around an iron magnet. The arrangements of these filings will then correspond to the field lines, forming streaks that connect at the poles. They also appear during polar auroras, in which visible streaks of light line up with the local direction of the Earth’s magnetic field.
History of Study:
The study of magnetic fields began in 1269, when French scholar Petrus Peregrinus de Maricourt mapped out the magnetic field of a spherical magnet using iron needles. He named the places where these lines crossed “poles” (in reference to Earth’s poles), and went on to claim that all magnets possess them.
During the 16th century, English physicist and natural philosopher William Gilbert of Colchester replicated Peregrinus’ experiment. In 1600, he published his findings in a treatise (De Magnete) in which he stated that the Earth is a magnet. His work was intrinsic to establishing magnetism as a science.
In 1750, English clergyman and philosopher John Michell stated that magnetic poles attract and repel each other. The force with which they do this, he observed, is inversely proportional to the square of the distance, otherwise known as the inverse square law.
In 1785, French physicist Charles-Augustin de Coulomb experimentally verified this inverse square law. In the 19th century, French mathematician and geometer Siméon Denis Poisson created the first model of the magnetic field, which he presented in 1824.
By the 19th century, further revelations refined and challenged previously-held notions. For example, in 1819, Danish physicist and chemist Hans Christian Orsted discovered that an electric current creates a magnetic field around it. In 1825, André-Marie Ampère proposed a model of magnetism where this force was due to perpetually flowing loops of current, instead of the dipoles of magnetic charge.
In 1831, English scientist Michael Faraday showed that a changing magnetic field generates an encircling electric field. In effect, he discovered electromagnetic induction, which was characterized by Faraday’s law of induction (aka. Faraday’s Law).
Between 1861 and 1865, Scottish scientist James Clerk Maxwell published his theories on electricity and magnetism – known as Maxwell’s Equations. These equations not only pointed to the interrelationship between electricity and magnetism, but showed how light itself is an electromagnetic wave.
The field of electrodynamics was extended further during the late 19th and 20th centuries. For instance, Albert Einstein (who proposed the theory of Special Relativity in 1905) showed that electric and magnetic fields are part of the same phenomenon viewed from different reference frames. The emergence of quantum mechanics also led to the development of quantum electrodynamics (QED).
Examples:
A classic example of a magnetic field is the field created by an iron magnet. As previously mentioned, the magnetic field can be illustrated by surrounding the magnet with iron filings, which will be attracted to its field lines and arrange themselves in looping formations around the poles.
Larger examples of magnetic fields include the Earth’s magnetic field, which resembles the field produced by a simple bar magnet. This field is believed to be the result of movement in the Earth’s core, which is divided between a solid inner core and a molten outer core that rotates in the opposite direction of Earth. This creates a dynamo effect, which is believed to power Earth’s magnetic field (aka. magnetosphere).
Such a field is called a dipole field because it has two poles – north and south, located at either end of the magnet – where the strength of the field is at its maximum. At the midpoint between the poles the strength is half of its polar value. The field extends tens of thousands of kilometers into space, forming the Earth’s magnetosphere.
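As a rough illustration of how quickly a dipole field weakens with distance, the short Python sketch below assumes Earth's field is a pure dipole with an equatorial surface strength of about 3.1 × 10⁻⁵ T (an approximate round number; the real field varies with location) and applies the inverse-cube falloff of an ideal dipole.

```python
# Idealized dipole: equatorial field strength falls off as (R_earth / r)^3.
# The surface value used here is an assumed approximate figure, not a
# measurement from the article.
R_EARTH = 6.371e6        # Earth's mean radius, in meters
B_SURFACE = 3.1e-5       # approximate equatorial surface field, in teslas

def dipole_field(r_meters):
    """Equatorial field magnitude of an ideal dipole at distance r from Earth's center."""
    return B_SURFACE * (R_EARTH / r_meters) ** 3

for n in (1, 2, 5, 10):
    print(f"{n:>2} Earth radii: {dipole_field(n * R_EARTH):.2e} T")
```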
Other celestial bodies have been shown to have magnetic fields of their own. This includes the gas and ice giants of the Solar System – Jupiter, Saturn, Uranus and Neptune. Jupiter’s magnetic field is 14 times as powerful as that of Earth, making it the strongest magnetic field of any planetary body. Jupiter’s moon Ganymede also has a magnetic field, and is the only moon in the Solar System known to have one.
Mars is believed to have once had a magnetic field similar to Earth’s, which was also the result of a dynamo effect in its interior. However, due to either a massive collision, or rapid cooling in its interior, Mars lost its magnetic field billions of years ago. It is because of this that Mars is believed to have lost most of its atmosphere, and the ability to maintain liquid water on its surface.
When it comes down to it, electromagnetism is a fundamental part of our Universe, right up there with nuclear forces and gravity. Understanding how it works, and where magnetic fields occur, is not only key to understanding how the Universe came to be, but may also help us to find life beyond Earth someday.
Have you ever taken a look at a piece of firewood and said to yourself, “gee, I wonder how much energy it would take to split that thing apart”? Chances are you haven’t; few people do. But for physicists, asking how much energy is needed to separate something into its component pieces is actually a pretty important question.
In the field of physics, this is what is known as binding energy, or the amount of mechanical energy it would take to disassemble an atom into its separate parts. This concept is used by scientists on many different levels, which includes the atomic level, the nuclear level, and in astrophysics and chemistry.
Nuclear Force:
As anyone who remembers their basic chemistry or physics surely knows, atoms are composed of subatomic particles. Positively-charged particles (protons) and neutral particles (neutrons) – known collectively as nucleons – are arranged in the center (the nucleus). These are surrounded by electrons, which orbit the nucleus and are arranged in different energy levels.
The reason why subatomic particles that have fundamentally different charges are able to exist so close together is the presence of the Strong Nuclear Force – a fundamental force of the universe that allows subatomic particles to be attracted to one another at short distances. It is this force that counteracts the repulsive force (known as the Coulomb Force) that causes protons to repel each other.
Therefore, any attempt to divide the nucleus into the same number of free, unbound neutrons and protons – so that they are far enough from each other that the strong nuclear force can no longer cause the particles to interact – will require enough energy to break these nuclear bonds.
Thus, binding energy is not only the amount of energy required to break strong nuclear force bonds, it is also a measure of the strength of the bonds holding the nucleons together.
Nuclear Fission and Fusion:
In order to separate nucleons, energy must be supplied to the nucleus, which is usually accomplished by bombarding the nucleus with high-energy particles. In the case of bombarding heavy atomic nuclei (like uranium or plutonium atoms) with neutrons, this is known as nuclear fission.
However, binding energy also plays a role in nuclear fusion, where light nuclei (such as hydrogen) are fused together under high-energy conditions. If the binding energy of the products is higher when light nuclei fuse, or when heavy nuclei split, either of these processes will result in a release of the “extra” binding energy. This energy is referred to as nuclear energy, or loosely as nuclear power.
It is observed that the mass of any nucleus is always less than the sum of the masses of the individual constituent nucleons which make it up. The “loss” of mass that results when a nucleus is split into smaller nuclei, or when nucleons merge to form a larger nucleus, is also attributed to binding energy. This missing mass may be lost during the process in the form of heat or light.
Once the system cools to normal temperatures and returns to ground states in terms of energy levels, there is less mass remaining in the system. In that case, the removed heat represents exactly the mass “deficit”, and the heat itself retains the mass which was lost (from the point of view of the initial system). This mass appears in any other system which absorbs the heat and gains thermal energy.
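To make the mass-energy bookkeeping concrete, here is a minimal Python sketch that estimates the binding energy of a helium-4 nucleus from its mass defect, E = Δm·c². The particle masses are approximate textbook values expressed in atomic mass units, assumed here for illustration rather than taken from the article.

```python
# Binding energy from the mass defect, using helium-4 as an example.
# Masses are approximate textbook values in atomic mass units (u);
# 1 u of mass corresponds to about 931.494 MeV of energy.
M_PROTON = 1.007276        # u
M_NEUTRON = 1.008665       # u
M_HE4_NUCLEUS = 4.001506   # u, the bound helium-4 nucleus
U_TO_MEV = 931.494         # MeV per u

mass_of_parts = 2 * M_PROTON + 2 * M_NEUTRON
mass_defect = mass_of_parts - M_HE4_NUCLEUS        # the "missing" mass
binding_energy_mev = mass_defect * U_TO_MEV

print(f"mass defect: {mass_defect:.6f} u")
print(f"binding energy: {binding_energy_mev:.1f} MeV (~7 MeV per nucleon)")
```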
Types of Binding Energy:
Strictly speaking, there are several different types of binding energy, depending on the particular field of study. When it comes to particle physics, binding energy refers to the energy an atom derives from electromagnetic interaction, and is also the amount of energy required to disassemble an atom into free nucleons.
In the case of removing electrons from an atom, a molecule, or an ion, the energy required is known as “electron binding energy” (aka. ionization potential). In general, the binding energy of a single proton or neutron in a nucleus is approximately a million times greater than the binding energy of a single electron in an atom.
In astrophysics, scientists employ the term “gravitational binding energy” to refer to the amount of energy it would take to pull apart (to infinity) an object held together by gravity alone – i.e. any stellar object like a star, a planet, or a comet. It also refers to the amount of energy that is liberated (usually in the form of heat) during the accretion of such an object from material falling from infinity.
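As a back-of-the-envelope example of gravitational binding energy, the Python sketch below applies the uniform-density-sphere formula, U = 3GM²/(5R), to the Earth. Because real planets are denser toward their centers, the true value is somewhat larger; the mass and radius used are standard approximate figures, not values taken from the article.

```python
# Gravitational binding energy of a uniform-density sphere: U = 3*G*M^2 / (5*R).
# Applied to Earth with approximate textbook values.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

U = 3 * G * M_EARTH**2 / (5 * R_EARTH)
print(f"~{U:.2e} J to disperse Earth's material to infinity")   # ~2.2e32 J
```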
Finally, there is what is known as “bond” energy, which is a measure of the bond strength in chemical bonds, and is also the amount of energy (heat) it would take to break a chemical compound down into its constituent atoms. Basically, binding energy is the very thing that binds our Universe together. And when various parts of it are broken apart, it is the amount of energy needed to carry it out.
The study of binding energy has numerous applications, not the least of which are nuclear power, electricity, and chemical manufacture. And in the coming years and decades, it will be intrinsic in the development of nuclear fusion!
Since ancient times, philosophers and scholars have sought to understand light. In addition to trying to discern its basic properties (i.e. what is it made of – particle or wave, etc.) they have also sought to make finite measurements of how fast it travels. Since the late-17th century, scientists have been doing just that, and with increasing accuracy.
In so doing, they have gained a better understanding of light’s mechanics and the important role it plays in physics, astronomy and cosmology. Put simply, light moves at incredible speeds and is the fastest moving thing in the Universe. Its speed is considered a constant and an unbreakable barrier, and is used as a means of measuring distance. But just how fast does it travel?
Speed of Light (c):
Light travels at a constant speed of 1,079,252,848.8 (1.07 billion) km per hour. That works out to 299,792,458 m/s, or about 670,616,629 mph (miles per hour). To put that in perspective, if you could travel at the speed of light, you would be able to circumnavigate the globe approximately seven and a half times in one second. Meanwhile, a person flying at an average speed of about 800 km/h (500 mph), would take over 50 hours to circle the planet just once.
To put that into an astronomical perspective, the average distance from the Earth to the Moon is 384,398.25 km (238,854 miles). So light crosses that distance in about a second. Meanwhile, the average distance from the Sun to the Earth is ~149,597,886 km (92,955,817 miles), which means that light only takes about 8 minutes to make that journey.
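These travel times are easy to check. The short Python sketch below simply divides the average distances quoted above by the speed of light; the distances are the same figures used in this article.

```python
# Light travel times over the average distances quoted above.
C = 299_792_458               # speed of light, m/s

EARTH_MOON = 384_398.25e3     # average Earth-Moon distance, m
EARTH_SUN = 149_597_886e3     # average Earth-Sun distance, m

print(f"Earth to Moon: {EARTH_MOON / C:.2f} s")        # about 1.3 seconds
print(f"Sun to Earth:  {EARTH_SUN / C / 60:.1f} min")  # about 8.3 minutes
```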
Little wonder then why the speed of light is the metric used to determine astronomical distances. When we say a star like Proxima Centauri is 4.25 light years away, we are saying that it would take – traveling at a constant speed of 1.07 billion km per hour (670,616,629 mph) – about 4 years and 3 months to get there. But just how did we arrive at this highly specific measurement for “light-speed”?
History of Study:
Until the 17th century, scholars were unsure whether light traveled at a finite speed or instantaneously. From the days of the ancient Greeks to medieval Islamic scholars and scientists of the early modern period, the debate went back and forth. It was not until the work of Danish astronomer Ole Rømer (1644-1710) that the first quantitative measurement was made.
In 1676, Rømer observed that the periods of Jupiter’s innermost moon Io appeared to be shorter when the Earth was approaching Jupiter than when it was receding from it. From this, he concluded that light travels at a finite speed, and estimated that it takes about 22 minutes to cross the diameter of Earth’s orbit.
Christiaan Huygens used this estimate and combined it with an estimate of the diameter of the Earth’s orbit to obtain an estimate of 220,000 km/s. Isaac Newton also spoke about Rømer’s calculations in his seminal work Opticks (1706). Adjusting for the distance between the Earth and the Sun, he calculated that it would take light seven or eight minutes to travel from one to the other. In both cases, they were off by a relatively small margin.
Later measurements made by French physicists Hippolyte Fizeau (1819 – 1896) and Léon Foucault (1819 – 1868) refined these measurements further – resulting in a value of 315,000 km/s (192,625 mi/s). And by the latter half of the 19th century, scientists became aware of the connection between light and electromagnetism.
This came about when physicists, measuring the ratio of electromagnetic to electrostatic units of charge, found that the numerical value was very close to the speed of light (as measured by Fizeau). Based on his own work, which showed that electromagnetic waves propagate in empty space, German physicist Wilhelm Eduard Weber proposed that light was an electromagnetic wave.
The next great breakthrough came during the early 20th century. In his 1905 paper, titled “On the Electrodynamics of Moving Bodies”, Albert Einstein asserted that the speed of light in a vacuum, measured by a non-accelerating observer, is the same in all inertial reference frames and independent of the motion of the source or observer.
Using this and Galileo’s principle of relativity as a basis, Einstein derived the Theory of Special Relativity, in which the speed of light in vacuum (c) was a fundamental constant. Prior to this, the working consensus among scientists held that space was filled with a “luminiferous aether” that was responsible for its propagation – i.e. that light traveling through a moving medium would be dragged along by the medium.
This in turn meant that the measured speed of the light would be a simple sum of its speed through the medium plus the speed of that medium. However, Einstein’s theory effectively made the concept of the stationary aether useless and revolutionized the concepts of space and time.
Not only did it advance the idea that the speed of light is the same in all inertial reference frames, it also introduced the idea that major changes occur when things move close to the speed of light. These include the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer (i.e. time dilation, where time slows as the speed of light is approached).
His observations also reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations by doing away with extraneous explanations used by other scientists, and accorded with the directly observed speed of light.
During the second half of the 20th century, increasingly accurate measurements using laser interferometers and cavity resonance techniques would further refine estimates of the speed of light. By 1972, a group at the US National Bureau of Standards in Boulder, Colorado, used the laser interferometer technique to arrive at the currently-recognized value of 299,792,458 m/s.
Role in Modern Astrophysics:
Einstein’s theory that the speed of light in vacuum is independent of the motion of the source and the inertial reference frame of the observer has since been consistently confirmed by many experiments. It also sets an upper limit on the speeds at which all massless particles and waves (which includes light) can travel in a vacuum.
One of the outgrowths of this is that cosmologists now treat space and time as a single, unified structure known as spacetime – in which the speed of light can be used to define values for both (i.e. “lightyears”, “light minutes”, and “light seconds”). The measurement of the speed of light has also become a major factor when determining the rate of cosmic expansion.
Beginning in the 1920’s, with the observations of Lemaître and Hubble, scientists and astronomers became aware that the Universe is expanding from a point of origin. Hubble also observed that the farther away a galaxy is, the faster it appears to be moving. In terms of what is now referred to as the Hubble Parameter, the speed at which the Universe is expanding is calculated to be about 68 km/s per megaparsec.
This phenomenon, which has been theorized to mean that some galaxies could actually be moving faster than the speed of light, may place a limit on what is observable in our Universe. Essentially, galaxies receding faster than the speed of light would cross a “cosmological event horizon”, where they are no longer visible to us.
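To see roughly where that limit falls, the Python sketch below applies the simple Hubble relation v = H₀ × d with the 68 km/s per megaparsec figure quoted above, and solves for the distance at which the recession velocity formally reaches the speed of light. This is only an order-of-magnitude illustration; a full treatment depends on the expansion history of the Universe.

```python
# Hubble's law, v = H0 * d, and the distance at which recession formally
# reaches the speed of light, using the 68 km/s per Mpc value quoted above.
H0 = 68.0                  # km/s per megaparsec
C_KM_S = 299_792.458       # speed of light, km/s
LY_PER_MPC = 3.262e6       # light years per megaparsec

def recession_velocity(distance_mpc):
    """Recession velocity in km/s for a galaxy at the given distance in Mpc."""
    return H0 * distance_mpc

hubble_distance = C_KM_S / H0    # distance (in Mpc) where v = c
print(f"galaxy at 100 Mpc recedes at ~{recession_velocity(100):.0f} km/s")
print(f"v = c at ~{hubble_distance:.0f} Mpc "
      f"(~{hubble_distance * LY_PER_MPC / 1e9:.1f} billion light years)")
```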
Also, by the 1990’s, redshift measurements of distant galaxies showed that the expansion of the Universe has been accelerating for the past few billion years. This has led to theories like “Dark Energy“, where an unseen force is driving the expansion of space itself instead of objects moving through it (thus not placing constraints on the speed of light or violating relativity).
Along with special and general relativity, the modern value of the speed of light in a vacuum has gone on to inform cosmology, quantum physics, and the Standard Model of particle physics. It remains a constant when talking about the upper limit at which massless particles can travel, and remains an unachievable barrier for particles that have mass.
Perhaps, someday, we will find a way to exceed the speed of light. While we have no practical ideas for how this might happen, the smart money seems to be on technologies that will allow us to circumvent the laws of spacetime, either by creating warp bubbles (aka. the Alcubierre Warp Drive), or tunneling through it (aka. wormholes).
Until that time, we will just have to be satisfied with the Universe we can see, and to stick to exploring the part of it that is reachable using conventional methods.
Lightning has struck twice – maybe three times – and scientists from the Laser Interferometer Gravitational-wave Observatory, or LIGO, hope this is just the beginning of a new era of understanding our Universe. This “lightning” came in the form of the elusive, hard-to-detect gravitational waves, produced by gigantic events, such as a pair of black holes colliding. The energy released from such an event disturbs the very fabric of space and time, much like ripples in a pond. Today’s announcement is the second set of gravitational wave ripples detected by LIGO, following the historic first detection announced in February of this year.
“This collision happened 1.5 billion years ago,” said Gabriela Gonzalez of Louisiana State University at a press conference to announce the new detection, “and with this we can tell you the era of gravitational wave astronomy has begun.”
LIGO’s first detection of gravitational waves from merging black holes occurred Sept. 14, 2015 and it confirmed a major prediction of Albert Einstein’s 1915 general theory of relativity. The second detection occurred on Dec. 25, 2015, and was recorded by both of the twin LIGO detectors.
While the first detection of the gravitational waves released by the violent black hole merger was just a little “chirp” that lasted only one-fifth of a second, this second detection was more of a “whoop” that was visible for an entire second in the data. Listen in this video:
“This is what we call gravity’s music,” said González as she played the video at today’s press conference.
While gravitational waves are not sound waves, the researchers converted the gravitational wave’s oscillation and frequency to a sound wave with the same frequency. Why were the two events so different?
From the data, the researchers concluded the second set of gravitational waves were produced during the final moments of the merger of two black holes that were 14 and 8 times the mass of the Sun, and the collision produced a single, more massive spinning black hole 21 times the mass of the Sun. In comparison, the black holes detected in September 2015 were 36 and 29 times the Sun’s mass, merging into a black hole of 62 solar masses.
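The mass figures above also give a feel for how much energy these mergers released. In each case the final black hole is lighter than the sum of its two parents, and the difference was radiated away as gravitational waves. A minimal Python sketch using only the rounded masses quoted above (nothing more precise than that) would look like this:

```python
# Energy radiated as gravitational waves, estimated as the difference between
# the initial and final black hole masses (rounded values from the article)
# times c^2.
M_SUN = 1.989e30     # kg
C = 299_792_458      # m/s

def radiated_energy(m1_solar, m2_solar, final_solar):
    """Mass-energy (joules) converted to gravitational waves in the merger."""
    lost_mass_kg = (m1_solar + m2_solar - final_solar) * M_SUN
    return lost_mass_kg * C**2

print(f"GW151226: ~{radiated_energy(14, 8, 21):.1e} J (about 1 solar mass)")
print(f"GW150914: ~{radiated_energy(36, 29, 62):.1e} J (about 3 solar masses)")
```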
The scientists said the higher-frequency gravitational waves from the lower-mass black holes hit the LIGO detectors’ “sweet spot” of sensitivity.
“It is very significant that these black holes were much less massive than those observed in the first detection,” said Gonzalez. “Because of their lighter masses compared to the first detection, they spent more time—about one second—in the sensitive band of the detectors. It is a promising start to mapping the populations of black holes in our universe.”
LIGO allows scientists to study the Universe in a new way, using gravity instead of light. LIGO uses lasers to precisely measure the position of mirrors separated from each other by 4 kilometers (about 2.5 miles), at two locations that are over 3,000 km apart, in Livingston, Louisiana, and Hanford, Washington. So, LIGO doesn’t detect the black hole collision event directly; it detects the stretching and compressing of space itself. The detections so far are the result of LIGO’s ability to measure the perturbation of space with an accuracy of 1 part in a thousand billion billion. The signal from the latest event, named GW151226, was produced by matter being converted into energy, which literally shook spacetime like Jello.
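To put that accuracy figure in more tangible terms, a one-part-in-10²¹ strain over a 4 km arm corresponds to an almost absurdly small change in length. The Python sketch below works it out, comparing the result to an approximate proton radius (a round number assumed here purely for scale).

```python
# What "1 part in a thousand billion billion" (a strain of 1e-21) means for a
# 4 km interferometer arm. The proton radius is an approximate value used
# only for comparison.
ARM_LENGTH = 4_000.0      # meters
STRAIN = 1e-21            # fractional change in arm length
PROTON_RADIUS = 8.4e-16   # meters, approximate

delta_L = STRAIN * ARM_LENGTH
print(f"arm length change: {delta_L:.1e} m "
      f"(~1/{PROTON_RADIUS / delta_L:.0f} of a proton radius)")
```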
LIGO team member Fulvio Ricci, a physicist at the University of Rome La Sapienza, said there was a third “candidate” detection of an event in October, which Ricci said he prefers to call a “trigger,” but it was much less significant and the signal-to-noise ratio was not large enough to officially count as a detection.
But still, the team said, the two confirmed detections point to black holes being much more common in the Universe than previously believed, and they might frequently come in pairs.
The second discovery “has truly put the ‘O’ for Observatory in LIGO,” said Albert Lazzarini, deputy director of the LIGO Laboratory at Caltech. “With detections of two strong events in the four months of our first observing run, we can begin to make predictions about how often we might be hearing gravitational waves in the future. LIGO is bringing us a new way to observe some of the darkest yet most energetic events in our universe.”
LIGO is now offline for improvements. Its next data-taking run will begin this fall, and the improvements in detector sensitivity could allow LIGO to probe as much as 1.5 to two times the volume of the universe covered in the first run. A third site, the Virgo detector located near Pisa, Italy, with a design similar to the twin LIGO detectors, is expected to come online during the latter half of LIGO’s upcoming observation run. Virgo will improve physicists’ ability to locate the source of each new event, by comparing millisecond-scale differences in the arrival time of incoming gravitational wave signals.
For an excellent overview of gravitational waves, their sources, and their detection, check out Markus Possel’s series of articles we featured on UT in February.
Ever since Democritus – a Greek philosopher who lived between the 5th and 4th centuries BCE – argued that all of existence was made up of tiny indivisible atoms, scientists have been speculating as to the true nature of light. Whereas scientists went back and forth between the notion that light was a particle or a wave until the modern era, the 20th century led to breakthroughs that showed us that it behaves as both.
These included the discovery of the electron, the development of quantum theory, and Einstein’s Theory of Relativity. However, there remain many unanswered questions about light, many of which arise from its dual nature. For instance, how is it that light can be apparently without mass, but still behave as a particle? And how can it behave like a wave and pass through a vacuum, when all other waves require a medium to propagate?
Theory of Light to the 19th Century:
During the Scientific Revolution, scientists began moving away from Aristotelian scientific theories that had been seen as accepted canon for centuries. This included rejecting Aristotle’s theory of light, which viewed it as being a disturbance in the air (one of his four “elements” that composed matter), and embracing the more mechanistic view that light was composed of indivisible atoms.
In many ways, this theory had been previewed by atomists of Classical Antiquity – such as Democritus and Lucretius – both of whom viewed light as a unit of matter given off by the sun. By the 17th century, several scientists emerged who accepted this view, stating that light was made up of discrete particles (or “corpuscles”). This included Pierre Gassendi, a contemporary of René Descartes, Thomas Hobbes, Robert Boyle, and most famously, Sir Isaac Newton.
Newton’s corpuscular theory was an elaboration of his view of reality as an interaction of material points through forces. This theory would remain the accepted scientific view for more than 100 years, the principles of which were explained in his 1704 treatise “Opticks, or, a Treatise of the Reflections, Refractions, Inflections, and Colours of Light“. According to Newton, the principles of light could be summed as follows:
Every source of light emits large numbers of tiny particles known as corpuscles in a medium surrounding the source.
These corpuscles are perfectly elastic, rigid, and weightless.
This represented a challenge to “wave theory”, which had been advocated by 17th century Dutch astronomer Christiaan Huygens. These theories were first communicated in 1678 to the Paris Academy of Sciences and were published in 1690 in his “Traité de la lumière“ (“Treatise on Light“). In it, he argued a revised version of Descartes’ views, in which the speed of light is finite and light is propagated by means of spherical waves emitted along the wave front.
Double-Slit Experiment:
By the early 19th century, scientists began to break with corpuscular theory. This was due in part to the fact that corpuscular theory failed to adequately explain the diffraction, interference and polarization of light, but was also because of various experiments that seemed to confirm the still-competing view that light behaved as a wave.
The most famous of these was arguably the Double-Slit Experiment, which was originally conducted by English polymath Thomas Young in 1801 (though Sir Isaac Newton is believed to have conducted something similar in his own time). In Young’s version of the experiment, he used a slip of paper with slits cut into it, and then pointed a light source at them to measure how light passed through it.
According to classical (i.e. Newtonian) particle theory, the results of the experiment should have corresponded to the slits, the impacts on the screen appearing in two vertical lines. Instead, the results showed that the coherent beams of light were interfering, creating a pattern of bright and dark bands on the screen. This contradicted classical particle theory, in which particles do not interfere with each other, but merely collide.
The only possible explanation for this pattern of interference was that the light beams were in fact behaving as waves. Thus, this experiment dispelled the notion that light consisted of corpuscles and played a vital part in the acceptance of the wave theory of light. However, subsequent research, involving the discovery of the electron and electromagnetic radiation, would lead scientists to consider yet again that light behaved as a particle too, thus giving rise to wave-particle duality theory.
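The wave picture also makes a quantitative prediction: bright bands appear wherever the path difference between the two slits is a whole number of wavelengths, so on a distant screen the fringes sit at y ≈ mλL/d. The Python sketch below evaluates this for a typical modern lab setup (green light, a 0.1 mm slit spacing, and a screen 1 m away); these numbers are illustrative assumptions, not Young's actual apparatus.

```python
# Positions of bright interference fringes in a double-slit experiment,
# using the small-angle approximation y_m = m * wavelength * L / d.
# The parameters below are typical lab values, assumed for illustration.
WAVELENGTH = 550e-9        # green light, meters
SLIT_SEPARATION = 0.1e-3   # d = 0.1 mm
SCREEN_DISTANCE = 1.0      # L = 1 m

for m in range(4):         # central maximum plus the next three bright bands
    y = m * WAVELENGTH * SCREEN_DISTANCE / SLIT_SEPARATION
    print(f"order {m}: {y * 1000:.1f} mm from the center of the screen")
```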
Electromagnetism and Special Relativity:
Prior to the 19th and 20th centuries, the speed of light had already been determined. The first recorded measurement was performed by Danish astronomer Ole Rømer, who in 1676 used observations of Jupiter’s moon Io to show that light travels at a finite speed (rather than instantaneously).
By the late 19th century, James Clerk Maxwell proposed that light was an electromagnetic wave, and devised several equations (known as Maxwell’s equations) to describe how electric and magnetic fields are generated and altered by each other and by charges and currents. By conducting measurements of different types of radiation (magnetic fields, ultraviolet and infrared radiation), he was able to calculate the speed of light in a vacuum (represented as c).
In 1905, Albert Einstein published “On the Electrodynamics of Moving Bodies”, in which he advanced one of his most famous theories and overturned centuries of accepted notions and orthodoxies. In his paper, he postulated that the speed of light was the same in all inertial reference frames, regardless of the motion of the light source or the position of the observer.
Exploring the consequences of this theory is what led him to propose his theory of Special Relativity, which reconciled Maxwell’s equations for electricity and magnetism with the laws of mechanics, simplified the mathematical calculations, and accorded with the directly observed speed of light and accounted for the observed aberrations. It also demonstrated that the speed of light had relevance outside the context of light and electromagnetism.
For one, it introduced the idea that major changes occur when things move close to the speed of light, including the time-space frame of a moving body appearing to slow down and contract in the direction of motion when measured in the frame of the observer. After centuries of increasingly precise measurements, the speed of light was determined to be 299,792,458 m/s in 1975.
Einstein and the Photon:
In 1905, Einstein also helped to resolve a great deal of confusion surrounding the behavior of electromagnetic radiation when he proposed that electrons are emitted from atoms when they absorb energy from light. Known as the photoelectric effect, Einstein based his idea on Planck’s earlier work with “black bodies” – materials that absorb electromagnetic energy instead of reflecting it (as opposed to “white bodies”, which reflect it).
At the time, Einstein’s photoelectric effect was an attempt to explain the “black body problem”, in which a black body emits electromagnetic radiation due to the object’s heat. This was a persistent problem in the world of physics, arising from the discovery of the electron, which had only happened eight years earlier (thanks to British physicists led by J.J. Thomson and experiments using cathode ray tubes).
At the time, scientists still believed that electromagnetic energy behaved as a wave, and were therefore hoping to be able to explain it in terms of classical physics. Einstein’s explanation represented a break with this, asserting that electromagnetic radiation behaved in ways that were consistent with a particle – a quantized form of light which he named “photons”. For this discovery, Einstein was awarded the Nobel Prize in 1921.
Wave-Particle Duality:
Subsequent theories on the behavior of light would further refine this idea, which included French physicist Louis-Victor de Broglie calculating the wavelength at which light functioned. This was followed by Heisenberg’s “uncertainty principle” (which stated that measuring the position of a photon accurately would disturb measurements of its momentum, and vice versa), and Schrödinger’s paradox that claimed that all particles have a “wave function”.
In accordance with the quantum mechanical explanation, Schrödinger proposed that all the information about a particle (in this case, a photon) is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space. At some location, the measurement of the wave function will randomly “collapse”, or rather “decohere”, to a sharply peaked function. This was illustrated in Schrödinger’s famous thought experiment involving a closed box, a cat, and a vial of poison (known as the “Schrödinger’s Cat” paradox).
According to his theory, the wave function also evolves according to a differential equation (aka. the Schrödinger equation). For particles with mass, this equation has solutions; but for particles with no mass, no solution exists. Further experiments involving the Double-Slit Experiment confirmed the dual nature of photons, in which measuring devices were incorporated to observe the photons as they passed through the slits.
When this was done, the photons appeared in the form of particles and their impacts on the screen corresponded to the slits – tiny particle-sized spots distributed in straight vertical lines. By placing an observation device in place, the wave function of the photons collapsed and the light behaved as classical particles once more. As predicted by Schrödinger, this could only be resolved by claiming that light has a wave function, and that observing it causes the range of behavioral possibilities to collapse to the point where its behavior becomes predictable.
Quantum Field Theory (QFT) was developed over the following decades to resolve much of the ambiguity around wave-particle duality. And in time, this theory was shown to apply to other particles and fundamental forces of interaction (such as weak and strong nuclear forces). Today, photons are part of the Standard Model of particle physics, where they are classified as bosons – a class of subatomic particles that are force carriers and have no mass.
So how does light travel? Basically, it travels at incredible speed (299,792,458 m/s) and at different wavelengths, depending on its energy. It also behaves as both a wave and a particle, able to propagate through mediums (like air and water) as well as the vacuum of space. It has no mass, but can still be absorbed, reflected, or refracted if it comes in contact with a medium. And in the end, the only thing that can truly divert it, or arrest it, is gravity (i.e. a black hole).
What we have learned about light and electromagnetism has been intrinsic to the revolution which took place in physics in the early 20th century, a revolution that we have been grappling with ever since. Thanks to the efforts of scientists like Maxwell, Planck, Einstein, Heisenberg and Schrodinger, we have learned much, but still have much to learn.
For instance, its interaction with gravity (along with weak and strong nuclear forces) remains a mystery. Unlocking this, and thus discovering a Theory of Everything (ToE) is something astronomers and physicists look forward to. Someday, we just might have it all figured out!
Here on Earth, we tend to take air resistance (aka. “drag”) for granted. We just assume that when we throw a ball, launch an aircraft, deorbit a spacecraft, or fire a bullet from a gun, that the act of it traveling through our atmosphere will naturally slow it down. But what is the reason for this? Just how is air able to slow an object down, whether it is in free-fall or in flight?
Because of our reliance on air travel, our enthusiasm for space exploration, and our love of sports and making things airborne (including ourselves), understanding air resistance is key to understanding physics, and an integral part of many scientific disciplines. As part of the subdiscipline known as fluid dynamics, it applies to fields of aerodynamics, hydrodynamics, astrophysics, and nuclear physics (to name a few).
Definition:
By definition, air resistance describes the forces that are in opposition to the relative motion of an object as it passes through the air. These drag forces act opposite to the oncoming flow velocity, thus slowing the object down. Unlike other resistance forces, drag depends directly on velocity, since it is the component of the net aerodynamic force acting opposite to the direction of the movement.
Another way to put it would be to say that air resistance is the result of collisions of the object’s leading surface with air molecules. It can therefore be said that the two most common factors that have a direct effect upon the amount of air resistance are the speed of the object and the cross-sectional area of the object. Ergo, both increased speeds and cross-sectional areas will result in an increased amount of air resistance.
In terms of aerodynamics and flight, drag refers to both the forces acting opposite of thrust, as well as the forces working perpendicular to it (i.e. lift). In astrodynamics, atmospheric drag is both a positive and a negative force depending on the situation. It is both a drain on fuel and efficiency during lift-off and a fuel savings when a spacecraft is returning to Earth from orbit.
Calculating Air Resistance:
Air resistance is usually calculated using the “drag equation”, which determines the force experienced by an object moving through a fluid or gas at relatively large velocity. This can be expressed mathematically as: FD = ½ ρ v² CD A.
In this equation, FD represents the drag force, ρ is the density of the fluid, v is the speed of the object relative to the fluid, A is the cross-sectional area, and CD is the drag coefficient. The result is what is called “quadratic drag”. Once this is determined, calculating the amount of power needed to overcome the drag involves a similar process, which can be expressed mathematically as: PD = FD v = ½ ρ v³ CD A.
Here, PD is the power needed to overcome the force of drag, FD is the drag force, and v is the velocity (with ρ, A, and CD defined as before). As this shows, power needs grow with the cube of the velocity, so if it takes 10 horsepower to go 80 kph, it will take 80 horsepower to go 160 kph. In short, a doubling of speed requires an application of eight times the amount of power.
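Here is a minimal Python sketch of both formulas, applied to a car-like example. The air density, drag coefficient, and frontal area are assumed illustrative values (not from the article), chosen only to show the quadratic growth of drag force and the cubic growth of the power needed to overcome it.

```python
# Drag force F_D = 0.5 * rho * v^2 * C_D * A, and the power needed to
# overcome it, P_D = F_D * v. The parameters below are assumed, car-like
# illustrative values.
RHO_AIR = 1.225        # air density at sea level, kg/m^3
DRAG_COEFF = 0.30      # C_D, roughly typical of a modern passenger car
FRONTAL_AREA = 2.2     # cross-sectional area, m^2

def drag_force(v_ms):
    return 0.5 * RHO_AIR * v_ms**2 * DRAG_COEFF * FRONTAL_AREA

def drag_power(v_ms):
    return drag_force(v_ms) * v_ms

for kph in (80, 160):
    v = kph / 3.6      # convert km/h to m/s
    print(f"{kph} km/h: drag ~{drag_force(v):.0f} N, "
          f"power ~{drag_power(v) / 745.7:.1f} hp")
```

Doubling the speed quadruples the drag force but increases the required power eightfold, which is exactly the relationship described above.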
Types of Air Resistance:
There are three main types of drag in aerodynamics – Lift-Induced, Parasitic, and Wave. Each affects an object’s ability to stay aloft, as well as the power and fuel needed to keep it there. Lift-induced (or just induced) drag occurs as the result of the creation of lift on a three-dimensional lifting body (wing or fuselage). It has two primary components: vortex drag and lift-induced viscous drag.
The vortices derive from the turbulent mixing of air of varying pressure on the upper and lower surfaces of the body. These are needed to create lift. As the lift increases, so does the lift-induced drag. For an aircraft this means that as the angle of attack and the lift coefficient increase to the point of stall, so does the lift-induced drag.
By contrast, parasitic drag is caused by moving a solid object through a fluid. This type of drag is made up of multiple components, which include “form drag” and “skin friction drag”. In aviation, induced drag tends to be greater at lower speeds, because a high angle of attack is required to maintain lift; as speed increases, induced drag falls off, but parasitic drag grows, because the fluid is flowing faster around protruding surfaces and friction increases. The combined overall drag curve therefore reaches a minimum at some airspeed, where the aircraft is at or close to its optimal efficiency.
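That trade-off is easy to visualize numerically. In the toy Python sketch below, induced drag is modeled as falling with the square of airspeed while parasitic drag rises with its square; the two coefficients are completely arbitrary, chosen only to show that their sum has a minimum at some intermediate speed, not to model any real aircraft.

```python
# Toy model: induced drag ~ 1/v^2, parasitic drag ~ v^2, and their sum.
# The coefficients are arbitrary illustrative numbers.
K_INDUCED = 4.0e6      # arbitrary units
K_PARASITE = 0.04      # arbitrary units

def total_drag(v):
    return K_INDUCED / v**2 + K_PARASITE * v**2

for v in (60, 80, 100, 120, 140):   # airspeed in arbitrary units
    print(f"v = {v:>3}: total drag ~{total_drag(v):.0f}")
# The total is lowest near v = 100, where induced and parasitic drag balance.
```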
Wave drag (compressibility drag) is created by the presence of a body moving at high speed through a compressible fluid. In aerodynamics, wave drag consists of multiple components depending on the speed regime of the flight. In transonic flight – at speeds of Mach 0.5 or greater, but still less than Mach 1.0 (aka. speed of sound) – wave drag is the result of local supersonic flow.
Local supersonic flow can occur on bodies traveling well below the speed of sound, because the local speed of air increases as it accelerates over the body. In short, aircraft flying at transonic speeds often incur wave drag as a result. This increases as the speed of the aircraft nears the sound barrier of Mach 1.0, before it becomes a supersonic object.
In supersonic flight, wave drag is the result of oblique shockwaves formed at the leading and trailing edges of the body. In highly supersonic flows bow waves will form instead. At supersonic speeds, wave drag is commonly separated into two components, supersonic lift-dependent wave drag and supersonic volume-dependent wave drag.
Understanding the role air friction plays in flight, knowing its mechanics, and knowing the kinds of power needed to overcome it are all crucial when it comes to aerospace and space exploration. Knowing all this will also be critical when it comes time to explore other planets in our Solar System, and in other star systems altogether!
A unique observatory buried deep in the clear ice of the South Pole region, an orbiting observatory that monitors gamma rays, a powerful outburst from a black hole 10 billion light years away, and a super-energetic neutrino named Big Bird. These are the cast of characters that populate a paper published in Nature Physics, on Monday April 18th.
The observatory that resides deep in the cold dark of the Antarctic ice has one job: to detect neutrinos. Neutrinos are strange, standoffish particles, sometimes called ‘ghost particles’ because they’re so difficult to detect. They’re like the noble gases of the particle world. Though neutrinos vastly outnumber the atoms in our Universe, they rarely interact with other particles, and they have no electrical charge. This allows them to pass through normal matter almost unimpeded. To even detect them, you need a dark, undisturbed place, isolated from cosmic rays and background radiation.
This explains why they built an observatory in solid ice. This observatory, called the IceCube Neutrino Observatory, is the ideal place to detect neutrinos. On the rare occasion when a neutrino does interact with the ice surrounding the observatory, a charged particle is created. This particle can be either an electron, muon, or tau. If these charged particles are of sufficiently high energy, then the strings of detectors that make up IceCube can detect it. Once this data is analyzed, the source of the neutrinos can be known.
The next actor in this scenario is NASA’s Fermi Gamma-Ray Space Telescope. Fermi was launched in 2008, with a specific job in mind. Its job is to look at some of the exceptional phenomena in our Universe that generate extraordinarily large amounts of energy, like super-massive black holes, exploding stars, jets of hot gas moving at relativistic speeds, and merging neutron stars. These things generate enormous amounts of gamma-ray energy, the part of the electromagnetic spectrum that Fermi looks at exclusively.
Next comes PKS B1424-418, a distant galaxy with a black hole at its center. Because the jet from that black hole is pointed at Earth, the galaxy is classified as a blazar. About 10 billion years ago, the black hole produced a powerful outburst of energy, and the light from this outburst started arriving at Earth in 2012. For a year, the blazar in PKS B1424-418 shone 15-30 times brighter in the gamma spectrum than it did before the burst.
Detecting neutrinos is a rare occurrence. So far, IceCube has detected about a hundred of them. For some reason, the most energetic of these neutrinos are named after characters on the popular children’s show called Sesame Street. In December 2012, IceCube detected an exceptionally energetic neutrino, and named it Big Bird. Big Bird had an energy level greater than 2 quadrillion electron volts. That’s an enormous amount of energy shoved into a particle that is thought to have less than one millionth the mass of an electron.
Big Bird was clearly a big deal, and scientists wanted to know its source. IceCube was able to narrow the source down, but not pinpoint it. Its source was determined to be a 32-degree-wide patch of the southern sky. Though helpful, that patch still spans the width of 64 full Moons. Still, it was intriguing, because in that patch of sky was PKS B1424-418, the source of the blazar energy detected by Fermi. However, there are also other blazars in that section of the sky.
The scientists looking for Big Bird’s source needed more data. They got it from TANAMI, an observing program that used the combined power of several networked terrestrial telescopes to create a virtual telescope 9,650 km (6,000 miles) across. TANAMI is a long-term program monitoring 100 active galaxies that are located in the southern sky. Since TANAMI is watching other active galaxies, and the energetic jets coming from them, it was able to exclude them as the source for Big Bird.
The team behind this new paper, including lead author Matthias Kadler of the University of Wuerzburg in Germany, think they’ve found the source for Big Bird. They say, with only a 5 percent chance of being wrong, that PKS B1424-418 is indeed Big Bird’s source. As they say in their paper, “The outburst of PKS B1424–418 provides an energy output high enough to explain the observed petaelectronvolt event (Big Bird), suggestive of a direct physical association.”
So what does this mean? It means that we can pinpoint the source of a neutrino, and that’s good for science. Neutrinos are notoriously difficult to detect, and they’re not that well understood. The new detection method, involving the Fermi Telescope in conjunction with the TANAMI array, will not only be able to locate the source of super-energetic neutrinos; the detection of a neutrino by IceCube will now also generate a real-time alert when its source can be narrowed down to an area about the size of the full Moon.
This promises to open a whole new window on neutrinos, the plentiful yet elusive ‘ghost particles’ that populate the Universe.
Astronomers have found a pair of stellar oddballs out in the edges of our galaxy. The stars in question are a binary pair, and the two companions are moving much faster than anything should be in that part of the galaxy. The discovery was reported in a paper on April 11, 2016, in the Astrophysical Journal Letters.
The binary system is called PB3877, and at 18,000 light years away from us, it’s not exactly in our neighborhood. It’s out past the Scutum-Centaurus Arm, past the Perseus Arm, and even the Outer Arm, in an area called the galactic halo. This binary star also has the high metallicity of younger stars, rather than the low metallicity of the older stars that populate the outer reaches. So PB3877 is a puzzle, that’s for sure.
PB3877 is what’s called a Hyper-Velocity Star (HVS), or rogue star, and though astronomers have found other HVS’s, more than 20 of them in fact, this is the first binary one found. The pair consists of a hot sub-dwarf primary star that’s over five times hotter than the Sun, and a cooler companion star that’s about 1,000 degrees cooler than the Sun.
Hyper-Velocity stars are fast, and can reach speeds of up to 1,198 km per second (2.7 million miles per hour), maybe faster. At that speed, they could cross the distance from the Earth to the Moon in about 5 minutes. But what’s puzzling about this binary star is not just its speed and its binary nature, but its location.
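That Earth-Moon figure checks out with simple arithmetic, as the short Python sketch below shows using the speed quoted above and an approximate average Earth-Moon distance (a standard round value, assumed here for illustration).

```python
# How long the Earth-Moon distance takes at the hyper-velocity speed quoted above.
HVS_SPEED = 1_198.0      # km/s
EARTH_MOON = 384_400.0   # average Earth-Moon distance, km (approximate)

seconds = EARTH_MOON / HVS_SPEED
print(f"{seconds:.0f} s, or about {seconds / 60:.1f} minutes")   # ~5.3 minutes
```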
Hyper-Velocity stars themselves are rare, but PB3877 is even more rare for its location. Typically, hyper velocity stars need to be near enough to the massive black hole at the center of a galaxy to reach their incredible speeds. A star can be drawn toward the black hole, accelerated by the unrelenting pull of the hole, then sling-shotted on its way out of the galaxy. This is the same action that spacecraft can use when they gain a gravity assist by travelling close to a planet.
This video shows how stars can accelerate when their orbit takes them close to the super-massive black holes at the center of the Milky Way.
But the trajectory of PB3877 shows astronomers that it could not have originated near the center of the galaxy. And if it had been ejected by a close encounter with the black hole, how could it have survived with its binary nature intact? Surely the massive pull of the black hole would have destroyed the binary relationship between the two stars in PB3877. Something else has accelerated it to such a high speed, and astronomers want to know what, exactly, did that, and how it kept its binary nature.
Barring a close encounter with the super-massive black hole at the center of the Milky Way, there are a couple other ways that PB3877 could have been accelerated to such a high velocity.
One such way is a stellar interaction or collision. If two stars were travelling at the right vectors, a collision between them could impart energy to one of them and propel it to hyper-velocity. Think of two pool balls on a pool table.
Another possibility is a supernova explosion. It’s possible for one of the stars in a binary pair to go supernova and eject its companion at hyper-velocity speeds. But in these cases, either stellar collision or supernova, things would have to work out just right. And neither possibility explains how a wide-binary system like this could stay intact.
Fraser Cain sheds more light on Hyper-Velocity Stars, or Rogue Stars, in this video.
There is another possibility, and it involves Dark Matter. Dark Matter seems to lurk on the edge of any discussion around something unexplained, and this is a case in point. The researchers think that there could be a massive cocoon or halo of Dark Matter around the binary pair, which is keeping their binary relationship intact.
As for where the binary star PB3877 came from, as they say in the conclusion of their paper, “We conclude that the binary either formed in the halo or was accreted from the tidal debris of a dwarf galaxy by the Milky Way.” And though the source of this star’s formation is an intriguing question, and researchers plan a follow-up study to verify the supernova ejection possibility, its possible relationship with Dark Matter is also intriguing.
Fusion power has long been considered to be the holy grail of alternative energy. Clean, abundant power, created through a self-sustaining process where atomic nuclei are fused at extremely high temperatures. Achieving this has been the goal of atomic researchers and physicists for over half a century, but progress has been slow. While the science behind fusion power is solid, the process has not exactly been practical.
In short, fusion can only be considered a viable form of power if the amount of energy used to initiate the reaction is less than the energy produced. Luckily, in recent years, a number of positive steps have been taken towards this goal. The latest comes from China, where researchers at the Experimental Advanced Superconducting Tokamak (EAST) recently reported that they have achieved a fusion milestone.