Matt Williams is a space journalist and science communicator for Universe Today and Interesting Engineering. He's also a science fiction author, podcaster (Stories from Space), and Taekwon-Do instructor who lives on Vancouver Island with his wife and family.
The explosion in the sciences that took place in the 17th and 18th centuries revolutionized not only the way we think of our world, but also the way we think of time and space themselves. Much of this is owed to individuals like Sir Isaac Newton, a man whose theories came to form the basis of modern physics. Though many of his theories would later be challenged by the discovery of relativity and quantum mechanics, they were nonetheless extremely influential because they gave later generations a framework. It is to him, for example, that we are indebted for the notions of Absolute Time and Absolute Space, which were thought to be separate aspects of objective reality.
In his magnum opus, Philosophiæ Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), Newton laid the groundwork for the concept of Absolute Space as follows:
“Absolute space, in its own nature, without regard to anything external, remains always similar and immovable. Relative space is some movable dimension or measure of the absolute spaces; which our senses determine by its position to bodies: and which is vulgarly taken for immovable space … Absolute motion is the translation of a body from one absolute place into another: and relative motion, the translation from one relative place into another.”
In other words, Absolute Space is the concept of space as an absolute, unmoving reference frame for the inertial systems (i.e. planets and other objects) that exist within it. Thus, every object has an absolute state of motion relative to absolute space, so that an object must be either in a state of absolute rest, or moving at some absolute speed.
These views were controversial even in Newton’s own time. However, it was with the advent of modern physics and the Theory of Special Relativity that much of the basis for Newtonian physics would come to be shattered. In essence, special relativity proposed that time and space are not independent realities but different expressions of the same thing. In this model, time and motion are dependent on the observer, and there is no fixed point of reference, only relative forms of motion which are determined by comparing them to other points of reference.
However, it would be fair to say that it was Newton’s definitions of space and time as independent phenomena that allowed for the development of physics as we know it today. Because Newton gave physicists clear definitions to work with and challenge, later generations of scientists like Einstein were able to express clearly how space is not absolute, since it is itself always in motion, and how space cannot be divorced from time.
We have written many articles about absolute space for Universe Today. Here’s an article about what is space, and here’s an article about how cold space is.
When it comes to measurements, the everyday kind that deal with things like air pressure, tire pressure, blood pressure, etc., there is no such thing as an absolute value; these quantities can only be gauged by measuring them relative to something else. When it comes to air pressure (say, for example, inside a tire), this takes the form of measuring it relative to either the ambient air pressure or a perfect vacuum. The latter case, where the zero point is referenced against a total vacuum, is known as Absolute Pressure. The name may seem slightly ironic, but it fits, since the comparison is against an environment in which there is no air pressure to speak of.
In the larger context of pressure measurement, Absolute Pressure is part of the “zero reference” trinity, which includes Absolute Pressure (AP), Gauge Pressure, and Differential Pressure. As already noted, AP is zero-referenced against a perfect vacuum, and is the method of choice when measuring quantities where absolute values must be determined. Gauge Pressure, on the other hand, is referenced against ambient air pressure, and is used for conventional purposes such as measuring tire and blood pressure. Differential Pressure is quite simply the difference in pressure between two points.
Cases where AP is used include atmospheric pressure readings, where one is trying to determine air pressure (expressed in atmospheres, where 1 atm is equal to 101,325 Pa), Mean Sea Level pressure (the air pressure at sea level, which averages 101.325 kPa), or the boiling point of water (which varies based on elevation and differences in air pressure). Another instance of AP being the method of choice is the measurement of deep vacuum pressures (as in outer space), where absolute readings are needed since scientists are dealing with a near-total vacuum. Altimeter pressure is another instance, where air pressure is used to determine the altitude of an aircraft and absolute values are needed to ensure both accuracy and safety.
To produce an absolute pressure sensor, manufacturers seal a high vacuum behind the sensing diaphragm. If the connection of an absolute pressure transmitter is open to the air, it will read the actual barometric pressure (roughly 14.7 PSI at sea level). This is different from most gauges, such as those used to measure tire pressure, which are calibrated to take ambient air pressure into account (i.e. registering 14.7 PSI as zero).
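To make these relationships concrete, here is a minimal sketch in Python of the three zero references described above (the function names and the 14.7 PSI sea-level figure are illustrative assumptions, not a standard library):

```python
ATM_PA = 101_325         # standard atmosphere, in pascals
PA_PER_PSI = 6_894.757   # pascals per pound per square inch

def gauge_to_absolute(gauge_psi, ambient_psi=14.7):
    """Absolute pressure = gauge reading + ambient (barometric) pressure."""
    return gauge_psi + ambient_psi

def differential(p1_psi, p2_psi):
    """Differential pressure is simply the difference between two points."""
    return p1_psi - p2_psi

# A tire gauge reading 32 PSI corresponds to roughly 46.7 PSI absolute:
print(f"Absolute tire pressure: {gauge_to_absolute(32.0):.1f} PSI")
print(f"1 atm expressed in PSI: {ATM_PA / PA_PER_PSI:.2f}")
```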
We have written many articles about absolute pressure for Universe Today. Here’s an article about Boyle’s Law, and here’s an article about air density.
For the lucky residents of the Southern Hemisphere, or those fortunate enough to enjoy a vacation in Hawaii or Cancun, there’s a stellar delight that few Northerners know about. It’s called the Southern Cross, a small but beautiful constellation located in the southern sky, very close to the neighboring constellation of Centaurus. Known by the Latin name Crux, owing to its cross shape, this constellation is one of the easiest to identify in the night sky. For centuries, it has served as a navigational beacon for sailors, an important symbol to the Egyptians, and played an important role in the spiritual beliefs of the Aborigines and many other cultures of the Southern Hemisphere.
The first recorded sighting of Crux dates to around 1000 BC, during the time of the Ancient Greeks. At the latitude of Athens, Crux was clearly visible, though low in the night sky, and the Greeks identified it as part of the constellation Centaurus. However, the precession of the equinoxes gradually lowered its stars below the European horizon, and they were eventually forgotten by the inhabitants of northern latitudes. Crux fell into anonymity for northerners until the Age of Discovery (from the early 15th to early 17th centuries), when it was rediscovered by Europeans. The first to do so were the Portuguese, who mapped it for navigational use while rounding the southern tip of Africa. During this time, Crux was also separated from Centaurus, though it is not altogether clear who was responsible. Some attribute it to the French astronomer Augustin Royer in 1679, while others believe it was the Dutch astronomer Petrus Plancius in 1613. Regardless, the separation is believed to have taken place in the 17th century, placing it within the context of European expansion and the revolution that was taking place in the sciences at the time.
In terms of cultural significance, Crux, like all constellations, played an important role in the belief systems of many cultures. In the ancient mountaintop village of Machu Picchu, a stone engraving exists which depicts the constellation. In addition, in Quechua (the language of the Incas), Crux is known as “Chakana”, which literally means “stair”, and holds deep symbolic value in Incan mysticism (the cross represented the three tiers of the world: the underworld, the world of the living, and the heavens). To the Aborigines and the Maori, Crux is representative of animist spirits who play a central role in their ancestral beliefs. To the ancient Egyptians, Crux was the place where the sun god Horus was crucified, and marked the passage of the winter season. The Southern Cross is also featured prominently on the flags of several southern nations, including Australia, Brazil, New Zealand, Papua New Guinea, and Samoa.
We have written many articles about the Southern Cross constellation for Universe Today. Here’s an article about Crux, and here’s an article about constellations.
It was just over a century ago that a French scientist named Henri Becquerel came across something new and immensely startling. While working with phosphorescent materials (i.e. materials that glow in the dark after being subjected to light), he discovered naturally occurring rays that he couldn’t account for. In time, these rays were found to be emitted by several naturally occurring elements, and the phenomenon was dubbed radioactivity. The unstable forms of elements that exhibited it came to be known as radioactive isotopes.
Radioisotopes (also known as radioactive isotopes or radionuclides) are atoms whose nuclei contain a different number of neutrons than the element’s stable form. Due to this imbalance, these isotopes have an unstable nucleus that decays, emitting alpha, beta, or gamma radiation in the process, until the isotope reaches stability. Once it is stable, the isotope may have transformed into another element entirely. Every chemical element has one or more radioisotopes, with over 1,000 isotopes accounted for in total. Approximately 50 of these are found in nature; the rest are produced artificially, either as the direct result of nuclear reactions or indirectly as the radioactive descendants of these products.
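The decay described above follows a simple exponential law: after every half-life, half of the remaining unstable nuclei have decayed, so the surviving fraction is N(t)/N₀ = (1/2)^(t/T½). A minimal sketch in Python, using iodine-131 (half-life of roughly 8 days) as an illustrative example:

```python
def remaining_fraction(elapsed, half_life):
    """Fraction of a radioisotope sample left after a given time (same units)."""
    return 0.5 ** (elapsed / half_life)

# After 24 days, roughly one-eighth of an iodine-131 sample remains:
print(remaining_fraction(24.0, 8.02))   # ~0.126
```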
Naturally occurring radioisotopes fall into three categories. The first is primordial radionuclides, which, like uranium and thorium, originated mainly in the interiors of stars and are still present because their half-lives are so long that they have not yet completely decayed. The second group, secondary radionuclides, are radiogenic isotopes derived from the decay of primordial radionuclides, and are characterized by their shorter half-lives. The third and final group is known as cosmogenic radionuclides, which consists of isotopes like carbon-14 that are constantly produced in the atmosphere by cosmic rays. Artificially produced radionuclides, on the other hand, are produced by nuclear reactors, particle accelerators, or radionuclide generators (where a parent isotope, usually produced in a nuclear reactor, is allowed to decay to produce a radioisotope). In addition, nuclear explosions are known to produce artificial radioisotopes as well.
Radioisotopes are used today for a variety of purposes. In the field of nuclear medicine, radioactive isotopes are used in diagnostic imaging (such as PET and SPECT scans), for targeted radiation therapy, and to sterilize medical equipment. In biochemistry and genetics, radionuclides are used in molecular and DNA research to “label” molecules and trace chemical and physiological processes. Carbon-14, a naturally occurring cosmogenic isotope, is used for carbon dating by archeologists, paleontologists, and geologists. In agriculture, radiation is used to stop the sprouting of root crops, to kill parasites and pests, and in veterinary medicine. And in industry, radionuclides are used to study the rate of wear and corrosion of metals, to inspect welds and test for leaks, to analyze pollutants, and to study the movement of surface water, measure runoff from rain and snow, and gauge the flow rates of streams and rivers.
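Carbon dating works by inverting that decay law: measure the fraction of the original carbon-14 that remains in a sample and solve for the elapsed time, t = T½ · log₂(N₀/N). A short worked example (the function name is an illustrative assumption):

```python
import math

C14_HALF_LIFE = 5_730.0  # years

def age_from_fraction(fraction_remaining):
    """Invert the decay law: t = half-life * log2(N0 / N)."""
    return C14_HALF_LIFE * math.log2(1.0 / fraction_remaining)

# A sample retaining 25% of its carbon-14 is two half-lives old:
print(f"{age_from_fraction(0.25):,.0f} years")  # ~11,460 years
```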
We have written many articles about radioisotopes for Universe Today. Here’s an article about isotopes, and here’s an article about radioactive decay.
In the world of physics, there are few people who have been more influential than Sir Isaac Newton. In addition to his contributions to astronomy, mathematics, and empirical philosophy, he is also the man who pioneered classical physics with his laws of motion. Of these, the first, otherwise known as the Law of Inertia, is the most famous and arguably the most important. In the language of science, this law states that every body remains in a state of constant velocity unless acted upon by an external unbalanced force. Put simply, in the absence of a non-zero net force, the center of mass of a body either remains at rest or moves at a constant velocity.
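A quick numerical sketch makes the law vivid: stepping a body’s motion forward in time, its velocity never changes unless the net force is non-zero. (The integration scheme and values here are illustrative assumptions, not taken from any particular physics library.)

```python
def step(position, velocity, net_force, mass, dt):
    """One Euler step of Newton's laws; with zero net force, velocity is constant."""
    velocity += (net_force / mass) * dt
    position += velocity * dt
    return position, velocity

x, v = 0.0, 2.0  # a body moving at a constant 2 m/s
for _ in range(5):
    x, v = step(x, v, net_force=0.0, mass=1.0, dt=1.0)
print(x, v)      # 10.0 2.0 -- uniform motion, exactly as the first law predicts
```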
Prior to Galileo and Newton, the most generally accepted theory of motion was based on Aristotelian philosophy. This ancient theory stated that, in the absence of an external motivating power, all objects on Earth would come to rest, and that moving objects only continue to move so long as there is a power inducing them to do so. In a void, no motion would be possible, since Aristotle’s theory claimed that the motion of objects was dependent on the surrounding medium, which was responsible for moving the object forward in some way. By the Renaissance, however, this theory was coming to be rejected as scientists began to postulate that both air resistance and the weight of an object play a role in arresting its motion.
Further advances in astronomy were another nail in this coffin. The Aristotelian division of motion into “mundane” and “celestial” became increasingly problematic in the face of the 16th-century Copernican model, which held that the Earth (and everything on it) was in fact never “at rest”, but in constant motion around the Sun. Galileo, in his further development of the Copernican model, recognized these problems and would later conclude that, based on this initial premise of inertia, it is impossible to tell the difference between a moving object and a stationary one without some outside point of comparison.
Thus, though Newton was not the first to express the concept of inertia, he refined and codified it as the first law of motion in his seminal work Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy) in 1687, in which he stated that, unless acted upon by a net unbalanced force, an object will maintain a constant velocity. Interestingly enough, the term “inertia” does not appear in the work. It was in fact Johannes Kepler who first used it, in his Epitome Astronomiae Copernicanae (Epitome of Copernican Astronomy), published from 1618 to 1621. Nevertheless, the term later came into general use, and Newton is recognized as the man most directly responsible for articulating the concept as a law.
We have written many articles about the law of inertia for Universe Today. Here’s an article about Newton’s Laws of Motion, and here’s an article about Newton’s first law.
If you’d like more info on the law of inertia, check out these articles from How Stuff Works and NASA.
We’ve also recorded an entire episode of Astronomy Cast all about Gravity. Listen here, Episode 102: Gravity.
In the last few centuries, during which time we have had several scientific revolutions, our understanding of heat, energy, and the exchange thereof has grown exponentially. In particular, we have become increasingly able to gauge the amounts of energy involved in particular processes, and in turn to create theoretical frameworks, units, and even tools with which to measure them. One such concept is the measurement known as emissivity: the relative ability of a material’s surface (usually written ε or e) to emit energy as radiation. It is expressed as the ratio of the radiation emitted by the material in question to the radiation emitted by a blackbody (an idealized physical body that absorbs all incident electromagnetic radiation) at the same temperature. This means that while a true black body would have an emissivity value of 1 (ε = 1), any other object, known as a “grey body”, has an emissivity value of less than 1 (ε < 1).
In general, the duller and blacker a material is, the closer its emissivity is to 1; the more reflective a material is, the lower its emissivity. Emissivity also depends on factors such as temperature, emission angle, and the wavelength of the radiation. At the opposite end of the spectrum is a material’s absorptivity (or absorptance), which is the measure of radiation absorbed by the material at a particular wavelength. When dealing with non-black surfaces, the relative emissivity follows Kirchhoff’s law of thermal radiation, which states that emissivity is equal to absorptivity. Essentially, an object that does not absorb all incident light will also emit less radiation than an ideal black body.
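One way to see what the emissivity coefficient does in practice is through the Stefan–Boltzmann law, which gives the power radiated by a grey body as P = εσAT⁴. A minimal sketch (the surface values are illustrative):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def radiated_power(emissivity, area_m2, temp_k):
    """Stefan-Boltzmann law for a grey body: P = e * sigma * A * T^4."""
    return emissivity * SIGMA * area_m2 * temp_k ** 4

# A 1 m^2 surface at 300 K: ideal black body (e = 1) vs. polished metal (e ~ 0.05)
print(radiated_power(1.0, 1.0, 300.0))   # ~459 W
print(radiated_power(0.05, 1.0, 300.0))  # ~23 W
```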
An important application of emissivity has to do with the Earth’s atmosphere. Like all other “grey bodies”, the Earth’s atmosphere is able to absorb and emit radiation. The overall emissivity of Earth’s atmosphere varies according to cloud cover and the concentration of gases that absorb and emit energy in the thermal infrared (i.e. heat energy). Using the same criteria by which they calculate the emissivity of other “grey bodies”, scientists are able to calculate the amount of thermal radiation emitted by the atmosphere, thereby gaining a better understanding of the Greenhouse Effect.
Every known material has an emissivity coefficient. The lowest coefficients belong to polished, reflective metals such as bare aluminum and silver, while anodized metals rank much higher. Materials that are not metals and are non-reflective, such as red brick, asbestos, concrete, and pressed carbon, have similarly high coefficients, as do naturally occurring materials such as ice, marble, and lime.
We have written many articles about emissivity of materials for Universe Today. Here's an article about heat rejection systems, and here's an article about absorptivity.
If you'd like more info on emissivity, check out these articles from Engineering Toolbox and Science World.
We’ve also recorded an entire episode of Astronomy Cast all about Electromagnetism. Listen here, Episode 103: Electromagnetism.
Ever since Einstein unveiled his theory of relativity, the speed of light has been considered the physical constant of the universe, interrelating space and time. In short, it is the speed at which light and all other forms of electromagnetic radiation are believed to travel at all times in empty space, regardless of the motion of the source or the inertial frame of reference of the observer. But suppose for a second that there were a particle that defied this law, that could exist within the framework of a relativistic universe while at the same time defying the foundations on which it’s built. Sounds impossible, but the existence of such a particle may very well be necessary from a quantum standpoint, resolving key issues that arise in that chaotic theory. It is known as the tachyon, a hypothetical subatomic particle that can move faster than light and poses a number of intriguing problems and possibilities to the field of physics.
In the language of special relativity, a tachyon would be a particle with space-like four-momentum and imaginary proper time. The possibility of such particles was first raised by the German physicist Arnold Sommerfeld, though it was Gerald Feinberg who coined the term in the 1960s, and several other scientists helped to advance the theoretical framework within which tachyons were believed to exist. They were originally proposed within the framework of quantum field theory as a way of explaining the instability of a system, but have nevertheless posed problems for the theory of special relativity.
For example, if tachyons were conventional, localizable particles that could be used to send signals faster than light, this would lead to violations of causality in special relativity. But in the framework of quantum field theory, tachyons are understood as signifying an instability of the system, and are treated using a theory known as tachyon condensation, which attempts to resolve their existence by explaining them in terms of better-understood phenomena rather than as real faster-than-light particles. Tachyonic fields have appeared theoretically in a variety of contexts, such as bosonic string theory. In general, string theory states that what we see as “particles” (electrons, photons, gravitons, and so forth) are actually different vibrational states of the same underlying string. In this framework, a tachyon would appear as an indication of instability, either in a D-brane system or within spacetime itself.
Despite the theoretical arguments against the existence of tachyons, experimental searches have been conducted to test for them; to date, however, no experimental evidence of their existence has been found.
We have written many articles about tachyons for Universe Today. Here’s an article about elementary particles, and here’s an article about Einstein’s Theory of Relativity.
If you’d like more info on tachyons, check out these articles from Science World. Also, you may want to browse through a forum discussion about tachyons.
Anyone who took elementary science in grade school recalls the lesson about the three states of matter, right? That was the one where we were told that matter comes in three basic forms: solid, liquid, and gas. This works for the periodic table of elements and can be extended to include just about any compound. Except perhaps for whipped cream (that damnable compound continues to defy attempts at classification!) But what if there were a fourth state of matter? It occurs when a gas-like state of matter contains a large portion of ionized particles and generates its own magnetic field. It’s called plasma, and it just happens to be the most common type of matter, comprising more than ninety-nine percent of the matter in the visible universe and permeating the solar system as well as interstellar and intergalactic environments.
The basic premise behind plasma is that heating a gas dissociates its molecular bonds, rendering it into its constituent atoms. Further heating leads to ionization (a loss of electrons), which turns it into a plasma. A plasma is therefore defined by the existence of charged particles: both positive ions and negative electrons. The presence of a large number of charged particles makes the plasma electrically conductive, so that it responds strongly to electromagnetic fields. Plasma therefore has properties quite unlike those of solids, liquids, or gases, and is considered a distinct state of matter. Like a gas, plasma does not have a definite shape or a definite volume unless enclosed in a container. But unlike a gas, under the influence of a magnetic field it may form structures such as filaments, beams, and double layers. It is precisely for this reason that plasma is used in the construction of electronics, such as plasma TVs and neon signs.
Plasma was first identified by Sir William Crookes in 1879 using an assembly that is today known as a “Crookes tube”, an experimental electrical discharge tube in which air is ionized by the application of a high voltage, typically supplied by an induction coil. At the time, he labeled it “radiant matter” because of its luminous quality. Sir J.J. Thomson, a British physicist, identified the nature of the matter in 1897, thanks to his discovery of electrons and numerous experiments using cathode ray tubes. However, it was not until 1928 that the term “plasma” was coined by Irving Langmuir, an American chemist and physicist, who was apparently reminded of blood plasma.
As already mentioned, plasmas are by far the most common phase of matter in the universe. All the stars are made of plasma, and even the space between the stars is filled with a plasma, albeit a very sparse one.
We have written many articles about plasma for Universe Today. Here’s an article about the plasma engine, and here’s an article about the states of matter.
If you’d like more info on plasma, check out these articles from Chem4Kids and NASA Science.
Magnetism is a fundamental force of the universe, essential to its function and existence in the same way that gravity and the weak and strong nuclear forces are. But interestingly enough, there are several different kinds of magnetism. For example, there is ferromagnetism, the property of permanent magnets, whose magnetic properties persist regardless of whether or not an external magnetic field is acting on the material. There is also diamagnetism, which refers to materials that are weakly repelled by a magnetic field, and paramagnetism, a form of magnetism that occurs only in the presence of an externally applied magnetic field.
Materials that are called ‘paramagnets’ are most often those that exhibit, at least over an appreciable temperature range, magnetic susceptibilities that adhere to the Curie or Curie–Weiss laws. According to these laws, which apply at low levels of magnetization, the susceptibility of paramagnetic materials is inversely proportional to their temperature. Mathematically, this can be expressed as M = C(B/T), where M is the resulting magnetization, B is the magnetic field, T is the absolute temperature (measured in kelvins), and C is a material-specific Curie constant.
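As a quick illustration of the Curie law above, here is a minimal sketch (the Curie constant is a purely hypothetical value, since C differs from material to material):

```python
def curie_magnetization(curie_constant, b_field, temp_k):
    """Curie law: M = C * (B / T), valid only at low magnetization."""
    return curie_constant * b_field / temp_k

C_EXAMPLE = 1.0  # hypothetical material-specific Curie constant
# Halving the temperature doubles the magnetization for the same field:
print(curie_magnetization(C_EXAMPLE, b_field=1.0, temp_k=300.0))
print(curie_magnetization(C_EXAMPLE, b_field=1.0, temp_k=150.0))
```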
Paramagnets were named and extensively researched by the British scientist Michael Faraday – the man who gave us Faraday’s Constant, Faraday’s Law, the Faraday Effect, etc. – beginning in 1845. He, and many scientists since, found that certain materials exhibited a weak attraction to magnetic fields, in contrast to the weak repulsion (commonly referred to as “negative magnetism”) of diamagnetic materials. Most elements and some compounds are paramagnetic, with strong paramagnetism being exhibited by compounds containing iron, palladium, platinum, and certain rare-earth elements. In such compounds, atoms of these elements have incomplete inner electron shells, causing their unpaired electrons to spin like tops and orbit like satellites. This makes the atoms act like permanent magnets, tending to align with and hence strengthen an applied magnetic field. However, once the magnetic field is removed, the atoms fall out of alignment and the material returns to its original state. Strong paramagnetism also decreases with rising temperature, because of the de-alignment produced by the greater random motion of the atomic magnets.
Weak paramagnetism, independent of temperature, is found in many metallic elements in the solid state, such as sodium and the other alkali metals. Other examples include iron oxide, uranium, platinum, tungsten, cesium, aluminum, lithium, magnesium, and oxygen gas. Even iron, a highly magnetic material, becomes a paramagnet once it is heated above its relatively high Curie point.
We have written many articles about magnetism for Universe Today. Here’s an article about magnetic field, and here’s an article about what magnets are made of.
If you’d like more info on paramagnetism, check out these articles from Hyperphysics and Physlink.
Ever since scientists first discovered the existence of black holes in our universe, we have all wondered: what could possibly exist beyond the veil of that terrible void? In addition, ever since the theory of General Relativity was first proposed, scientists have been forced to wonder: what could have existed before the birth of the Universe – i.e. before the Big Bang?
Interestingly enough, these two questions have come to be resolved (after a fashion) with the theoretical existence of something known as a Gravitational Singularity – a point in space-time where the laws of physics as we know them break down. And while there remain challenges and unresolved issues surrounding this theory, many scientists believe that this is what exists beneath the veil of an event horizon, and what existed at the beginning of the Universe.
Definition:
In scientific terms, a gravitational singularity (or space-time singularity) is a location where the quantities used to measure the gravitational field become infinite in a way that does not depend on the coordinate system. In other words, it is a point at which the laws of physics break down, where space and time are no longer interrelated realities but merge indistinguishably and cease to have any independent meaning.
Origin of Theory:
Singularities were first predicted as a result of Einstein’s Theory of General Relativity, which resulted in the theoretical existence of black holes. In essence, the theory predicted that any star whose mass became compressed within a certain radius (aka. the Schwarzschild Radius) would exert a gravitational force so intense that it would collapse.
At this point, nothing would be capable of escaping its surface, including light. This is due to the fact that the escape velocity would exceed the speed of light in a vacuum – 299,792,458 meters per second (1,079,252,848.8 km/h; 670,616,629 mph).
A related concept is the Chandrasekhar Limit, named after the Indian astrophysicist Subrahmanyan Chandrasekhar, who proposed it in 1930: the maximum mass at which a white dwarf star can remain stable against further collapse. At present, the accepted value of this limit is believed to be about 1.39 Solar Masses (i.e. 1.39 times the mass of our Sun), which works out to a whopping 2.765 × 10³⁰ kg (or 2,765 trillion trillion metric tons).
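For illustration, the Schwarzschild Radius mentioned above can be computed directly from r_s = 2GM/c². A minimal sketch, applied to the 1.39-solar-mass figure quoted here:

```python
G = 6.674e-11      # gravitational constant, m^3 / (kg s^2)
C = 299_792_458.0  # speed of light, m/s
M_SUN = 1.989e30   # mass of the Sun, kg

def schwarzschild_radius(mass_kg):
    """r_s = 2GM / c^2: the radius below which not even light escapes."""
    return 2 * G * mass_kg / C ** 2

# 1.39 Solar Masses compressed within ~4 km would form a black hole:
print(f"{schwarzschild_radius(1.39 * M_SUN) / 1000:.1f} km")  # ~4.1 km
```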
Another prediction of modern General Relativity is that the initial state of the Universe, at the time of the Big Bang, was a singularity. Roger Penrose and Stephen Hawking both developed theories that attempted to explain how gravitation could produce singularities, which were eventually merged together into what are known as the Penrose–Hawking Singularity Theorems.
According to the Penrose Singularity Theorem, which he proposed in 1965, a time-like singularity will occur within a black hole whenever matter satisfies certain energy conditions. At this point, the curvature of space-time within the black hole becomes infinite, turning it into a trapped surface where time ceases to function.
The Hawking Singularity Theorem added to this by stating that a space-like singularity can occur when matter is forcibly compressed to a point, causing the rules that govern matter to break down. Hawking traced this back in time to the Big Bang, which he claimed was a point of infinite density. However, Hawking later revised this to claim that general relativity breaks down at times prior to the Big Bang, and hence no singularity could be predicted by it.
Some more recent proposals also suggest that the Universe did not begin as a singularity. These include theories like Loop Quantum Gravity, which attempts to unify the laws of quantum physics with gravity. This theory states that, due to quantum gravity effects, there is a minimum distance beyond which gravity no longer continues to increase, or that interpenetrating particle waves mask gravitational effects that would otherwise be felt at a distance.
Types of Singularities:
The two most important types of space-time singularities are known as Curvature Singularities and Conical Singularities. Singularities can also be divided according to whether or not they are covered by an event horizon. In the former case, you have the Curvature and Conical varieties, whereas in the latter you have what are known as Naked Singularities.
A Curvature Singularity is best exemplified by a black hole. At the center of a black hole, a huge mass is compressed to a single point, where gravity becomes infinite, space-time curves infinitely, and the laws of physics as we know them cease to function.
Conical singularities occur at points where the limit of every generally covariant quantity is finite. In this case, space-time looks like a cone around the point, with the singularity located at the tip of the cone. An example of such a conical singularity is a cosmic string, a hypothetical one-dimensional defect that is believed to have formed during the early Universe.
And, as mentioned, there is the Naked Singularity, a type of singularity which is not hidden behind an event horizon. The possibility of such objects was first demonstrated in 1991 by Shapiro and Teukolsky, whose computer simulations of a rotating plane of dust indicated that General Relativity might allow for “naked” singularities.
In this case, what actually transpires within a black hole (i.e. its singularity) would be visible. Such a singularity would theoretically be what existed prior to the Big Bang. The key word here is theoretical, as it remains a mystery what these objects would look like.
For the moment, singularities and what actually lies beneath the veil of a black hole remains a mystery. As time goes on, it is hoped that astronomers will be able to study black holes in greater detail. It is also hoped that in the coming decades, scientists will find a way to merge the principles of quantum mechanics with gravity, and that this will shed further light on how this mysterious force operates.