The Starhops Have Begun!

SpaceX's first detailed render of the Starship reentering Earth's atmosphere. Credit: SpaceX

According to Elon Musk, SpaceX’s Starship Hopper just completed its inaugural hop test at the company’s South Texas Launch Site. The first of many, this test is intended to validate the sophisticated Raptor engines that will be used aboard the full-scale Starship spacecraft, which is central to Musk’s long-term vision of providing intercontinental flights and making commercial trips to the Moon and Mars.

Continue reading “The Starhops Have Begun!”

New Research Reveals How Galaxies Stay Hot and Bothered

This visualization uses data from simulations of the orbital motion of gas swirling at about 30% of the speed of light on a circular orbit around the black hole. Credit: ESO/Gravity Consortium/L. Calçada

It’s relatively easy for galaxies to make stars. Start out with a bunch of random blobs of gas and dust. Typically those blobs will be pretty warm. To turn them into stars, you have to cool them off. By dumping all their heat in the form of radiation, they can compress. Dump more heat, compress more. Repeat for a million years or so.

Eventually, pieces of the gas cloud shrink and shrink, compressing themselves into tight little knots. If the densities inside those knots get high enough, they trigger nuclear fusion and voila: stars are born.
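The threshold for that runaway collapse is usually expressed through the Jeans mass, a standard textbook criterion (not a figure from the research discussed here):

$$M_J \sim \left(\frac{5 k_B T}{G \mu m_H}\right)^{3/2}\left(\frac{3}{4\pi\rho}\right)^{1/2}$$

Here $T$ is the gas temperature, $\rho$ its density, and $\mu m_H$ the mean particle mass. Any knot heavier than $M_J$ collapses under its own gravity, which is why cooling (lowering $T$) and compressing (raising $\rho$) both push a cloud toward making stars.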

Continue reading “New Research Reveals How Galaxies Stay Hot and Bothered”

A New Atomic Clock has been Built that Would be off by Less than a Second Since the Big Bang

Timeline of the Big Bang and the expansion of the Universe. If the new atomic clock had been turned on at the Big Bang, it would be off by less than a single second now, almost 14 billion years later. Credit: NASA

Physicists have developed an atomic clock so accurate that it would be off by less than a single second in 14 billion years. That kind of accuracy and precision makes it more than just a timepiece. It’s a powerful scientific instrument that could measure gravitational waves, take the measure of the Earth’s gravitational shape, and maybe even detect dark matter.
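That headline figure translates into a fractional timing accuracy of a few parts in $10^{18}$. A quick back-of-the-envelope check (using round numbers, not values from the paper):

```python
# Losing less than 1 second over the age of the Universe implies
# a fractional timing error of roughly 2 parts in 10^18.
SECONDS_PER_YEAR = 365.25 * 24 * 3600           # ~3.16e7 seconds
age_of_universe_s = 13.8e9 * SECONDS_PER_YEAR   # ~4.4e17 seconds

fractional_error = 1.0 / age_of_universe_s
print(f"fractional accuracy: {fractional_error:.1e}")  # ~2.3e-18
```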

How did they do it?

Continue reading “A New Atomic Clock has been Built that Would be off by Less than a Second Since the Big Bang”

Chinese Fusion Experiment Reaches 100 Million Degrees

Researchers at the Experimental Advanced Superconducting Tokamak facility in China have achieved a new milestone in fusion power. Credit: ipp.cas.cn

Fusion power has been the fevered dream of scientists, environmentalists and futurists for almost a century. For the past few decades, scientists have been attempting to find a way to create sustainable fusion reactions that would provide human beings with clean, abundant energy, finally breaking our dependence on fossil fuels and other polluting sources.

In recent years, many positive strides have been made that are bringing the “fusion era” closer to reality. Most recently, scientists working with the Experimental Advanced Superconducting Tokamak (EAST) – aka. the “Chinese artificial sun” – set a new record by super-heating clouds of hydrogen plasma to over 100 million degrees – roughly six times hotter than the core of the Sun!
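As a quick sanity check on that comparison (taking the commonly quoted value of roughly 15 million degrees for the Sun’s core):

$$\frac{T_{\mathrm{EAST}}}{T_{\odot,\,\mathrm{core}}} \approx \frac{10^{8}\ \mathrm{K}}{1.5\times 10^{7}\ \mathrm{K}} \approx 6.7$$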

Continue reading “Chinese Fusion Experiment Reaches 100 Million Degrees”

Next Generation Telescopes Could Use “Teleportation” to Take Better Images

The Very Large Telescope in Chile firing a laser from its adaptive optics system. Credit: ESO

Telescopes have come a long way in the past few centuries. From the comparatively modest devices built by astronomers like Galileo Galilei and Johannes Kepler, telescopes have evolved to become massive instruments that require an entire facility to house them and a full crew and network of computers to run them. And in the coming years, much larger observatories will be constructed that can do even more.

Unfortunately, this trend towards larger and larger instruments has many drawbacks. For starters, increasingly large observatories require either increasingly large mirrors or many telescopes working together – both of which are expensive prospects. Luckily, a team from MIT has proposed combining interferometry with quantum teleportation, which could significantly increase the resolution of arrays without relying on larger mirrors.
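The appeal of interferometry comes down to a single relation: an array’s angular resolution is set by the separation between its elements (the baseline, $B$) rather than by the diameter of any one mirror:

$$\theta \approx \frac{\lambda}{B}$$

where $\lambda$ is the observing wavelength. Doubling the baseline halves the smallest resolvable angle, which is why linking modest telescopes can outperform one giant mirror. The hard part, which the MIT proposal attacks with quantum teleportation, is sharing the collected light between distant stations without destroying the photons’ quantum states.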

A New Solution to the Space Junk Problem. Spacecraft with Plasma Beams to Force Space Junk to Burn Up

A satellite using a bi-directional plasma thruster can direct one beam at space junk, sending it harmlessly into Earth's atmosphere. The other, opposite beam can stabilize the position of the satellite itself. Image: Takahashi et al. 2018.

Space junk is a growing problem. For decades we have been sending satellites into orbit around Earth. Some of them de-orbit and burn up in Earth’s atmosphere, or crash into the surface. But most of the stuff we send into orbit is still up there.

This is becoming an acute problem as years go by and we launch more and more hardware into orbit. Since the very first satellite – Sputnik 1 – was launched in 1957, over 8,000 satellites have been placed in orbit. As of 2018, an estimated 4,900 are still there, and about 3,000 of those are no longer operational. They’re space junk. The risk of collision is growing, and scientists are working on solutions. The problem will also compound itself over time, as collisions between objects create more pieces of debris that have to be dealt with.

Continue reading “A New Solution to the Space Junk Problem. Spacecraft with Plasma Beams to Force Space Junk to Burn Up”

Technosignatures are NASA’s New Target for Detecting Other Civilizations in Space. Wait. What’s a Technosignature?

Artist's impression of a Dyson Sphere. The construction of such a massive engineering structure would create a technosignature that could be detected by humanity. Credit: SentientDevelopments.com/Eburacum45

NASA is targeting technosignatures in its renewed effort to detect alien civilizations. Congress asked NASA to reboot its search for other civilizations a few months ago. The agency’s first step towards that goal was the NASA Technosignatures Workshop, held in Houston from September 26th to 28th, 2018.
Continue reading “Technosignatures are NASA’s New Target for Detecting Other Civilizations in Space. Wait. What’s a Technosignature?”

Instead of Building Single Monster Scopes like James Webb, What About Swarms of Space Telescopes Working Together?

In the future, telescopes may consist of distributed arrays rather than single instruments - like NASA's Terrestrial Planet Finder (TPF), a system of space telescopes for detecting extrasolar terrestrial planets. Credit: NASA

In the coming decade, a number of next-generation instruments will take to space and begin observing the Universe. These will include the James Webb Space Telescope (JWST), which is likely to be followed by concepts like the Large Ultraviolet/Optical/Infrared Surveyor (LUVOIR), the Origins Space Telescope (OST), the Habitable Exoplanet Imager (HabEx) and the Lynx X-ray Surveyor.

These missions will look farther into the cosmos than ever before and help astronomers address questions like how the Universe evolved and whether there is life in other star systems. Unfortunately, all these missions have two things in common: in addition to being very large and complex, they are also very expensive. Hence, some scientists are proposing that we rely on more cost-effective ideas like swarm telescopes.

Two such scientists are Jayce Dowell and Gregory B. Taylor, a research assistant professor and professor (respectively) with the Department of Physics and Astronomy at the University of New Mexico. Together, the pair outlined their idea in a study titled “The Swarm Telescope Concept”, which recently appeared online and was accepted for publication by the Journal of Astronomical Instrumentation.

Illustration of NASA’s James Webb Space Telescope. Credits: NASA

As they state in their study, traditional astronomy has focused on the construction, maintenance and operation of single telescopes. The one exception to this is radio astronomy, where facilities have been spread over an extensive geographic area in order to obtain high angular resolution. Examples of this include the Very Long Baseline Array (VLBA), and the proposed Square Kilometer Array (SKA).

There is also the problem of telescopes becoming increasingly reliant on computing and digital signal processing. As the authors explain in their study, telescopes commonly carry out multiple simultaneous observation campaigns, which increases the operational complexity of the facility due to conflicting configuration requirements and scheduling considerations.

A possible solution, according to Dowell and Taylor, is to rethink telescopes. Instead of a single instrument, the telescope would consist of a distributed array where many autonomous elements come together through a data transport system to function as a single facility. This approach, they claim, would be especially useful when it comes to the Next Generation Very Large Array (NGVLA) – a future interferometer that will build on the legacy of the Karl G. Jansky Very Large Array and the Atacama Large Millimeter/submillimeter Array (ALMA). As they state in their study:

“At the core of the swarm telescope is a shift away from thinking about an observatory as a monolithic entity. Rather, an observatory is viewed as many independent parts that work together to accomplish scientific observations. This shift requires moving part of the decision making about the facility away from the human schedulers and operators and transitioning it to “software defined operators” that run on each part of the facility. These software agents then communicate with each other and build dynamic arrays to accomplish the goals of multiple observers, while also adjusting for varying observing conditions and array element states across the facility.”

This idea for a distributed telescope is inspired by the concept of swarm intelligence, where large swarms of robots are programmed to interact with each other and their environment to perform complex tasks. As they explain, the facility comes down to three major components: autonomous element control, a method of inter-element communication, and data transport management.

Of these components, the most critical is the autonomous element control, which governs the actions of each element of the facility. While similar to the traditional monitoring and control systems used to run individual robotic telescopes, this system would be responsible for far more. Overall, the element control would ensure the safety of the telescope while maximizing the utilization of the element.

“The first, safety of the element, requires multiple monitoring points and preventative actions in order to identify and prevent problems,” they explain. “The second direction requires methods of relating the goals of an observation to the performance of an element in order to maximize the quantity and quality of the observations, and automated methods of recovering from problems when they occur.”
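As a rough illustration, a “software defined operator” for a single element might be structured like the minimal control loop below. All class and method names are hypothetical, not taken from Dowell and Taylor’s implementation:

```python
class ElementController:
    """Hypothetical autonomous controller for one array element."""

    def __init__(self, element, monitor_points):
        self.element = element
        self.monitor_points = monitor_points  # e.g. wind speed, drive temperatures

    def check_safety(self):
        """Return the monitor points that are currently out of limits."""
        return [p for p in self.monitor_points if not p.within_limits()]

    def run_cycle(self, observation):
        """One control cycle: protect the element first, then observe."""
        faults = self.check_safety()
        if faults:
            self.element.stow()  # preventative action
            for fault in faults:
                fault.apply_recovery(self.element)  # automated recovery
            return
        # Maximize utilization: observe only if this element can meet
        # the observation's performance requirements.
        if self.element.performance() >= observation.required_performance:
            self.element.observe(observation)
```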

The second component, inter-element communication, is what allows the individual elements to come together to form the interferometer. This can take the form of a leaderless system (where there is no single point of control), or an organizer system, where all of the communication between the elements and with the observation queue is done through a single point of control (i.e. the organizer).
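As a sketch of the organizer pattern, a single point of control might match an observation queue to idle elements roughly as follows; again, the names are illustrative rather than from the paper:

```python
from collections import deque

class Organizer:
    """Hypothetical single point of control for inter-element communication."""

    def __init__(self, elements):
        self.elements = elements
        self.queue = deque()  # pending observation requests

    def submit(self, observation):
        self.queue.append(observation)

    def dispatch(self):
        """Build dynamic sub-arrays from idle elements for queued observations."""
        while self.queue:
            idle = [e for e in self.elements if e.is_idle()]
            observation = self.queue[0]
            if len(idle) < observation.min_elements:
                break  # not enough free elements; retry on the next cycle
            self.queue.popleft()
            for element in idle[:observation.min_elements]:
                element.assign(observation)  # element joins the ad hoc array
```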

Long Wavelength Array, operated by the University of New Mexico. Credit: phys.unm.edu

Lastly, there is the issue of data transport management, which can take one of two forms based on existing telescopes. These range from fully off-line systems, where correlation is done after the observation (as with the Very Long Baseline Array), to fully-connected systems, where correlation is done in real time (as with the VLA). For their array, the team emphasized that connectivity and real-time correlation are a must.
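At its core, the correlation step that a fully-connected system performs in real time is a cross-correlation of the voltage streams recorded by pairs of elements. A toy version of that operation (a drastic simplification of a real correlator):

```python
import numpy as np

def correlate_baseline(voltages_a, voltages_b, n_lags=32):
    """Toy cross-correlation of two elements' sampled voltage streams.

    Real correlators channelize the signals and integrate far longer;
    this only shows the core operation over a handful of lags.
    """
    n = min(len(voltages_a), len(voltages_b))
    a = voltages_a[:n] - np.mean(voltages_a[:n])
    b = voltages_b[:n] - np.mean(voltages_b[:n])
    lags = np.arange(-n_lags, n_lags + 1)
    # np.roll wraps around at the edges, which is fine for a toy example.
    return lags, np.array([np.sum(a * np.roll(b, k)) for k in lags]) / n
```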

After considering all these components and how they are used by existing arrays, Dowell and Taylor conclude that the swarm concept is a natural extension of the advances being made in robotic and thinking telescopes, as well as interferometry. The advantages of this are spelled out in their conclusions:

“It allows for more efficient operations of facilities by moving much of the daily operational work done by humans to autonomous control systems. This, in turn, frees up personnel to focus on the scientific output of the telescope. The swarm concept can also combine the unused resources of the different elements together to form an ad hoc array.”

In addition, swarm telescopes will offer new opportunities and funding since they will consist of small elements that can be owned and operated by different entities. In this way, different organizations would be able to conduct science with their own elements while also being able to benefit from large-scale interferometric observations.

Graphic depiction of Modular Active Self-Assembling Space Telescope Swarms. Credit: D. Savransky

This concept is similar to the Modular Active Self-Assembling Space Telescope Swarms proposal, which calls for a swarm of robots that would assemble in space to form a 30-meter (~100 ft) telescope. The concept was proposed by a team of American astronomers led by Dmitri Savransky, an assistant professor of mechanical and aerospace engineering at Cornell University.

This proposal was submitted as part of the 2020 Decadal Survey for Astrophysics and was recently selected for Phase I development as part of the 2018 NASA Innovative Advanced Concepts (NIAC) program. So while many large-scale telescopes will be entering service in the near future, the next-next-generation of telescopes could include a few arrays made up of swarms of robots directed by artificial intelligence.

Such arrays would be capable of achieving high-resolution astronomy and interferometry at lower costs, and could free up large, complex arrays for other observations.

Further Reading: arXiv

Engineers Propose a Rocket that Consumes Itself as it Flies to Space

A team of engineers from the University of Glasgow and Ukraine has created an engine that could cut costs by "eating itself". Credit: Ken Kremer/kenkremer.com

When it comes to the new era of space exploration, one of the primary focuses has been on cutting costs. By reducing the costs associated with individual launches, space agencies and private aerospace companies will not only be able to commercialize low Earth orbit (LEO), but also mount far more exploration missions and maybe even colonize space.

Several methods have been proposed so far for reducing launch costs, including reusable rockets and single-stage-to-orbit rockets. However, a team of engineers from the University of Glasgow and Ukraine recently proposed an entirely different idea that could make launching small payloads affordable – a self-eating rocket! This “autophage” rocket could send small satellites into space more easily and more affordably.

The study, which describes how they built and tested the “autophage” engine, recently appeared in the Journal of Spacecraft and Rockets under the title “Autophage Engines: Toward a Throttleable Solid Motor”. The team was led by Vitaly Yemets and Patrick Harkness – a Professor from the Oles Honchar Dnipro National University in Ukraine and a Senior Lecturer from the University of Glasgow, respectively.

The autophage engine, being tested at the Dnipro testing lab in Ukraine. Credit: University of Glasgow

Together, the team addressed one of the most pressing issues with rockets today: the storage tanks that contain a rocket’s propellants as it climbs weigh many times as much as the spacecraft’s payload. This reduces the efficiency of the launch vehicle and also adds to the problem of space debris, since these fuel tanks are disposable and fall away when spent.

As Dr Patrick Harkness, who led Glasgow’s contribution to the work, explained in a recent University of Glasgow press release:

“Over the last decade, Glasgow has become a centre of excellence for the UK space industry, particularly in small satellites known as ‘CubeSats’, which provide researchers with affordable access to space-based experiments. There’s also potential for the UK’s planned spaceport to be based in Scotland. However, launch vehicles tend to be large because you need a large amount of propellant to reach space. If you try to scale down, the volume of propellant falls more quickly than the mass of the structure, so there is a limit to how small you can go. You will be left with a vehicle that is smaller but, proportionately, too heavy to reach an orbital speed.”

In contrast, an autophage engine consumes its own structure during ascent, so more cargo capacity could be freed up and less debris would enter orbit. The propellant consists of a solid fuel rod (made of a solid plastic like polyethylene) on the outside and an oxidizer on the inside. By driving the rod into a hot engine, the fuel and oxidizer are vaporized to create gas that then flows into the combustion chamber to produce thrust.

The use of autophage engines on rockets could allow for the deployment of small satellites cheaply and efficiently, without adding to the problem of space debris. Credit: AMNH.

“A rocket powered by an autophage engine would be different,” said Dr. Harkness. “The propellant rod itself would make up the body of the rocket, and as the vehicle climbed the engine would work its way up, consuming the body from base to tip. That would mean that the rocket structure would actually be consumed as fuel, so we wouldn’t face the same problems of excessive structural mass. We could size the launch vehicles to match our small satellites, and offer more rapid and more targeted access to space.”

The research team also showed that the engine could be throttled simply by varying the speed at which the rod is driven into the engine – a rarity in solid motors. During lab tests, the team was able to sustain rocket operations for 60 seconds at a time. As Dr. Harkness said, the team hopes to build on this and eventually conduct a launch test:

“While we’re still at an early stage of development, we have an effective engine testbed in the laboratory in Dnipro, and we are working with our colleagues there to improve it still further. The next step is to secure further funding to investigate how the engine could be incorporated into a launch vehicle.”
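To see why the feed rate acts as a throttle, consider the idealized mass flow: a rod of density ρ and cross-section A fed into the engine at speed u supplies ṁ = ρAu, and the ideal thrust is F = ṁ·v_e. A minimal sketch of that relationship; every number here is illustrative, none come from the paper:

```python
def autophage_thrust(feed_speed_m_s, rod_area_m2=0.002,
                     rod_density_kg_m3=950.0, exhaust_velocity_m_s=2000.0):
    """Idealized thrust of an autophage motor as a function of rod feed speed.

    Assumes all consumed propellant mass is exhausted at a fixed effective
    velocity; real motors lose efficiency to incomplete combustion and
    nozzle losses. All default values are illustrative, not measured.
    """
    mass_flow = rod_density_kg_m3 * rod_area_m2 * feed_speed_m_s  # kg/s
    return mass_flow * exhaust_velocity_m_s                       # newtons

# Throttling: doubling the feed speed doubles the (ideal) thrust.
for u in (0.005, 0.01, 0.02):  # feed speeds in m/s
    print(f"feed {u * 1000:.0f} mm/s -> thrust {autophage_thrust(u):.0f} N")
```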

Another challenge of the modern space age is how to deliver additional payloads and satellites into orbit without creating more orbital clutter. By enabling cheap launches with no disposable parts, the autophage engine could be a game-changing technology, right up there with fully-recoverable rockets.

The research team also consisted of Mykola Dron and Anatoly Pashkov – a Professor and Senior Researcher from Oles Honchar Dnipro National University – and Kevin Worrall and Michael Middleton – a Research Associate and M.S. student from the University of Glasgow.

Further Reading: University of Glasgow, Journal of Spacecraft and Rockets


Uh oh, the EMDrive Could be Getting Its “Thrust” From Cables and Earth’s Magnetic Field

A model of the EmDrive, by NASA/Eagleworks. Credit: NASA Spaceflight Forum/emdrive.com

Ever since NASA announced that they had created a prototype of the controversial Radio Frequency Resonant Cavity Thruster (aka. the EM Drive), any and all reported results have been the subject of controversy. Initially, reported tests were the stuff of rumors and leaks, and the results were treated with understandable skepticism. Even after the paper submitted by the Eagleworks team passed peer review, there have still been unanswered questions.

Hoping to address this, a team of physicists from TU Dresden – known as the SpaceDrive Project – recently conducted an independent test of the EM Drive. Their findings were presented at the Aeronautics and Astronautics Association of France’s 2018 Space Propulsion conference, and they were less than encouraging. What they found, in a nutshell, was that much of the EM Drive’s thrust could be attributed to outside factors.

The results of their test were reported in a study titled “The SpaceDrive Project – First Results on EMDrive and Mach-Effect Thrusters”, which recently appeared online. The study was led by Martin Tajmar, an engineer from the Institute of Aerospace Engineering at TU Dresden, and included TU Dresden scientists Matthias Kößling, Marcel Weikert and Maxime Monette.

EMDrive Thruster: Cavity (Left), Antenna (Middle) and On Balance (Right). Credit: Martin Tajmar, et al.

To recap, the EM Drive is a concept for an experimental space engine that came to the attention of the space community years ago. It consists of a hollow cone made of copper or other materials that reflects microwaves between opposite walls of the cavity in order to generate thrust. Unfortunately, this drive system is based on principles that violate the Conservation of Momentum law.

This law states that within a system, the amount of momentum remains constant and is neither created nor destroyed, but only changes through the action of forces. Since the EM Drive involves electromagnetic microwave cavities converting electrical energy directly into thrust, it has no reaction mass. It is therefore “impossible”, as far as conventional physics go.
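Stated as equations, the objection is simple. A conventional rocket produces thrust only by expelling reaction mass:

$$F = \dot{m}\,v_e$$

where $\dot{m}$ is the propellant mass flow rate and $v_e$ the exhaust velocity. With no reaction mass at all, $\dot{m} = 0$, and conservation of total momentum,

$$\frac{d}{dt}\left(p_{\mathrm{craft}} + p_{\mathrm{exhaust}}\right) = 0,$$

leaves the craft’s momentum unchanged. (Radiating photons is a loophole, but at $F = P/c$ it yields only about 3.3 µN per kilowatt – far below the thrust-to-power levels claimed for the EM Drive.)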

As a result, many scientists have been skeptical about the EM Drive and wanted to see definitive evidence that it works. In response, a team of scientists at NASA’s Eagleworks Laboratories began conducting a test of the propulsion system. The team was led by Harold White, the Advanced Propulsion Team Lead for the NASA Engineering Directorate and the Principal Investigator for NASA’s Eagleworks lab.

Despite a report that was leaked in November of 2016 – titled “Measurement of Impulsive Thrust from a Closed Radio Frequency Cavity in Vacuum” – the team never presented any official findings. This prompted the team led by Martin Tajmar to conduct their own test, using an engine built to the same specifications as the one used by the Eagleworks team.

According to tests conducted by a team from TU Dresden, the EM Drive’s thrust may be the result of interaction with Earth’s magnetic field. Credit: ESA/ATG medialab

In short, the TU Dresden team’s prototype consisted of a cone-shaped hollow engine set inside a highly shielded vacuum chamber, into which microwaves were fed. While they found that the EM Drive did experience thrust, that thrust may not have been coming from the engine itself. Essentially, the thruster exhibited the same amount of force regardless of which direction it was pointing.

This suggested that the thrust was originating from another source, which they believe could be the result of interaction between engine cables and the Earth’s magnetic field. As they conclude in their report:

“First measurement campaigns were carried out with both thruster models reaching thrust/thrust-to-power levels comparable to claimed values. However, we found that e.g. magnetic interaction from twisted-pair cables and amplifiers with the Earth’s magnetic field can be a significant error source for EMDrives. We continue to improve our measurement setup and thruster developments in order to finally assess if any of these concepts is viable and if it can be scaled up.”

In other words, the mystery thrust reported by previous experiments may have been nothing more than an error. If true, it would explain how the “impossible EM Drive” was able to achieve small amounts of measurable thrust when the laws of physics say it shouldn’t. However, the team also emphasized that more testing will be needed before the EM Drive can be dismissed or validated with confidence.
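An order-of-magnitude estimate shows why cable interactions are a plausible culprit. The force on a straight current-carrying wire perpendicular to a magnetic field is F = B·I·L; the current and cable length below are assumed purely for illustration:

```python
# Order-of-magnitude force on a power cable in Earth's magnetic field.
# F = B * I * L for a wire perpendicular to the field; values are illustrative.
B_earth = 48e-6   # tesla, a typical mid-latitude field strength
current = 2.0     # amperes, assumed amplifier supply current
length = 0.5      # meters, assumed effective cable run

force_newtons = B_earth * current * length
print(f"~{force_newtons * 1e6:.0f} uN")  # ~48 uN, the same scale as claimed EM Drive thrusts
```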

What will it take before human beings can travel to the nearest star system within their own lifetimes? Credit: Shigemi Numazawa/ Project Daedalus

Alas, it seems that the promise of being able to travel to the Moon in just four hours, to Mars in 70 days, and to Pluto in 18 months – all without the need for propellant – may have to wait. But rest assured, many other experimental technologies are being tested that could one day allow us to travel within our Solar System (and beyond) in record time. And additional tests will be needed before the EM Drive can be written off as just another pipe dream.

The team also conducted their own test of the Mach-Effect Thruster, another concept that is considered to be unlikely by many scientists. The team reported more favorable results with this concept, though they indicated that more research is needed here as well before anything can be conclusively said. You can learn more about the team’s test results for both engines by reading their report here.

And be sure to check out this video by Scott Manley, who explains the latest test and its results.

Further Reading: ResearchGate, Phys.org