Robots will be one of the keys to the expanding in-space economy. As launch costs decrease, hopefully significantly once Starship and other heavy-lift systems come online, the biggest barrier to entry for the space economy will finally come down. So what happens then? Two acronyms have been popping up in the literature with increasing frequency – in-space servicing, assembly, and manufacturing (ISAM) and on-orbit servicing (OOS). Over a series of articles, we’ll look at some papers detailing what those acronyms mean and where they might be headed in the near future. First, we’ll examine how robots fit into the equation.
Space robots have been around since 1981, when the Shuttle Remote Manipulator System (SRMS) first flew aboard the space shuttle and was operated by its astronauts. In the last forty years, they have expanded far beyond that original use case, playing an increasingly important role in everything from assembling the International Space Station (ISS) to more recent proof-of-concept missions to service a failing satellite in Earth orbit.
A new paper from the State Key Laboratory of Robotics and Systems at the Harbin Institute of Technology in China details some of the work that still needs to be done to realize the dream of fully functional robots in space. It breaks that work down into five different functional areas.
First, and one familiar to anyone who spends time with autonomous robots, is vision. Vision systems are constantly being improved here on Earth, especially those tied to the operation of autonomous cars. However, while the visual surroundings might not be nearly as chaotic in space, it can still be challenging for a robot to visually understand what it is looking at, especially if a satellite is tumbling uncontrollably.
Pattern recognition, such as picking out the circles placed around the docking ports of a satellite designed to be serviced (known in the jargon as a “cooperative” target), is still difficult. That is partly because the recognition algorithm must run on the robot itself, which demands more computational power and, with it, more power consumption and waste heat that must be dealt with. Recognizing an “uncooperative” satellite that isn’t designed to accept help from a robot is even more difficult, especially in real time.
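As a rough illustration of the cooperative case, the sketch below uses OpenCV’s Hough circle transform to pick out circular markers in a camera frame. The thresholds, radii, and the synthetic test image are placeholders invented for this example, not values from the paper.

```python
# Minimal sketch: detecting circular fiducials around a docking port with
# OpenCV's Hough circle transform. All thresholds and radii here are
# illustrative placeholders, not values from the paper.
import cv2
import numpy as np

def find_docking_markers(frame_gray: np.ndarray):
    """Return (x, y, r) candidates for circular markers in a grayscale frame."""
    blurred = cv2.GaussianBlur(frame_gray, (9, 9), 2)
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1.2,          # accumulator resolution relative to the image
        minDist=40,      # minimum spacing between detected circles (pixels)
        param1=100,      # Canny edge threshold
        param2=30,       # accumulator threshold; lower finds more circles
        minRadius=10,
        maxRadius=80,
    )
    return [] if circles is None else np.round(circles[0]).astype(int)

if __name__ == "__main__":
    # Synthetic test frame with one drawn circle standing in for a marker.
    test = np.zeros((480, 640), dtype=np.uint8)
    cv2.circle(test, (320, 240), 40, 255, thickness=4)
    for x, y, r in find_docking_markers(test):
        print(f"marker candidate at ({x}, {y}), radius {r}px")
```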
Once a robot sees where it’s going and what it’s trying to interact with, the next step is to get there and interact with the target effectively. There are several factors to consider here, which the paper groups together as “motion and control” technologies.
The authors present solutions to several unique control problems, including how to manage the forces acting on a robot when there is very little gravity to hold it in place. In particular, how do movement commands, and especially attempts to move specific objects, cause vibrations in the robot’s body and manipulator? This is especially a problem if the robot isn’t anchored to a much larger mass, such as the ISS or a space shuttle. Dynamic control algorithms can help damp the more dangerous vibrations, which could shake the robot apart if left uncontrolled.
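To see why active damping matters, here is a toy sketch (not the paper’s algorithm) of a flexible joint modelled as a mass on a spring, run with and without a rate-feedback term that the controller injects to bleed off the oscillation. The mass, stiffness, and gain values are invented for illustration.

```python
# Toy sketch: rate (derivative) feedback damping a flexible-joint oscillation,
# modelled as a mass on a spring. Numbers are illustrative only.
m, k = 5.0, 200.0          # effective mass (kg) and joint stiffness (N/m)
kd = 25.0                  # damping gain injected by the controller
dt, steps = 0.001, 5000    # five seconds simulated at 1 kHz

def simulate(use_damping: bool) -> float:
    x, v = 0.05, 0.0       # start displaced 5 cm, at rest
    for _ in range(steps):
        force = -k * x - (kd * v if use_damping else 0.0)
        v += (force / m) * dt
        x += v * dt
    return abs(x)          # residual displacement after five seconds

print(f"undamped residual displacement: {simulate(False) * 100:.2f} cm")
print(f"damped residual displacement  : {simulate(True) * 100:.4f} cm")
```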
But even with a control system to damp vibrations, other coordination problems remain complex, including coordinating multiple arms to interact with a single object. While that has been done before, getting the arms to coordinate simultaneously is still difficult, just as it is for robots on Earth.
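One common way to keep multiple arms from fighting each other is to command a single object pose and derive each arm’s wrist target from it. The sketch below illustrates that idea in a deliberately simplified form; the grasp offsets and poses are made-up numbers, and orientation is reduced to a single yaw angle for brevity.

```python
# Toy sketch of two-arm coordination on a shared object: each arm's wrist
# target is derived from one commanded object pose, so the arms move together
# instead of fighting each other. All numbers are invented for illustration.
import numpy as np

def yaw_rotation(theta: float) -> np.ndarray:
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Where each arm grips the object, expressed in the object's own frame (metres):
GRASP_OFFSETS = {"left_arm":  np.array([-0.4, 0.0, 0.0]),
                 "right_arm": np.array([ 0.4, 0.0, 0.0])}

def arm_targets(obj_position: np.ndarray, obj_yaw: float) -> dict:
    """One object pose in, one consistent wrist target per arm out."""
    R = yaw_rotation(obj_yaw)
    return {arm: obj_position + R @ offset
            for arm, offset in GRASP_OFFSETS.items()}

# Command the object to translate and rotate slightly; both arms follow.
targets = arm_targets(np.array([2.0, 1.0, 0.5]), obj_yaw=np.deg2rad(10))
for arm, pos in targets.items():
    print(f"{arm}: move wrist to {np.round(pos, 3)}")
```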
When a robot (or its manipulator) reaches its intended target, another technology has to interact with it – its end-effector. In robotics, end-effectors are how the robot interacts with objects. They’re the equivalent of human hands but can be far more versatile: they can be built as tools that human hands cannot be (a screwdriver, for example), and they can be swapped out entirely, say from a screwdriver to a soft-gel gripper. The possibilities for end-effectors, and for how efficiently a robot switches between them, are nearly endless, and plenty of technical work still needs to be done to make robots as capable as they can be in space.
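As a purely hypothetical illustration, a servicing robot’s software might hide tool changes behind a single interface, so the rest of the control code doesn’t care which end-effector happens to be mounted. Every class and method name below is invented for this sketch.

```python
# Hypothetical sketch: one software interface for interchangeable end-effectors,
# so the rest of the robot's code doesn't care whether a screwdriver or a soft
# gripper is currently mounted. All names here are invented for illustration.
from abc import ABC, abstractmethod

class EndEffector(ABC):
    @abstractmethod
    def engage(self, target_id: str) -> None: ...
    @abstractmethod
    def release(self) -> None: ...

class Screwdriver(EndEffector):
    def engage(self, target_id: str) -> None:
        print(f"driving fastener {target_id}")
    def release(self) -> None:
        print("backing off screwdriver bit")

class SoftGripper(EndEffector):
    def engage(self, target_id: str) -> None:
        print(f"conforming gripper around {target_id}")
    def release(self) -> None:
        print("venting gripper, releasing object")

class ToolChanger:
    """Swaps the mounted tool without the arm controller having to change."""
    def __init__(self) -> None:
        self.mounted: EndEffector | None = None
    def mount(self, tool: EndEffector) -> None:
        if self.mounted:
            self.mounted.release()
        self.mounted = tool

changer = ToolChanger()
changer.mount(Screwdriver())
changer.mounted.engage("panel-bolt-3")
changer.mount(SoftGripper())        # swap tools mid-task
changer.mounted.engage("thermal-blanket-edge")
```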
One method to help effectively operate a robot’s end-effector is to allow a human to teleoperate it. This has been common practice for most of the history of robots in space, with astronauts operating the SRMS from inside the shuttle or Canadarm2 from inside the ISS. However, teleoperation takes time, and an astronaut’s time is extremely precious. So efforts are underway to teleoperate robots in space from the ground – which introduces a signal delay between the operator and the robot.
We’ve recently reported on some efforts in the reverse direction, where an astronaut in orbit controlled a robot back on Earth. Those experiments aimed to prove the concept of operating robots down on the surface of other worlds like the Moon or Mars. This form of teleoperation would still suffer from the same delay difficulty – and what’s more, the delay might change depending on where the robot is in its orbital path.
Various solutions have been proposed for this problem, including a virtual-reality control setup that predicts where the robot will be at the end of the time delay so the operator can plan ahead without waiting for feedback. Force feedback is also a popular option, though it suffers from the same time-delay issues. Numerous technical solutions exist for this hurdle, but none can eliminate the fact that signals don’t travel instantaneously over long distances.
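In its simplest form, the predictive idea looks something like the sketch below: propagate the last telemetry sample forward by the round-trip delay (here under a constant-velocity assumption) and show the operator the predicted state rather than the stale one. The numbers are illustrative, and real predictors would model the full dynamics.

```python
# Toy sketch of a predictive display for a time-delayed teleoperator:
# propagate the last telemetry sample forward by the round-trip delay under
# a constant-velocity assumption. All numbers are invented for illustration.
import numpy as np

def predict_state(position: np.ndarray,
                  velocity: np.ndarray,
                  delay_s: float) -> np.ndarray:
    """Where the end-effector will likely be by the time our command arrives."""
    return position + velocity * delay_s

# Last telemetry received from orbit (metres, metres per second):
pos = np.array([1.20, 0.35, -0.10])
vel = np.array([0.02, -0.01, 0.00])

round_trip_delay = 0.6   # seconds; varies with the relay path and the orbit
print("stale pose shown to operator :", pos)
print("predicted pose shown instead :", predict_state(pos, vel, round_trip_delay))
```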
Even back on Earth, there are still challenges. As the paper notes, high-fidelity ground verification is difficult. In engineering, verification means proving that something works as expected in the environment it’s intended to work in. That is almost impossible for a robot meant to operate in microgravity, since it would be prohibitively expensive to launch a verification prototype into orbit and deal with all the issues that inevitably crop up during verification testing.
Several technical workarounds have been in use for a while, including floating the robot on cushions of forced air to simulate weightlessness, using freefall or parabolic flight on an airplane to test how it would operate in brief periods of microgravity, or even submerging it in a pool to see how it operates underwater.
Hardware-in-the-loop testing, already common in other industries, is the most promising newer approach. It models the expected behavior of the robotic system in software and feeds the robot the conditions it might experience in space. However, creating the models for such a system is complex and can introduce inaccuracies into the verification test itself. So far, there is no ideal way to prove that a robot will operate in space while it is still being developed on the ground.
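Conceptually, a hardware-in-the-loop setup looks something like the sketch below: the control code that would actually fly runs unchanged, while a software plant model stands in for the space environment and feeds it sensor values each tick. The one-dimensional drift model, gains, and loop rate here are invented for illustration, not drawn from the paper.

```python
# Minimal sketch of the hardware-in-the-loop pattern: the controller under test
# runs unchanged against a software model of the plant, here a free-floating
# target in one axis. Model, gains, and loop rate are invented for illustration.

def flight_controller(range_m: float, range_rate: float) -> float:
    """The code under test: command a thrust (N) that closes the range gently."""
    return -0.8 * range_m - 18.0 * range_rate   # toy PD law

class PlantModel:
    """Software stand-in for the space environment (one axis, no gravity)."""
    def __init__(self, range_m: float, range_rate: float, mass: float = 100.0):
        self.range_m, self.range_rate, self.mass = range_m, range_rate, mass

    def step(self, thrust_n: float, dt: float) -> None:
        self.range_rate += (thrust_n / self.mass) * dt
        self.range_m += self.range_rate * dt

plant = PlantModel(range_m=10.0, range_rate=0.0)
dt = 0.1
for _ in range(1200):                           # two simulated minutes at 10 Hz
    command = flight_controller(plant.range_m, plant.range_rate)
    plant.step(command, dt)
print(f"range after run: {plant.range_m:.2f} m, rate: {plant.range_rate:.3f} m/s")
```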
Ironically, robots operating in space might eventually solve this last problem by building up enough infrastructure in orbit to allow robots themselves to be designed and assembled in space. That is still a long way off, but numerous teams worldwide are working on making it a reality. Someday it will be, and overcoming these technical challenges will help it become so.
Learn More:
Ma et al. – Advances in Space Robots for On-Orbit Servicing: A Comprehensive Review
UT – An Astronaut Will Be Controlling Several Robots on Earth… from Space
UT – ESA Astronaut Luca Parmitano will be Controlling a Rover From Space
UT – Robonaut 2 set to move freely about space station
Lead Image:
Canadarm reaching for the space shuttle during STS-72.
Credit – NASA