Here’s a new DARPA-inspired, NASA-built robot, complete with a glowing NASA Meatball in its chest, reminiscent of ET’s heart light. The robot’s name is Valkyrie, and she was created by a team at the Johnson Space Center as part of the DARPA Robotics Challenge, a contest designed to find the life-saving robot of the future. While NASA’s current robot, Robonaut 2, is just now getting a pair of legs, “Val” (officially named “R5” by NASA) is a 1.9-meter-tall, 125-kilogram (6-foot 2-inch, 275-pound) rescue robot that can walk over multiple kinds of terrain, climb a ladder, use tools, and even drive.
According to an extensive article about the new robot in IEEE Spectrum, “This means that Valkyrie has to be capable of operating in the same spaces that a person would operate in, under the control of humans who have only minimal training with robots, which is why the robot’s design is based on a human form.”
Why is NASA building more robots? The thinking is that NASA could send human-like robots to Mars before they send humans. Right now, Valkyrie is not space-rated, but the team at JSC is just getting started.
She’s loaded with cameras, LIDAR, and SONAR; she’s strong and powerful; and she’s just a great-looking robot.
“We really wanted to design the appearance of this robot to be one that was, when you saw it you’d say, wow, that’s awesome,” said Nicolaus Radford, Project and Group Lead at the Dexterous Robotics Lab at JSC.
This is both wonderful and terrifying. A DARPA-funded four-legged robot named WildCat is being developed by a company called Boston Dynamics (tagline: “Changing Your Idea of What Robots Can Do”). They’ve previously developed Atlas, a humanoid capable of walking across multiple terrains, and the scarily fast Cheetah, which set a new land-speed record for legged robots. But WildCat is a brand new robot created to run fast on all types of terrain, and so far its top speed has been about 16 mph on flat ground using both bounding and galloping gaits.
The video, released yesterday, shows WildCat’s best performance so far. Don’t let the sound fool you — yes, it does sound like a weed-whacker. But as soon as it rises up off its haunches, you know you’re doomed.
I’ve been trying to figure out what sci-fi equivalent might describe it best: the Terminator’s pet? A lethal, non-fuzzy Daggit from Battlestar Galactica? An AT-AT Walker on speed?
The talking robot launched to the International Space Station in August has sent its first audio/visual message to Earth. Kirobo, the mini Japanese robot — which appears to have the bravado of Buzz Lightyear and the cuteness of WALL-E — is just 0.34 meters (13.4 inches) long. Kirobo is designed to be able to have conversations with its astronaut crewmates and to study how robot-human interactions can help the astronauts in the space environment. In its first message, Kirobo wished Earth a “good morning” and mentioned (and motioned) its giant step in getting to space.
Kirobo is part of a research project sponsored by the University of Tokyo and Toyota, and the robot will be working closely with Koichi Wakata, slated to be the first Japanese commander of the ISS for Expedition 39, who will launch this November as part of the Expedition 38/39 crew. An identical robot named Mirata remains on Earth for additional testing.
Kirobo is designed to navigate in zero gravity, recognize the faces of its fellow crewmates, and assist Wakata in various experiments. No word on whether it will be allowed to open or close the various hatches on the space station.
Building a flying vehicle for Mars would have significant advantages for exploration of the surface. To date, however, all of our surface-exploring vehicles and robotic units on Mars have been ground-bound rovers. The problem with flying on Mars is that the Red Planet doesn’t have much atmosphere to speak of: its air density is only about 1.6% of Earth’s at sea level, give or take. This means conventional aircraft would have to fly very quickly on Mars to stay aloft. Your average Cessna would be in trouble.
But nature may provide an alternative way of looking at this problem.
The fluid regime of any flying (or swimming) animal, machine, etc. can be summarized by something called the Reynolds Number (Re). The Re is equal to the characteristic length x velocity x fluid density, divided by the dynamic viscosity. It is a measure of the ratio of inertial forces to viscous ones. Your average airplane flies at a high Re: lots of inertia relative to air stickiness. Because the Mars air density is low, the only way to get that inertia is to go really fast. However, not all flyers operate at high Re: most flying animals fly at much lower Re. Insects, in particular, operate at quite small Reynolds numbers (relatively speaking). In fact, some insects are so small that they swim through the air, rather than fly. So, if we scale up a bug-like critter or small bird just a bit, we might get something that can move in the Martian atmosphere without having to go insanely fast.
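To put rough numbers on that, here’s a quick back-of-the-envelope sketch in Python (the Earth and Mars density and viscosity values below are my own assumed ballpark figures, not from the discussion above):

```python
# Reynolds number: Re = (fluid density * velocity * characteristic length) / dynamic viscosity
def reynolds(density, velocity, length, viscosity):
    return density * velocity * length / viscosity

# Rough, assumed constants: Earth sea-level air vs. Mars near-surface CO2
RHO_EARTH, MU_EARTH = 1.225, 1.8e-5   # kg/m^3, Pa*s
RHO_MARS,  MU_MARS  = 0.020, 1.1e-5   # kg/m^3, Pa*s (~1.6% of Earth's density)

# A light-aircraft wing (chord ~1.5 m) cruising at ~50 m/s on Earth sits around Re ~ 5 million...
print(reynolds(RHO_EARTH, 50.0, 1.5, MU_EARTH))

# ...but the same wing at the same speed in Martian air drops roughly 40-fold in Re,
# which is why a conventional design has to fly much faster there to recover that inertia.
print(reynolds(RHO_MARS, 50.0, 1.5, MU_MARS))
```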
We need a system of equations to constrain our little bot. Turns out that’s not too tough. As a rough approximation, we can use Colin Pennycuick’s average flapping frequency equation. Based on Pennycuick (2008), flapping frequency varies roughly as body mass to the 3/8 power, gravitational acceleration to the 1/2 power, span to the -23/24 power, wing area to the -1/3 power, and fluid density to the -3/8 power. That’s handy, because we can adjust to match Martian gravity and air density. But we need to know if we are shedding vortices from the wings in a reasonable way. Thankfully, there is a known relationship there, as well: the Strouhal number. Str (in this case) is flapping amplitude x flapping frequency divided by velocity. In cruising flight, it turns out to be pretty constrained.
Our bot should, therefore, end up with a Str between 0.2 and 0.4, while matching the Pennycuick equation. And then, finally, we need to get a Reynolds number in the range for a large living flying insect (tiny insects fly in a strange regime where much of propulsion is drag-based, so we will ignore them for now). Hawkmoths are well studied, so we have their Re range for a variety of speeds. Depending on speed, it ranges from about 3,500 to about 15,000. So somewhere in that ballpark will do.
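Written out as code, the three constraints look something like the sketch below (again, the Martian gravity, air density, and viscosity are assumed ballpark values, not figures from the text):

```python
# Constraint sketch for a Mars flapping flyer; environmental constants are rough assumptions.
G_MARS   = 3.71     # m/s^2
RHO_MARS = 0.020    # kg/m^3, roughly 1.6% of Earth sea-level air density
MU_MARS  = 1.1e-5   # Pa*s, cold CO2

def flapping_frequency(mass, span, wing_area, g=G_MARS, rho=RHO_MARS):
    """Pennycuick (2008) scaling: f ~ m^(3/8) g^(1/2) b^(-23/24) S^(-1/3) rho^(-3/8)."""
    return mass**0.375 * g**0.5 * span**(-23.0 / 24.0) * wing_area**(-1.0 / 3.0) * rho**(-0.375)

def strouhal(amplitude, frequency, velocity):
    """Str = flapping amplitude * flapping frequency / velocity; cruising flyers cluster near 0.2-0.4."""
    return amplitude * frequency / velocity

def reynolds(velocity, chord, rho=RHO_MARS, mu=MU_MARS):
    """Re = rho * v * chord / mu; aim for a hawkmoth-like range of roughly 3,500-15,000."""
    return rho * velocity * chord / mu
```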
There are a few ways of solving the system. The elegant way is to generate the curves and look for the intersection points, but a fast and easy method is to punch it into a matrix program and solve iteratively. I won’t give all the possible options, but here’s one that worked out pretty well to give an idea:
Mass: 500 grams
Span: 1 meter
Wing Aspect Ratio: 8.0
This gives an Str of 0.31 (right on the money) and Re of 13,900 (decent) at a lift coefficient of 0.5 (which is reasonable for cruising). To give an idea, this bot would have roughly bird-like proportions (similar to a duck), albeit a bit on the light side (not tough with good synthetic materials). It would, however, flap through a greater arc at higher frequency than a bird here on Earth, so it would look a bit like a giant moth at distance to our Earth-trained eyes. As an added bonus, because this bot is flying in a moth-ish Reynolds Regime, it is plausible that it might be able to jump to the very high lift coefficients of insects for brief periods using unsteady dynamics. At a CL of 4.0 (which has been measured for small bats and flycatchers, as well as some large bees), the stall speed is only 19.24 m/s. Max CL is most useful for landing and launching. So: can we launch our bot at 19.24 m/s?
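As a sanity check, here’s a small, self-contained sketch that plugs this example design back into those relationships. It lands close to the figures quoted above, though not exactly on them, since the Mars constants here are my own assumptions:

```python
# Worked check of the example Mars flapper; environmental constants are rough assumptions.
G_MARS, RHO_MARS, MU_MARS = 3.71, 0.020, 1.1e-5   # m/s^2, kg/m^3, Pa*s

mass, span, aspect_ratio = 0.5, 1.0, 8.0          # kg, m, dimensionless
wing_area = span**2 / aspect_ratio                # 0.125 m^2
chord = wing_area / span                          # 0.125 m mean chord
weight = mass * G_MARS                            # N

def level_flight_speed(cl):
    """Speed at which lift (0.5 * rho * v^2 * S * CL) balances weight."""
    return (2.0 * weight / (RHO_MARS * wing_area * cl)) ** 0.5

v_cruise = level_flight_speed(0.5)   # ~55 m/s at a cruise-ish lift coefficient
v_stall  = level_flight_speed(4.0)   # ~19 m/s at an insect-like maximum lift coefficient

re_cruise = RHO_MARS * v_cruise * chord / MU_MARS  # ~12,000-14,000, depending on the viscosity assumed

# Pennycuick-style flapping frequency; with v_cruise this high, keeping Str in the
# 0.2-0.4 window implies the wide, fast, moth-like wingbeats described above.
freq = mass**0.375 * G_MARS**0.5 * span**(-23.0 / 24.0) * wing_area**(-1.0 / 3.0) * RHO_MARS**(-0.375)

print(f"cruise speed ~{v_cruise:.1f} m/s, stall speed ~{v_stall:.1f} m/s")
print(f"cruise Reynolds number ~{re_cruise:.0f}, flapping frequency ~{freq:.1f} Hz")
```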
For fun, let’s assume our bird/bug bot also launches like an animal. Animals don’t take off like airplanes; they use a ballistic initiation by pushing from the substrate. Now, insects and birds use walking limbs for this, but bats (and probably pterosaurs) use the wings to double as pushing systems. If we made our bot’s wings push-worthy, then we can use the same motor to launch as to fly, and it turns out that not much push is required. Thanks to the low Mars gravity, even a little leap goes a long way, and the wings can already beat near 19.24 m/s as it is. So just a little hop will do it. If we’re feeling fancy, we can put a bit more punch on it, and that’ll get out of craters, etc. Either way, our bot only needs to be about 4% as efficient a leaper as good biological jumpers to make it up to speed.
These numbers, of course, are just a rough illustration. There are many reasons that space programs have not yet launched robots of this type. Problems with deployment, power supply, and maintenance would make these systems very challenging to use effectively, but it may not be altogether impossible. Perhaps someday our rovers will deploy duck-sized moth bots for better reconnaissance on other worlds.
Mission planners really hate it when space robots land off course. We’re certainly improving the odds of success these days (remember Mars Curiosity’s seven minutes of terror?), but one space agency has a fancy simulator up its sleeve that could make landings even more precise.
Shown above, this software and hardware (tested at the European Space Agency) so impressed French aerospace center ONERA that officials recently gave the lead researcher an award for the work.
“If I’m a tourist in Paris, I might look for directions to famous landmarks such as the Eiffel Tower, the Arc de Triomphe or Notre Dame cathedral to help find my position on a map,” stated Jeff Delaune, the Ph.D. student performing the research.
“If the same process is repeated from space with enough surface landmarks seen by a camera, the eye of the spacecraft, it can then pretty accurately identify where it is by automatically comparing the visual information to maps we have onboard in the computer.”
Because landmarks can look very different up close than they do from far away, the system includes a method to get around that problem.
The so-called ‘Landing with Inertial and Optical Navigation’ (LION) system takes the real-time images generated by the spacecraft’s camera and compares them to maps from previous missions, as well as 3-D digital models of the surface.
LION can take into account the relative size of every point it sees, whether it’s a huge crater or a tiny boulder.
At ESA’s control hardware laboratory in Noordwijk, the Netherlands, officials tested the system with a high-res map of the moon.
Though this is just a test and there is still a ways to go before this system is space-ready, ESA said simulated positional accuracy was better than 164 feet at 1.86 miles in altitude (or 50 meters at three kilometers in altitude).
Oh, and while it’s only been tested with simulated moon terrain so far, it’s possible the same system could help a robot land on an asteroid, or Mars, ESA adds.
No word on when the system will first hitch an interplanetary ride, but Delaune is working to apply the research to terrestrial matters such as unmanned aerial vehicles.
The HyTAQ (Hybrid Terrestrial and Aerial Quadrotor) robot developed at Illinois Institute of Technology (IIT)
Ever since the Huygens probe landed on Titan back in January 2005, sending us our first tantalizing and oh-so-brief glimpses of the moon’s murky, pebbly surface, researchers have been dreaming up ways to explore further… after all, what’s more intriguing than a world in our own Solar System that’s basically a miniature version of an early Earth (even if it’s a couple of hundred degrees Celsius chillier)?
Many concepts have been suggested as to the best way to explore Titan, from Mars-style rovers to boats that would sail its methane seas to powered gliders… and even hot-air balloons have been put on the table. Each of these has its own specific benefits, specially suited to the many environments found on Titan, but what if you could have two in one; what if you could, say, rove and fly?
That’s what this little robot can do.
Designed by Arash Kalantari and Matthew Spenko at the Robotics Lab at Illinois Institute of Technology, this rolling birdcage is actually a quadrotor flying craft that’s wrapped in a protective framework, allowing it to move freely along the ground and then take off when needed, maneuvering around obstacles easily.
A design like this, fitted with scientific instruments and given an adequate power supply, might make a fantastic robotic explorer for Titan, where the atmosphere is thick and the terrain may range from rough and rocky to sandy and slushy. (And what safer way to ford a freezing-cold Titanic stream than to fly over it?)
Also, the robot’s cage design may make it better suited to travel across the frozen crust of Titan’s flood plains, which have been found to have a consistency like damp sand with a layer of frozen snow on top. Where wheels could break through and get permanently stuck (a la Spirit), a rolling cage might remain on top. And if it does break through… well, fire up the engines and take off.
The robot (as it’s designed now) is also very energy-efficient, compared to quadrotors that only fly.
“During terrestrial locomotion, the robot only needs to overcome rolling resistance and consumes much less energy compared to the aerial mode,” the IIT website notes. “This solves one of the most vexing problems of quadrotors and rotorcraft in general — their short operation time. Experimental results show that the hybrid robot can travel a distance 4 times greater and operate almost 6 times longer than an aerial only system.”
Of course, this is all just excited speculation at this point. No NASA or ESA contracts have been awarded to IIT to build the next Titan explorer, and who knows if the idea is on anyone else’s plate. But innovations like this, from schools and the private sector, are just the sorts of exciting things that set imaginations rolling (and flying!).
Color view of Titan’s surface, captured by the Huygens probe after landing in January 2005. (NASA/JPL/ESA/University of Arizona)
Canada’s most famous robot is on the front page of Google.ca today. The Google doodle honors the 31st anniversary of the first use of Canadarm in space.
Canadarm is a robotic arm that flew on virtually every shuttle mission. The technology is still being used today in space.
According to the 1992 book A Heritage of Excellence, Canada was first invited to work in the shuttle program in 1969. Toronto engineering firm DSMA-Atcon Ltd. initially pitched a Canadian-built space telescope, but NASA was more interested in DSMA’s other work.
“The Goddard Space Flight Center in Maryland expressed interest in another of DSMA’s gadgets – a robot the company had developed for loading fuel into Candu nuclear reactors,” wrote Lydia Dotto in the book, which Spar commissioned to celebrate its 25th anniversary.
“It was just the thing for putting a satellite they were building into space.”
Dozens of astronauts have used the Canadarms during spacewalks, including Michael L. Gernhardt on STS-104. Credit: NASA
The Canadian government and NASA signed a memorandum of understanding in 1975 to build the arm. Legislation allowing the project to move forward passed the next year. Canadian company Spar became the prime contractor, with DSMA, CAE and RCA as subcontractors.
Engineers had to face several challenges when constructing the Canadarm, including how to grapple satellites. The solution was an “end effector,” a snare on the end of the Canadarm used to grasp satellites designed to be hoisted into space.
Several NASA astronauts, including Sally Ride, gave feedback on the arm’s development. Canadarm flew for the first time on STS-2, which launched Nov. 12, 1981. (Ride herself used the arm on STS-7 when she became the first American woman to fly in space.)
Marc Garneau, the first Canadian astronaut in space, has said the arm’s success led to the establishment of the Canadian astronaut program. He flew in 1984, three years after Canadarm’s first flight.
Canadian astronaut Chris Hadfield during an EVA in 2001. Also in the image is the Canadarm2 robotic arm on the ISS. Credit: NASA
Some of the arm’s notable achievements:
– Deploying space probes, including the Compton Gamma Ray Observatory, as well as short-term experiments that ran during shuttle missions;
– Helping to build the International Space Station along with Canadarm2, its younger sibling;
– Scanning for broken tiles on the bottom of the shuttle. Astronauts used a procedure developed after Columbia was destroyed during re-entry in 2003, killing its seven-member crew. A Canadarm was modified into an extension boom; another Canadarm grasped that boom to reach underneath the shuttle.
The arm was so successful that MacDonald, Dettwiler and Associates (which acquired Spar) built a robotic arm for the International Space Station, called Canadarm2. Canadian astronaut Chris Hadfield helped install the arm during his first spacewalk in 2001.
Canadarm2’s most nail-biting moment came in 2007, when astronauts used it to hoist astronaut Scott Parazynski (who was balancing on the extension boom) for a tricky solar panel repair on the station.
November 3, 2007 – Canadarm2 played a big role in helping astronauts fix a torn solar array. Here, Scott Parazynski analyses the solar panel while anchored to the boom. Credit: NASA
More recently, Canadarm2 was used to grapple the Dragon spacecraft when SpaceX’s demonstration and resupply missions arrived at the International Space Station this year.
MDA recently unveiled several next-generation Canadarm prototypes that could, in part, be used to refuel satellites. The Canadian Space Agency funded the projects with CDN $53.1 million in stimulus money. MDA hopes to attract more money to get the arms ready for space.
No, it’s not a UFO — it’s NASA’s “Mighty Eagle”, a robotic prototype lander that successfully and autonomously found its target during a 32-second free flight test at Marshall Space Flight Center yesterday, August 16.
You have to admit, though, Mighty Eagle does bear a resemblance to classic B-movie sci-fi spacecraft (if, at only 4 feet tall, markedly less threatening to the general populace).
Fueled by 90% pure hydrogen peroxide, Mighty Eagle is a low-cost “green” spacecraft designed to operate autonomously during future space exploration missions. It uses its onboard camera and computer to determine the safest route to a pre-determined landing spot.
During the August 16 test flight, Mighty Eagle ascended to 30 feet, identified a target painted on the ground 21 feet away, flew to that position and landed safely — all without being controlled directly.
“This is huge. We met our primary objective of this test series — getting the vehicle to seek and find its target autonomously with high precision,” said Mike Hannan, controls engineer at Marshall Space Flight Center. “We’re not directing the vehicle from the control room. Our software is driving the vehicle to think for itself now. From here, we’ll test the robustness of the software to fly higher and descend faster, expecting the lander to continue to seek and find the target.”
In the wake of a dramatically unsuccessful August 9 free flight test of Morpheus, another green lander, designed by Johnson Space Center, the recent achievements by the Mighty Eagle team are encouraging.
Here’s a video from a previous test flight on August 8:
Future tests planned through September will have the lander ascend up to 100 feet before landing. Read more here.
The Mighty Eagle prototype lander was developed by the Marshall Center and Johns Hopkins University Applied Physics Laboratory in Laurel, Md., for NASA’s Planetary Sciences Division, Headquarters Science Mission Directorate. Image/video: NASA/Marshall Space Flight Center
A combined team of American and Canadian engineers has taken a major first step toward the eventual repair and refueling of high-value orbiting satellites by successfully applying new, first-of-its-kind robotics research conducted aboard the International Space Station (ISS). The work has the potential to one day bring billions of dollars in cost savings to the government and commercial space sectors.
Gleeful researchers from both nations shouted “Yeah!!!” after successfully using the Robotic Refueling Mission (RRM) experiment, bolted outside the ISS, as a technology test bed to demonstrate that a remotely controlled robot in the vacuum of space could accomplish delicate work tasks requiring extremely precise motion control. The revolutionary robotics experiment could extend the usable operating life of satellites already in Earth orbit that were never intended to be worked on.
“After dedicating many months of professional and personal time to RRM, it was a great emotional rush and a reassurance for me to see the first video stream from an RRM tool,” said Justin Cassidy in an exclusive in-depth interview with Universe Today. Cassidy is RRM Hardware Manager at the NASA Goddard Space Flight Center in Greenbelt, Maryland.
And the RRM team already has plans to carry out even more ambitious follow-on experiments starting as soon as this summer, including the highly anticipated transfer of fluids to simulate an actual satellite refueling, which could transform robotics applications in space – see details below!
All of the robotic operations at the station were remotely controlled by flight controllers from the ground. The purpose of remote control and robotics is to free up the ISS human crew so they can work on other important activities and conduct science experiments requiring on-site human thought and intervention.
Over a three day period from March 7 to 9, engineers performed joint operations between NASA’s Robotic Refueling Mission (RRM) experiment and the Canadian Space Agency’s (CSA) robotic “handyman” – the Dextre robot. Dextre is officially dubbed the SPDM or Special Purpose Dexterous Manipulator.
On the first day, robotic operators on Earth remotely maneuvered the 12-foot (3.7 meter) long Dextre “handyman” to the RRM experiment using the space station’s Canadian built robotic arm (SSRMS).
Dextre’s “hand” – technically known as the “OTCM” – then grasped and inspected three different specialized satellite work tools housed inside the RRM unit. Comprehensive mechanical and electrical evaluations of the Safety Cap Tool, the Wire Cutter and Blanket Manipulation Tool, and the Multifunction Tool found that all three tools were functioning perfectly.
“Our teams mechanically latched the Canadian “Dextre” robot’s “hand” onto the RRM Safety Cap Tool (SCT). The RRM SCT is the first on orbit unit to use the video capability of the Dextre OTCM hand,” Cassidy explained.
“At the beginning of tool operations, mission controllers mechanically drove the OTCM’s electrical umbilical forward to mate it with the SCT’s integral electronics box. When the power was applied to that interface, our team was able to see that on Goddard’s large screen TVs – the SCT’s “first light” video showed a shot of the tool within the RRM stowage bay (see photo).
“Our team burst into a shout out of “Yeah!” to commend this successful electrical functional system checkout.”
Dextre then carried out assorted tasks aimed at testing how well a variety of representative gas fittings, valves, wires and seals located on the outside of the RRM module could be manipulated. It released safety launch locks and meticulously cut two extremely thin satellite lock wires – made of steel – and measuring just 20 thousandths of an inch (0.5 millimeter) in diameter.
“The wire cutting event was just minutes in duration. But both wire cutting tasks took approximately 6 hours of coordinated, safe robotic operations. The lock wire had been routed, twisted and tied on the ground at the interface of the Ambient Cap and T-Valve before flight,” said Cassidy.
This RRM exercise represents the first time that the Dextre robot was utilized for a technology research and development project on the ISS, a major expansion of its capabilities beyond those of robotic maintenance of the massive orbiting outpost.
Video Caption: Dextre’s Robotic Refueling Mission: Day 2. The second day of Dextre’s most demanding mission wrapped up successfully on March 8, 2012 as the robotic handyman completed his three assigned tasks. Credit: NASA/CSA
Altogether the three days of operations took about 43 hours and proceeded somewhat faster than planned, because the work ran about as close to nominal as could be hoped.
“Days 1 and 2 ran about 18 hours,” said Charles Bacon, the RRM Operations Lead/Systems Engineer at NASA Goddard, to Universe Today. “Day 3 ran approximately 7 hours since we finished all tasks early. All three days baselined 18 hours, with the team working in two shifts. So the time was as expected, and actually a little better since we finished early on the last day.”
“For the last several months, our team has been setting the stage for RRM on-orbit demonstrations,” Cassidy told me. “Just like a theater production, we have many engineers behind the scenes who have provided development support and continue to be a part of the on-orbit RRM operations.”
“At each stage of RRM—from preparation, delivery, installation and now the operations—I am taken aback by the immense efforts that many diverse teams have contributed to make RRM happen. The Satellite Servicing Capabilities Office at NASA’s Goddard Space Flight Center teamed with Johnson Space Center, Kennedy Space Center (KSC), Marshall Space Flight Center and the Canadian Space Agency control center in St. Hubert, Quebec to make RRM a reality.”
“The success of RRM operations to date on the International Space Station (ISS) using Dextre is a testament to the excellence of NASA’s many organizations and partners,” Cassidy explained.
The three-day “Gas Fittings Removal task” was an initial simulation to practice techniques essential for robotically fixing malfunctioning satellites and refueling otherwise nominally operating satellites, hopefully extending their performance lifetimes by several years.
Ground-based technicians use the fittings and valves to load all the essential fluids, gases and fuels into a satellite’s storage tanks prior to launch; the fittings are then sealed, covered and normally never accessed again.
“The impact of the space station as a useful technology test bed cannot be overstated,” says Frank Cepollina, associate director of the Satellite Servicing Capabilities Office (SSCO) at NASA’s Goddard Space Flight Center in Greenbelt, Md.
“Fresh satellite-servicing technologies will be demonstrated in a real space environment within months instead of years. This is huge. It represents real progress in space technology advancement.”
Four more upcoming RRM experiments tentatively set for this year will demonstrate the ability of a remote-controlled robot to remove barriers and refuel empty satellite gas tanks in space thereby saving expensive hardware from prematurely joining the orbital junkyard.
The timing of future RRM operations can be challenging and depends on the availability of Dextre and the SSRMS arm, which are also heavily booked for many other ongoing ISS operations such as spacewalks, maintenance activities and science experiments, as well as berthing and unloading a steady stream of critical cargo resupply ships such as the Progress, ATV, HTV, Dragon and Cygnus.
Flexibility is key to all ISS operations. And although the station crew is not involved with RRM, their activities can still be affected by it.
“While the crew itself does not rely on Dextre for their operations, Dextre ops can indirectly affect what the crew can or can’t do,” Bacon told me. “For example, during our RRM operations the crew cannot perform certain physical exercise activities because of how that motion could affect Dextre’s movement.”
Here is a list of forthcoming RRM operations – pending ISS schedule constraints:
Refueling (summer 2012) – After Dextre opens up a fuel valve that is similar to those commonly used on satellites today, it will transfer liquid ethanol into it through a sophisticated robotic fueling hose.
Thermal Blanket Manipulation (TBD 2012) – Dextre will practice slicing off thermal blanket tape and folding back a thermal blanket to reveal the contents underneath.
Electrical Cap Removal (TBD 2012) – Dextre will remove the caps that would typically cover a satellite’s electrical receptacle.
http://youtu.be/LboVN38ZdgU
RRM was carried to orbit inside the cargo bay of Space Shuttle Atlantis during July 2011 on the final shuttle mission (STS-135) of NASA’s three decade long shuttle program and then mounted on an external work platform on the ISS backbone truss by spacewalking astronauts. The project is a joint effort between NASA and CSA.
“This is what success is all about. With RRM, we are truly paving the way for future robotic exploration and satellite servicing,” Cassidy concluded.
…….
March 24 (Sat): Free Lecture by Ken Kremer at the New Jersey Astronomical Association, Voorhees State Park, NJ at 8:30 PM. Topic: Atlantis, the End of America’s Shuttle Program, RRM, Orion, SpaceX, CST-100 and the Future of NASA Human & Robotic Spaceflight
In an interesting case of science fiction becoming a reality, NASA has been testing its SPHERES project over the past few years. The SPHERES project (Synchronized Position Hold, Engage, Reorient, Experimental Satellites) involves spherical satellites about the size of a bowling ball. Used inside the International Space Station, the satellites test autonomous rendezvous and docking maneuvers. Each individual satellite features its own power, propulsion, computers and navigational support systems.
The SPHERES project is the brainchild of David Miller (Massachusetts Institute of Technology). Miller was inspired by the floating remote “droid” that Luke Skywalker used to help hone his lightsaber skills in Star Wars. Since 2006, a set of three SPHERES satellites, built by Miller and his students, has been onboard the International Space Station.
Since lightsabers are most likely prohibited onboard the ISS, what practical use have these “droids” been to space station crews?
The first SPHERES satellite was tested during Expedition 8 and Expedition 13, with a second unit delivered to the ISS by STS-121, and a third delivered by STS-116. The crew of ISS Expedition 14 tested a configuration using three of the SPHERES satellites. Since their arrival, over 25 experiments have been performed using SPHERES. Until recently, the tests used pre-programmed algorithms to perform specific functions.
“The space station is just the first step to using remotely controlled robots to support human exploration,” said Chris Moore, program executive in the Exploration Systems Mission Directorate at NASA Headquarters in Washington. “Building on our experience in controlling robots on station, one day we’ll be able to apply what we’ve learned and have humans and robots working together everywhere from Earth orbit, to the Moon, asteroids, and Mars.”
In November, the SPHERES satellites were upgraded with “off-the-shelf” smartphones by using an “expansion port” Miller’s team designed into each satellite.
“Because the SPHERES were originally designed for a different purpose, they need some upgrades to become remotely operated robots,” said DW Wheeler, lead engineer in the Intelligent Robotics Group at Ames.
Wheeler added, “By connecting a smartphone, we can immediately make SPHERES more intelligent. With the smartphone, the SPHERES will have a built-in camera to take pictures and video, sensors to help conduct inspections, a powerful computing unit to make calculations, and a Wi-Fi connection that we will use to transfer data in real-time to the space station and mission control.”
In order to make the smartphones safer to use onboard the station, the cellular communications chips were removed, and the lithium-ion battery was replaced with AA alkaline batteries.
By testing the SPHERES satellites, NASA can demonstrate how the smart SPHERES can operate as remotely operated assistants for astronauts in space. NASA plans additional tests in which the compact assistants will perform interior station surveys and inspections, along with capturing images and video using the smartphone camera. Additional goals for the mission include the simulation of free-flight excursions, and possibly other, more challenging tasks.
“The tests that we are conducting with Smart SPHERES will help NASA make better use of robots as assistants to and versatile support for human explorers — in Earth orbit or on long missions to other worlds and new destinations,” said Terry Fong, project manager of the Human Exploration Telerobotics project and Director of the Intelligent Robotics Group at NASA’s Ames Research Center in Moffett Field, Calif.