Background
This research enables robotics in space by leveraging existing SwRI robotics capabilities and transforming them to meet the current and future needs of the space industry. The space industry is pushing for industrial-style robotics capabilities to enable in-space servicing, assembly, and manufacturing (ISAM). While there have been a few early missions in this arena, the industry needs ISAM robotics that are less expensive, more flexible, and more easily reprogrammable.
Approach
This project advances technology for autonomous resident space object (RSO) characterization, on-orbit refueling, and ISAM by adapting terrestrial vision systems and path planning to the space environment.
Vision systems in space: Equip cameras and other sensing modalities with machine vision, using low-power field-programmable gate arrays (FPGAs) to perform advanced passive image processing in space for building a 3D model of an RSO and identifying its rotation.
Dynamic path planning: Use hardware and software simulation to plan robot paths that account for the variables of space operations. Standard robotic arms driven by a power-constrained processor will complete tasks while minimizing the momentum induced on the spacecraft.
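The momentum-minimization objective follows from conservation of momentum. In a simplified view that ignores external disturbances (gravity gradient, thruster firings), a free-floating spacecraft-arm system experiences no external torques, so any momentum the arm generates must be absorbed by the base:

    L_total = L_base + L_arm = constant, so ΔL_base = -ΔL_arm

Keeping the arm's generated momentum small therefore keeps the reaction on the satellite base small.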
Accomplishments
Before moving to hardware in future phases of this effort, our team has begun developing motion planning algorithms in a simulation environment called Drake. The goal of the simulation component is to simulate, plan, and command motions of a robotic arm in microgravity that minimize the momentum imparted to the satellite base.
We have created a Python software package, based on Drake, that models the dynamics of robotic manipulators maneuvering in space without gravity. The package can plan several different types of motion trajectories, measure the momentum each motion generates, and optimize the motions to keep that momentum below a target threshold.
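As a minimal illustration of the momentum-measurement step, the sketch below shows how a Drake MultibodyPlant can be configured for zero gravity and queried for the system's spatial momentum. The model file name is a hypothetical placeholder, and the actual package wraps this in additional planning and optimization layers.

import numpy as np
from pydrake.multibody.parsing import Parser
from pydrake.multibody.plant import AddMultibodyPlantSceneGraph
from pydrake.systems.framework import DiagramBuilder

# Build a manipulator model and zero out gravity to emulate orbit.
builder = DiagramBuilder()
plant, scene_graph = AddMultibodyPlantSceneGraph(builder, time_step=1e-3)
Parser(plant).AddModels("arm_on_satellite.urdf")  # hypothetical model file
plant.mutable_gravity_field().set_gravity_vector([0.0, 0.0, 0.0])
plant.Finalize()
diagram = builder.Build()

# Query the total spatial momentum about the world origin; the rotational
# component is what the planner drives below the target threshold.
context = diagram.CreateDefaultContext()
plant_context = plant.GetMyContextFromRoot(context)
momentum = plant.CalcSpatialMomentumInWorldAboutPoint(plant_context, np.zeros(3))
print("angular momentum:", momentum.rotational())
print("linear momentum: ", momentum.translational())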
The vision work is implemented on two platforms: a developer's workstation and an FPGA prototyping board. Based on our prior experience with pose tracking and structure from motion, we are evaluating a variety of feature descriptors, including SURF, SIFT, and ORB. Early results show these descriptors generate reliable keypoint matches across sequences of images. We plan to track these features on a target craft and use that information to estimate its motion, position, and orientation. For the computer vision system on space-rated hardware, we are using the Xilinx Kintex UltraScale FPGA to run the keypoint-detection algorithms that will be used to estimate the spacecraft's pose.
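On the workstation side, a minimal keypoint-matching and relative-pose loop along these lines can be prototyped with OpenCV; this is a sketch, not the flight implementation, and the image file names and camera intrinsics below are hypothetical. On the flight system, the keypoint stage would move onto the FPGA.

import cv2
import numpy as np

# Hypothetical consecutive frames of the target craft and camera intrinsics.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])  # assumed calibration, for illustration only

# Detect ORB keypoints and compute binary descriptors in each frame.
orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors by Hamming distance and keep the strongest matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix, then recover the relative rotation and
# (scale-free) translation of the camera between the two frames.
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
print("relative rotation:\n", R)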