Background
Advances in computing capabilities and the abundance of training data have sparked a surge in the use of Machine Learning (ML) techniques across many domains, including autonomous vehicles, cyber-security, and robotics. ML techniques could provide valuable new capabilities to space missions operating in Earth orbit and deep space. However, spaceflight hardware lags significantly behind commercially available terrestrial hardware, and the computational resources available on a spacecraft have not been adequate to support the processing required by ML techniques. Recent advances in spaceflight processing hardware could make the deployment of ML models onboard spacecraft a possibility. The objective of this ongoing research is to investigate and characterize the feasibility of implementing modern ML algorithms on computational hardware that can be deployed on spacecraft.
Approach
Our approach involves four primary technical tasks:
(1) analyze existing hardware platforms and select platforms to test based on key criteria, including power consumption, processing speed, and radiation tolerance;
(2) develop a baseline ML model that detects crater rims in image data and establish a benchmark for model accuracy, model size, and execution speed (a minimal sketch of such a model follows this list);
(3) optimize the baseline ML model to reduce both computation time and memory footprint; and
(4) deploy the optimized ML model onto the selected platforms, including radiation-tolerant Central Processing Unit (CPU), Field Programmable Gate Array (FPGA), and Graphics Processing Unit (GPU) options, and compare model performance on metrics including speed, power, and cost.
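To make task (2) concrete, below is a minimal sketch of a baseline crater-rim segmentation model. The architecture, layer sizes, class name (CraterRimNet), and 256x256 grayscale input are illustrative assumptions, not the project's actual model; any small encoder-decoder network producing a per-pixel rim/no-rim prediction would fill the same role.

import torch
import torch.nn as nn

class CraterRimNet(nn.Module):
    """Tiny encoder-decoder producing a per-pixel rim/no-rim logit map."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # downsample 2x
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),  # upsample 2x
            nn.Conv2d(16, 1, 1),                   # one logit per pixel
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = CraterRimNet()
image = torch.randn(1, 1, 256, 256)                # one grayscale 256x256 tile
logits = model(image)                              # shape: (1, 1, 256, 256)
rim_mask = torch.sigmoid(logits) > 0.5             # binary crater-rim mask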
Accomplishments
Thus far, we have completed the first three tasks of the project. Suitable CPU, FPGA, and GPU platforms have been selected for testing the baseline ML model. The baseline ML model has been successfully trained on images of lunar craters, achieving a pixel-wise accuracy of 89%. We have implemented several model optimizations, including quantization and weight pruning, and measured their effects on accuracy, model size, and execution speed. We have successfully executed the model on the CPU platform and are in the process of deploying it on the FPGA and GPU platforms.
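The sketch below illustrates how optimizations like these can be applied and measured, using the hypothetical CraterRimNet from the Approach section. The magnitude-based weight pruning uses the real torch.nn.utils.prune API; the 50% pruning ratio is an illustrative assumption, FP16 precision reduction stands in here for the project's quantization step, and the target mask is placeholder data, so the printed numbers are not the project's results.

import io
import torch
import torch.nn.utils.prune as prune

def model_size_bytes(model: torch.nn.Module) -> int:
    """Serialized size of the weights, a proxy for onboard memory footprint."""
    buffer = io.BytesIO()
    torch.save(model.state_dict(), buffer)
    return buffer.getbuffer().nbytes

def pixel_accuracy(logits: torch.Tensor, target: torch.Tensor) -> float:
    """Fraction of pixels whose predicted rim/no-rim label matches the target."""
    pred = torch.sigmoid(logits) > 0.5
    return (pred == target.bool()).float().mean().item()

model = CraterRimNet()                     # sketch model from the Approach section
image = torch.randn(1, 1, 256, 256)
target = torch.zeros(1, 1, 256, 256)       # placeholder ground-truth rim mask

print(f"baseline size: {model_size_bytes(model)} bytes")

# Weight pruning: zero the 50% smallest-magnitude weights in each conv layer.
# Note the dense weight tensors keep their shape; pruning trades accuracy for
# sparsity that downstream tooling can exploit.
for module in model.modules():
    if isinstance(module, (torch.nn.Conv2d, torch.nn.ConvTranspose2d)):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")     # bake the zeros into the weight tensor

with torch.no_grad():
    print(f"pruned accuracy: {pixel_accuracy(model(image), target):.2%}")

# Precision reduction (FP32 -> FP16) halves the stored weight size.
model = model.half()
print(f"optimized size: {model_size_bytes(model)} bytes")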