BACKGROUND
SwRI is developing complex, large-scale automation systems that require high-accuracy “hand-eye coordination” between a vision sensor and a robot. These vision systems, often 3D point cloud sensors, capture accurate surface models of a work object that are then used to guide precision robotic operations. Because multiple scans are collected from various perspectives, combining the individual data sets into an accurate representation of the entire work object requires accurate kinematic mechanisms and sensing, as well as accurate calibration between systems. Without these, the scans are misaligned and the robot cannot perform high-accuracy operations such as plasma cutting or metrological quality inspection. The goal of this project was to use vision sensors to refine the kinematic models of the mechanical components of robotic systems, enabling accurate sensor-to-robot coordination within a work cell.
APPROACH
In the first phase of this project, SwRI developed software tools to test, analyze, and quantify the results of our calibration algorithms. First, we improved our existing calibration simulation framework by adding the capability to produce controllable simulation data for testing the calibration algorithms. Our team then added a covariance analysis capability to the existing calibration algorithms to measure how strongly the effects of two or more variables on the result of an optimization problem are correlated. Finally, we developed tools for quantifying the errors present in the various aspects of the calibration routines, including tools to estimate sensor intrinsic noise, camera intrinsic calibration accuracy, and calibration feature correspondence accuracy. Together, these tools can be used to provide a lower bound on the expected accuracy of a calibration and to pinpoint specific sources of error.
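The covariance analysis concept can be illustrated with a toy problem: the sketch below fits a circle to noisy points drawn from a short arc and recovers a parameter correlation matrix from the least-squares Jacobian. The circle-fit model, variable names, and use of SciPy are illustrative assumptions only and are not drawn from SwRI's calibration library.

    import numpy as np
    from scipy.optimize import least_squares

    # Hypothetical example: fit a circle (center cx, cy and radius r) to noisy
    # points, then inspect the parameter covariance and correlation the same
    # way a calibration covariance analysis would.

    def residuals(params, pts):
        cx, cy, r = params
        return np.hypot(pts[:, 0] - cx, pts[:, 1] - cy) - r

    rng = np.random.default_rng(0)
    theta = rng.uniform(0.0, np.pi / 4.0, 50)          # points on a short arc
    pts = np.column_stack([2.0 + 1.5 * np.cos(theta), 1.0 + 1.5 * np.sin(theta)])
    pts += rng.normal(scale=0.01, size=pts.shape)       # simulated sensor noise

    result = least_squares(residuals, x0=[0.0, 0.0, 1.0], args=(pts,))

    # Covariance estimate: sigma^2 * (J^T J)^-1, where sigma^2 is estimated
    # from the residual variance at the solution.
    dof = result.fun.size - result.x.size
    sigma2 = np.sum(result.fun ** 2) / dof
    covariance = sigma2 * np.linalg.inv(result.jac.T @ result.jac)

    # Correlation matrix: large off-diagonal values flag parameters whose
    # effects on the cost function cannot be distinguished from one another.
    std = np.sqrt(np.diag(covariance))
    correlation = covariance / np.outer(std, std)
    print(correlation)

In this toy case, the short arc makes the circle's center and radius strongly correlated; the same diagnostic flags poorly observable parameter combinations in a calibration problem.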
In the second phase, the team developed an algorithm for performing kinematic calibration on a robot with an arbitrary joint configuration. SwRI’s algorithm solves a generic problem in which two robots with inaccurately known kinematic parameters are positioned relative to one another by an inaccurately known 6 degree-of-freedom transform. One robot holds a calibration target, and the second robot holds a sensor used to observe the target. Input data for the calibration algorithm is generated by moving each robot to various positions and observing the calibration target with the sensor. The calibration algorithm uses this data to optimize the robots' kinematic parameters and the transformation between the robots, yielding a more accurate model of each robot's kinematics and a more accurate estimate of how the robots are positioned relative to one another. After developing the calibration algorithm, our staff also performed the calibration on a real multi-robot work cell, using both a metrology-grade sensing system and the work cell's own vision system.
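The structure of this optimization can be sketched in a reduced, planar form: the example below jointly estimates a two-link arm's link lengths and a stationary camera's pose from observations of a flange-mounted target, assuming SciPy's least_squares solver. All dimensions, values, and function names are hypothetical; the real algorithm operates on full 6 degree-of-freedom kinematics for both robots.

    import numpy as np
    from scipy.optimize import least_squares

    def forward_kinematics(lengths, q):
        # Planar two-link FK: target position in the robot base frame.
        l1, l2 = lengths
        return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                         l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

    def to_camera_frame(cam_pose, point):
        # Express a base-frame point in the camera frame (planar rigid transform).
        tx, ty, yaw = cam_pose
        c, s = np.cos(yaw), np.sin(yaw)
        d = point - np.array([tx, ty])
        return np.array([c * d[0] + s * d[1], -s * d[0] + c * d[1]])

    def residuals(params, joint_sets, observations):
        # Residual: predicted minus observed target position for each robot pose.
        lengths, cam_pose = params[:2], params[2:]
        res = []
        for q, obs in zip(joint_sets, observations):
            res.extend(to_camera_frame(cam_pose, forward_kinematics(lengths, q)) - obs)
        return np.array(res)

    # Simulated measurements generated from "true" parameters plus sensor noise.
    true_lengths = np.array([0.50, 0.35])
    true_cam = np.array([1.0, -0.2, 0.1])
    rng = np.random.default_rng(1)
    joint_sets = rng.uniform(-1.5, 1.5, size=(30, 2))
    observations = [to_camera_frame(true_cam, forward_kinematics(true_lengths, q))
                    + rng.normal(scale=1e-3, size=2) for q in joint_sets]

    # Start from deliberately inaccurate nominal values, as in a real calibration.
    x0 = np.array([0.52, 0.33, 0.9, -0.1, 0.0])
    solution = least_squares(residuals, x0, args=(joint_sets, observations))
    print("link lengths:", solution.x[:2])
    print("camera pose:", solution.x[2:])

Starting the solver from perturbed nominal values mirrors the real situation, in which the nominal robot models and the sensor mounting transform are only approximately known before calibration.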
ACCOMPLISHMENTS
This project resulted in the development of many new software tools that expand the capabilities and improve the robustness of our calibration software library. These tools enable users to develop calibration solutions more quickly and thoroughly, and to identify and quantify sources of error in the calibration process. We also created a kinematic calibration algorithm that successfully improved the "hand-eye coordination" between a robot and its vision system. In several test cases with real hardware, SwRI's kinematic calibration algorithm improved position and orientation accuracy by 35-55% compared to prior calibration methods. Our team also demonstrated a significant improvement in aligning discrete point cloud scans acquired from different viewpoints with the calibrated hardware. This improvement in surface reconstruction directly enables challenging manufacturing applications that rely on accurate workpiece models for process planning.