Background
As development and deployment of automated driving system (ADS) technologies progress and public road testing accelerates, new approaches are needed to assess the safety of these emerging transportation technologies. Testing based on an ADS’s operational design domain (ODD) (i.e., the operating conditions under which an ADS is designed to function) and corresponding test scenarios is increasingly recognized as an appropriate assessment method for ADS-equipped vehicles and has been used in testing programs worldwide; however, the combinations of ODD elements with vehicle object and event detection and response (OEDR) capabilities and driving competencies (e.g., lane following, lane changing, merging, and intersection negotiation) lead to an intractable number of test scenarios to consider.
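The scale of this combinatorial explosion can be illustrated with a back-of-the-envelope calculation. The variable names and value counts below are hypothetical, not drawn from any specific ADS test program; the point is only that the test space grows multiplicatively with each added variable:

```python
from math import prod

# Hypothetical counts of discrete values for a handful of test
# variables, grouped by category (ODD, OEDR, driving competency).
variables = {
    "weather": 5,          # ODD: clear, rain, snow, fog, ...
    "road_type": 4,        # ODD: highway, arterial, residential, ...
    "speed_band": 4,       # ODD
    "lighting": 3,         # ODD: day, dusk, night
    "object_type": 6,      # OEDR: pedestrian, cyclist, vehicle, ...
    "object_behavior": 5,  # OEDR
    "traffic_density": 4,  # OEDR
    "competency": 7,       # lane following, merging, intersection, ...
}

# Exhaustive testing requires one scenario per combination of values.
total_scenarios = prod(variables.values())
print(total_scenarios)  # already in the hundreds of thousands
```

Even this small, coarsely discretized set of variables yields 201,600 combinations; realistic variable lists are far longer and more finely graded.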
Approach
We explored methods to evaluate and to prioritize test variables and test cases, working toward a minimally viable set of tests that proves or disproves safety arguments for an ADS (and extends to other robotic vehicles). To accomplish this, we developed tools to collect and to categorize test variables for ADS and other robotic systems, using SwRI-owned automated shuttles and high-mobility multipurpose wheeled vehicles (HMMWVs) as representative platforms for studying the testing of both on-road and off-road systems. We then established a framework to “score” individual test variables and multi-variable test cases on attributes such as frequency of occurrence, complexity of testing, and impact of failure, among others, so that they could be analyzed and compared. Finally, we implemented a sampling method to generate collections of tests as subsets of the overall test space. The scores of these sampled collections, along with their coverage of the total test space, can then be used to propose tests for testing agencies and regulators to perform.
Accomplishments
During this project, we implemented and applied combinatorial testing, a test case scoring framework, and test case sampling to identify and to compare test sets for safety assessments. We collected and categorized test variable data for two representative robotic systems (SwRI’s automated passenger shuttle and off-road military platform) according to their ODD, OEDR, and behavioral/driving competencies. We leveraged open-source tools with these datasets to generate comprehensive test spaces, resulting in hundreds of thousands of potential test scenarios in some cases and confirming the intractability of complete testing. We then implemented a scoring framework, assigned scores to the test variables, and used them to calculate composite scores for test scenarios (i.e., compositions of those distinct test variables). We implemented a sampling algorithm to draw collections of test cases from the overall test space. By coupling this with the scoring framework and with tools to measure the coverage of a representative test set, we can now identify viable test sets for use in performance testing and safety assessments for robotic systems. While our original motivation focused on ADS-equipped road vehicles, this approach and these tools have broader applicability to robotic vehicles and robotic systems in general.
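The sampling-and-coverage step can be sketched as follows. The coverage metric used here (the fraction of pairwise variable-value combinations exercised) is one common choice from combinatorial testing, shown as an assumption rather than the project's exact metric, and the test space is a tiny hypothetical one:

```python
import itertools
import random

# Hypothetical discrete test variables defining the full test space.
SPACE = {
    "weather": ["clear", "rain", "fog"],
    "object": ["pedestrian", "cyclist", "vehicle"],
    "competency": ["lane_follow", "merge", "intersection"],
}

def sample_test_cases(n: int, seed: int = 0) -> list[dict]:
    """Draw a random subset of the full cartesian test space."""
    rng = random.Random(seed)
    names = list(SPACE)
    full = [dict(zip(names, combo))
            for combo in itertools.product(*SPACE.values())]
    return rng.sample(full, min(n, len(full)))

def pairwise_coverage(cases: list[dict]) -> float:
    """Fraction of all variable-value pairs exercised by the sample."""
    names = list(SPACE)
    required = {(a, va, b, vb)
                for a, b in itertools.combinations(names, 2)
                for va in SPACE[a] for vb in SPACE[b]}
    covered = {(a, case[a], b, case[b])
               for case in cases
               for a, b in itertools.combinations(names, 2)}
    return len(covered & required) / len(required)

sample = sample_test_cases(9)
print(f"pairwise coverage of 9/27 cases: {pairwise_coverage(sample):.2f}")
```

Comparing a sample's composite scores against its coverage in this way is what lets a small subset of the test space stand in, defensibly, for the intractable whole.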