Principal Investigators
Inclusive Dates 
06/19/2023 to 04/19/2024

Background 

As automated driving system (ADS) technologies progress and public road testing accelerates, new approaches to assessing the safety of these emerging transportation technologies are needed. Testing based on an ADS’s operational design domain (ODD) (i.e., the conditions under which an ADS is designed to function) and corresponding test scenarios is gradually being recognized as an appropriate assessment method for ADS-equipped vehicles and has been used in global testing programs. However, the combination of ODD elements, vehicle object and event detection and response (OEDR), and driving competencies (e.g., lane following, lane changing, merging, intersection negotiation, etc.) leads to an intractable number of test scenarios to consider. 
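The intractability follows from simple arithmetic: the full-factorial test space is the product of each variable's value count, so even modest variable sets explode combinatorially. The variable names and counts below are illustrative assumptions, not figures from the project.

```python
from math import prod

# Hypothetical categorical test variables and their value counts
# (illustrative only; not the project's actual variable catalog)
variable_levels = {
    "road_type": 5,        # ODD: highway, urban, rural, ...
    "weather": 6,          # ODD: clear, rain, fog, ...
    "lighting": 4,         # ODD: day, night, dawn/dusk, ...
    "object_class": 8,     # OEDR: pedestrian, cyclist, vehicle, ...
    "object_behavior": 5,  # OEDR: crossing, stopped, merging, ...
    "competency": 10,      # lane following, merging, intersection, ...
}

# Full-factorial test space size is the product of all value counts
total_scenarios = prod(variable_levels.values())
print(total_scenarios)  # 48000 combinations from just six variables
```

Adding even a few more variables, or finer-grained values per variable, quickly pushes the count into the hundreds of thousands, which is why prioritization and sampling are needed.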

Approach 

We explored methods to evaluate and prioritize test variables and test cases, working toward a minimum viable set of tests that can prove or disprove safety arguments for ADS and other robotic vehicles. To accomplish this task, we developed tools to collect and categorize test variables for ADS and other robotic systems, using SwRI-owned automated shuttles and High Mobility Multipurpose Wheeled Vehicles (HMMWVs) as representative vehicles to study testing of both on- and off-road platforms. We then established a framework to “score” individual test variables and multi-variable test cases based on attributes such as frequency of occurrence, complexity of testing, and impact of failure, among others, so they could be analyzed and compared. We also implemented multiple sampling methods to explore ways to generate collections of tests as subsets of the overall test space. The scores of these sample collections, along with their representative coverage of the total test space, could then be used to propose tests for testing agencies and regulators to perform. 
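One way to realize such a scoring framework is to give each test variable value a score per attribute and combine them into a weighted composite per test case. The attribute weights, scales, and example values below are illustrative assumptions, not the project's actual scoring data.

```python
# Hypothetical scoring sketch: each test variable value carries attribute
# scores (frequency of occurrence, testing complexity, impact of failure);
# a test case's composite score is the mean of its weighted variable scores.

ATTRIBUTES = ("frequency", "complexity", "impact")
WEIGHTS = {"frequency": 0.4, "complexity": 0.2, "impact": 0.4}  # assumed weights

# Per-variable-value attribute scores on a 0-1 scale (illustrative)
variable_scores = {
    ("weather", "heavy_rain"): {"frequency": 0.2, "complexity": 0.7, "impact": 0.9},
    ("competency", "merging"): {"frequency": 0.6, "complexity": 0.5, "impact": 0.8},
}

def composite_score(test_case):
    """Average the weighted attribute scores over a test case's variable values."""
    totals = []
    for value in test_case:
        attrs = variable_scores[value]
        totals.append(sum(WEIGHTS[a] * attrs[a] for a in ATTRIBUTES))
    return sum(totals) / len(totals)

case = [("weather", "heavy_rain"), ("competency", "merging")]
print(round(composite_score(case), 3))
```

Because every test case reduces to a single number on a common scale, candidate test sets can be ranked, compared, or fed into score-aware sampling.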

Accomplishments 

During this project, we implemented and applied combinatorial testing, a test case scoring framework, and test case sampling to identify and compare test sets for safety assessments. We collected and categorized test variable data for two representative robotic systems (SwRI’s automated passenger shuttle and an off-road military platform) according to their ODD, OEDR, and behavioral/driving competencies. We leveraged open-source tools with these datasets to generate comprehensive test spaces, resulting in hundreds of thousands of potential test scenarios in some cases and confirming the intractability of complete testing.

We then implemented a scoring framework, assigned scores to the test variables, and used those scores to calculate composite scores for test scenarios (each a composition of distinct test variables). We implemented both random sampling and hybrid, importance-based sampling algorithms to draw collections of test cases from the overall test space. Random sampling kept overall test space coverage high; however, it did not take full advantage of the test case scoring. The hybrid importance-based sampling algorithm retained an element of randomness while also balancing test case scores (ensuring that a range of scores was included) and test space coverage.

This hybrid importance-based sampling approach allows us to identify viable test sets for use in performance testing and safety assessments of robotic systems. While our original motivation focused on ADS-equipped road vehicles, the approach and tools have broader applicability to robotic vehicles and robotic systems in general.
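A minimal sketch of such a hybrid draw, assuming scored test cases: each pick blends a uniform weight (preserving randomness and coverage) with a score-proportional weight (favoring important cases). The blending parameter and case data are hypothetical; the project's actual algorithm may differ.

```python
import random

def hybrid_sample(cases_with_scores, k, alpha=0.5, seed=0):
    """Draw k distinct test cases without replacement.

    alpha blends uniform weighting (alpha=0, pure random sampling) with
    score-proportional weighting (alpha=1, pure importance sampling).
    """
    rng = random.Random(seed)
    pool = dict(cases_with_scores)  # remaining case -> score
    chosen = []
    while pool and len(chosen) < k:
        cases = list(pool)
        uniform = 1.0 / len(cases)
        score_total = sum(pool.values())
        # Blend uniform and score-proportional weights for each candidate
        weights = [(1 - alpha) * uniform + alpha * (pool[c] / score_total)
                   for c in cases]
        pick = rng.choices(cases, weights=weights, k=1)[0]
        chosen.append(pick)
        del pool[pick]  # sample without replacement
    return chosen

# Hypothetical scored test cases
cases = {"case_a": 0.9, "case_b": 0.2, "case_c": 0.7, "case_d": 0.4}
print(hybrid_sample(cases, k=2))
```

Sweeping `alpha` between 0 and 1 trades off coverage of the wider test space against concentration on high-scoring scenarios, which mirrors the balance described above.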