Fuzzing in Drones and Self-Driving Cars

by 세미531 2024. 3. 24. 01:54


Fuzzing, in the context of drones and self-driving cars, refers to a technique for automatically testing these systems by feeding them unexpected or invalid data. This helps uncover bugs and vulnerabilities that might not be revealed through traditional testing methods.

Here's how it works:

1. Scenario Generation: A fuzzer creates a bunch of test cases, like driving scenarios for a self-driving car. These scenarios can involve unusual situations, like unexpected objects on the road or sensor malfunctions.

2. Mutation Power: The fuzzer doesn't just throw random data, it cleverly mutates existing scenarios to create a wider range of test cases. Imagine starting with a normal lane change scenario, then the fuzzer might modify it to include a sudden gust of wind or another car swerving into the lane.

3. Finding the Cracks: The fuzzer monitors the system's behavior during these tests. Special detection methods, called oracles, look for signs of trouble, like the car swerving off the road or the drone crashing.

4. Sharpening the Focus: As the fuzzer finds issues, it can learn from them and prioritize generating scenarios that are more likely to expose new problems. This way, the fuzzer becomes more efficient over time.
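
Taken together, the four steps above can be sketched as a small mutation-based fuzz loop. Everything here is an illustrative assumption, not a real simulator: the scenario parameters, the toy lane-keeping dynamics, and the oracle's 1.5 m threshold are all made up for the sketch.

```python
import random

def make_base_scenario():
    # Hypothetical scenario parameters a driving simulator might accept.
    return {"ego_speed_kph": 60.0, "wind_gust_mps": 0.0, "cut_in_gap_m": 30.0}

def mutate(scenario, rng):
    # Step 2: mutate an existing scenario instead of generating pure noise.
    child = dict(scenario)
    key = rng.choice(sorted(child))
    child[key] += rng.uniform(-15.0, 15.0)
    return child

def run_simulation(scenario):
    # Stand-in for a real simulator; crude toy dynamics that ignore
    # ego_speed_kph and just push the car sideways with the wind.
    offset = (scenario["wind_gust_mps"] * 0.2
              - min(scenario["cut_in_gap_m"], 30.0) * 0.01)
    return {"lane_offset_m": offset}

def oracle(result):
    # Step 3: an oracle flags misbehavior, e.g. drifting out of the lane.
    return abs(result["lane_offset_m"]) > 1.5

def fuzz(iterations=300, seed=0):
    rng = random.Random(seed)
    corpus = [make_base_scenario()]   # Step 1: seed scenario(s)
    failures = []
    for _ in range(iterations):
        child = mutate(rng.choice(corpus), rng)
        if oracle(run_simulation(child)):
            failures.append(child)    # bug-triggering scenario found
            corpus.append(child)      # Step 4: mutate near known failures
    return failures

failures = fuzz()
```

Keeping failing scenarios in the corpus (step 4) is what makes the loop sharpen over time: later mutations start from scenarios already known to be near a problem.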

Fuzzing is beneficial because:

Real-world Mimicry: It exposes the system to unexpected situations that might occur in the real world, helping identify weaknesses traditional testing might miss.

In-depth Testing: It can go beyond individual components and stress the entire system as a whole, finding bugs that emerge from interactions between different parts.

Automation Advantage: Fuzzing automates a lot of the testing process, saving time and effort compared to manual testing of every possible scenario.

Here's a research paper titled "DriveFuzz: Discovering Autonomous Driving Bugs through Driving Quality-Guided Fuzzing" that explores how fuzzing can be applied to self-driving cars.

Types of Fuzzing:

Dumb Fuzzing: This is the truly random approach. You generate data without any knowledge of the system you're testing. It's a good starting point but often not the most efficient.

Smart Fuzzing: This incorporates knowledge of the system's structure. Consider the following:
Protocol-aware: Understanding the data format a system expects (length, special characters, etc.) allows you to generate invalid input tailored to break things.
Mutation-based: Start with valid input and modify it (tweak values, delete fields, etc.). This is more likely to get past initial checks and into the core logic of the system.
Coverage-guided: The fuzzer tracks which parts of the system's code have been exercised and prioritizes inputs likely to explore new areas (potentially finding deeper bugs).
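
To make the dumb-versus-mutation-based distinction concrete, here is a minimal sketch against a hypothetical length-prefixed message format. The parser, the bit-flip mutator, and the acceptance-rate comparison are all invented for illustration, not a real protocol or tool.

```python
import random

def parse_message(data: bytes) -> bytes:
    # Toy length-prefixed protocol: [1-byte length][payload].
    if len(data) < 1:
        raise ValueError("empty message")
    if len(data) - 1 != data[0]:
        raise ValueError("length mismatch")
    return data[1:]

def dumb_input(rng, max_len=8) -> bytes:
    # Dumb fuzzing: purely random bytes, no knowledge of the format.
    return bytes(rng.randrange(256) for _ in range(rng.randrange(max_len)))

def mutated_input(rng, valid: bytes) -> bytes:
    # Mutation-based fuzzing: flip one bit of a known-valid message, so the
    # result usually still passes early checks and reaches deeper logic.
    data = bytearray(valid)
    data[rng.randrange(len(data))] ^= 1 << rng.randrange(8)
    return bytes(data)

def accept_rate(gen, n=1000) -> float:
    # Fraction of generated inputs that get past the parser's early checks.
    ok = 0
    for _ in range(n):
        try:
            parse_message(gen())
            ok += 1
        except ValueError:
            pass
    return ok / n

rng = random.Random(1)
valid = bytes([3]) + b"abc"
dumb_rate = accept_rate(lambda: dumb_input(rng))
mut_rate = accept_rate(lambda: mutated_input(rng, valid))
```

Random bytes almost never satisfy the length check, while single-bit mutations of a valid message usually do, which is exactly why mutation-based fuzzing exercises the core logic far more often than the dumb approach.
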
Defining "Wrong" Values

The notion of "wrong" in fuzzing depends on your objective:

Finding Crashes: Often, "wrong" means values completely outside the expected range (huge numbers, negative lengths, etc.). The goal is to cause the system to fail outright.

Revealing Logic Bugs: Here, "wrong" might be values that are technically valid but could trigger edge-case behavior within the system's logic.

Exploit Finding: Attackers use fuzzing to deliberately find "wrong" values that give them more control than they should have (overwriting memory to execute their own code).
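
As an illustration, for a hypothetical parameter that must fit in a single byte (0-255), the first two flavors of "wrong" look quite different. The `checked_store` target and both value lists below are contrived for the example:

```python
def checked_store(value: int) -> int:
    # Hypothetical system under test: accepts only values that fit in a byte.
    if not 0 <= value <= 255:
        raise OverflowError(f"{value} does not fit in a byte")
    return value

# Crash-oriented "wrong": far outside the expected range, aiming for outright failure.
crashers = [-1, 256, 2**31 - 1, -(2**31), 10**18]

# Logic-bug-oriented "wrong": technically valid boundary values that may
# trigger edge-case behavior (wraparound, off-by-one comparisons, etc.).
edge_cases = [0, 1, 127, 128, 254, 255]
```

The crashers should all be rejected outright; the edge cases all pass validation, so any failure they cause points to a bug in the logic behind the check rather than in the check itself.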

Efficiency in Fuzzing

It's more efficient to focus on "wrong" values that are tailored to your goals. This is where smart fuzzing comes into play. Here's how:

Learning from Prior Bugs: Analyzing past vulnerabilities in similar systems gives clues about the kinds of invalid input that might expose new ones.
Using Feedback Loops: When the fuzzer detects a crash, it analyzes what kind of input caused it. This helps the fuzzer generate more input that is likely to trigger similar problems.
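
Both ideas can be combined in a small coverage- and crash-guided loop. The buggy target function and the additive mutation strategy below are contrived stand-ins, not any real fuzzer's algorithm:

```python
import random

def system_under_test(x: int) -> tuple:
    # Contrived target with a bug hidden behind nested conditions.
    path = []
    if x > 100:
        path.append("big")
        if x % 7 == 0:
            raise RuntimeError("crash")   # the hidden bug
    return tuple(path)

def guided_fuzz(iterations=2000, seed=0):
    rng = random.Random(seed)
    corpus, seen_paths, crashes = [0], set(), []
    for _ in range(iterations):
        x = rng.choice(corpus) + rng.randrange(-50, 500)
        try:
            path = system_under_test(x)
        except RuntimeError:
            crashes.append(x)
            corpus.append(x)          # feedback: mutate near crashing inputs
            continue
        if path not in seen_paths:    # feedback: input reached new code
            seen_paths.add(path)
            corpus.append(x)          # keep it as a seed for future mutation
    return crashes
```

Inputs that hit a new execution path or a crash are fed back into the corpus, so later iterations concentrate on the neighborhoods most likely to expose similar problems, which is the efficiency gain described above.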
