Once fuzzing has started, the dashboard provides live monitoring of the run:
Each fuzzing run is assigned a random name (confident_lehmann in this example) to make it easier to associate findings with specific runs.
From left to right, the dashboard displays three metrics.
The leftmost metric is the total number of code blocks, edges, and other coverage features reached by the current fuzz test. Our fuzz engines use several code-coverage metrics as feedback to maximize the amount of code tested, including edge coverage, edge counters, value profiles, indirect caller/callee pairs, and equal bytes.
The graph in the middle displays performance over time. Fuzzers start fast, with many executions per second. As the random inputs used for testing grow over time, each execution takes longer, which lowers throughput for long-running fuzz tests. A sudden drop in performance can also indicate bugs such as endless loops or memory exhaustion.
The rightmost metric is the number of unique corpus inputs, where each input covers a distinct code path. Corpus inputs found during a run are collected and reused in every subsequent run, so the code paths discovered in previous runs are covered again directly.
Read next: Findings for C/C++