TL;DR: You want to know whether using multiple Automated Static Analysis Tools (ASATs) makes sense for your projects, for example, whether the additional warnings from running Checkstyle on top of FindBugs are worth the increased maintenance costs. We have implemented a prototype tool, called UAV, that visualizes this in an intuitive manner. Go check it out on an example project. Here follows the full story.
When we look at the software engineering landscape today, we see that state-of-the-art projects already make use of Automated Static Analysis Tools (ASATs) in some form or another: either as traditional tools such as JSLint, PVS-Studio, FindBugs, or Google's Error-Prone, as web services such as Coverity Scan, or as compiler-embedded analyses like the ones you get from clang. In fact, ASATs have become so numerous for any given programming language that it is difficult for us as project managers to decide which ones benefit our projects the most. While deciding on one tool might still be simple, deciding on an effective combination of tools is considerably harder, even though we know that combining multiple ASATs would unleash their potential. The complementary warnings that different ASATs can find often combine nicely: think, for example, of FindBugs' bug-finding capabilities and the code readability-centered Checkstyle.
However, many developers understandably refrain from running more than one ASAT, for two main reasons:
1. It is difficult to compare and understand the strengths of multiple ASATs on your own project. Currently, in most cases, you would have to go through the (possibly) lengthy list of findings that each tool emits. This is tedious work: since the warnings are not standardized, it is difficult to tell whether two tools indeed report different warnings or not.
2. Sifting through ASAT warnings in general is hard work. You don't want to make your life harder by including more tools than necessary. We know that many ASAT warnings are not important to developers, so finding (or configuring) an additional ASAT that reports only interesting warnings for your project is crucial. However, this is again made difficult because there is no common, automatically applicable classification of the warnings that different ASATs emit. As a result of these complications, many projects still employ only one ASAT, with practically no further customization.
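To make the classification problem above concrete, here is a minimal sketch (not UAV's actual implementation) of what mapping tool-specific warnings onto a shared category scheme could look like. The rule names are real FindBugs, PMD, and Checkstyle identifiers, but the category assignments, file paths, and line numbers are illustrative assumptions:

```python
# Hypothetical sketch: normalize warnings from several ASATs into one
# shared category scheme so overlaps become visible. The category
# labels below are our own illustrative choices, not a standard.

# Map (tool, rule) pairs to a common category.
CATEGORY_MAP = {
    ("findbugs", "NP_NULL_ON_SOME_PATH"): "error-handling",
    ("pmd", "EmptyCatchBlock"): "error-handling",
    ("checkstyle", "LineLength"): "style",
}

def normalize(tool, rule, path, line):
    """Return a warning record in a tool-independent shape."""
    category = CATEGORY_MAP.get((tool, rule), "uncategorized")
    return {"tool": tool, "category": category, "location": (path, line)}

# Made-up raw findings, as (tool, rule, file, line) tuples.
raw = [
    ("findbugs", "NP_NULL_ON_SOME_PATH", "src/Foo.java", 42),
    ("pmd", "EmptyCatchBlock", "src/Foo.java", 42),
    ("checkstyle", "LineLength", "src/Bar.java", 7),
]
warnings = [normalize(*w) for w in raw]

# Group by (category, location): when more than one tool appears in a
# group, the tools overlap there; singleton groups are complementary.
overlaps = {}
for w in warnings:
    overlaps.setdefault((w["category"], w["location"]), set()).add(w["tool"])
for key, tools in overlaps.items():
    if len(tools) > 1:
        print(key, "flagged by", sorted(tools))
```

With such a shared scheme in place, the question "do these two tools find different things on my project?" reduces to counting singleton versus multi-tool groups, which is exactly the kind of comparison that is tedious to do by hand across non-standardized warning lists.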
With UAV, the Unified ASAT Visualizer, we created an ASAT-comparison tool with an intuitive visualization that enables developers, researchers, and tool creators to compare the complementary strengths and overlaps of different Java ASATs. UAV's enriched treemap and source code views provide its users with a seamless exploration of the warning distribution, from a high-level overview down to the source code. We have evaluated our UAV prototype in a user study with ten second-year Computer Science (CS) students and a visualization expert, and tested it on large Java repositories with several thousand PMD, FindBugs, and Checkstyle warnings.
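The data behind such a treemap view can be sketched in a few lines: warning counts are rolled up the directory tree so that each node's size reflects how many warnings its subtree contains. This is only an illustrative sketch of the idea, not UAV's implementation; the file paths and counts are made up:

```python
# Hypothetical sketch of treemap sizing: accumulate per-file warning
# counts into every ancestor directory, so a package's node grows with
# the warnings of all files beneath it.
from collections import Counter
from pathlib import PurePosixPath

# Made-up per-file warning counts.
file_warnings = {
    "src/ui/Panel.java": 12,
    "src/ui/Dialog.java": 3,
    "src/core/Engine.java": 25,
}

node_sizes = Counter()
for path, count in file_warnings.items():
    p = PurePosixPath(path)
    node_sizes[str(p)] += count
    for parent in p.parents:  # e.g. src/ui, then src
        if str(parent) != ".":
            node_sizes[str(parent)] += count

# The root "src" now aggregates all warnings; "src/ui" only its two files.
print(node_sizes["src"], node_sizes["src/ui"])
```

A treemap renderer then draws one rectangle per node, sized by these counts, which is what lets a user drill down from a project-wide overview to the individual file, and from there to the flagged source lines.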
TS;WM (Too Short; Want More): We have a tool paper that goes into many of UAV's implementation details.
Project Website: https://clintoncao.github.io/uav/
Moritz Beller, Radjino Bholanath, Shane McIntosh, and Andy Zaidman: Analyzing the State of Static Analysis: A Large-Scale Evaluation in Open Source Software. In 23rd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER), Osaka (Japan), 2016.

Tim Buckers, Clinton Cao, Michiel Doesburg, Boning Gong, Sunwei Wang, Moritz Beller, and Andy Zaidman: UAV: Warnings from Multiple Automated Static Analysis Tools at a Glance. In 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER), Klagenfurt (Austria), 2017.