Are your customers, or more broadly 3rd parties, finding more bugs in your code than you are? Are your development organizations releasing code that has poor quality stamped all over it?
Recently I saw a report with this graphic in it, and my mind wandered a little. What if 25% of your bugs actually ARE discovered by your customers? A few things collide here that make the matter less simple than we'd like, and less convenient if you think you have a ready solution, but in the end it is a problem.
While it's absolutely true that customers and various other 3rd parties will come up with ways to use and abuse your applications that you can't even dream of, should this account for almost one in every four defects discovered?
The other thing to consider here is that development and testing cycles are simply too short to accommodate a full-scale testing exercise. It seems only logical that a few defects would slip through. But is 1 in 4 OK? I don't think so.
Whether you're complaining about a lack of budget or testing time, or maybe even a lack of automation to iterate through millions or billions of test cases, the fact is that way too many defects are still walking out the door into production environments.
Is the answer shorter release cycles, and more releases that are smaller? How do organizations get better at finding more bugs, in a more compressed time-frame? The answer has to be some combination of process and automation.
Setting up unit tests to ferret out unit-level bugs should be part of every developer's job, and system-level testing should happen regularly, as should traditional integration testing, where we look at how a component we're inserting or updating interacts and behaves with the environment as a whole. The problem here is that we need effective duplicates of our production environment for accurate testing environments!
So here's an interesting KPI (or Key Performance Indicator) for your overall software organization as it adjusts and evolves with the prevailing winds: is the ratio of externally to internally discovered defects rising or falling? This EDD (externally discovered defects) metric would look something like this:
EDD = [externally discovered defects] / [total defects discovered during time-window]
*externally discovered defects = a composite of quality defects, including security bugs, that are discovered outside the normal application release cycle by 3rd parties such as customers, hackers, etc.
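To make the formula concrete, here is a minimal sketch of computing EDD from a list of defect records. The record shape and the "source" field are hypothetical; adapt them to whatever your bug tracker actually exports.

```python
def edd_ratio(defects):
    """EDD = externally discovered defects / total defects in the time window.

    Each defect is assumed (hypothetically) to carry a "source" field
    marking whether it was found internally or by a 3rd party.
    """
    external = sum(1 for d in defects if d["source"] == "external")
    total = len(defects)
    return external / total if total else 0.0


# Example: 12 defects logged this quarter, 3 of them reported by
# customers or other 3rd parties.
defects = [{"source": "external"}] * 3 + [{"source": "internal"}] * 9
print(f"EDD = {edd_ratio(defects):.0%}")  # prints "EDD = 25%"
```

Tracked quarter over quarter, the same calculation gives management the trend line described below.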
Now, as a software development organization makes changes such as adopting Agile, or begins experimenting with DevOps, management can look at this and say things like, "Look, we're getting better (or worse) at releasing software because our EDD is falling (or rising)."
What do you think?