Here is an overview of the process the RAD Studio team uses to handle bugs reported by customers, and where things stand today. In recent years, the effort devoted to fixing bugs and issues reported by customers has increased significantly, and we have numbers from our internal bug tracking system that can shed some light on it.
Before we get to those numbers, however, it is important to understand how the RAD Studio team tracks and manages bugs, and how they are categorized and processed. I won't go into every detail, but an overview will help explain the rest of the story.
From Quality Central to Quality Portal
For many years, the customer-facing tool for reporting bugs was Quality Central, hosted at http://qc.embarcadero.com. This was an internally developed bug-tracking tool that mapped to the internal bug tracking system, RAID. It is still online, but it should no longer be used.
Over the years, RAID was replaced with an instance of Atlassian Jira (https://www.atlassian.com/software/jira), and Quality Central was remapped to it. Last year, the team introduced the new Quality Portal (http://quality.embarcadero.com), which is also based on Jira but with its own configuration and settings. The current flow from Quality Portal to the internal system and back is very smooth, and it is significantly improving the communication between the team and the customers reporting bugs.
Bug Flow and Status
The second thing worth knowing, before looking at the actual data, is the flow of bugs and their status over time. Here I'm describing the current system (the previous combinations of tools worked somewhat differently).
When a bug gets reported, it is copied into the internal system and validated by a QA (Quality Assurance) team member. This person might open it and send it to the proper developer; close it because the reported behavior is expected rather than a bug, or because of an error in the submitted test case; close it because it cannot be reproduced; keep it open as a feature request for future consideration; ask the reporter for more information; close it as a duplicate of an existing issue; and a couple of other scenarios.
At that point, if the bug is open, it is assigned to a developer, who can provide a fix for it, but can also suggest a temporary workaround, decide the behavior is expected, or suggest merging it with future work. An individual developer doesn't go through this process alone: there are team meetings, "bug council" meetings, and many steps to assess and re-evaluate the priority of issues over time.
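To make the triage step more concrete, here is a minimal sketch in Python, purely for illustration: the outcome names below are my own labels for the scenarios just described, not the actual statuses used in the internal system.

```python
# Hypothetical model of the triage outcomes described above; the real
# internal system has its own status names and a few more scenarios.
from enum import Enum

class TriageOutcome(Enum):
    ASSIGN_TO_DEVELOPER = "open and route to the proper developer"
    AS_DESIGNED = "close: expected behavior, not a bug"
    TEST_CASE_ERROR = "close: error in the submitted test case"
    CANNOT_REPRODUCE = "close: cannot reproduce"
    FEATURE_REQUEST = "keep open as a feature request"
    NEEDS_INFO = "ask the reporter for more information"
    DUPLICATE = "close as a duplicate of an existing issue"

# Example: a triage decision recorded for a newly reported issue.
decision = TriageOutcome.NEEDS_INFO
print(decision.value)  # -> ask the reporter for more information
```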
There are a couple of further considerations here. The first is that when a bug is closed internally, its status is not immediately reflected in the public system. The reason is simple: telling customers that a bug is closed when they cannot get the fix would be of no value. The bug is marked as fixed in the public system only when a release including the fix ships. This is why the public system shows large spikes of fixed issues followed by periods in which it seems nothing is being done. The internal system, instead, tells a more complete story.
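In rough pseudo-code, the rule works like the sketch below. This is an illustration of the policy just described, not the team's actual tooling; the release names and status strings are hypothetical.

```python
# Hypothetical sync rule: a bug fixed internally is reported as fixed on
# the public portal only once a release containing the fix has shipped.
released_builds = {"XE5", "XE6"}  # hypothetical list of shipped releases

def public_status(internal_status: str, fixed_in_build: str) -> str:
    if internal_status == "Fixed":
        if fixed_in_build in released_builds:
            return "Fixed"  # the customer can actually obtain the fix
        return "Open"       # fixed internally, but the fix has not shipped
    return internal_status  # other statuses pass through unchanged

print(public_status("Fixed", "XE7"))  # -> Open (XE7 not yet released)
print(public_status("Fixed", "XE6"))  # -> Fixed
```

When a release ships, every public issue it fixes flips to fixed at once, which is exactly what produces the spikes in the public data.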
The second consideration is that a significant number of bug reports are not actually errors in the current implementation, but requests to extend a given capability, and these are kept in the system. While internally tagged as "feature requests", they stay open and look like bugs that are not being addressed. In theory we could close them, indicating that the feature works as "currently" designed, and open a separate internal request for the enhancement.
The final consideration is that here we are looking at publicly reported bugs, but you have to consider that the majority of bugs are reported internally by the QA team, other developers, or internal users (including myself). Our goal, and our most consistent effort, is of course to fix bugs before the software is released. So the internal numbers tell a different story, but for this blog post we are focusing only on bugs reported by customers.
Let’s Get to the Data
With this picture in mind, I've recently dug up some data and graphs that help in understanding the current status and the extra effort recently put into RAD Studio quality.
Faster Resolution Time. The first graph shows the yearly average resolution time over the last 4 years, that is, how many days it takes on average to resolve an issue. Things are improving significantly, I'd say.
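If you want to compute a similar metric against a Jira instance, a sketch like the following would do it. This is not how our data was extracted: the REST search endpoint and the field names are standard Jira, but the project key "RSP", the JQL filter, and the anonymous access are all assumptions made for the example.

```python
# Illustrative sketch (not the team's actual tooling): computing yearly
# average resolution times via Jira's REST search API. The JQL filter
# and project key "RSP" are hypothetical.
from collections import defaultdict
from datetime import datetime

import requests

JIRA = "https://quality.embarcadero.com"
JQL = 'project = RSP AND resolutiondate >= "2012-01-01"'  # assumed filter

def resolved_issues():
    """Page through the Jira search results, yielding (created, resolved)."""
    start = 0
    while True:
        page = requests.get(
            JIRA + "/rest/api/2/search",
            params={"jql": JQL, "fields": "created,resolutiondate",
                    "startAt": start, "maxResults": 100},
        ).json()
        if not page["issues"]:
            break
        for issue in page["issues"]:
            fields = issue["fields"]
            if fields["resolutiondate"]:
                yield fields["created"], fields["resolutiondate"]
        start += len(page["issues"])
        if start >= page["total"]:
            break

def parse(stamp):
    # Jira timestamps look like "2014-05-01T09:30:00.000+0000"
    return datetime.strptime(stamp, "%Y-%m-%dT%H:%M:%S.%f%z")

days_by_year = defaultdict(list)
for created, resolved in resolved_issues():
    opened, closed = parse(created), parse(resolved)
    days_by_year[closed.year].append((closed - opened).days)

for year in sorted(days_by_year):
    times = days_by_year[year]
    print("%d: %.1f days on average over %d resolved issues"
          % (year, sum(times) / len(times), len(times)))
```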
70% of Reported Bugs Have Been Solved. If we consider all the bugs reported by customers over the same time frame (since 2012), we get a real picture of the effort. If you add up closed and resolved issues, they amount to 71% of the total. Many of the reopened issues are also partially addressed (the fixes might not be optimal or complete). Also, among the open issues there are 279 (at a recent count) marked as feature requests, which brings the real number of open bugs further down.
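To make the arithmetic explicit, here is the feature-request correction worked through on hypothetical totals; only the 71% share and the 279 feature requests come from the figures above, while the absolute total is a made-up number used purely to show how the adjustment works.

```python
# Hypothetical totals: only the 71% share and the 279 feature requests
# are from the post; the absolute total is invented for illustration.
total_reported = 10_000                             # assumed total
closed_or_resolved = round(total_reported * 0.71)   # 7,100 per the 71% figure
open_issues = total_reported - closed_or_resolved   # 2,900 still open
feature_requests = 279                              # open, but tagged as requests
open_bugs = open_issues - feature_requests          # 2,621 actual open bugs
print("open bugs excluding feature requests:", open_bugs)
```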
Conclusion
There is certainly much more information we can dig out of our system to show how many publicly reported bugs have been fixed over time in the various product areas. The new public bug tracking system is also making it easier to follow a bug's status, and it is ensuring better communication between customers, the quality assurance team, and the development team.
The RAD Studio team is focused on further improving the process and devoting more resources to fixing issues. The trends have been encouraging, but this doesn't mean we think our effort is good enough or that we are going to stop here. On the contrary, we see a positive trend and want to keep moving in that direction, increasing the timeliness of bug fixes, their number, and (what really matters) the overall product quality.
PS: Clarification on Closed vs. Fixed Issues
Some of the comments (including a few I didn't approve) hint at the fact that not all "fixed" issues have been specifically addressed by the team, given that some might have been duplicates or could have ended up being considered "test case errors". So I dug up some extra information. Out of that bucket of closed bugs, over 3,200 individual and distinct issues have been fixed with actual changes in the code. That is roughly half of all the issues that have been addressed.