Bug reports are the currency of quality assurance. They're the product of our labour and, it could be argued, the validation of our existence. Typically, they're the sole conduit of information between the tester and the developer.
For this reason, it's pretty important we get them right. Bad bug reports are a hindrance rather than an asset and can cause all sorts of mischief:
- They misdirect the developers, wasting time
- They frustrate the developers, undermining confidence in QA
- They give a false impression of the overall status of a build
- They disrupt modelling tools designed to determine completion dates
- They hamper the rest of the QA team who have to correct them
Writing a bug report isn't an art; it's a science. What makes a good bug report might seem like an easy question to answer, and if you're approaching it from the perspective of a developer the answer is usually something along the lines of: "A good report is something that we hadn't spotted and is easy to fix."
Testers have a different perspective. We evaluate the quality of the data contained within the report itself rather than the bug it describes. This is the type of 'good bug' we're discussing now.
Ideally, a developer will open a bug report, scan it for a nanosecond and immediately understand the exact problem it describes.
If a bug has to be read multiple times while the developer's brow becomes increasingly furrowed, it's a failure. If it has to be returned to QA so that additional information can be requested, it's a catastrophe.
Uniformity breeds anonymity
There's usually a data field within each bug report displaying the name of the tester who submitted the bug. This should be the only way of identifying the tester.
Bugs must be written in a uniform style that follows a common standard.
If a developer can identify a tester by the style in which a bug has been written then either that tester is doing something wrong or all the other testers are!
Programming languages follow rigid formatting rules. If they didn't, the code wouldn't work. Bug reports aren't quite that inflexible, but they're not the place to express individuality or an artistic flair for language. We need to deliver every scrap of relevant information in as concise a manner as possible.
Everyone who reads a bug report should comprehend the contents with ease, irrespective of their role in the organisation.
It's a frustrating scenario: a subset of bugs, almost always submitted by someone outside QA, that cannot be triaged by production, or translated, reassigned, regressed and closed by QA, because each one is the equivalent of an encrypted two-line message written for only the author to understand. Notes like this may have their place, but that place isn't a bug database.
Elephants in the dark
You may be familiar with the parable of three blind folks attempting to conceptualise what an elephant looks like using only their sense of touch. Each individual feels a different part of the animal. Their descriptions are limited by their unique experiences and make little sense to anyone but themselves. This is a common flaw in poorly written bug reports, particularly when written by someone other than a tester (end user, developer, work experience trainee, etc.).
Finding bugs is 50% of the QA vocation. The other half is communication. If a bug can't be communicated to the developer properly, it creates a new problem rather than helping with the solution.
Bugs require a succinct and descriptive summary that's separate from the recreation steps, similar to a newspaper headline: "It's an elephant"
The description should then list all steps required to force the bug to manifest without containing any extraneous data
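To make that headline-plus-steps structure concrete, here's a minimal sketch in Python; the `BugReport` class and its field names are illustrative inventions, not any real tracker's schema:

```python
from dataclasses import dataclass, field

@dataclass
class BugReport:
    # Illustrative fields only; real trackers define their own schemas.
    summary: str      # the "headline": short and descriptive
    steps: list       # recreation steps, with nothing extraneous
    observed: str     # what actually happened, stated as measurable fact
    expected: str     # what should happen, per the design requirement
    attachments: list = field(default_factory=list)  # screenshots, video, logs

report = BugReport(
    summary="Crash when saving an empty profile",
    steps=["Launch the app", "Create a new profile",
           "Leave all fields blank", "Press Save"],
    observed="Application crashes to desktop on 10 of 10 attempts",
    expected="A validation error is shown instead of a crash",
)
```

Everything the reader needs sits in a named field; nothing relies on decoding the author's personal shorthand.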
A picture is worth a thousand poorly written bug descriptions
If the manifestation of a bug is visible to the eye, it needs a screenshot, a video or both. The same goes for anything that can be expressed using a graph, chart, audio file, results list, and so on.
When you're reproducing a bug, the fault is often conspicuously evident because you're staring directly at it. This leads to the temptation to skip capturing it on video; it's so obvious, right?
Wrong. Developers shouldn't have to fire up the software and recreate the bug themselves. The need to do so often stems from a lack of appropriate imagery attached to the bug report.
When someone attempts to explain something, and your reaction is a look of incomprehension, your next response is almost always: "Show me." This is never truer than in a bug report. Upload those attachments!
Bugs are wounds in your software
Suffering a "minor" wound doesn't necessarily mean it's low priority, and occasionally a "major" wound isn't of the greatest urgency to address.
Hospitals evaluate incoming patients this way, and we should triage incoming bugs using the same method: assigning each new bug a priority rating based on all the information the tester has submitted.
Severity is an unchangeable technical classification of a bug. Priority is an ever-changing business decision.
Severity is a value assigned by the tester and best practice is to use an alphanumeric scale... Avoid using descriptive adjectives which are too open to interpretation. Come up with a set of values and define them. For instance, all crashes might be "class A" and all typos might be "class C." The important thing is that the values don't change once you've determined your range.
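As a sketch of that idea, assuming a simple A/B/C range (the class definitions below are invented examples), the scale can be pinned down in code so the values never drift:

```python
from enum import Enum

class Severity(Enum):
    # Hypothetical scale: once chosen, these definitions don't change.
    A = "Crash, hang, or data loss"
    B = "Feature broken or not working as designed"
    C = "Cosmetic issue, e.g. a typo"

def classify(description: str) -> Severity:
    """Toy classifier: every bug maps to exactly one fixed class."""
    text = description.lower()
    if "crash" in text or "hang" in text:
        return Severity.A
    if "typo" in text:
        return Severity.C
    return Severity.B
```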
Priority is a value assigned by the developer depending on how urgent it is to the business to address the bug. Testers can't choose a priority rating because they don't know how important that bug is to you. The role of the tester is to include enough information so that the default assignee of all new bugs can triage them quickly, allocate a priority rating, and assign the bug to the most appropriate developer.
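The split can be sketched as follows: severity arrives fixed with the report, while priority is computed from business context at triage time. The routing table, component names, and `triage` helper here are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class NewBug:
    summary: str
    severity: str   # tester-assigned technical class, e.g. "A", "B", "C"
    component: str  # helps route the bug to the right developer

# Hypothetical owner-per-component routing table.
OWNERS = {"renderer": "dana", "audio": "lee", "ui": "sam"}

def triage(bug: NewBug, release_blockers: set) -> tuple:
    """Priority is a business decision: it depends on what currently
    blocks the release, not only on the fixed technical severity."""
    priority = 1 if bug.severity == "A" or bug.component in release_blockers else 2
    assignee = OWNERS.get(bug.component, "triage-queue")
    return priority, assignee
```

The same class-B bug gets priority 1 while its component blocks the release and priority 2 otherwise; its severity never changes.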
Use proper bug management software
Emailing is for emails. DMs are for flirting. Twitter is the internet equivalent of shouting out of a window. None of these are appropriate tools for reporting bugs.
Traceability and ownership are, without exaggeration, among the most important aspects of a bug's lifecycle. There is no shortage of excellent bug management software available, often inexpensive or free. You may even find that a plugin is available to adapt one of your existing software tools for bug reporting.
Your profession is the creation of software. This is a tool that you need in order to do that job the best way possible. Successful software releases that shunned a dedicated bug database do exist, but for every one of those there are a thousand that either failed or went through an easily avoidable struggle, burning through a lot of needless additional work on their way to success.
"But it's yet another thing we have to manage and log in to" is an argument that's invalidated very quickly by the downsides of not using one.
Prime the bug cannon and prepare to fire!
The worst bug report is the one that was never written. Sometimes this is because a bug wasn't found, which can be attributable to poorly written or missing test cases, not enough testing, or substandard/untrained testers (devs fall squarely into this category!).
Worse than this scenario is an unreported bug that was actually known about.
A tester says: "Oh, yeah, I didn't write that one up because the developer said they were already working on it." Unacceptable... Bug it regardless. Traceability is paramount.
A tester says: "Ah, well, I thought it was already bugged." They should have confirmed their suspicion by searching the bug database before making the call not to report the bug.
A tester says: "Well I couldn't figure out whether it was a bug or by design." Fine, but bug it anyway. The business risk of a bug going unreported utterly eclipses the risk of causing a developer the trivial inconvenience of triaging a bug and waiving it "as designed."
If in doubt: bug it!
Ambiguity is a worthless commodity on the bug stock exchange and you shouldn't buy it: bugs must exclusively be composed of factual statements.
I shudder when I read the words "I think" in the recreation steps of a bug report. No one cares what you think, only what you know. You either know, or you don't know. Is it measurable and provable?
Validity is born of things that can be demonstrably proven through measurement and repetition.
Expressions along the lines of "it happens for a few seconds" or "I saw it briefly" or "it manifests some of the time" are unacceptable. Measure it. How many seconds? Define "briefly." Record a percentage of how frequently it manifests.
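Reproduction frequency, for example, is easy to quantify: run the recreation steps a fixed number of times and report a rate. A sketch, where `attempt_reproduction` is a stand-in for actually driving the software:

```python
def reproduction_rate(attempt_reproduction, trials=20):
    """Run the repro steps `trials` times; return the fraction of runs
    in which the bug manifested."""
    hits = sum(1 for _ in range(trials) if attempt_reproduction())
    return hits / trials

# Stand-in for the real software: here the bug manifests on every third run.
runs = iter(range(20))
rate = reproduction_rate(lambda: next(runs) % 3 == 0, trials=20)
print(f"Manifests in {rate:.0%} of runs")  # prints "Manifests in 35% of runs"
```

"Manifests in 35% of runs over 20 attempts" is a measurable, repeatable claim; "it happens some of the time" is not.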
Testers' perceptions are what lead them to report bugs, but paradoxically those perceptions have no place in a bug report. Facts, numbers, stats, data. Nothing abstract or ambiguous should factor into it.
Statements identifying a fault without expressing why it's been determined as a fault must be outlawed too. "This asset is the wrong colour," "the resolution is wrong," "the menu doesn't work" are valueless in isolation and open to counter-argument. We must quote the design requirement or highlight a conflicting comparison with an identical feature elsewhere in the software.
To quote the teacher that introduced me to mathematics at age 6: "Show your workings!"