
The 7 Deadly Sins of Software Testing: A blog about software testing.

No one is perfect, and neither is any piece of software. Embracing software testing as part of the development cycle has been a long journey for many companies, because developers are not the only ones who find it hard to admit mistakes.

Mistakes are often an indispensable part of the learning curve, but they will set you back if you keep making the same ones over and over again. In this article, we'll go through some of the mistakes that can be seen in almost every company and that need to be addressed in order to continually improve software quality.


The 7 Deadly Sins of Software Testing

Software testing is not a new concept. However, it is still a discipline that has to prove its value in the eyes of many developers and bosses.

It is also a discipline that can be hard to master due to its subjective nature. New software testers often struggle with how to do the right things in the right order with the right tools.

Fortunately, there are some common mistakes that can be avoided, as described in Elisabeth Hendrickson’s book “Explore It!”

Here are the “7 Deadly Sins of Software Testing” along with my thoughts on how to avoid them:

1) Prioritizing Speed Over Depth

Generally speaking, you should never sacrifice quality for speed. You can be fast and thorough at the same time! The key is to find ways to streamline your testing process so you can spend more time on deep exploratory testing.

2) Confusing Test Execution With Testing

This reminds me of a quote from James Bach: “The tester who says he doesn’t have time to test, doesn’t understand testing.” Don’t get caught up in too much test execution and forget about the real value and purpose of testing!

3) Focusing On Tool Development Instead Of Test Development

It’s true that having access to good tools makes testing easier, but tools are a means to an end. If most of your energy goes into building and polishing tooling, the tests those tools are supposed to support never get the attention they need.

“Code coverage” is one of the most hotly debated software metrics. It’s a very simple concept, yet the more you learn about it, the more you realize how much depth there is in this topic.

One common fundamental misconception about code coverage is that it measures test quality. It doesn’t, and it can’t. Coverage is a useful metric, but only if you know what it does and doesn’t measure.

I’ve seen teams with 80%+ code coverage spend weeks trying to debug a single defect. I’ve seen teams with 20% coverage fix bugs faster than competitors sitting at 80%. So I decided to write a quick blog post about some of the things that code coverage will not help you with (i.e., some “deadly sins” of software testing).

You’ll find some interesting ideas here, even if you don’t agree with all of them. That’s the beauty of diverse opinions about software testing: there’s always something to learn.

I particularly like the idea that testers shouldn’t specialize too much in one area. When I look back at my career, I realize that most of the projects I enjoyed were the ones where I had to learn a lot and do different kinds of testing.

Here are a few more tips:

Make sure you know your context (i.e., what is the team trying to achieve).

Don’t wait until everything is perfect before you start testing. That moment is never going to arrive! You’ll cover far more ground if you start testing early and keep doing it often.

Learn from past mistakes: have retrospectives after each release or even during the project if time allows it.
