Building Quality Software Applications

Testing is an excellent means to build confidence in the quality of software before it’s deployed in a data center or released to customers. It’s good to have confidence before you turn an application loose on the users, but why wait until the end of the project? The most efficient form of quality assurance is building software the right way, right from the start. What can software testing, software quality, and software engineering professionals do, starting with the first day of the project, to deliver quality applications?

The first step in building a quality application is to know what you need to build. An amazingly large number of projects get underway without clarity amongst the project stakeholders about what the requirements are. According to Capers Jones’ studies, as many as 45% of defects are introduced in specifications. One working definition for quality is “fitness for use.” If we’re unclear on the intended uses, how can we build something that is fit for use? Not only do we need some specification of the requirements, whether formal or informal, but we should also conduct a thorough project stakeholder review of this specification to look for defects and to build consensus and understanding.

Another important early step is properly organizing the project. The overall approach to application development is the software development lifecycle (SDLC) model. There are four main varieties of SDLC in common use today:

  1. Sequential (also called waterfall or V-model): In this approach, the team proceeds through a sequence of phases, starting with requirements, then design, then implementation, and then multiple levels of testing. This model works best when you can specify requirements that will change very little if at all over the course of the project. It also works best when you can plan the project with great accuracy, which typically means it’s similar to a project the team’s done before.
  2. Iterative (also called incremental or evolutionary): In this approach, the high-level requirements are grouped together into iterations (or increments), often based on technical risk, business importance, or both. The system is then designed, built, and tested group by group. This model works well if you need to deliver the most important features by a rigid deadline, but can accept some features arriving later. This model can tolerate some change in the plan (often due to uncertainty or change in requirements) and still deliver the key features on time, which is not true of the sequential model.
  3. Agile (such as Scrum and XP): In Agile approaches, each iteration is compressed to as short as two weeks. Documentation is minimized and change is expected from one iteration to the next, and within each iteration. Various rules help prevent devolution into churn and chaos. This model works when applied with discipline, and its emphasis on accommodating change allows it to produce results even in rapidly evolving situations.
  4. Code-and-fix: This approach is actually the absence of an approach. It involves starting the development of the application without any requirements, without a clear plan, without anything but a deadline, in many cases. This model can only work for the simplest, shortest, and least risky of development projects.

Now, the first three of these models exhibit significant variation in practice. You should feel free to intelligently tailor the model to your specific needs, but beware of violating aspects of the model that other parts of it depend on.

With the project properly organized and the requirements clearly understood (whether for the whole project or only for this iteration), design and coding can start. Of course, coding presents not only the opportunity to create great new features, but also the risk that the programmer will create great big bugs. To mitigate this risk, there are three things every programmer should do with every piece of code she writes:

  1. Unit testing: The programmer should test every line of code, every branch, every condition, and every loop. Higher levels of testing such as system test often touch half (or less) of the code, and any untested code is a potential hiding place for bugs. New tools, both commercial and freeware, make the job of unit testing much easier than it was in the past. (A brief sketch of branch- and loop-level unit tests follows this list.)
  2. Static analysis: Even code that passes unit tests can still contain latent defects, maintainability problems, and security vulnerabilities. Static analysis can cheaply and quickly find bugs that would take hours to find and remove during higher levels of testing. The programmer now has a wide variety of tools available to help with this task, too. (A second sketch after this list shows the kind of latent defect these tools catch.)
  3. Code review: Once a given unit of code is written, tested, and analyzed, having a walkthrough or technical review of the code among the programming team is a great way to catch most of the remaining bugs and to ensure good understanding of how the program works across the entire team. Studies at Motorola show that as few as three experienced programmers (including the author), following a rigorous inspection process, can find as many as 90% of remaining bugs.
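
To make the unit-testing step concrete, here is a minimal sketch in Python using the standard unittest module. The bulk_discount function and its tests are hypothetical examples, not drawn from any real project; they simply show what it looks like to exercise every branch of a condition and the zero-, one-, and many-iteration cases of a loop.

    # A hypothetical function and its unit tests, using Python's built-in
    # unittest module. The goal is to cover every branch and loop case.
    import unittest

    def bulk_discount(prices, threshold=100.0):
        """Apply a 10% discount to any price at or above the threshold."""
        discounted = []
        for price in prices:               # loop under test
            if price >= threshold:         # branch under test
                discounted.append(round(price * 0.9, 2))
            else:
                discounted.append(price)
        return discounted

    class BulkDiscountTest(unittest.TestCase):
        def test_empty_list_runs_loop_zero_times(self):
            self.assertEqual(bulk_discount([]), [])

        def test_price_below_threshold_takes_else_branch(self):
            self.assertEqual(bulk_discount([50.0]), [50.0])

        def test_price_at_threshold_takes_discount_branch(self):
            self.assertEqual(bulk_discount([100.0]), [90.0])

        def test_mixed_prices_exercise_both_branches(self):
            self.assertEqual(bulk_discount([20.0, 200.0]), [20.0, 180.0])

    if __name__ == "__main__":
        unittest.main()

Each test name records which branch or loop case it covers, which makes gaps in coverage easy to spot during review.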

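A second small, hypothetical illustration shows why static analysis complements unit testing: the function below passes a simple test yet carries a classic latent defect (a mutable default argument shared across calls), the kind of pattern that analyzers such as pylint commonly flag.

    # Hypothetical example of a latent defect that a passing unit test can miss
    # but that static analysis tools typically flag.

    def add_tag(tag, tags=[]):      # defect: the default list is created once
        tags.append(tag)            # and silently reused on every call
        return tags

    assert add_tag("urgent") == ["urgent"]                  # first call passes
    assert add_tag("archived") == ["urgent", "archived"]    # state has leaked

    # A safer version, which the analyzer no longer complains about:
    def add_tag_fixed(tag, tags=None):
        if tags is None:
            tags = []
        tags.append(tag)
        return tags
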
We can be very confident indeed in each unit of code if programmers go through these three steps prior to checking their code into the source code repository.