Monday, December 12, 2011

Chapter 26
Quality Management

CHAPTER OVERVIEW AND COMMENTS

This chapter provides an introduction to software quality management and software quality assurance (SQA). It is important to have the students understand that software quality work begins before the testing phase and continues after the software is delivered. The role of metrics in software management is reinforced in this chapter.
26.1     Quality Concepts
An important concept in this section is that controlling variation among products is what quality assurance work is all about. Software engineers are concerned with controlling the variation in their processes, resource expenditures, and the quality attributes of the end products. The definitions of many quality concepts appear in this section. Students need to be familiar with these definitions, since their use in software quality work does not always match their use in casual conversation. Students also need to be made aware that customer and user satisfaction is every bit as important to modern quality work as is quality of design and quality of conformance.
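To make "controlling variation" concrete for students, the idea can be sketched numerically. The following Python fragment applies a crude two-sigma control rule to a set of review-effort figures; all numbers are invented for illustration.

```python
import statistics

# Hypothetical review-effort figures (person-hours) for six increments.
effort = [8.2, 7.9, 8.4, 8.1, 12.6, 8.0]

mean = statistics.mean(effort)
stdev = statistics.stdev(effort)

# A crude control-chart rule: flag any value more than two sample
# standard deviations from the mean as out-of-control variation.
outliers = [e for e in effort if abs(e - mean) > 2 * stdev]
print(f"mean={mean:.2f} h, stdev={stdev:.2f} h, flagged={outliers}")
```

Here the fifth increment (12.6 hours) is flagged, prompting the question quality work always asks: what changed in the process?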
Spend time discussing the indirect costs of quality: the costs associated with customer dissatisfaction, increased support, and reduced internal morale.
If time permits, you might have your students read excerpts from Crosby's Quality Is Free or Pirsig's Zen and the Art of Motorcycle Maintenance. Each contains many useful insights on quality.
26.2     Software Quality Assurance
This section describes software quality as conformance to explicitly stated requirements and standards, as well as implicit characteristics that customers assume will be present in any professionally developed software. The SQA group must look at software from the customer's perspective, as well as assessing its technical merits. The activities performed by the SQA group involve quality planning, oversight, record keeping, analysis and reporting. SQA plans are discussed in more detail later in this chapter.
26.3     Software Reviews
Arguably, software reviews are the single most important SQA mechanism for software engineering. The filter metaphor usually works well with students. Emphasize it in lecture.
It is important to point out to students that any work product (including documents) can be reviewed. Students are usually impressed by the fact that conducting timely reviews of all work products can often eliminate 80% of the defects before any testing is conducted. This message often needs to be carried to managers in the field, whose impatience to generate code sometimes makes them reluctant to spend time on reviews.
The sidebar on "Bugs, Errors and Defects" is worth noting in lecture. I use a somewhat non-standard definition in SEPA because I feel strongly that a distinction should be made between errors found before the customer gets the software and defects found afterward. Unfortunately, this distinction is not made by other authors or by standards bodies. Until I can be convinced that my logic is faulty, I intend to stick with my point of view.
The defect amplification model (Section 26.3.2) is worth considering during lecture because it helps students to appreciate the impact of software reviews on error removal.
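The defect amplification model also lends itself to a small in-class calculation. The sketch below uses invented pass-through, amplification, and detection percentages to show how defects compound when reviews are weak or absent.

```python
def step(errors_in, passed_frac, amplification, new_errors, detect_eff):
    """One step of the defect amplification model (illustrative only).
    errors_in      -- defects arriving from the previous step
    passed_frac    -- fraction of incoming defects passed through untouched
    amplification  -- extra defects spawned per amplified incoming defect
    new_errors     -- defects newly generated during this step
    detect_eff     -- fraction of all defects removed by this step's review
    """
    passed = errors_in * passed_frac
    amplified = errors_in * (1 - passed_frac) * (1 + amplification)
    return (passed + amplified + new_errors) * (1 - detect_eff)

# Hypothetical two-step pipeline (design, then code), each reviewed
# with 50% detection effectiveness; 10 defects enter from analysis.
after_design = step(10, 0.6, 0.5, 25, 0.5)
after_code = step(after_design, 0.6, 0.5, 25, 0.5)

# With no reviews at all (detect_eff = 0), far more defects reach test.
no_review = step(step(10, 0.6, 0.5, 25, 0.0), 0.6, 0.5, 25, 0.0)
print(f"with reviews: {after_code:.1f}, without: {no_review:.1f}")
```

With these assumed numbers, roughly 24 defects reach testing when reviews run at 50% effectiveness, versus about 69 with no reviews at all, which is the point of the filter metaphor.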
26.4     Formal Technical Reviews
The mechanics of conducting a formal technical review (FTR) are described in this section. Students should pay particular attention to the point that it is the work product that is being reviewed, not the producer. During lecture, you might want to do a bit of role playing to emphasize the points made in this section.
Encouraging the students to conduct formal reviews of their development projects is a good way to make this section more meaningful. Requiring students to generate review summary reports and issues lists also helps to reinforce the importance of the review activities.
26.5     Formal Approaches to SQA
This section introduces the concept of formal methods in software engineering. More comprehensive discussions of formal specification techniques and formal verification of software appear in Chapters 28 and 29.
26.6     Statistical Quality Assurance
A comprehensive discussion of statistical quality assurance is beyond the scope of a software engineering course. However, this section does contain a high level description of the process and gives examples of metrics that might be used in this type of work. The key points to emphasize to students are that each defect needs to be traced to its cause and that defect causes having the greatest impact on the success of the project must be addressed first.
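The trace-each-defect-to-its-cause idea can be demonstrated with a simple Pareto tally. The sketch below uses an invented defect log; the cause names and counts are illustrative only.

```python
from collections import Counter

# Invented defect log: one entry per defect, naming its traced cause.
causes = (["incomplete specification"] * 25 +
          ["customer misinterpretation"] * 15 +
          ["standards violation"] * 7 +
          ["data design error"] * 3)

counts = Counter(causes)
total = len(causes)

# Rank causes by impact: the "vital few" at the top get fixed first.
for cause, n in counts.most_common():
    print(f"{cause:28s} {n:3d}  ({100 * n / total:.0f}%)")
```

Students see immediately that the top two causes account for 80% of the defects, so those are addressed first.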
Because Six Sigma is widely used in industry, you might spend some lecture time on it.
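If you do cover Six Sigma, the defects-per-million-opportunities (DPMO) calculation is easy to demonstrate on the board or in code. The figures below are assumptions chosen for illustration.

```python
# Defects per million opportunities (DPMO), a core Six Sigma metric.
# Assumed figures: 8 defects found across 250 delivered units, with
# 20 opportunities for a defect per unit.
defects = 8
units = 250
opportunities_per_unit = 20

dpmo = (defects / (units * opportunities_per_unit)) * 1_000_000
print(f"DPMO = {dpmo:.0f}")   # prints: DPMO = 1600
```

True six-sigma performance corresponds to roughly 3.4 DPMO, which gives students a sense of how demanding the standard is.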
26.7     Software Reliability
Software reliability is discussed in this section. It is important to have students distinguish between software consistency (repeatability of results) and reliability (the probability of failure-free operation for a specified time period). Students should be made aware of the arguments for and against applying hardware reliability theory to software (e.g., a key point is that, unlike hardware, software does not wear out, so failures are likely to be caused by design defects). It is also important for students to be able to distinguish between software safety (identifying and assessing the impact of potential hazards) and software reliability.
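The standard reliability measures are easy to illustrate numerically. The sketch below computes MTBF and availability from assumed MTTF and MTTR figures; the hour values are invented for the example.

```python
# Availability from mean time to failure (MTTF) and mean time to
# repair (MTTR); the hour figures below are assumptions.
mttf = 68.0   # hours of failure-free operation, on average
mttr = 2.0    # hours to restore service after a failure

mtbf = mttf + mttr                        # mean time between failures
availability = mttf / (mttf + mttr) * 100

print(f"MTBF = {mtbf:.0f} h, availability = {availability:.1f}%")
```

A useful discussion point: availability depends on MTTR as well as MTTF, so faster repair improves availability even when the failure rate is unchanged.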
26.8     Mistake-Proofing for Software
This section describes the use of poka-yoke devices as mechanisms that lead to the prevention of potential quality problems or the rapid detection of quality problems introduced into a work product. Examples of poka-yoke devices are given, but students will need to see others (a web reference is given in the text).
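A concrete example often helps here. The sketch below is a hypothetical rapid-detection poka-yoke device (the function and locale data are invented): it flags message keys missing from a translated locale file before the product ships, rather than letting a customer find the gap.

```python
# A hypothetical rapid-detection poka-yoke: verify that every message
# key in the default locale also appears in a translated locale file,
# so an untranslated message is caught before release, not after.

def missing_keys(default_locale, translation):
    """Return keys present in the default locale but absent from the
    translation."""
    return set(default_locale) - set(translation)

en = {"greeting": "Hello", "farewell": "Goodbye", "error": "Oops"}
de = {"greeting": "Hallo", "farewell": "Auf Wiedersehen"}

gaps = missing_keys(en, de)
if gaps:
    print(f"untranslated keys: {sorted(gaps)}")   # flags 'error'
```

Run as part of the build, a check like this is cheap, automatic, and located at the point where the mistake is made, which is exactly the poka-yoke idea.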
26.9     The ISO 9000 Quality Standards
The ISO 9000 quality standards are discussed in this section as an example of a quality model that is based on assessing the quality of the individual processes used in the enterprise as a whole. ISO 9001:2000 is described as the quality assurance standard that contains 20 requirements that must be present in any software quality assurance system.
26.10   The SQA Plan
The major sections of an SQA plan are described in this section. It would be advisable to have students write an SQA plan for one of their own projects sometime during the course. This will be a difficult task for them. It may also help to have the students review the material in Chapters 13-15 (testing and product metrics) before beginning this assignment.

In addition to the review checklists contained within the SEPA Web site, I have also included a small sampler in the special section that follows.
Review Checklists
Formal technical reviews can be conducted during each step in the software engineering process. In this section, we present a brief checklist that can be used to assess products that are derived as part of software development. The checklists are not intended to be comprehensive, but rather to provide a point of departure for each review.
System Engineering.  The system specification allocates function and performance to many system elements. Therefore, the system review involves many constituencies that may each focus on their own area of concern. Software engineering and hardware engineering groups focus on software and hardware allocation, respectively. Quality assurance assesses system-level validation requirements, and field service examines the requirements for diagnostics. Once all reviews are conducted, a larger review meeting, with representatives from each constituency, is conducted to ensure early communication of concerns. The following checklist covers some of the more important areas of concern.
1.         Are major functions defined in a bounded and unambiguous fashion?
2.         Are interfaces between system elements defined?
3.         Have performance bounds been established for the system as a whole and for each element?
4.         Are design constraints established for each element?
5.         Has the best alternative been selected?
6.         Is the solution technologically feasible?     
7.         Has a mechanism for system validation and verification been established?
8.         Is there consistency among all system elements?
Software Project Planning.   Software project planning develops estimates for resources, cost and schedule based on the software allocation established as part of the system engineering activity. Like any estimation process, software project planning is inherently risky. The review of the Software Project Plan establishes the degree of risk. The following checklist is applicable.
1.         Is software scope unambiguously defined and bounded?
2.         Is terminology clear?
3.         Are resources adequate for scope?
4.         Are resources readily available?     
5.         Have risks in all important categories been defined?
6.         Is a risk management plan in place?
7.         Are tasks properly defined and sequenced? Is parallelism reasonable given available resources?
8.         Is the basis for cost estimation reasonable? Has the cost estimate been developed using two independent methods?
9.         Have historical productivity and quality data been used?
10.      Have differences in estimates been reconciled?
11.      Are pre-established budgets and deadlines realistic?
12.      Is the schedule consistent?
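Checklist items 8, 10, and 11 can be made concrete with a small reconciliation calculation. The sketch below compares two hypothetical independent cost estimates and flags them when they diverge by more than an assumed 10 percent threshold; both the dollar figures and the threshold are invented for illustration.

```python
# Reconciling two independent cost estimates before committing to a plan.
# Both dollar figures and the 10% threshold are invented for illustration.
loc_based_estimate = 310_000   # from a hypothetical LOC-based model
fp_based_estimate = 343_000    # from a hypothetical function-point model

average = (loc_based_estimate + fp_based_estimate) / 2
divergence = abs(loc_based_estimate - fp_based_estimate) / average

needs_reconciliation = divergence > 0.10
print(f"divergence = {divergence:.1%}, reconcile: {needs_reconciliation}")
```

When the flag is raised, the estimators revisit their scope and productivity assumptions until the two figures converge, rather than simply averaging them.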
Software Requirements Analysis. Reviews for software requirements analysis focus on traceability to system requirements and consistency and correctness of the analysis model. A number of FTRs are conducted for the requirements of a large system and may be augmented by reviews and evaluation of prototypes as well as customer meetings. The following topics are considered during FTRs for analysis:
1.         Is information domain analysis complete, consistent and accurate?
2.         Is problem partitioning complete?
3.         Are external and internal interfaces properly defined?
4.         Does the data model properly reflect data objects, their attributes and relationships?
5.         Are all requirements traceable to system level?
6.         Has prototyping been conducted for the user/customer?     
7.         Is performance achievable within the constraints imposed by other system elements?
8.         Are requirements consistent with schedule, resources and budget?
9.         Are validation criteria complete?
Software Design. Reviews for software design focus on data design, architectural design and procedural design. In general, two types of design reviews are conducted. The preliminary design review assesses the translation of requirements to the design of data and architecture. The second review, often called a design walkthrough, concentrates on the procedural correctness of algorithms as they are implemented within program modules.  The following checklists are useful for each review:
Preliminary design review
1.         Are software requirements reflected in the software architecture?
2.         Is effective modularity achieved? Are modules functionally independent?
3.         Is the program architecture factored?
4.         Are interfaces defined for modules and external system elements?
5.         Is the data structure consistent with information domain?
6.         Is data structure consistent with software requirements?
7.         Has maintainability been considered?
8.         Have quality factors (section 17.1.1) been explicitly assessed?
Design walkthrough
1.         Does the algorithm accomplish the desired function?
2.         Is the algorithm logically correct?
3.         Is the interface consistent with architectural design?
4.         Is the logical complexity reasonable?
5.         Have error handling and "anti-bugging" been specified?
6.         Are local data structures properly defined?
7.         Are structured programming constructs used throughout?
8.         Is design detail amenable to implementation language?
9.         Are operating system or language-dependent features used?
10.      Is compound or inverse logic used?
11.      Has maintainability been considered?
Coding.  Although coding is a mechanistic outgrowth of procedural design, errors can be introduced as the design is translated into a programming language. This is particularly true if the programming language does not directly support data and control structures represented in the design. A code walkthrough can be an effective means for uncovering these translation errors. The checklist that follows assumes that a design walkthrough has been conducted and that algorithm correctness has been established as part of the design FTR.
1.         Has the design been properly translated into code? [The results of the procedural design should be available during this review.]
2.         Are there misspellings and typos?
3.         Has proper use of language conventions been made?
4.         Is there compliance with coding standards for language style, comments, and module prologues?
5.         Are there incorrect or ambiguous comments?
6.         Are data types and data declarations proper?
7.         Are physical constants correct?
8.         Have all items on the design walkthrough checklist been re-applied (as required)?
Software Testing.  Software testing is a quality assurance activity in its own right. Therefore, it may seem odd to discuss reviews for testing. However, the completeness and effectiveness of testing can be dramatically improved by critically assessing any test plans and procedures that have been created. In the next two chapters, test case design techniques and testing strategies are discussed in detail.
Test plan
1.         Have major test phases been properly identified and sequenced?
2.         Has traceability to validation criteria/requirements been established as part of software requirements analysis?
3.         Are major functions demonstrated early?
4.         Is the test plan consistent with overall project plan?
5.         Has a test schedule been explicitly defined?
6.         Are test resources and tools identified and available?
7.         Has a test record keeping mechanism been established?
8.         Have test drivers and stubs been identified, and has work to develop them been scheduled?
9.         Has stress testing for software been specified?
Test procedure
1.         Have both white and black box tests been specified?
2.         Have all independent logic paths been tested?
3.         Have test cases been identified and listed with expected results?
4.         Is error-handling to be tested?
5.         Are boundary values to be tested?
6.         Are timing and performance to be tested?
7.         Has acceptable variation from expected results been specified?
            In addition to the formal technical reviews and review checklists noted above, reviews (with corresponding checklists) can be conducted to assess the readiness of field service mechanisms for product software; to evaluate the completeness and effectiveness of training; to assess the quality of user and technical documentation; and to investigate the applicability and availability of software tools.
Maintenance. The review checklists for software development are equally valid for the software maintenance phase. In addition to all of the questions posed in the checklists, the following special considerations should be kept in mind:
1.         Have side effects associated with change been considered?
2.         Has the request for change been documented, evaluated and approved?
3.         Has the change, once made, been documented and reported to interested parties?
4.         Have appropriate FTRs been conducted?
5.         Has a final acceptance review been conducted to ensure that all software has been properly updated, tested and replaced?

