What application development life cycle model uses a sequential design process?

Quality concerns in large-scale and complex software-intensive systems

Bedir Tekinerdogan, ... Richard Soley, in Software Quality Assurance, 2016

1.4 Addressing System Qualities

SQA can be addressed in several different ways and can cover the entire software development process.

Different software development lifecycles have been introduced, including waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and agile development. The traditional waterfall model is a sequential design process in which progress is seen as flowing steadily downwards (like a waterfall) through the phases of Analysis, Design, Implementation, Testing, and Maintenance. The waterfall model allows the transition to a phase only when its preceding phase has been reviewed and verified. Typically, the waterfall model places emphasis on proper documentation of artefacts in the life cycle activities. Advocates of the agile software development paradigm argue that, for any non-trivial project, finishing a phase of a software product’s life cycle perfectly before moving to the next phase is practically impossible. A related argument is that clients may not know exactly what they need, and as such requirements change constantly.

It is generally acknowledged that a well-defined mature process will support the development of quality products with a substantially reduced number of defects. Some popular examples of process improvement models include the Software Engineering Institute’s Capability Maturity Model Integration (CMMI), ISO/IEC 12207, and SPICE (Software Process Improvement and Capability Determination).

Software design patterns are generic solutions to recurring problems. Software quality can be supported by reuse of design patterns that have been proven in the past. Related to design patterns is the concept of anti-patterns, which are a common response to a recurring problem that is usually ineffective and counterproductive. Code smell is any symptom in the source code of a program that possibly indicates a deeper problem. Usually code smells relate to certain structures in the design that indicate violation of fundamental design principles and likewise negatively impact design quality.
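As a hypothetical illustration (the function and names are invented), consider a "long method" smell, where one function mixes validation, computation, and formatting, and the decomposition its removal invites:

```python
# A common code smell: one function mixing validation, computation,
# and formatting (a "long method" doing several jobs at once).
def report_smelly(prices):
    total = 0.0
    for p in prices:
        if p < 0:
            raise ValueError("negative price")
        total += p
    return "total: %.2f" % total

# Refactored: each concern extracted into a small, reusable function,
# restoring the single-responsibility principle the smell violated.
def validate(prices):
    for p in prices:
        if p < 0:
            raise ValueError("negative price")

def total(prices):
    return sum(prices)

def report(prices):
    validate(prices)
    return "total: %.2f" % total(prices)
```

Both versions behave identically; the smell lies in the structure, not the output, which is why such symptoms only "possibly" indicate a deeper problem.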

An important aspect of SQA is software architecture. Software architecture is a coordination tool among the different phases of software development. It bridges requirements to implementation and allows reasoning about the satisfaction of systems’ critical requirements (Albert and Tullis, 2013). Quality attributes (Babar et al., 2004) are one kind of non-functional requirement that is critical to systems. The Software Engineering Institute (SEI) defines a quality attribute as “a property of a work product or goods by which its quality will be judged by some stakeholder or stakeholders” (Koschke and Simon, 2003). They are important properties that a system must exhibit, such as scalability, modifiability, or availability (Stoermer et al., 2006).

Architecture designs can be evaluated to ensure the satisfaction of quality attributes. Tvedt Tesoriero et al. (2004) and Stoermer et al. (2006) divide architectural evaluation work into two main areas: pre-implementation architecture evaluation, and implementation-oriented architecture conformance. In their classification, pre-implementation architectural approaches are used by architects during the initial design and provisioning stages, before the actual implementation starts. In contrast, implementation-oriented architecture conformance approaches assess whether the implemented architecture of the system matches the intended architecture. Architectural conformance assesses whether the implemented architecture is consistent with the proposed architecture’s specification and the goals of the proposed architecture.

To evaluate or design a software architecture at the pre-implementation stage, tactics or architectural styles are used in the architecting or evaluation process. Tactics are design decisions that influence the control of a quality attribute response. Architectural styles or patterns describe the structure and interaction among collections of components, affecting some quality attributes positively but others negatively. The literature offers software architecture methods both for designing systems around their quality attributes, such as Attribute Driven Design (ADD), and for evaluating the satisfaction of quality attributes in a software architectural design, such as the Architecture Tradeoff Analysis Method (ATAM). For example, ADD and ATAM follow a recursive process based on the quality attributes that a system needs to fulfill. At each stage, tactics and architectural patterns (or styles) are chosen to satisfy some qualities.

Empirical studies have demonstrated that one of the most difficult tasks in software architecture design and evaluation is finding out which architectural patterns/styles satisfy quality attributes, because the language used in patterns does not directly indicate the quality attributes. This problem has also been noted in the literature (Gross and Yu, 2001; Huang et al., 2006).

Guidelines for choosing or finding tactics that satisfy quality attributes have also been reported to be an issue, as has defining, evaluating, and assessing which architectural patterns are suitable to implement the tactics and quality attributes (Albert and Tullis, 2013). Towards solving this issue, Bachmann et al. (2003) and Babar et al. (2004) describe steps for deriving architectural tactics. These steps include identifying candidate reasoning frameworks, which encapsulate the mechanisms needed to use sound analytic theories to analyze the behavior of a system with respect to some quality attributes (Bachmann et al., 2005). However, this requires architects to be familiar with formal specifications that are specific to quality models. Research tools are being developed to help architects integrate their reasoning frameworks (Christensen and Hansen, 2010), but the reasoning frameworks still have to be implemented, and the description of tactics and how they are applied has to be provided by the architect. It has also been reported by Koschke and Simon (2003) that some quality attributes do not have a reasoning framework.

Harrison and Avgeriou have analyzed the impact of architectural patterns on quality attributes, and how patterns interact with tactics (Harrison and Avgeriou, 2007; Harrison and Avgeriou). The documentation of this kind of analysis can aid in creating repositories for tactics and patterns based on quality attributes.

Architecture prototyping is an approach for experimenting with whether architectural tactics provide the desired quality attributes, and for observing conflicting qualities (Bardram et al., 2005). This technique can be complementary to traditional architectural design and evaluation methods such as ADD or ATAM (Bardram et al., 2005). However, it has been noted to be quite expensive, and “substantial” effort must be invested to adopt architecture prototyping (Bardram et al., 2005).

Several architectural conformance approaches exist in the literature (Murphy et al., 2001; Ali et al.; Koschke and Simon, 2003). These check whether the software conforms to the architectural specifications (or models). These approaches can be classified by whether they use static analysis (the source code of the system) (Murphy et al., 2001; Ali et al.), dynamic analysis (the running system) (Eixelsberger et al., 1998), or both. Some architectural conformance approaches explicitly check quality attributes (Stoermer et al., 2006; Eixelsberger et al., 1998), specifically run-time properties such as performance or security (Huang et al., 2006). Several have also provided feedback on quality metrics (Koschke, 2000).


URL: https://www.sciencedirect.com/science/article/pii/B9780128023013000016

Characterizing Software Test Case Behavior With Regression Models

B. Robbins, in Advances in Computers, 2017

1.2 Software Quality Assurance Activities

Within the SDLC, there are a number of Quality Assurance (QA) activities focused on measuring and assuring the quality of software. Whether these activities are done by a separate group of people or during a separate phase of development, the same questions are addressed:

Planning: How should each artifact be checked for defects?

Finding potential defects: Given the requirements or expectations of an artifact, are there any potential “imperfections or deficiencies” present?

Triaging potential defects: Does each potential defect really qualify as a defect? If so, what is the likelihood and impact of the defect?

While there are certainly interesting considerations and techniques for Planning and Triaging, the act of finding potential defects continues to receive the most attention in industry and research. There are many categories of defect-finding approaches applicable for executable software artifacts, such as:

Compilation or interpretation: Source code is translated into executable formats, and this process requires parsing and semantic analysis that may reveal defects related to conformance to the programming language.

Static analysis: Tools can inspect source code or compiled code for common mistakes.

Reviews: Team members can read one another's source code and offer feedback on possible defects.

Testing: Executable artifacts can be evaluated by running and inspecting their output under certain conditions.

While the other techniques listed above are certainly valuable [3, 4], software testing continues to be the most ubiquitous defect-finding technique in software development. Entire development methodologies, such as the very popular test-driven development technique [5], have been built around the idea of testing early and often. Perhaps some of this is due to the “pass/fail” nature of tests—their status, unlike review feedback, is nonnegotiable. Executable tests at the system level also exercise the software at the user level, which builds a confidence that lower-level approaches cannot provide.
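The "pass/fail" character of a test can be sketched with a minimal example (the function and its expectation are invented for illustration):

```python
def slugify(title):
    """Convert a title to a lowercase, hyphen-separated slug."""
    return "-".join(title.lower().split())

# A test states an expectation; it either passes silently or fails loudly.
# Unlike subjective review feedback, there is nothing to negotiate.
def test_slugify():
    assert slugify("Software Quality Assurance") == "software-quality-assurance"

test_slugify()
```

In a test-driven style, the assertion would be written first and the function implemented until it passes.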


URL: https://www.sciencedirect.com/science/article/pii/S0065245816300754

Using Agile Concepts for UX Teams

Diana DeMarco Brown, in Agile User Experience Design, 2013

Focusing on Communication Over Documentation

If your software development life cycle requires epic novel-length specifications, you will not be able to get them off the team’s plate, but you can consider how to support a more effective way to share their content. If there is no hard requirement for such deliverables, consider examining your UX deliverables to see if they are geared toward supporting communication or simply documenting your intentions. Relieving your team of the burden of producing something that does not serve its purpose might be worthwhile. If the designers focus on engaging in collaboration with other functional areas and achieving common understanding through conversations, there is less to communicate in writing. In many cases, a simple sketch that reminds the different team members of the general design and direction might be enough. For functional areas that are less involved in day-to-day design activities or tend to come into the process later in the cycle, having a meeting to review the design might prove more effective, and quicker, than posting a long document.

The reality is that, even when long documents are required by the process, not everyone who should read them does so. Just as often, team members read an earlier version as part of a review process and never take the time to sort through the changes made in the final draft. If your team is in the business of writing such documents and your process does not allow for a different way of communicating the design, take a cue from Agile and think about additional delivery methods, such as a walkthrough of the design or a review and discussion of the critical design elements. This gives the other functional areas a much richer opportunity to understand what the design is trying to accomplish and a chance to ask questions. It also significantly increases the chances of the key elements of the design being implemented correctly.


URL: https://www.sciencedirect.com/science/article/pii/B9780124159532000066

The Data Vault 2.0 Methodology

Daniel Linstedt, Michael Olschimke, in Building a Scalable Data Warehouse with Data Vault 2.0, 2016

3.1.2.1 Scrum

The traditional software development life-cycle, also known as the waterfall approach, has several advantages and disadvantages. If everything goes well (and as planned), the waterfall approach is the most efficient way to carry out a project. Only the required features are implemented, tested and deployed. The process produces a very good set of documentation, especially during the requirements and design phases of the project. However, it is almost impossible to carry out larger projects this way when the customer cannot state concrete requirements and ideas up front and the business requirements evolve over time [14].

As a result, agile methodologies have been developed to make software development more flexible and overall more successful. Scrum is one example of such an agile methodology and is described in the following sections. It was introduced in the late 1990s [15,16] and has become “the overwhelming [agile] favorite” [17].

User requirements are maintained as user stories in a product backlog in Scrum [18]. They are prioritized by business value and include requirements regarding customer requests, new features, usability enhancements, bug fixes, performance improvements, re-architecting, etc. The user stories are implemented in iterations called “sprints,” which usually last two to four weeks. During a sprint, user stories are taken off the product backlog according to the priority of the item and implemented by the team [19]. The goal of Scrum is to create a potentially shippable increment with every sprint, that is, a new release of the system that can be presented to the business user and potentially put into production [20] (Figure 3.7).
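The backlog mechanics described above can be sketched as a priority queue (the stories and business-value scores are invented for illustration):

```python
import heapq

# Product backlog: user stories prioritized by business value.
# heapq is a min-heap, so values are negated to pop the highest value first.
backlog = []
for value, story in [(8, "customer request: export report"),
                     (3, "usability: keyboard shortcuts"),
                     (9, "bug fix: login timeout")]:
    heapq.heappush(backlog, (-value, story))

# At sprint planning, the team takes the highest-priority stories
# off the backlog until the sprint is full (here: room for two).
sprint = [heapq.heappop(backlog)[1] for _ in range(2)]
```

Because the backlog can be re-prioritized between sprints, new or reweighted stories simply enter the queue before the next planning session.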


Figure 3.7. The flow of requirements in the Scrum process [20].

This requires that a user story be implemented and tested as a whole, including all business logic that belongs to this story. All stakeholders, such as business users and the development team, can inspect the new, working feature and provide feedback or recommend changes to the user story before the next sprint starts. Scrum supports the reprioritization of the product backlog and welcomes changing business requirements between the sprints [20] (Figure 3.8).


Figure 3.8. Quick turns in Scrum [21].

This helps to improve the outcome of the project in a way that meets the expectations of the business user. To ensure this, the customer should become as much a part of the Scrum team as possible [21].

The next sections explain the elements of Scrum in more detail.


URL: https://www.sciencedirect.com/science/article/pii/B9780128025109000039

Essential DW/BI Background and Definitions

Ralph Hughes MA, PMP, CSM, in Agile Data Warehousing for the Enterprise, 2016

Developers and Programmers

Reflecting on the software development life cycle, one can see that providing a new application for end users requires far more than just encoding a design into a software language. A software project will need business experts, architects, analysts, and data modelers to identify requirements and draft the application’s design, as well as testers to validate everyone’s work. This book describes many ways for different combinations of these individuals to interact. To make those discussions clear, Table 4.3 outlines the grouping of team members I have in mind. Some readers may be surprised that the system testers are included among the team leaders. As discussed later in this book, agile data warehousing expands the duties of this role so that a team’s system tester provides all other teammates with a strong sense of direction and an opinion as to when the team’s quality assurance work is sufficiently complete.

Table 4.3. Names for Different Groupings of Project Team Members



URL: https://www.sciencedirect.com/science/article/pii/B9780123964649000047

Software engineering

Paul S. Ganney, ... Edwin Claridge, in Clinical Engineering (Second Edition), 2020

Software development and coding

This phase of the software development lifecycle converts the design into a complete software package. It brings together the hardware, software and communications elements for the system. It is often driven by the detailed design phase and must take into consideration practical issues in terms of resource availability, feasibility and technological constraints. Choice of development platform is often constrained by availability of skills, software and hardware. A compromise must be found between resources available and ability to meet software requirements. A good programmer rarely blames the platform for problems with the software.

Installing the required environments, development of databases, writing programs, refining them, etc. are some of the main activities at this stage. More time spent on detailed design can often cut development time; however, technical stumbling blocks can sometimes cause delays. Software costing should take this into account at the project initiation stage.

For the training and competency software example, a database will need to be created to hold the staff details and records. User interfaces will need to be built for inputting data, signing off competencies and other user interactions. Database handling routines for insertion, updating and validation of data records, logic for supervisor and management roles, etc. are some of the functional modules required. The back-end chosen should consider the number of records expected to be held, simultaneous multi-user accessibility and of course, software availability. As far as possible it is a good idea for departments to select a family of tools and to use them for several projects so that local expertise can be efficiently developed.
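The competency-records database and its handling routines might be sketched as follows; the schema, field names, and SQLite back-end are invented for illustration, not taken from the chapter:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # a real system would use a shared back-end
conn.execute("""CREATE TABLE competency (
    staff_id   INTEGER NOT NULL,
    skill      TEXT NOT NULL,
    signed_off INTEGER DEFAULT 0,
    CHECK (signed_off IN (0, 1)))""")

# Database handling routines: insertion and update, with validation
# delegated to the schema's CHECK constraint.
def record_skill(staff_id, skill):
    conn.execute("INSERT INTO competency (staff_id, skill) VALUES (?, ?)",
                 (staff_id, skill))

def sign_off(staff_id, skill):
    """Supervisor role: mark a competency as signed off."""
    conn.execute("UPDATE competency SET signed_off = 1 "
                 "WHERE staff_id = ? AND skill = ?", (staff_id, skill))

record_skill(1, "equipment maintenance")
sign_off(1, "equipment maintenance")
```

Parameterized queries (the `?` placeholders) keep the insertion and update routines safe against malformed input, one of the validation concerns the text raises.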

Software coding should adhere to established standards and be well documented. The basic principles of programming are simplicity, clarity and generality (Kernighan and Pike, 1999). The code should be kept simple, modular and easy to understand for both machines and humans. There are several books [e.g. Bennett et al., 2010; Knuth, 2011] which describe best practices in programming both in terms of developing algorithms as well as coding itself. Code written should be generalised and reusable as far as possible and adaptable to changing requirements and scenarios. Automation and reduced manual intervention will minimise human errors.

Unit testing is often included in this phase of the software lifecycle as it is an integral part of software development. It is an iterative process carried out during and at the end of development. This includes testing error handling, exception handling, memory usage and leaks, connection management, etc., for each of the modules independently.
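A unit test exercising a module's error handling might look like this (the function under test is invented for illustration):

```python
def parse_age(text):
    """Module under test: convert user input to a validated age."""
    age = int(text)          # raises ValueError on non-numeric input
    if not 0 <= age <= 150:
        raise ValueError("age out of range")
    return age

# Unit tests check the module independently, covering both the normal
# path and every error path it is supposed to handle.
def test_parse_age():
    assert parse_age("42") == 42
    for bad in ("-1", "200", "abc"):
        try:
            parse_age(bad)
        except ValueError:
            pass
        else:
            raise AssertionError("expected ValueError for %r" % bad)

test_parse_age()
```

Running such tests repeatedly during development is what makes unit testing the iterative process the text describes.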


URL: https://www.sciencedirect.com/science/article/pii/B9780081026946000097

Securing the Utility Companies

Tony Flick, Justin Morehouse, in Securing the Smart Grid, 2011

Source Code Review

As part of a mature software development life cycle, organizations should perform reviews of their source code for vulnerabilities. Although vulnerability scanning attempts to identify vulnerabilities once they are introduced into an environment, source code reviews aim to identify vulnerabilities in software before the software is released. Utility companies must implement source code reviews for all of their software developed internally or by vendors that will be implemented in environments that will house sensitive information or critical infrastructure. By investing in source code reviews during the software development phase, the utility companies will recognize cost savings when compared with fixing vulnerable code once it is in production, while also being able to prevent vulnerabilities from being introduced into their environments.


URL: https://www.sciencedirect.com/science/article/pii/B978159749570700008X

Conduct Security Awareness and Training

Jason Andress CISSP, ISSAP, CISM, GPEN, Mark Leary CISSP, CISM, CGIET, PMP, in Building a Practical Information Security Program, 2017

Software Development Life Cycle

The inclusion of security into the Software Development Life Cycle (SDLC) is a key area to include in training for developers. There are innumerable approaches to this, one of the most commonly used being the Microsoft Security Development Lifecycle (SDL) [4]. The SDL process includes training, requirements, design, implementation, verification, release, and response, as shown in Fig. 8.1.


Figure 8.1. Microsoft secure development lifecycle process.

Although it is not particularly important that the SDL process specifically be the one to be followed, it is important there be some methods of ensuring that security is adequately represented in the SDLC.


URL: https://www.sciencedirect.com/science/article/pii/B9780128020425000093

Program Design, Coding, and Testing

Ruth Guthrie, in Encyclopedia of Information Systems, 2003

II.B. Structured Code

The coding phase of the SDLC is where the system is built or generated based upon the design documents and models. Structured programming languages include COBOL, Fortran, and Pascal. These languages are categorized as procedural programming languages because programs written in them follow a strict procedural hierarchy, giving instructions for how manipulation of data will occur. Procedural languages separate data from the program procedures. Modules are independent of the data, and data are typically shared globally by the entire program. Each module or function in the program reads data from the global store, manipulates it, and returns the altered values. This can make testing and maintenance difficult.

To develop a large application, a driver program will control several subprocedures or functions to accomplish the solution found during structured design. However, the purpose of the coding phase is not just to code the design, but to code the design in such a way that the software has high quality. Several process controls can help to improve software quality. Among these are control of the programming environment, code and configuration management techniques, and software metrics.

A programming environment can consist of many things including hardware and software that help design and diagnose the software. Essential to this effort is a debugger. Programming languages come with debugging tools that either identify errors and point the programmer to the place where the error occurs or allow programmers to step through their logic until the error is revealed. Using a software lab or workstation as the production environment ensures that the operating system, hardware, databases, debuggers, versions of programming languages, and versions of software modules being developed are controlled in a single environment. This is important because while the software is under development, if an error occurs, the programmers need to be able to regenerate the error and figure out what caused it. If development was done in multiple environments with different configurations, isolating and duplicating an error would be extremely difficult.
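The shared-global-data style described above can be sketched in Python (the record and operations are invented); each function reads and mutates the same global structure, which is exactly the coupling that makes testing and maintenance hard:

```python
# Procedural style: data lives in one global structure, and every
# "module" reads and rewrites it in place.
record = {"balance": 100.0, "status": "open"}

def apply_fee():
    record["balance"] -= 5.0          # hidden dependency on the global

def close_if_empty():
    if record["balance"] <= 0:
        record["status"] = "closed"   # another module mutating shared state

apply_fee()
close_if_empty()
# Testing apply_fee in isolation requires setting up (and resetting) the
# global record first, and any module can silently change what another
# module expects to find there.
```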

The use of coding standards can improve software quality by ensuring that structure, size, and naming conventions are adhered to throughout the application. Coding standards ensure that everyone follows the same set of assumptions and presents their work in the same way. This is an efficient way for a software development organization to work because as people change responsibilities, all program elements will be familiar. All the programmers will know what each module should look like and where to find information about the logic and strategy that went before the coding of each module.

Configuration management is the control and management of software and software artifacts throughout the software life cycle. The configuration management process ensures version control and change control on all code, documentation, and data. There are many automated configuration management tools that keep code modules in a library and automatically track changes. The configuration management activity helps in monitoring the progress of the program development. For development of a software product, version control is very important. Imagine on a large project if several programmers kept altering code without knowing what other programmers were doing. The result would be disastrous. At some point, no one would understand all of the changes that were made and how they affect the entire system.

At some point, subsystems of an application are joined to form a baseline product. The baseline is the first mock-up of what the operational system will be. Naturally, there are still many errors with this program. As testing and rework correct these errors, new versions are released, forming a new baseline. Knowing what version has been tested and what errors have been remedied is essential to delivering a quality program. Control allows programmers the visibility to see if an implemented change has affected other portions of the system. If the change is not successful, it is easier to return to an earlier version.

Software metrics measure characteristics of the software throughout the software development life cycle. During the analysis phase, metrics may focus on software complexity. During design, metrics can be used to determine how well software adheres to design rules such as modular independence. During the test phase, metrics can be taken to give a likelihood that testing is complete and the errors in the program have been found. Numerous software metrics have been defined, but what is important is that the metrics add value and insight into producing high-quality software. To this end, Ejiogu in 1991 defined characteristics of successful metrics to be simple and computable, intuitively persuasive, objective, consistent in use of units, programming-language independent, and providing quality feedback. Knowing what to measure and how to use the measures to improve software processes and procedures can build quality into the development process.
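A toy metric in the spirit described above, simple and computable (the measure and its use are invented for illustration): count the body lines of each top-level function as a crude indicator of modularity.

```python
def function_lengths(source):
    """Crude size metric: number of non-blank body lines per top-level
    Python function. Overly long functions may signal low modularity."""
    lengths, name, count = {}, None, 0
    for line in source.splitlines():
        if line.startswith("def "):
            if name:
                lengths[name] = count
            name = line.split("(")[0][4:]   # text between "def " and "("
            count = 0
        elif name and line.strip():
            count += 1
    if name:
        lengths[name] = count
    return lengths

sample = "def f():\n    a = 1\n    return a\ndef g():\n    return 2\n"
metrics = function_lengths(sample)
```

Real metric tools measure far subtler properties (cyclomatic complexity, coupling), but the principle is the same: a repeatable number computed from the artifact itself.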

Numerous tools are available to help build and test the design model. Computer-aided software engineering (CASE) tools provide graphic assistance to build DFDs, ERDs, and data dictionaries for structured programs. Additionally, the tools can provide for configuration management and error checking in the design and code generation. Powerful tools, when used properly to create a complete design, can generate thousands of lines of code, often creating gains in productivity. However, it is also common for the generated code to require alteration before it can operate properly. Some programmers feel it is easier to code a system by hand than to debug and recode software generated by a CASE tool.


URL: https://www.sciencedirect.com/science/article/pii/B0122272404001374

Enterprise Web Application Testing

Shailesh Kumar Shivakumar, in Architecting High Performing, Scalable and Available Enterprise Web Applications, 2015

6.1 Introduction

Software testing is an integral part of the software development lifecycle (SDLC) to ensure delivery quality. The increasing popularity of enterprise web technologies poses unique opportunities, as well as challenges, for testing. Traditional testing methodologies fall short of effectively testing web applications.

Drawing on insights and best practices from multiple large-scale enterprise applications, this chapter examines the limitations of traditional testing for enterprise web applications and explores a comprehensive UCAPP testing model for them. The chapter also elaborates various other testing methodologies, including defect prevention techniques, the testing process, and key web testing metrics, and it provides a list of open-source web testing tools.


URL: https://www.sciencedirect.com/science/article/pii/B9780128022580000068

Which application development life cycle model uses a sequential design process?

Also called the linear sequential model, Waterfall is the most traditional software development process. In this approach, we follow all the steps of the software development process in sequence.

During which phase of the system development life cycle are new software updates patches installed?

Once the product is completely operational, the SDLC maintenance phase begins. If the program breaks, software maintenance may include updates, fixes, and patches.

Which AV approach uses a variety of techniques to spot the characteristics of a virus instead of attempting to make matches?

A newer approach to AV is dynamic analysis heuristic monitoring, which uses a variety of techniques to spot the characteristics of a virus instead of attempting to make matches.
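A heuristic scanner of this kind can be sketched as a weighted checklist of suspicious traits rather than exact signature matches; the traits and weights below are invented examples, not a real AV engine's rules:

```python
# Toy heuristic scanner: instead of matching a known virus signature,
# score content for suspicious characteristics and flag high scores.
SUSPICIOUS_TRAITS = {
    b"CreateRemoteThread": 3,   # API commonly used for process injection
    b"cmd.exe /c": 2,           # shells out to the command interpreter
    b"MZ": 1,                   # embedded executable header
}

def heuristic_score(data):
    return sum(weight for trait, weight in SUSPICIOUS_TRAITS.items()
               if trait in data)

def looks_malicious(data, threshold=4):
    """Flag data whose combined trait score crosses a threshold."""
    return heuristic_score(data) >= threshold
```

The trade-off is inherent to the approach: heuristics can catch variants no signature matches, at the cost of occasional false positives on benign software that happens to share the traits.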

What type of program analyzers are tools that examine the software without actually executing the program instead the source code is reviewed and analyzed?

Static code analysis is the analysis of computer software performed without actually executing the code. Static code analysis tools scan all code in a project, seek out vulnerabilities, and validate code against industry best practices; some tools also validate against company-specific project specifications.
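A minimal static check can be written with Python's ast module, inspecting source without ever running it; the rule here (flag bare except clauses) is just one invented example of a best-practice check:

```python
import ast

def find_bare_excepts(source):
    """Report line numbers of 'except:' clauses with no exception type,
    a common mistake that hides errors. The code is parsed, not executed."""
    tree = ast.parse(source)
    return [node.lineno for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None]

code = """try:
    risky()
except:
    pass
"""
warnings = find_bare_excepts(code)
```

Because the analysis works on the parse tree alone, it can safely scan untrusted or incomplete code, which is precisely what makes static analysis usable early in development.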