This is the second article in a series about a recent paper our team submitted to the SEI arguing that technical debt should be defined from what we called “The Stakeholder Perspective.” As we discussed in the introduction, there are many different perspectives from which quality can be evaluated. Ward Cunningham proposed the most commonly referenced definition of technical debt, which takes a developer-oriented view of the intrinsic factors influencing quality. While this perspective is valid, it tells only part of the story. Quality has major interdependencies, and managing any piece of it in isolation creates challenges. Technology environments behave like biological ecosystems, with each layer depending on the others. A big enough gap in any one area can cause the entire system to crumble. Therefore, we believe that managing gaps in quality (i.e., technical debt) should be centralized and undertaken as a collaborative effort between key stakeholders.
Quality Interdependence
In an influential paper evaluating different perspectives on quality, David Garvin describes several lenses through which quality is typically evaluated [2]. Technical debt’s current definitional framework takes a perspective similar to what Garvin calls the “manufacturing view.” This view emphasizes impacts to quality that are internal to system development: specification compliance, design effectiveness, defect rates, and maintenance challenges. If we highlight the ISO quality attributes [3] emphasized by technical debt’s current definitional framework, it might look something like Figure 1. This is not a perfect representation, however, because the extrinsic elements of the highlighted attributes would not be included.
Figure 1: ISO 9126 Quality Model [3]
It would be convenient for technology teams to manage only the quality attributes over which they have the most control. The challenge is that these quality attributes are highly interdependent. Areas of the technology infrastructure are interrelated, forming a structure similar to a biological ecosystem in which each area has inherent dependencies on the others. Decisions made in one area of the infrastructure frequently have implications in other areas. Design choices that increase Maintainability and Portability, such as adopting an object-relational mapper or an interpreted language, often impact Resource Utilization. Efforts to improve Security, such as implementing multi-factor authentication, usually decrease Usability.
Technical debt can be a valuable tool when used responsibly to meet organizational objectives with limited resources. Responsible use, however, requires that it be accrued strategically and with the best possible understanding of its consequences. Failure to recognize and account for the interdependencies between quality characteristics leads to the unintentional accumulation of technical debt. An organization could pay off technical debt in one area and unknowingly accumulate debt in another area at a higher interest rate. Unintentional debt is the worst possible kind because it cannot be planned for and often does not become visible until there is an impact. We will now explore several examples that demonstrate the interdependent nature of quality. These examples also happen to fall outside technical debt’s current definitional framework.
Deferred functionality
When we defer features, we impact the Operability of the system. This can lead to undesirable user interactions and ultimately compromise Accuracy by way of data integrity. A common example is users “cramming” data into unintended places to compensate for missing attributes or to fill the gap left by missing functionality. In this scenario, Accuracy is impacted by compromises made to Operability. This is depicted in Figure 2 with a pencil icon next to Operability, which changed, and a red arrow next to Accuracy, which was unintentionally impacted.
Figure 2: Deferred functionality’s effect on overall quality
To illustrate, consider an entity whose ability to be deactivated is missing or poorly implemented in the user interface. To differentiate between active and inactive records, users sometimes concatenate strings such as “(Inactive)” to the name field. Another common example arises when an entity representing a person has first and last name fields but no fields for middle name or suffix. Users tend to cram the middle initial into the first name column and the suffix into the last name column. This behavior has a denormalizing effect on the database and impacts downstream processes.
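To make the downstream effect concrete, here is a minimal sketch in Python. The field names, sample rows, and cramming conventions are hypothetical illustrations, not taken from any real system.

```python
# A minimal sketch of how crammed values undermine downstream logic.
# Field names, sample rows, and cramming conventions are hypothetical.

vendors = [
    {"name": "Acme Supply"},
    {"name": "Globex Corp (Inactive)"},   # status crammed into the name field
    {"name": "Initech - INACTIVE"},       # a second user's own convention
]

def active_vendors(rows):
    # With no real status column, downstream code must parse free text
    # and silently misses every convention it was not written for.
    return [r for r in rows if "(Inactive)" not in r["name"]]

print([r["name"] for r in active_vendors(vendors)])
# ['Acme Supply', 'Initech - INACTIVE']  <- an inactive record slips through
```

A real status attribute in the data model and the user interface would make this filter trivial and reliable; without it, every downstream consumer inherits the guesswork.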
User experience
Poor user experience can drive what process experts call hidden factories. Missing or poorly implemented features encourage users to create workarounds to complete their tasks, and these workarounds become hidden factories that decrease the quality and security of the data. In this way, Operability shortcomings in the application can unintentionally decrease Accuracy and Security, as shown in Figure 3.
Figure 3: User experience’s effect on overall quality
A common example is data being extracted from an application into spreadsheets for processing or reporting. These hidden factories appear when features are missing from an application or when the sanctioned functionality is more difficult to use than the workaround. The logic contained in these spreadsheets is usually neither sanctioned nor controlled, leading to inconsistency and multiple “versions of the truth.” Additionally, once the data leaves the confines of the system, any controls designed to protect it are taken out of play. Hidden factories are unregulated and create inefficiencies.
Control weaknesses
Unintentional debt can also result from missing or inadequate controls. Validation control weaknesses decrease data quality by allowing invalid data to be entered into the system intentionally (e.g., cramming) or unintentionally (e.g., duplicates, typos, etc.). Weaknesses in access control implementations also present a risk to data quality and Security.
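As an illustration, the following Python sketch shows the kind of validation control whose absence allows crammed or duplicate data into a system; the rules, field names, and error messages are hypothetical assumptions.

```python
import re

# A sketch of a basic validation control; rules and field names are hypothetical.
NAME_PATTERN = re.compile(r"^[A-Za-z][A-Za-z .'-]*$")

def validate_person(record, existing_emails):
    errors = []
    # Reject obvious cramming, such as "(Inactive)" or other status text in a name.
    if not NAME_PATTERN.fullmatch(record.get("last_name", "")):
        errors.append("last_name contains disallowed characters")
    # Catch accidental duplicates before they reach the database.
    if record.get("email", "").lower() in existing_emails:
        errors.append("email already exists")
    return errors

print(validate_person({"last_name": "Smith (Inactive)", "email": "a@b.com"}, set()))
# ['last_name contains disallowed characters']
```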
Figure 4: Control weaknesses’ effect on overall quality
Access controls keep unauthorized users from viewing or editing certain data, typically because those users lack the training or clearance to interact with it. Data integrity suffers when the controls protecting that data break down. This is shown in Figure 4 as Accuracy being affected by gaps in Security.
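A complementary sketch, again with hypothetical roles and field names, shows the sort of access control gate whose absence lets untrained or unauthorized users alter protected data.

```python
# A sketch of an access control gate; roles and field names are hypothetical.
EDIT_ROLES = {"payroll_admin"}

def update_salary(user, employee, new_salary):
    # Without this gate, any user could silently corrupt data that
    # downstream payroll processes depend on.
    if user["role"] not in EDIT_ROLES:
        raise PermissionError(f"{user['name']} may not edit salary data")
    employee["salary"] = new_salary
```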
Data quality
If we think of the technical environment as an ecosystem, data is analogous to the water flowing through a river system. When the water is polluted, areas downstream are impacted. The toxic effects of defective data can ripple across an entire infrastructure. Bad data imposes cascading interest on any process or system that interacts with it. Inaccurate or incomplete data impairs a system’s ability to provide reliable intelligence and to integrate with other systems.
Figure 5: Data quality’s effect on overall quality
Data quality issues also introduce new technical debt by requiring workarounds in data movement logic. These workarounds increase complexity, which makes enhancements and testing more challenging.
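The kind of workaround in question might look like the following Python sketch; the suffix list, field names, and cleansing rules are hypothetical.

```python
# A sketch of the cleansing workarounds that defective data forces into
# data-movement logic; the suffix list and rules are hypothetical.
KNOWN_SUFFIXES = {"Jr.", "Sr.", "II", "III"}

def clean_last_name(raw):
    # Each branch exists only to undo upstream cramming, and every new
    # user convention adds another branch that must be tested forever.
    value = raw.replace("(Inactive)", "").strip()
    parts = value.split()
    if parts and parts[-1] in KNOWN_SUFFIXES:
        parts = parts[:-1]          # strip a crammed suffix
    return " ".join(parts)

assert clean_last_name("Smith Jr. (Inactive)") == "Smith"
```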
Platform suitability
For the purposes of this discussion, the “platform” refers to everything that allows the software to run: the hardware architecture, operating system, programming languages, and frameworks. The computing platform provides the foundational topography upon which the technology ecosystem is built. How well an ecosystem is suited to its host topography differentiates success from failure. Further, the opportunities and limitations of the topography dictate how the ecosystem evolves.
Likewise, a computing platform that is not well suited to the needs of the systems it supports will limit capabilities and obstruct evolutionary paths. Every component and subcomponent defined in the ISO software quality model depends on the underlying platform, so displaying this relationship visually would not add value. Suffice it to say that platform suitability can impact every other node in Figure 1.
The next and final article will propose a revised definitional framework and show how it bridges many of the gaps that exist today.
References will be provided in the last article in the series.