Tag Archives: methodology

notes on quality process

Software development methodologies such as the Unified Process are intended to institutionalize a quality process.

Often, the manner in which a quality process is implemented focuses on quality of manufacturing, in order to achieve predictable schedules and outcomes. In the name of predictability, rigorous up-front planning attempts to predetermine tasks, resources, and schedules to estimate costs and deliverables. Putting constraints on the size and shape of the work this early necessarily requires gross assumptions about the requirements and design. In so doing, we trade off quality of innovation to achieve quality of manufacture. We are able to build it within budget and on time, but what we are capable of building is not innovative.

follow the rules

I have an engineer who works for me. His responsibilities include ensuring design consistency and best practices across the components in the system. We establish design guidelines and rules for the team to apply in their work, and he performs the grunt work of reviewing and editing everyone’s designs. He is well suited to the role, because he is very methodical, and he has an affinity for documenting and following rules.

Engineering can be divided into two different modes of operation: manufacturing and invention.

I use the term manufacturing in reference to ordinary development. That is, designing based on a strategy of incremental improvement over past designs. This strategy has served the Japanese well in automobiles and consumer electronics. A disciplined approach to engineering with focus on continuous improvement and attention to market forces is the key to success in manufacturing. Manufacturing is about quality of implementation. Manufacturing is about applying best practices and quality process.

Invention is completely different. Invention is not about responding to market demand. It is about using imagination to create new markets, where they never existed before. Invention requires innovative thinking to formulate new ideas and different (hopefully better) practices. Invention is not about applying today’s best practices and rules to incrementally improve. Invention requires the understanding that today’s best is flawed and impeding progress towards a superior possibility that may be highly risky, unproven, and difficult to achieve – but worthwhile to pursue, because it would be revolutionary.

A business needs disciplined engineers, who are skilled at manufacturing. This is where the money is.

My grunt is definitely a manufacturer, not an inventor. He has mediocre design creativity, because he is unable to intuit across a mass of contradictory information and conflicting motivations (resolving the forces), while paying attention to detail. He needs rules to govern his thinking and everyone else’s. Without rules, he is unable to function. He has no intuitive understanding of good design principles. He follows the rules.

An inventor (and every good designer) does not follow the rules. He does not break the rules either. He defines the rules that are appropriate, and he uses them to aid his thinking. Above all, he uses his brain to produce good designs, given the facts in evidence. He leads, where few have the courage to follow.

oops reuse misuse – part 2

part 2 of infinity – reuse and replacement

(Continued from part 1 – introduction.) Organizations frequently become enamored with building software infrastructure and frameworks for reuse. They see the same common problems being solved again and again in different ways by different people. They see this duplication of effort as a business inefficiency that can be optimized by factoring the commonality into reusable components, so that wheels are not reinvented. Centralized control through corporate standards is another popular activity.

One sign of successful reuse is the proliferation of users. With each new user there is a new dependency. The dependency is manageable so long as the requirements are stable. If new requirements must be satisfied, as is typical of rapidly evolving technology, these dependencies become a serious impediment to productivity.

A fundamental enabler of component reuse is the software contract. The interface that isolates the underlying implementation details from its users must remain stable to allow the two to evolve independently. However, changing requirements often result in interface changes. Any change to the software contract will break its users. As the number of users grows, the impact of this breakage becomes more severe. Thus, the desire to scale software functionality linearly with development effort is not viable because of changing requirements.
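To make the breakage concrete, here is a minimal Java sketch; the contract and the component that depends on it are hypothetical, but the mechanics are the same for any published interface.

```java
// Hypothetical published contract: every caller compiles against this interface.
interface TaxCalculator {
    double taxFor(double amount);
}

// One of many components that depend only on the contract, never the implementation.
class InvoiceTotaler {
    private final TaxCalculator calculator;

    InvoiceTotaler(TaxCalculator calculator) {
        this.calculator = calculator;
    }

    double total(double subtotal) {
        return subtotal + calculator.taxFor(subtotal);
    }
}

// A new requirement (tax now varies by region) changes the contract to
//     double taxFor(double amount, String regionCode);
// and every existing caller of taxFor(double) breaks. The more users the
// contract has accumulated, the more expensive that change becomes.
```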

Only with perfect foresight can we avoid changing requirements. If we could employ prescience so that all new requirements resulted in wholly new software components, leaving existing software components intact, we would have a predictable and scalable development process. Unfortunately, human fallibility prevents us from having good visibility into the future, and our software designs suffer.

We must be honest and recognize that software, no matter how well designed, has limited ability to evolve to meet new and changing requirements over time. We must build into our development processes the practices that can accommodate the retirement and eventual replacement of components that have collapsed under the weight of their outdated designs. A fatal flaw in many development organizations is the denial that software can grow old and die. This flawed thinking allows younger and healthier software from the competition to eventually win out.

Therefore, any reuse strategy must anticipate this component lifecycle to ensure that the entire ecosystem of dependent users does not die a horrible death tied to aging components. A reuse strategy without a recurring replacement strategy is a path of certain destruction.

Another thing to recognize is the distinction between infrastructure and application code. The view that some components are foundational, while others are fundamentally not, is short-sighted. The entire premise of reuse is that software should be built to enable an evolution towards building more functionality using the components already constructed. In effect, every component eventually becomes infrastructure for other components, which are often not anticipated.

oops reuse misuse – part 1

part 1 of infinity – introduction

One of the strongest motivations espoused for object oriented programming is reuse. While object oriented programming languages provide features, such as encapsulation and inheritance, that enable better reuse, OOP has gotten a bad rap, because the discipline required to produce good software must address many concerns beyond reuse in order to be successful.

Reuse is a buzzword software professionals bandy about, when they promote certain software practices. It refers to the ability to incrementally grow software capability by building additional functions to augment what was already built previously. Development organizations like this idea, because it implies predictability, which is the holy grail of software methodology (quality process). If the additional effort required to build and test functionality can be made proportional only to the additional code being incrementally developed, then development can be highly predictable. This would be the key to projects being delivered on time and within budget.

Although reuse in this sense is a good motivation, nobody knows how to make it work. Very intelligent and highly respected individuals will say how to apply principles of reuse, but either they don’t know what they are talking about, or the people who apply their principles don’t understand what they should be doing. Or both. The state of the software industry is a testament to the fact that reuse strategies are being widely applied with a high (and nearly total) degree of failure. No one achieves anywhere close to the linearity of effort, predictability of process, and scalability of outcome that reuse is meant to achieve.

In a series of articles, of which this is the first, I would like to evaluate various reuse strategies, practices, and object oriented programming techniques. Stay tuned for more in this series.

forest for the trees – abstractions

When faced with designing objects, many developers immediately try to identify the abstractions. They have a preconception that the world must be organized as a hierarchy of abstractions. They quickly run into situations that don’t fit nicely into their hierarchy. They struggle to force-fit things into this model. After a while, they ask for help.

The world is not a tree. There may be forests. But generally the world is what it is. Do not force things to be a part of a tree, when that is not what they are. That should be obvious enough. It is always the most obvious things that are difficult to realize, when one is blinded by misconceptions.

Step 1. Identify the concrete types. Start from the objects that will actually exist, not from higher level abstractions. The concretes have definite characteristics that can be analyzed. These characteristics are where the abstractions will be found later through refactoring (unit economy). Concretes are characterized by their behavior. A concrete has roles and responsibilities relative to other objects with which it collaborates. It may be necessary to decompose a concrete into smaller parts that take on those distinct roles and responsibilities (separation of concerns) more clearly.
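A minimal Java sketch of this first step, using a hypothetical domain (invoices and shipping manifests stand in for whatever concretes your system actually has):

```java
// Step 1: concretes with definite behavior; no abstractions yet.
class Invoice {
    private final String number;
    private final double amount;

    Invoice(String number, double amount) {
        this.number = number;
        this.amount = amount;
    }

    double totalDue() { return amount; }
    String referenceNumber() { return number; }
}

class ShippingManifest {
    private final String trackingId;
    private final int itemCount;

    ShippingManifest(String trackingId, int itemCount) {
        this.trackingId = trackingId;
        this.itemCount = itemCount;
    }

    int items() { return itemCount; }
    String referenceNumber() { return trackingId; }
}
```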

Step 2. Refactor the characteristics into higher level abstractions. Identify characteristics and behaviors that are common across concretes. Often behaviors can be seen as actions performed on the concretes (rather than actions performed by the concretes), and there is a distinct characteristic that each action depends on. I frequently call such a characteristic a capability, and the action is known as a generic algorithm. As this refactoring proceeds, the generic algorithms and the capabilities are identified as higher level abstractions shared or applied across concretes.
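Continuing the hypothetical sketch, the shared characteristic becomes a capability, and the action performed on the concretes becomes a generic algorithm:

```java
// Step 2: the shared characteristic is refactored into a capability.
interface Referenced {
    String referenceNumber();
}

// The action performed on the concretes becomes a generic algorithm that
// depends only on the capability, not on any particular concrete.
class AuditLog {
    void record(Referenced item) {
        System.out.println("AUDIT " + item.referenceNumber());
    }
}

// The concretes simply declare the capability they already exhibited:
//   class Invoice implements Referenced { ... }
//   class ShippingManifest implements Referenced { ... }
```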

Step 3. Refactor the algorithms and capabilities. As the generic algorithms are implemented against the capabilities of the concretes, the methods will often benefit from factoring out commonality (duplicate code). Parts of the algorithm may benefit from being specialized through pluggable policies. These motivations identify additional abstractions at finer granularities. The finer grained parts of algorithms will likely act upon finer grained capabilities on the concretes. Broader generalizations and narrower specializations appear in the model as a consequence of this iterative approach due to refactoring to reduce complexity (the complexity is refactored to become encapsulated within objects, which are more manageable than messy algorithms).
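A sketch of this third step in the same hypothetical domain, factoring a pluggable policy out of the generic algorithm:

```java
// Step 3: part of the generic algorithm is factored into a pluggable policy,
// a finer grained abstraction discovered by refactoring out duplication.
interface EntryFormat {
    String format(Referenced item);
}

class AuditLog {
    private final EntryFormat format;

    AuditLog(EntryFormat format) {
        this.format = format;
    }

    void record(Referenced item) {
        // The messy formatting details are encapsulated behind the policy.
        System.out.println(format.format(item));
    }
}

// Usage: the same generic algorithm, specialized by a policy.
//   new AuditLog(item -> "AUDIT " + item.referenceNumber()).record(invoice);
```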

The resulting model is forests of trees, along with lions and tigers and bears, as well as all the other concretes that exist in the world. Things as they really are. The way it ought to be.

mda – model driven architecture

In this article, I explore Model Driven Architecture (MDA), an approach that uses modeling for programming: models drive the generation of code according to patterns.

We tend to write a lot of similar code over and over. If you have ever written code to persist a Java object into a relational database table, you know what I mean. It is similar for every object. Although the characters you type are different, there is a definite pattern, and you know deep down that an intelligent person should not have to endure such tedium day in and day out. The same goes for a lot of user interface code, like data entry forms. The same code gets written again and again; only the names are different.
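For example, a hand-written JDBC persistence method tends to look something like this for every class; the Customer bean and its columns here are hypothetical, but the shape of the code is always the same.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// The same shape of code gets rewritten for every persistent class; only the
// table, columns, and getters differ. (Customer is a hypothetical bean with
// getId, getName, and getEmail accessors.)
class CustomerDao {
    void insert(Connection connection, Customer customer) throws SQLException {
        String sql = "INSERT INTO customer (id, name, email) VALUES (?, ?, ?)";
        try (PreparedStatement statement = connection.prepareStatement(sql)) {
            statement.setLong(1, customer.getId());
            statement.setString(2, customer.getName());
            statement.setString(3, customer.getEmail());
            statement.executeUpdate();
        }
    }
}
```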

I believe very strongly that model driven architecture is the way of the future for software development. Objects led to interfaces. Then, collaborations. Then, patterns and pattern languages (families of patterns). Model driven architecture is a natural extension of this trend. After identifying a pattern language, it should be possible to instantiate those patterns (write a software program) by expressing that intent through modeling. Modeling in the abstract sense is the act of capturing concepts in a notation.

I consider programming in Java to be a form of modeling. A programming language is not a very compact modeling language. Programming languages are designed to have a lot of redundancy to allow for error checking. Patterns in the code also indicate a lot of information that could be expressed in a more compact form. I view MDA as expressing software concepts in a very compact form (the model).

I think we need an example. I could write a program to represent pictures formed from geometric shapes. The software itself would probably require thousands of lines of code to perform the editing, rendering, and other functions. However, the application is really only dealing with a few basic concepts: ellipses, polygons, line segments, and a few others.

It should be possible to model each shape with a minimum number of parameters; e.g., an ellipse would be modeled by three points. In this manner, we could express any diagram understood by this application very compactly in a modeling language that captures the parameterization of shapes.
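A hypothetical Java sketch of such a compact model, interpreting the three points of an ellipse as its center and two axis endpoints:

```java
import java.util.List;

// A compact parameterization: each shape is captured by the minimum data
// needed to reconstruct it, rather than by the code that edits or renders it.
record Point(double x, double y) {}
record Ellipse(Point center, Point axisEndA, Point axisEndB) {}   // three points
record Polygon(List<Point> vertices) {}
record LineSegment(Point start, Point end) {}

// A whole diagram is then just data:
//   List.of(new Ellipse(new Point(0, 0), new Point(4, 0), new Point(0, 2)),
//           new LineSegment(new Point(-1, -1), new Point(5, 5)));
```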

Now, let’s take that graphics package and reuse it as the foundation for constructing an application that renders telecommunications networks. This application deals with concepts like location, network elements, ports, and connections. Each concept is rendered with symbols, which are merely recurring patterns of our familiar geometric shapes.

The parameterization of shapes is no longer the most appropriate modeling language for capturing the essential concepts of this application. A more compact representation can be achieved by parameterizing the concepts that are directly understood by the application. This is the essence of model driven architecture.
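Sketching the same idea hypothetically, the application’s own concepts become the parameters:

```java
// The application's own concepts become the modeling language (hypothetical names).
record Location(String name) {}
record NetworkElement(String id, Location location) {}
record Port(NetworkElement element, int number) {}
record Connection(Port a, Port z) {}

// Each concept is rendered by expanding it into a recurring pattern of the
// geometric shapes from the graphics package; the domain model stays compact,
// and the verbose shape model is derived from it.
```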

Not everyone views MDA as I do. Actually, I have yet to encounter anyone who has expressed it as simply as that. Most proponents of MDA ramble on about UML and MOF and platform independence and all kinds of other non-essential mumbo jumbo. The key idea that is lost among the OMG UML/MOF folks is the notion of unit economy in the theory of concepts; that is, the need to represent concepts in the most compact form. This is necessary within the mind, and this need is reflected in language.

UML makes perfect sense as a language for representing analysis and designs for software development. However, when one is looking to model concepts within any other context, the absurd verbosity and complexity exhibited by UML/MOF make as much sense as using geometric shapes as the modeling language for telecommunications networks.

success and failure

Fear of failure is paralyzing. It is the one source of impotence that can prevent all forward progress. Success becomes most imperative when there is a great deal to lose. That is also when it is most difficult to act towards achieving success.

We should not forget that success is a strategic vision. There will be tactical failures along the way. Often, failures are necessary in order to gain the requisite knowledge that leads to ultimate success. An initial unwillingness to go forth and learn from failure is the critical error that makes success impossible.

There is a tendency to mitigate risk by introducing rigid processes. If we go through these tried and true steps, then we will have satisfied ourselves that we did everything in our power. As an organization matures, its processes become more rigid and ingrained into the culture. They also become more invasive and fine grained. Every thought and action becomes regimented. The more procedures to follow, the less we rely on individuals to think. When individuals do not think for themselves, innovation ceases.

Institutionalized processes are put in place to ensure that best practices are documented. Organizations talk about continuous improvement and point to the procedures that allow innovation to feed back into the quality system. We must realize that quality is something we trade for at the expense of innovation and time to market. A quality system also seeks to prevent failure from occurring, but stifling the opportunity to learn from our mistakes is a fatal error, and a detriment to long term success.

revolutionary in an evolutionary world

This article is a follow-up to cost-value entanglement. Product management is notorious for being risk averse. This often comes from a history of dealing with frequent failures to deliver on time and with quality due to chronic cost-value entanglement. This initial architectural failure cripples a product forever, unless the root cause of the problem is recognized and corrected. Risk aversion grows as the product becomes brittle, and development becomes unwieldy due to ever-increasing code complexity.

Architecture is often thought of as a design function, but this is far from accurate. Use case and requirements analysis are specification activities, which are central to product architecture. It is most important to identify how users interact with the system and what functions the system performs. These aspects of the system should be encapsulated by its facade, the boundary between the externally visible behavior (interfaces) and the internal implementation. Poor product specification and poor separation between interface and implementation are the architectural manifestations of cost-value entanglement.

This leads to product management demanding a meticulous “evolutionary” approach to development, meaning only small patchwork enhancements are permitted. Significant redesign and technological improvements are impossible, because internal changes will disrupt the externally visible behavior, breaking things for the installed base of users. Such unreasonable constraints can be alleviated by disentangling the facade from the internals. Clearly identify the externally visible concepts in a precise model to support human understanding and interfaces for programmatic access. This enables evolving the facade independently of radical redesigns to the internal implementation. Without this flexibility, revolutionary change is impossible, if quality and time to market are to be maintained.
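A minimal Java sketch of the idea, with a hypothetical facade; the point is only the separation, not the particular methods.

```java
// Hypothetical facade: the externally visible behavior the installed base depends on.
interface OrderingFacade {
    String placeOrder(String customerId, String productCode, int quantity);
    String orderStatus(String orderId);
}

// The implementation behind the facade is free to be radically redesigned
// (new persistence, new domain model, new technology) between releases,
// so long as the behavior promised by the facade is preserved.
class OrderingSystem implements OrderingFacade {
    @Override
    public String placeOrder(String customerId, String productCode, int quantity) {
        // ...internal implementation, free to change radically...
        return "order-42";
    }

    @Override
    public String orderStatus(String orderId) {
        // ...internal implementation, free to change radically...
        return "PENDING";
    }
}
```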

leaders and followers

I have noticed that there are those leaders who blaze the trail. They lead the world kicking and screaming into the frontier. Then, there are those who are willing to follow, but have little capacity to blaze trails for themselves. Finally, there are those who refuse to follow, but have no skills to lead either.

What do we do, when we want to learn something? Research what others have done before. The vast majority of things can be accomplished without a deep understanding of the problem or solution; it just requires emulating success. Someone else did the hard work of understanding, and documented a procedure and a simplified explanation that others could absorb and reproduce. That is how humans work. To become a successful trailblazer requires a deep understanding for oneself, but also the ability to distill that into simple explanations and instructions that ordinary people can absorb. The sophisticated knowledge will probably be taken to the grave, but it is the idiot’s guide that will endure the ages.

Ordinary men need traditions and procedures to emulate. Otherwise, they wouldn’t be able to even feed themselves. Imagine the vast knowledge and investment in thought that was required to invent the cooking recipes that we know today. Many people would starve today if they were responsible for acquiring that knowledge themselves through creative thinking, rather than by emulation. Traditions, rituals, and procedures bring comfort to us, because they relieve our burden to think. Most people are incapable of advanced contemplation; emulation is their only recourse. It also instills a common denominator in a society’s culture, and it binds people together psychologically.

We grow up emulating our parents and neighbors by decorating a tree, putting up lights, singing songs, exchanging gifts, and gathering with loved ones. There is comfort in these mindless activities, because they are familiar and safe. This manner of blind emulation leaves us vulnerable to less benign inclinations, like smoking or religion. Ritualistic activities give me no comfort at all. ’Tis the season to celebrate the most basic survival technique: mindless emulation.

cost-value entanglement

With software, the value is in the concepts (model), brand (reputation), people (experience), and customers (installed base). Not so much the design or implementation artifacts. Often we place too much focus on the code, because it is the most tangible manifestation of our investment.

This is a grave error. The code is the result of the sunk cost, but it is not the true value of a software product. The value lies in the capabilities enabled by the software. The software is the means, not the ends. The code must be free to evolve rapidly and radically. If it is not, it will not be able to survive in this ever-changing business environment.

Software organizations that are code-centric have long product release cycles, slow response times to changing requirements, and poor agility to scale the business to expand its market. Code-centric organizations rely heavily upon a skilled development team’s intimate and long-standing relationship to the code. This leaves them vulnerable to competition and employee turnover.

We must be value-centric, not cost-centric in identifying goals. The value is the ends, whereas the costs are the means. The costs (code) must be very flexible, adjusting with agility to the times. The value must be durable. The value must be identified, disentangled from the costs, represented tangibly and separately, and communicated widely. Cost reduction implies discounting investment in code. Decoupling value from code as much as possible allows costs to be reduced without impacting the value.

Complexity grows with code size. Costs grow nonlinearly with code complexity. When you suffer from cost-value entanglement, then code complexity will sink your entire product in time. Entanglement takes away the ability to significantly redesign and reorganize the code, because it puts the value at tremendous risk. Without the freedom to significantly redesign, eventually the product will collapse under the ever-growing weight of its code complexity.