Tag Archives: methodology

going meta – the human-machine interface

Anatomy of an n-tier application

A fully functioning web app involves several layers of software, each with its own technology, patterns, and techniques.

At the bottom of the stack is the database. A schema defines the data structures for storage. A query language is used to operate on the data. Regardless of whether the database is relational, object-relational, NoSQL, or some other type, the programming paradigm at the database tier is distinctly different from, and quite foreign to, the layers above.
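
As a minimal sketch of that mismatch (the table, columns, and JDBC URL here are hypothetical), consider raw JDBC access from the middle tier: the SQL lives in opaque strings that the surrounding language cannot check.

    // Hypothetical example: querying a relational store directly over JDBC.
    // The SQL is embedded in strings, invisible to the Java compiler and type system.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class CustomerLookup {
        public String findName(long id) throws SQLException {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:demo");
                 PreparedStatement ps = con.prepareStatement(
                         "SELECT name FROM customer WHERE id = ?")) {
                ps.setLong(1, id);
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString("name") : null;
                }
            }
        }
    }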

Above the database is the middle tier or application server. This is where the server-side business logic, APIs, and Web components reside.

There is usually a set of persistent entities, which provide an object abstraction of the database schema. The database query language (e.g., SQL) may be abstracted into an object query language (e.g., JPQL) for convenience. The majority of CRUD (create, read, update, delete) operations can be done naturally in the programming language without needing to formulate statements in the database query language. This provides a persistent representation of the model of the application.
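A minimal sketch of such a persistent entity, using JPA annotations (the Customer entity and its columns are hypothetical, not from any particular application):

    // Hypothetical JPA entity: an object abstraction over a CUSTOMER table.
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.Table;

    @Entity
    @Table(name = "CUSTOMER")
    public class Customer {

        @Id
        @GeneratedValue
        private Long id;          // bound to the primary key column

        private String name;      // bound to the NAME column by convention

        public Long getId() { return id; }
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

CRUD then reduces to calls such as em.persist(customer), or a JPQL query like SELECT c FROM Customer c WHERE c.name = :name, rather than hand-written SQL.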

Above the persistent entities is a layer of domain services. The transactional behavior of the business logic resides in this layer. This provides the local API that encapsulates the essence of the application’s functions.
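
A sketch of one such service, assuming an EJB-style container where each public method runs in a container-managed transaction (CustomerService and its operations are hypothetical):

    // Hypothetical stateless session bean: the transactional business logic
    // that forms the local API of the application.
    import javax.ejb.Stateless;
    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    @Stateless
    public class CustomerService {

        @PersistenceContext
        private EntityManager em;

        public Customer register(String name) {
            Customer c = new Customer();
            c.setName(name);
            em.persist(c);        // enlisted in the surrounding transaction
            return c;
        }

        public Customer find(long id) {
            return em.find(Customer.class, id);
        }
    }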

The domain services are usually exposed as SOAP or RESTful services to remote clients, both for access from Web browsers and for machine-to-machine integration. This necessitates deriving JSON and/or XML representations from the persistent entities (e.g., using JAXB). This provides a serialized representation of the model of the application.
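
A sketch of that exposure using JAX-RS, with a separate JAXB-annotated binding object as the serialized representation (the resource path, DTO, and mapping are hypothetical):

    // Hypothetical JAX-RS resource exposing the domain service to remote clients.
    import javax.ejb.EJB;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.xml.bind.annotation.XmlRootElement;

    @Path("/customers")
    public class CustomerResource {

        // Binding object (not the persistent entity); JAXB derives XML/JSON from it.
        @XmlRootElement(name = "customer")
        public static class CustomerDto {
            public Long id;
            public String name;
        }

        @EJB
        private CustomerService service;

        @GET
        @Path("{id}")
        @Produces({MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON})
        public CustomerDto get(@PathParam("id") long id) {
            Customer entity = service.find(id);
            CustomerDto dto = new CustomerDto();   // translate entity -> serialized form
            dto.id = entity.getId();
            dto.name = entity.getName();
            return dto;
        }
    }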

We finally come to the presentation layer, which is divided into server-side components residing in the application server and client-side components that execute in the Web browser. Usually there is a presentation-oriented representation called a view-model, which matches the information rendered in views or entered on forms. The views and controls are constructed from HTML, CSS, and JavaScript. The programming paradigm in these technologies is distinctly different from the layers below.
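
One way to realize the server-side half of that view-model, assuming a JSF-style stack (the bean, its navigation outcome, and the page it backs are hypothetical):

    // Hypothetical JSF backing bean acting as the view-model for a customer form.
    import java.io.Serializable;
    import javax.ejb.EJB;
    import javax.faces.view.ViewScoped;
    import javax.inject.Named;

    @Named
    @ViewScoped
    public class CustomerView implements Serializable {

        @EJB
        private CustomerService service;

        private String name;      // bound to an <h:inputText> on the form

        public String getName() { return name; }
        public void setName(String name) { this.name = name; }

        public String save() {    // invoked by the form's submit button
            service.register(name);
            return "customers?faces-redirect=true";
        }
    }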

Extending the application

Let’s examine what it would take to extend an application with a property of a simple type (e.g., string) on an entity. The database schema would need to be altered. A persistent entity would need a field, getter and setter methods, and a binding between the field and a column in the database schema. The property may be involved in the logic of the domain services. Next, the JSON and XML binding objects would need to be augmented with the property, and logic would be added to transform between these objects and the persistent entities used by the domain services. At the presentation layer, the view-model would be augmented with the property to expose it to the views. The various views that show an entity’s details and search results would likewise be enhanced to render the property. For editing and searching, a field would need to be added to forms, with corresponding validation of any constraints associated with that property and on-submit transaction handling.
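
To make the repetition concrete, here is a rough sketch of the edits required for a single hypothetical email property; these are illustrative fragments, one per layer, not a complete program:

    // 1. Database schema:
    //      ALTER TABLE CUSTOMER ADD COLUMN EMAIL VARCHAR(255);

    // 2. Persistent entity (Customer): field, column binding, accessors.
    @Column(name = "EMAIL")
    private String email;

    public String getEmail() { return email; }
    public void setEmail(String email) { this.email = email; }

    // 3. Binding object (CustomerDto) and the entity <-> DTO translation:
    //      public String email;
    //      dto.email = entity.getEmail();

    // 4. View-model, views, and forms: a matching property with validation
    //    (e.g., @Pattern(regexp = ".+@.+")), an input field on the edit form,
    //    a column in search results, and on-submit handling to carry the
    //    value back down through the service to the database.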

That is an awful lot of repetitive work at every layer. There are many technologies and skill sets involved. Much of the work is trivial and tedious. The entire process is far from efficient. It is worse if there is division of labor among multiple developers who require coordination.

A better platform

When confronted with coordinating many concomitant coding activities to accomplish a single well-defined goal, it is natural for an engineer to solve the more general problem rather than doing tedious work repeatedly. The solution is to “go meta”; instead of programming inefficiently, develop a better language to program in. Programming evolved from machine language to assembly language so that humans could express instructions more intuitively. Assembly evolved into structured languages with a long history of advances in control and data flow. Programming languages have evolved in conjunction with virtualization of the machine (i.e., bytecode) to provide better abstractions of software and hardware capabilities. In the spirit of Guy L. Steele’s Growing a Language talk from OOPSLA ’98, components, libraries, and frameworks have been developed in programming languages that support, within limits, extending the language itself. All of these innovations continually raise the level of abstraction to increase human productivity.

We are hitting the limits of what can be expressed efficiently in today’s languages. We have a database storage abstraction that is separate from server-side application logic, which is itself separate from client-side (Web browser) presentation. There is growing support for database and server-side abstractions to scale beyond the confines of individual machines. Clustering enables software to take advantage of multiple machines to distribute load and provide redundancy in case of failure. However, our abstractions seem to stop at the boundaries between database storage, server-side application logic, and client-side presentation. Hence, we have awkward impedance mismatches when integrating top-to-bottom. We also have impedance mismatches when integrating heterogeneous application components or services: RESTful and SOAP Web Services cross the boundaries between distributed software components, but this style of control and data flow (remote procedure calls) is entirely foreign to the programming language. That is why we must perform inconvenient translations between persistent entities and their bindings to various serialized representations (JSON, XML).

It seems natural that these pain points will be relieved by again raising the level of abstraction so that these inefficiencies will be eliminated. Ease of human expression will better enable programming for non-programmers. We are trying to shape the world so that humans and machines can work together harmoniously. Having languages that facilitate effective communication is a big part of that. To get this right, we need to go meta.

you aren’t gonna need it – short-sighted

“You aren’t gonna need it” (YAGNI) is a principle espoused by extreme programming (XP). It says to implement things only when you actually need them, never when you foresee the need. I see serious problems with this principle; it is short-sighted.

From a product management perspective, the product owner’s job is to define a release roadmap and a backlog of requirements for which they foresee a need in the market. There is a vision and product strategy that guides what should be developed and how to develop it. Does the YAGNI principle apply equally to this backlog? It can be argued that anything on the future roadmap is subject to change. Therefore, developers should not be building features, infrastructure, and platform capabilities in anticipation of those future needs, no matter how confident they are that the backlog will be implemented.

Doing the minimum to meet the immediate need leads to technical debt. YAGNI causes developers to defer solving more difficult architectural problems until the need becomes critical. Architectural problems (“-ilities”) are often systemic, which means as the code base grows the cost to refactor and fix becomes greater.

Security and error handling are examples of systemic issues that are very difficult to fix later. If the problem grows too large, it becomes too costly to fix, because the work cannot be accomplished within a sprint without leaving the code broken (i.e., failed builds, failed tests), and that is absolutely forbidden. That constraint makes it impractical to fix large-scale architectural problems throughout the code base once they have become intractable. The software eventually collapses under the weight of its technical debt because of compounded interest, as the debt multiplies with a growing code base.

If we take YAGNI too seriously, we are being deliberately short-sighted, ignoring the requirements that we can (and should!) foresee. YAGNI encourages us to discount future requirements, expecting that they may not come to pass. A myopic approach tends to lead to a dead end, if we do not take care to set a course in the right direction, and to ensure that we have equipped ourselves to travel there.

If you allow YAGNI to make the road ahead too difficult to travel, those future requirements certainly will not come to pass, because you’ll be broken down on the side of the road. There won’t be anyone to come rescue you, because although you could foresee the need for roadside assistance, you didn’t pay for it on your auto insurance policy, because “you aren’t gonna need it”.

designs are useless – like planning

“In preparing for battle I have always found that plans are useless, but planning is indispensable.” -Eisenhower (from Planning Extreme Programming)

I believe this is true, because “no plan survives contact with the enemy”. In software, the enemy is in the form of dark spirits hidden within the code, both legacy and yet to be written. Because plans (the schedule of work to be done) are intimately tied to designs (models of the software), it must also be true that no design survives contact with the enemy. Any programmer who begins writing code based on a preconceived design will almost immediately feel the pain of opposing forces that beg to be resolved through refactoring; programmers who lack this emotional connection with their code are probably experiencing a failure of imagination (to improve the design).

Therefore, I think we can return to the original quote and state its corollary: designs are useless, but designing is indispensable.

All this is to say that for the above reasons, I think these ideas strongly support the notion that the process artifacts (e.g., Functional Solution Approach, Functional Design, Technical Design) are more or less useless, because as soon as they are written they are obsoleted by improvements that are discovered contemporaneously with each line of code written, but the act of producing them (the thought invested in designing) is indispensable to producing good software.

This leads me to conclude that we should not fuss so much about the actual content of the artifacts, so long as they capture the essence to show a fruitful journey through designing—that problems have been thought through and decisions have been based on good reasoning. Worrying about the content being perfectly precise, comprehensive, and consistent ends up being a waste of effort, since the unrelenting act of designing will have already moved beyond the snapshot captured in the artifact.

Coincidentally, this theme also aligns with the notion of a learning organization espoused by Lean. The value of designing is the facilitation of learning.

vorlons and shadows – philosophy

Babylon 5 is a story about two differing philosophies. The Vorlons promote a life of order, stability, and peace. The Shadows promote a life of chaos, destabilization, and conflict. There is a similar philosophical difference in software development.

Hubris of Prescience

Most commercial software projects lean toward a philosophy of order, stability, and peace. Optimism leads to anticipating stable requirements, which would allow for a stable design. This outlook influences management and developers to behave in a particular way. We come to expect requirements and design to remain orderly and stable, and a peaceful progression of events to ensue. Successful methods and techniques are expected to continue to be viable. The products of past investments in research are expected to retain their value with the passage of time. Knowledge of the present incubates confidence in being able to anticipate future needs. This confidence is reinforced by the belief that momentum has longevity. This belief is correct, but not in the way that we would hope for a healthy technology business.

The technology market relies upon continuous innovation. Innovation is about disruptive change. A strategy based on incremental improvement bets on a steady pace of change, where one’s own product is the market leader. Trailing competitors are constantly seeking to impose revolutionary change upon the market to overtake the market leader. If one is not in the lead, then it makes sense to disrupt the market to put oneself in an advantageous position. New markets are created, where old ones are destroyed. Competitive innovation radically alters requirements, thereby invalidating entrenched designs. Disruptive change accelerates demand for technology through obsolescence.

The lure for the market to adopt innovative products is efficiency gain. Gains in efficiency increase productivity or reduce costs – or both. A methodology promoting the entrenchment of the status quo is unable to adapt to an environment of unrelenting disruptive change. In technology, the status quo is precisely what must be destroyed in order to sustain a healthy market. A strategy of maintaining an entrenched design is a strategy of certain failure in the technology market.

Creativity and Renewal

While a life of order, stability, and peace breeds comfort, it also leads to stagnation and complacency. Creative individuals will recognize where the status quo is not good enough. A culture that institutionalizes the status quo cripples creativity, ensuring a disadvantage in relation to competitors, who seek disruptive change.

Innovation is nurtured by an entrepreneurial spirit. There are several factors required to facilitate creative thinking: freedom, motivation, inspiration, and courage. Creativity needs a culture that promotes creative ideas by nurturing independent thinking, not obedience to instructions delivered by management or designated “thought leaders”; individuals will be willing to think if they are entrusted to do so. Motivation comes from achievable goals, incentives, and the rewards of a job well done. Inspiration comes from the growth and enjoyment of working with competent peers, who become a source of knowledge. Finally, courage is provided by the willingness to incur risk to achieve the rewards that come from unconventional thought; people who are confident enough in their ability to attempt great things should be encouraged to do so, because overcoming technical problems is daunting enough without the burden of an unsupportive culture.

The key characteristic of a technology business that is prepared for innovation is agility. It must be able to adapt quickly to change. It must embrace disruptive change and use change to its advantage, rather than being resistant or vulnerable to it. This includes a high degree of tolerance for risk, because change is inherently risky, and the more disruptive the change, the greater the risk (and, potentially, the reward). Innovation opens opportunities, but a business must be able to pounce on an opportunity to benefit.

As long as Moore’s Law continues to hold, we can expect innovative growth to continue. The cycle of technology innovation and obsolescence is the dominant trend. A technology business that does not institutionalize a culture that embraces disruptive change will not be a business for long.

a better software business model

In my previous article, innovation as the enemy of maturity, I describe some organizational and technical patterns that can help to empower innovation, as a software development firm grows. Removing internal impediments to innovation does not alleviate the commercial impediments.

Commercial off the shelf (COTS) software aims to provide generalized capabilities in order to be attractive to a broad market. Enterprise applications are sensitive to business processes and business policies, which are highly specialized to each organization. One size does not fit all organizations. Each deployment will have its own idiosyncrasies. Some will demand features that are not generally useful to others. Others will demand customizations that are exclusive, not made available to the general market. The former pollutes an application, so that each deployment utilizes a shrinking subset of the features while having to endure the growing footprint of unwanted baggage. The latter burdens the software with the growing complexity of a diverse set of customizations. Both are symptoms of chronic bloat.

COTS software is normally structured so that there is a license fee to purchase an application for a particular use, followed by a recurring annual maintenance fee for support and bug fixes. The customizations often associated with enterprise software incur additional fees for professional services. This business model influences the software in the following ways.

  1. New features are constrained by the software vendor’s release schedule, which is usually on a 6-18 month cycle;
  2. Feature prioritization is weighted by the general market demand;
  3. Development is limited by the resources provided by the vendor; and
  4. Design decisions are dictated by the software vendor.

The COTS software business is modeled to maximize the software vendor’s revenue by seeking out the highest value subset of generalized features that are widely in demand by the greatest number of customers. By attempting to satisfy a few needs for all customers, no single customer can be wholly satisfied. Usually a generalized solution to a problem with many specialized variations can only partially solve the problem for any particular circumstance.

Enterprise software requires a business model that can adequately satisfy the demand for specialized solutions to specialized problems, while leveraging generalized solutions to generalized problems. A customer with a need for features to be developed according to a schedule should be able to fund and resource that development without being constrained by a single source and the lack of design authority.

Customers can be better served by a business model where an annual fee is paid for a subscription to the software source code, development infrastructure, customer support, and community resources. The market for innovative designs expands to include contributors who are much more familiar with the problem space (the user community). Competition helps to seek out the best solution, and it benefits the whole community. Each customer selectively builds the application to include only the desired features, with their specialized extensions and customizations, on their own schedule. Enterprise customers tend to contract professional services to implement extensions and customizations. This community source model empowers consultants, and attracts systems integrators and channel partners to improve sales rather than compete. A subscription-based community source model is good for customers and good for the software vendor.

innovation as the enemy of maturity

In my previous article, maturity as the enemy of innovation, I identified the destructive forces of organizational maturity. As an engineer, I cannot allow a problem to go unsolved.

To stem the tide of chronic risk aversion, we must imprint the entrepreneurial spirit into the culture as a core value. The courage to initiate calculated risk-taking needs to be admired and rewarded. The stigma associated with failure must be removed by appreciating the knowledge gained through lessons learned.

The cost of research and experimentation must be reduced. The pace of development must be rapid. These are achieved by keeping the team size small. The most risky and unproven ideas should be attacked by an individual, not a team. Functional teams need alignment on direction and a degree of agreement. Revolutionary advancement is frequently the result of radical thinking. New directions, different techniques, and breaking well-established rules are unpopular, especially to those with emotional investments in past innovations. There are fewer disagreements in a small team, and none at all in a team of one.

Uniformity of thinking leads to stagnation. Diversity must be encouraged, and this is promoted through a free market for ideas. Organizations that rely on a command-and-control style of management to set direction eliminate the competition that nurtures a diversity of ideas. Management should empower technical decision-making to be done bottom-up rather than top-down.

Once new ideas have been adequately proven, there must be ways of teaching them to others and incorporating them into the product. Product development is often done sequentially, one release at a time. The immediate release focuses on short-term commitments, and the delivery is usually constrained by schedules and costs that can ill afford risk. Riskier development must be allowed to proceed concurrently on longer-term schedules, without overly constraining the process. Developments from the unstable branch should be merged into the main product release as they become ready. This provides an environment that accommodates both risky and risk-averse development.

The above techniques can be effective in remedying the organizational impediments to innovation. However, there are also commercial impediments that must be overcome. That will be the topic of my next article.

maturity as the enemy of innovation

As a software development organization matures, its culture institutionalizes practices that promote quality. Maturity is for quality. The most obvious measures of quality are in terms of defects. The quest for lower defect counts is a noble one.

At the same time, complexity grows as larger feature sets and loftier requirements are incorporated into the software. With increasing complexity comes a larger development team. A larger team means more division of labor, higher cost of management overhead, and more careful planning to be able to utilize those resources. Division of labor requires the delegation of distinct responsibilities to specialized roles. Decision-making becomes more dispersed and distributed, and a decision will have wider impacts on team members, so good communication becomes a critical success factor.

Unfortunately, decision-making, communication, and the distribution of responsibility are never perfect. The more complex the software, the larger the team, the more distributed the responsibilities, the more specialized the roles, the higher the cost of development. This is not a scalable organizational model. Once a software organization has reached this stage of maturity, it is doomed by inefficiency. Creative individuals are no longer empowered to quickly realize their ideas. Two key requirements to motivate and nurture a creative mind are independence and freedom. The degree to which these environmental factors are blunted is the degree to which innovation is impeded.

Because decision-making is distributed among many people, it becomes impossible for innovative ideas to be realized without winning consensus. Innovative ideas are unconventional ideas. A novel approach often needs to break well-established rules and go against practices that have worked in the past. Innovation is destructive to the status quo. It must be disruptive in order to rise and overtake older practices. Innovation is the product of experimentation and risk-taking, usually resulting in failure and rework. Without an environment that encourages this, there will be no innovation.

The quest for quality through institutionalized practices and disciplined management is in many ways counter to the independence and freedom that motivate a creative mind. Careful project management and planning discourage experimentation with risky new ideas (technical, organizational, and commercial), where many unknowns make results difficult to predict. Risky ideas need experimentation so that many failed attempts can be discarded quickly, allowing successful ideas to be taken to production. Mature organizations respond to higher risk by applying stronger management practices for mitigation, thereby increasing the cost and slowing the pace of failure towards success. In fact, once management smells failure, they immediately work to eliminate it, thereby destroying its very value as a vehicle to learn what will be needed to succeed. This is how many mature organizations have killed off the entrepreneurial spirit. This is the reason why most innovative ideas come from small groups of individuals working in a garage or a small start-up company.

the software assembly line

Creativity is stifled by a manufacturing process. Project managers perceive the development process to begin by identifying and analyzing requirements in full, until they are perfect for feeding to designers. Then, designers will satisfy those requirements by detailing designs, until they are perfect for coding. Then, code will be written and tested, until all the pieces can be integrated and QA tested. No matter how often this process fails to deliver innovative solutions, software companies continue to blame the lack of compliance with the process, rather than the process itself.

Modern software methodology suggests that the disciplines operate concurrently: requirements analysis, design, coding, integration, and testing all happen in parallel and continuously. The process is iterative, attacking problems incrementally and refining rapidly with every cycle. Project managers claim to support this approach, and yet they continue to define gargantuan iterations (several months in duration) with entire sets of features fully implemented. Within each iteration, they plan distinct milestones for completing requirements, design, coding, unit testing, integration, and QA.

This pattern seems almost inevitable when the project team is large (>20) and composed of individuals who are highly specialized in their discipline. Designers expect to receive good requirements from the analyst. Coders expect to receive good designs, if they were not the designer. Testers expect to receive good code. Unfortunately, the software assembly line does not produce good software, because development is a creative process, not a manufacturing process.

You do not produce good software based on dreaming up a perfect design. You need to start with a great deal of unknowns, some gross assumptions (likely overly naive), and a poor understanding of the problem. Proceed with the knowledge that you will make mistakes, and these will facilitate learning how to solve the problem better. Build it quickly and screw up several times, improving with every attempt. Eventually, the problem will be understood, and it will be adequately solved by the solution at hand, despite its imperfections. Project managers do not understand how to plan for a process that involves spinning around cycles of two steps forward and one step back. They think that an assembly line is the model of efficiency, and iterations are too chaotic and unpredictable, because of the constant rework. A software assembly line is certainly orderly and predictable. It predictably produces little innovation at a very high cost and over a long duration.

where have all the hackers gone?

There is a real difference between those who develop software as a job and those who do it for the love of it. It is easy to distinguish between them. The former believe that only straightforward problems can be solved in a timely manner, and that the less to be done, the better. The latter believe that straightforward problems are mundane and tedious, and hope to spend their time on more challenging problems. True hackers are a rare breed.

The biggest difficulties in software are non-technical. They are psychological. When a group of people have produced a body of work, they become emotionally invested in it. They feel that they know something. It becomes their anchor to reality, and every future possibility is considered in relation to that knowledge. With a sense of security comes malaise. It impedes the child-like motivation to reach out into the unknown and discover. Legacy knowledge within accomplished veterans becomes toxic to the culture, because the perceived value of legacy knowledge eclipses the potential value of further discovery, which carries more risk. Cultural attachment to legacy kills innovation. It blunts the entrepreneurial spirit.

A true hacker does not give a rat’s ass about what is established by what came before. Everything is open to destruction and rebirth. The risk of expending effort towards a failed attempt is irrelevant, because a true hacker understands that every failure is a positive step towards ultimate success.

planning and estimation

We spend an extraordinary amount of time planning our development, because it is so costly to get any amount of code written. Developers are asked to give estimates. They demand to know what is in scope, down to the individual operation. It’s crazy how much design work and detail needs to be thought through before reaching that point. Then, they estimate maybe 10-20 hours of work per method. Even a tiny piece of functionality results in estimates of several person-weeks of effort. This is how software professionals get their jobs done.

Where have all the hackers gone?

I came up with a cool idea on the weekend. I decided to code it on Sunday. I spent about four hours in total coding the entire thing, analyzing and designing as I went. It turned out to be over 2000 lines, which I proceeded to unit test. I put in another three hours tonight debugging a rather obscure problem (actually wasting 2.5 hours chasing a non-existent bug due to my own stupidity), which I had discovered in my sleep the other night.

I simply cannot understand why experienced software professionals cannot hack out 3000 lines of unit tested code on a good day, and average 1000 lines of unit tested code per working day, with low bug counts.

I’m almost glad to see that many of North America’s grunt coding jobs are being outsourced to India. The cost of our software professionals in combination with their low productivity and high maintenance attitude is unbearable.