Category Archives: software

a better software business model

In my previous article, innovation as the enemy of maturity, I described some organizational and technical patterns that can help to empower innovation as a software development firm grows. Removing internal impediments to innovation does not, however, alleviate the commercial impediments.

Commercial off-the-shelf (COTS) software aims to provide generalized capabilities in order to be attractive to a broad market. Enterprise applications are sensitive to business processes and business policies, which are highly specialized to each organization. One size does not fit all organizations. Each deployment will have its own idiosyncrasies. Some will demand features that are not generally useful to others. Others will demand customizations that are exclusive, not made available to the general market. The former pollutes an application, so that each deployment utilizes a shrinking subset of the features, while having to endure the growing footprint of unwanted baggage. The latter burdens the software with the growing complexity of a diverse set of customizations. Both are symptoms of chronic bloat.

COTS software is normally structured so that there is a license fee to purchase an application for a particular use, followed by a recurring annual maintenance fee for support and bug fixes. The customizations often associated with enterprise software incur additional fees for professional services. This business model influences the software in the following ways:

  1. New features are constrained by the software vendor’s release schedule, which is usually on a 6-18 month cycle;
  2. Feature prioritization is weighted by the general market demand;
  3. Development is limited by the resources provided by the vendor; and
  4. Design decisions are dictated by the software vendor.

The COTS software business is modeled to maximize the software vendor’s revenue by seeking out the highest-value subset of generalized features in demand by the greatest number of customers. By attempting to satisfy a few needs for all customers, the vendor wholly satisfies none of them. A generalized solution to a problem with many specialized variations can usually only partially solve the problem for any particular circumstance.

Enterprise software requires a business model that can adequately satisfy the demand for specialized solutions to specialized problems, while leveraging generalized solutions to generalized problems. A customer that needs features developed according to a schedule should be able to fund and resource that development without being constrained to a single source or denied design authority.

Customers can be better served by a business model in which an annual fee is paid for a subscription to the software source code, development infrastructure, customer support, and community resources. The market for innovative designs expands to include contributors who are much more familiar with the problem space (the user community). Competition helps to seek out the best solution, and it benefits the whole community. Each customer selectively builds the application to include only the desired features, with their specialized extensions and customizations, on their own schedule. Enterprise customers tend to contract professional services to implement extensions and customizations. This community source model empowers consultants, and it attracts systems integrators and channel partners to improve sales instead of competing. A subscription-based community source model is good for customers and good for the software vendor.

the software assembly line

Creativity is stifled by a manufacturing process. Project managers perceive the development process as beginning with identifying and analyzing requirements in full, until they are perfect for feeding to designers. Then, designers satisfy those requirements by detailing designs, until they are perfect for coding. Then, code is written and tested, until all the pieces can be integrated and QA tested. No matter how often this process fails to deliver innovative solutions, software companies continue to blame the lack of compliance with the process, rather than the process itself.

Modern software methodology suggests that the disciplines operate concurrently: requirements analysis, design, coding, integration, and testing all happen in parallel and continuously. The process is iterative, attacking problems incrementally and refining rapidly with every cycle. Project managers claim to support this approach, and yet they continue to define gargantuan iterations (several months in duration) with entire sets of features fully implemented. Within each iteration, they plan distinct milestones for completing requirements, design, coding, unit testing, integration, and QA.

This pattern seems almost inevitable when the project team is large (>20) and composed of individuals who are highly specialized in their disciplines. Designers expect to receive good requirements from analysts. Coders expect to receive good designs, if they were not the designers themselves. Testers expect to receive good code. Unfortunately, the software assembly line does not produce good software, because development is a creative process, not a manufacturing process.

You do not produce good software by dreaming up a perfect design. You need to start with a great many unknowns, some gross assumptions (likely overly naive), and a poor understanding of the problem. Proceed with the knowledge that you will make mistakes, and that these will teach you how to solve the problem better. Build it quickly and screw up several times, improving with every attempt. Eventually, the problem will be understood, and it will be adequately solved by the solution at hand, despite its imperfections. Project managers do not understand how to plan for a process that spins through cycles of two steps forward and one step back. They think that an assembly line is the model of efficiency, and that iterations are too chaotic and unpredictable because of the constant rework. A software assembly line is certainly orderly and predictable. It predictably produces little innovation at a very high cost and over a long duration.

where have all the hackers gone?

There is a real difference between those who develop software as a job and those who do it for the love of it. It is easy to distinguish between them. The former believe that only straightforward problems can be solved in a timely manner, and the less to be done, the better. The latter believe that straightforward problems are mundane and tedious, and hope to spend time on more challenging problems. True hackers are a rare breed.

The biggest difficulties in software are non-technical. They are psychological. When a group of people has produced a body of work, they become emotionally invested in it. They feel that they know something. It becomes their anchor to reality, and every future possibility is considered in relation to that knowledge. With a sense of security comes malaise. It impedes the child-like motivation to reach out into the unknown and discover. Legacy knowledge within accomplished veterans becomes toxic to the culture, because the perceived value of legacy knowledge eclipses the potential value of further discovery, which carries more risk. Cultural attachment to legacy kills innovation. It blunts the entrepreneurial spirit.

A true hacker does not give a rat’s ass about what is established by what came before. Everything is open to destruction and rebirth. The risk of expending effort towards a failed attempt is irrelevant, because a true hacker understands that every failure is a positive step towards ultimate success.

OOPs, here comes SOA (again)

Last week, I posted the following facetious comment about Service Oriented Architecture (SOA).

Posted By: Ben Eng on May 24, 2004 @ 09:27 PM in response to Message #123260

Strip all the behavior off of our domain objects. Define data structures to pass around. Provide sets of operations as stateless services, which perform functions on data, and return data to its callers. That sounds revolutionary.

All we have to do now is eliminate the object-oriented programmers, and the revolution will be complete.

There is a serious side to the issue. The recent popularity of SOA should give object oriented programmers much pause. Where do objects fit in the world of services? Are objects passé?

In the book Bitter EJB (p. 57), a distinction is made between service-driven and domain-driven approaches. I am a strong proponent of domain-driven approaches. However, maybe three years ago, I began feeling that a purely domain-driven approach was awkward for modeling object graphs with very sophisticated global constraints. This is a natural topic of interest, considering that I model telecommunications networks for a living.

Factoring out the global constraints, instead of embedding them within the domain objects, seemed natural. I started to develop stateless session local interfaces (using plain old Java objects, not EJB), which use domain objects as parameters and return values. Then I found myself adding conversational state to the session objects. I’m not sure where I’m going with this yet. We’ll see after a few more weeks or months of experimentation.
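Here is a minimal sketch of the idea, with invented names (my real domain is far more elaborate): a global constraint is factored out of the domain objects into a stateless session object, yet domain objects still flow in as parameters and out as return values.

    // Invented names for illustration only; not from any real system.
    class Port {
        private final String type;
        Port(String type) { this.type = type; }
        boolean isCompatibleWith(Port other) { return this.type.equals(other.type); }
    }

    class Connection {
        private final Port a, b;
        Connection(Port a, Port b) { this.a = a; this.b = b; }
    }

    // A stateless session object as a plain old Java object (no EJB).
    class NetworkSession {
        Connection connect(Port a, Port b) {
            if (!a.isCompatibleWith(b)) {
                throw new IllegalArgumentException("incompatible ports");
            }
            return new Connection(a, b);
        }
    }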

After having developers fumble around with EJBs for the past four years, I am becoming disillusioned with the technology. XML and Web Services are going even further down the road of coarse-grained distributed components, which are a consequence of the high latency of expensive data marshaling and remote communications. Whenever EJB or Web Services is involved, the client code looks like garbage: calling functions on data, rather than methods on objects. Optimizing the interface to reduce client-server round trips pollutes the interface until it also looks like garbage; objects become so polluted with denormalized data that the concepts are no longer recognizable. When faced with sophisticated object graphs, rather than simple services, the approach leads ultimately to garbage and more garbage. I’m tempted to call the approach Garbage Oriented Architecture, rather than SOA.
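To make the contrast concrete, here is a caricature of the two styles, with invented names:

    // Service style: bare data structures and stateless functions on data.
    class ElementData {
        String statusCode;     // denormalized fields accumulate here to
        String siteName;       // save client-server round trips
    }

    class ElementServices {
        static boolean isInService(ElementData d) { return "IS".equals(d.statusCode); }
    }

    // Object style: behavior lives with the data it operates on.
    class Element {
        private String status = "IS";
        boolean isInService() { return "IS".equals(status); }
    }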

oops reuse misuse – part 2

part 2 of infinity – reuse and replacement

(Continued from part 1 – introduction.) Organizations frequently become enamored with building software infrastructure and frameworks for reuse. They see the same problems being solved again and again in different ways by different people. They see this duplication of effort as a business inefficiency that can be optimized by factoring the commonality into reusable components, so that wheels are not reinvented. Centralized control through corporate standards is another popular activity.

One sign of successful reuse is the proliferation of users. With each new user there is a new dependency. The dependency is manageable so long as the requirements are stable. If new requirements must be satisfied, as is typical of rapidly evolving technology, these dependencies become a serious impediment to productivity.

A fundamental enabler of component reuse is the software contract. The interface that isolates the underlying implementation details from its users must remain stable to allow the two to evolve independently. However, changing requirements often result in interface changes, and any change to the software contract will break its users. As the number of users grows, the impact of this breakage becomes more severe. Thus, the goal of scaling software functionality linearly with development effort is defeated by changing requirements.
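A trivial sketch illustrates the breakage (names invented):

    // Version 1 of a reusable component's contract.
    interface Notifier {
        void send(String message);
    }

    // A user written against version 1 of the contract.
    class Alarm {
        void raise(Notifier n) { n.send("threshold exceeded"); }
    }

    // A new requirement (say, message priorities) changes the contract to
    //     void send(String message, int priority);
    // and every implementation and every caller of Notifier breaks at once.
    // The more users the component has, the more severe the breakage.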

Only with perfect foresight could we avoid changing requirements. If we could employ prescience so that all new requirements resulted in wholly new software components, leaving existing software components intact, we would have a predictable and scalable development process. Unfortunately, human fallibility prevents us from having good visibility into the future, and our software designs suffer.

We must be honest and recognize that software, no matter how well designed, has limited ability to evolve to meet new and changing requirements over time. We must build into our development processes the practices that can accommodate the retirement and eventual replacement of components that have collapsed under the weight of their outdated designs. A fatal flaw in many development organizations is the denial that software can grow old and die. This flawed thinking allows younger and healthier software from the competition to eventually win out.

Therefore, any reuse strategy must anticipate this component lifecycle to ensure that the entire ecosystem of dependent users does not die a horrible death tied to aging components. A reuse strategy without a recurring replacement strategy is a path of certain destruction.

Another thing to recognize is the distinction between infrastructure and application code. The view that some components are foundational, while others fundamentally are not, is short-sighted. The entire premise of reuse is that software should be built to enable an evolution towards building more functionality using the components already constructed. In effect, every component eventually becomes infrastructure for other components, which are often not anticipated.

oops reuse misuse – part 1

part 1 of infinity – introduction

One of the strongest motivations espoused for promoting object oriented programming is reuse. While object oriented programming languages provide features, such as encapsulation and inheritance, that enable better reuse, OOP has gotten a bad rap, because the discipline required to produce good software must address many concerns aside from reuse in order to be successful.

Reuse is a buzzword software professionals bandy about when they promote certain software practices. It refers to the ability to incrementally grow software capability by building additional functions to augment what was built previously. Development organizations like this idea, because it implies predictability, which is the holy grail of software methodology (quality process). If the additional effort required to build and test functionality can be made proportional only to the additional code being incrementally developed, then development can be highly predictable. This would be the key to delivering projects on time and within budget.

Although reuse in this sense is a good motivation, nobody knows how to make it work. Very intelligent and highly respected individuals will say how to apply principles of reuse, but either they don’t know what they are talking about, or the people who apply their principles don’t understand what they should be doing. Or both. The state of the software industry is a testament to the fact that reuse strategies are being widely applied with a high, and nearly total, rate of failure. No one achieves anywhere close to the linearity of effort, predictability of process, and scalability of outcome that reuse is meant to deliver.

In a series of articles, of which this is the first, I would like to evaluate various reuse strategies, practices, and object oriented programming techniques. Stay tuned for more in this series.

Business vs Software Use Cases

Let us examine how Use Cases are defined in the context of a business that uses a software system and a developer (vendor) that supplies that software.

Use Cases are intended to capture the software requirements. This is done by identifying and describing the units of work (Use Cases) that are performed by the users. Users are known as Actors. In this way, the behavior of the system is captured by analyzing how the Actors interact with the software in achieving their goals. In layman’s terms, I sometimes describe Use Case analysis as understanding the problem.

The business wants to describe Use Cases in terms that are familiar to it. Its Actors are named according to the roles within its organization. Its Use Cases are described according to the organization’s business processes. The actions and entities are named according to verbs and nouns that are specific to the business.

Meanwhile, the software developer wants to describe Use Cases in terms that can be generalized to the larger market for such software systems. Its Actors must be generalized to accommodate the many organizations that are typical within its market. Its Use Cases are described according to generalized business processes that are typical within the industry. The actions and entities are named according to abstract verbs and nouns that are not specific to any one particular deployment.

This difference in perspectives causes a communications gap. The business speaks in concretes that are specific to its organization. The developer speaks in abstractions that are generally applicable to the market. There may be a many-to-many relationship between the concretes identified by the business and the abstractions identified by the developer. The perspectives must be reconciled to allow the two parties to communicate effectively.

The abstractions provided by the developer become design elements to realize the business Use Cases.

mda – model driven architecture

In this article, I explore Model Driven Architecture (MDA), an approach that uses modeling for programming: models drive the generation of code according to patterns.

We tend to write a lot of similar code over and over. If you have ever written code to persist a Java object into a relational database table, you know what I mean. It is similar for every object. Although the characters you type are different, there is a definite pattern, and you know deep down that an intelligent person should not have to endure such tedium day in and day out. The same goes for a lot of user interface code, like data entry forms. The same code gets written again and again; only the names are different.
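For example, a hand-written JDBC persistence method tends to look something like this (names invented); write the same method for a second class, and only the identifiers change:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    class Customer {
        private final long id;
        private final String name, email;
        Customer(long id, String name, String email) {
            this.id = id; this.name = name; this.email = email;
        }
        long getId() { return id; }
        String getName() { return name; }
        String getEmail() { return email; }
    }

    class CustomerDao {
        // The same shape of code is repeated for every persistent class;
        // only the table name, the columns, and the getters differ.
        void insert(Connection con, Customer c) throws SQLException {
            PreparedStatement ps = con.prepareStatement(
                "INSERT INTO CUSTOMER (ID, NAME, EMAIL) VALUES (?, ?, ?)");
            try {
                ps.setLong(1, c.getId());
                ps.setString(2, c.getName());
                ps.setString(3, c.getEmail());
                ps.executeUpdate();
            } finally {
                ps.close();
            }
        }
    }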

I believe very strongly that model driven architecture is the way of the future for software development. Objects led to interfaces. Then, collaborations. Then, patterns and pattern languages (families of patterns). Model driven architecture is a natural extension of this trend. After identifying a pattern language, it should be possible to instantiate those patterns (write a software program) by expressing that intent through modeling. Modeling in the abstract sense is the act of capturing concepts in a notation.

I consider programming in Java to be a form of modeling. A programming language is not a very compact modeling language. Programming languages are designed to have a lot of redundancy to allow for error checking. Patterns in the code also indicate a lot of information that could be expressed in a more compact form. I view MDA as expressing software concepts in a very compact form (the model).

I think we need an example. I could write a program to represent pictures formed from geometric shapes. The software itself would probably require thousands of lines of code to perform the editing, rendering, and other functions. However, the application is really only dealing with a few basic concepts: ellipses, polygons, line segments, and a few others.

It should be possible to model each shape with a minimum number of parameters; e.g., an ellipse would be modeled by three points. In this manner, we could express any diagram understood by this application very compactly in a modeling language that captures the parameterization of shapes.
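A sketch of that modeling language as plain Java, with invented names and one possible three-point parameterization of an ellipse:

    import java.util.Arrays;
    import java.util.List;

    class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    interface Shape {}

    // One possible three-point parameterization: a center plus the
    // endpoints of the two radii.
    class Ellipse implements Shape {
        final Point center, radiusX, radiusY;
        Ellipse(Point c, Point rx, Point ry) { center = c; radiusX = rx; radiusY = ry; }
    }

    // A diagram is nothing more than a list of parameterized shapes.
    class Diagram {
        final List<Shape> shapes;
        Diagram(Shape... shapes) { this.shapes = Arrays.asList(shapes); }
    }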

Now, let’s take that graphics package and reuse it as the foundation for constructing an application that renders telecommunications networks. This application deals with concepts like location, network elements, ports, and connections. Each concept is rendered with symbols, which are merely recurring patterns of our familiar geometric shapes.

The parameterization of shapes is no longer the most appropriate modeling language for capturing the essential concepts of this application. A more compact representation can be achieved by parameterizing the concepts that are directly understood by the application. This is the essence of model driven architecture.
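Continuing the sketch (invented names again), the model now parameterizes the application’s own concepts; a rendering layer supplies the recurring mapping from each concept down to shapes:

    // The compact model speaks in the application's concepts, not in shapes.
    class NetworkElement {
        final String name, location;
        NetworkElement(String name, String location) {
            this.name = name; this.location = location;
        }
    }

    class Port {
        final NetworkElement element;
        final String label;
        Port(NetworkElement element, String label) {
            this.element = element; this.label = label;
        }
    }

    class Connection {
        final Port a, b;
        Connection(Port a, Port b) { this.a = a; this.b = b; }
    }

    // A renderer expands each concept into its recurring pattern of
    // shapes (e.g., a network element becomes a labeled rectangle), so
    // shape-level detail never clutters the model itself.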

Not everyone views MDA as I do. Actually, I have yet to encounter anyone who has expressed it as simply as that. Most proponents of MDA ramble on about UML and MOF and platform independence and all kinds of other non-essential mumbo jumbo. The key idea that is lost among the OMG UML/MOF folks is the notion of unit economy in the theory of concepts; that is, the need to represent concepts in the most compact form. This economy is necessary within the mind, and the need is reflected in language.

UML makes perfect sense as a language for representing analysis and designs for software development. However, when one is looking to model concepts within any other context, the absurd verbosity and complexity exhibited by UML/MOF make as much sense as using geometric shapes as the modeling language for telecommunications networks.

proprietary J2EE

I am less than impressed with this article.

The author argues that proprietary extensions threaten the promise of standards-based application portability: that Java’s promise of “write once, run anywhere” is dead, and was never a reality anyway. This demonstrates to me a tremendous lack of vision.

The J2EE platform is growing as a language. This means starting small and collecting the best contributions over time into a larger, more capable language. The language evolves with the community that uses it. A standards-based approach to application development suggests that if you constrain yourself to using only the standard features and interfaces, then you should be portable to any implementation. Generally, that is true of J2EE; standards-based portability is certainly more of a reality with J2EE than on any equivalent platform (e.g., Microsoft .NET).

Proprietary extensions by application server vendors are expected. Applications written to the standard will still be portable, because they do not depend on proprietary extensions. In time, the scope of the standards will expand as technologies mature and become commonplace. The industry’s best practices become encoded into standards, because the market demands it for portability (reducing costs). Innovation (proprietary extensions) will happen further out on the leading edge, where technologies are unproven. Vendors can differentiate their products based on quality of implementation and value-added features; the latter requires extending beyond the standard features.
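A hedged illustration of that boundary (the vendor class and the resource name below are invented; the JNDI lookup itself is standard J2EE):

    import javax.naming.InitialContext;
    import javax.naming.NamingException;
    import javax.sql.DataSource;

    class ResourceLookup {
        // Portable: only standard interfaces (JNDI, javax.sql) appear,
        // so any compliant application server can satisfy this code.
        DataSource lookupDataSource() throws NamingException {
            InitialContext ctx = new InitialContext();
            return (DataSource) ctx.lookup("java:comp/env/jdbc/OrdersDS");
        }

        // Not portable: importing a vendor extension such as (invented)
        // com.vendor.jdbc.PooledDataSource ties the application to one
        // application server, trading portability for the added value.
    }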

The article paints a picture of a static standard being eroded. In reality, the J2EE standard is constantly evolving to expand its capabilities. Application server vendors move in lockstep to embrace the newest standard; often, their products have already implemented it by the time it is officially published. Far from being threatened, J2EE application portability is continually improving through this natural evolution.