All posts by Ben Eng

I am a software architect in the Communications Applications Global Business Unit (CAGBU) within Oracle. I currently work on cloud services for Business Support System (BSS) and Operations Support System (OSS) applications for communications service providers. My previous responsibilities include architecture and product management for the RODOD and RSDOD solutions to provide an integrated suite of BSS and OSS applications for communications providers. I founded the Oracle Communications Service & Subscriber Management (S&SM) application and the Oracle Communications Unified Inventory Management (UIM) application. I pioneered the adoption of Object Relational Mapping (ORM) based persistence techniques within OSS applications. I introduced the XML Schema based entity relationship modeling language, which is compiled into the persistent object modeling service. I established the notion of a valid time temporal object model and database schema for life cycle management of entities and the ability to travel through time by querying with a temporal frame of reference or a time window of interest. I established the patterns for resource consumption for capacity management. I championed the development of Web based user interfaces and Web Services for SOA based integration of OSS applications. I was responsible for single-handedly developing the entire prototype that formed the foundation of the current generation of the OSS inventory application. I have been engaged in solution architecture with service providers to adopt and deploy Oracle's OSS applications across the globe. I am responsible for requirements analysis and architectural design for the Order-to-Activate Process Integration Pack (PIP) proposed to integrate the OSS application suite for the communications industry using the Application Integration Architecture (AIA). Any opinions expressed on this site are my own, and do not necessarily reflect the views of Oracle.

the software assembly line

Creativity is stifled by a manufacturing process. Project managers perceive the development process to begin by identifying and analyzing requirements in full, until they are perfect for feeding to designers. Then, designers will satisfy those requirements by detailing designs, until they are perfect for coding. Then, code will be written and tested, until all the pieces can be integrated and QA tested. No matter how often this process fails to deliver innovative solutions, software companies continue to blame the lack of compliance to the process, rather than the process itself.

Modern software methodology suggests that the disciplines operate concurrently. Requirements analysis, design, coding, integration, and testing all happen in parallel and continuously. The process is iterative, attacking problems incrementally, and refining rapidly with every cycle. Project managers claim to support this approach, and yet they continue to define gargantuan iterations (several months in duration) with entire sets of features fully implemented. Within each iteration, they plan distinct milestones for completing requirements, design, coding, unit testing, integration, and QA.

This pattern seems almost inevitable when the project team is large (>20) and composed of individuals who are highly specialized in their disciplines. Designers expect to receive good requirements from the analysts. Coders expect to receive good designs, if they were not the designers. Testers expect to receive good code. Unfortunately, the software assembly line does not produce good software, because development is a creative process, not a manufacturing process.

You do not produce good software based on dreaming up a perfect design. You need to start with a great deal of unknowns, some gross assumptions (likely overly naive), and a poor understanding of the problem. Proceed with the knowledge that you will make mistakes, and these will facilitate learning how to solve the problem better. Build it quickly and screw up several times, improving with every attempt. Eventually, the problem will be understood, and it will be adequately solved by the solution at hand, despite its imperfections. Project managers do not understand how to plan for a process that involves spinning around cycles of two steps forward and one step back. They think that an assembly line is the model of efficiency, and iterations are too chaotic and unpredictable, because of the constant rework. A software assembly line is certainly orderly and predictable. It predictably produces little innovation at a very high cost and over a long duration.

where have all the hackers gone?

There is a real difference between those who develop software as a job and those who do it for the love of it. It is easy to distinguish between them. The former believe that only straightforward problems can be solved in a timely manner, and the less to be done, the better. The latter believe that straightforward problems are mundane and tedious, and hope to spend their time on more challenging problems. True hackers are a rare breed.

The biggest difficulties in software are non-technical. They are psychological. When a group of people have produced a body of work, they become emotionally invested in it. They feel that they know something. It becomes their anchor to reality, and every future possibility is considered in relation to that knowledge. With a sense of security comes malaise. It impedes the child-like motivations to reach out to the unknown and discover. Legacy knowledge within accomplished veterans becomes toxic to the culture, because the perceived value of legacy knowledge eclipses the potential value of further discovery, which has more risk attached. Cultural attachment to legacy kills innovation. It blunts the entrepreneurial spirit.

A true hacker does not give a rat’s ass about what is established by what came before. Everything is open to destruction and rebirth. The risk of expending effort towards a failed attempt is irrelevant, because a true hacker understands that every failure is a positive step towards ultimate success.

OOPs, here comes SOA (again)

Last week, I posted the following facetious comment about Service Oriented Architecture (SOA).

Posted By: Ben Eng on May 24, 2004 @ 09:27 PM in response to Message #123260

Strip all the behavior off of our domain objects. Define data structures to pass around. Provide sets of operations as stateless services, which perform functions on data, and return data to its callers. That sounds revolutionary.

All we have to do now is eliminate the object-oriented programmers, and the revolution will be complete.
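The pattern being mocked might look like this in Java (the names here are hypothetical, purely for illustration): the domain object is reduced to a bare data structure, and all behavior lives in a stateless service that takes data in and passes data out.

```java
// A hypothetical sketch of the "revolutionary" pattern: behavior is
// stripped off the domain object, leaving a bare data structure.
class AccountData {
    String id;
    double balance;
}

// A stateless service that performs functions on data and returns
// data to its callers, rather than the object acting on itself.
class AccountService {
    static AccountData debit(AccountData in, double amount) {
        AccountData out = new AccountData();
        out.id = in.id;
        out.balance = in.balance - amount;
        return out;
    }
}
```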

There is a serious side to the issue. The recent popularity of SOA should give object oriented programmers much pause. Where do objects fit in the world of services? Are objects passé?

In the book Bitter EJB (p. 57), the distinction is made between service-driven and domain-driven approaches. I am a strong proponent of domain-driven approaches. However, maybe three years ago, I began feeling that a purely domain-driven approach was awkward for modeling object graphs with very sophisticated global constraints. This is a natural topic of interest, considering that I model telecommunications networks for a living.

Factoring out the global constraints, instead of embedding them within the domain objects, seemed natural. I started to develop stateless session local interfaces (using plain old Java objects, not EJB), which use domain objects as parameters and return values. Then I found myself adding conversational state to the session objects. I'm not sure where I'm going with this yet. We'll see after a few more weeks or months of experimentation.
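A minimal sketch of what I mean, with hypothetical names: the domain object keeps its local state and behavior, while a graph-wide constraint is factored out into a plain-old-Java stateless session object that takes domain objects as parameters.

```java
import java.util.List;

// Domain object: keeps its own local state and behavior.
class Port {
    final String name;
    boolean assigned;
    Port(String name) { this.name = name; }
}

// A global constraint factored out of the domain objects into a
// stateless session object (a plain old Java object, not an EJB),
// which uses domain objects as parameters.
class CapacityService {
    // Hypothetical graph-wide rule: total assignments are capped.
    boolean canAssign(List<Port> ports, int maxAssigned) {
        long used = ports.stream().filter(p -> p.assigned).count();
        return used < maxAssigned;
    }
}
```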

After having developers fumble around with EJBs for the past four years, I am becoming disillusioned with the technology. XML and Web Services are going even further down the road of coarse-grained distributed components, which are a consequence of high latency due to expensive data marshaling and remote communications. Whenever EJB or Web Services are involved, the client code looks like garbage – calling functions on data, rather than methods on objects. Optimizing the interface to reduce client-server round trips pollutes the interface, until it also looks like garbage; objects become so polluted with denormalized data that the concepts are no longer recognizable. When faced with sophisticated object graphs, rather than simple services, the approach leads ultimately to garbage and more garbage. I'm tempted to call the approach Garbage Oriented Architecture, rather than SOA.
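The kind of interface pollution I mean might look like this (the fields are hypothetical): a coarse-grained transfer object that denormalizes data from related entities so the client can avoid extra round trips.

```java
// Hypothetical coarse-grained transfer object. Fields from related
// entities are denormalized into it to save client-server round
// trips, until the original concept is no longer recognizable.
class PortDTO {
    String portName;
    String deviceName;    // pulled in from the containing Device
    String serviceId;     // pulled in from the assigned Service
    String customerName;  // pulled in from the Service's Customer
}
```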

planning and estimation

We spend an extraordinary amount of time planning our development, because it is so costly to get any amount of code written. Developers are asked to give estimates. They demand to know what is in scope down to the individual operation. It's crazy how much design work and detail must be thought through before reaching that point. Then, they estimate maybe 10–20 hours of work per method. Even a tiny piece of functionality results in estimates of several person-weeks of effort. This is how software professionals get their jobs done.

Where have all the hackers gone?

I came up with a cool idea on the weekend. I decided to code it Sunday. I spent about four hours in total coding the entire thing, analyzing and designing as I went. It turned out to be over 2000 lines, which I proceeded to unit test. I put in another three hours tonight debugging a rather obscure problem (actually wasting 2.5 hours chasing a non-existent bug due to my own stupidity), which I had discovered in my sleep the other night.

I simply cannot understand why experienced software professionals cannot hack out 3000 lines of unit tested code on a good day, and average 1000 lines of unit tested code per working day, with low bug counts.

I’m almost glad to see that many of North America’s grunt coding jobs are being outsourced to India. The cost of our software professionals in combination with their low productivity and high maintenance attitude is unbearable.

notes on quality process

Software development methodologies such as the Unified Process are intended to institutionalize a quality process.

Often, the manner in which a quality process is implemented will focus on quality of manufacturing, in order to achieve predictable schedules and outcomes. In the name of predictability, rigorous up front planning attempts to predetermine tasks, resources, and schedules to estimate costs and deliverables. Putting constraints on the size and shape too early will necessarily require gross assumptions about the requirements and design. In so doing, we have traded off quality of innovation to achieve quality of manufacture. We are able to build it within budget and on time, but what we are capable of building is not innovative.

follow the rules

I have an engineer who works for me. His responsibilities include ensuring design consistency and best practices across the components in the system. We establish design guidelines and rules for the team to apply in their work, and he performs the grunt work of reviewing and editing everyone’s designs. He is well suited to the role, because he is very methodical, and he has an affinity to documenting and following rules.

Engineering can be divided into two different modes of operation: manufacturing and invention.

I use the term manufacturing in reference to ordinary development. That is, designing based on a strategy of incremental improvement over past designs. This strategy has served the Japanese well in automobiles and consumer electronics. A disciplined approach to engineering with focus on continuous improvement and attention to market forces is the key to success in manufacturing. Manufacturing is about quality of implementation. Manufacturing is about applying best practices and quality process.

Invention is completely different. Invention is not about responding to market demand. It is about using imagination to create new markets, where they never existed before. Invention requires innovative thinking to formulate new ideas and different (hopefully better) practices. Invention is not about applying today’s best practices and rules to incrementally improve. Invention requires the understanding that today’s best is flawed and impeding progress towards a superior possibility that may be highly risky, unproven, and difficult to achieve – but worthwhile to pursue, because it would be revolutionary.

A business needs disciplined engineers, who are skilled at manufacturing. This is where the money is.

My grunt is definitely a manufacturer, not an inventor. He has mediocre design creativity, because he is unable to intuit across a mass of contradictory information and conflicting motivations (resolving the forces), while paying attention to detail. He needs rules to govern his thinking and everyone else’s. Without rules, he is unable to function. He has no intuitive understanding of good design principles. He follows the rules.

An inventor (and every good designer) does not follow the rules. He does not break the rules either. He defines the rules that are appropriate, and he uses them to aid his thinking. Above all, he uses his brain to produce good designs, given the facts in evidence. He leads, where few have the courage to follow.

man-qua-universe

I am currently reading Three Roads to Quantum Gravity. I have just completed the first part of the book, which is a lengthy introduction. The author felt a strong need to prepare the reader’s mind for thinking about the universe differently from whatever misguided education and personal experience was filling his head.

Perhaps if I had not already read The End of Time: The Next Revolution in Physics, I would have needed this preparation. However, being already convinced that space and time are invalid concepts for properly describing reality, I felt perfectly at home with the author’s perspective.

Something I was unprepared for was the realization that objectivist epistemology may be overly constrained by its man-qua-man perspective. Scientists who practice the Scientific Method impose constraints on their thinking by considering themselves objective observers. Those who do this cannot effectively practice the science of cosmology, because no such artificial (mystical) separation can be established between observer and universe.

Only man-qua-man can allow himself to be so easily fooled into thinking the world is as he himself sees it, and be convinced that this is an accurate view of reality. Only man-qua-ordinary-man would so easily accept space, entities, and their attributes to represent the true state of things, while changes in state are viewed as snapshots in time. Most men still believe that reality is organized into a three dimensional grid representing space, and an extra dimension of time. Even their inability to sense these alleged characteristics does not bother them. Ask them to show you space and time one day, and see how far they get trying to use existents to describe non-existents.

Man-qua-ordinary-man is in fact a mere fool. In order to understand cosmology, we must become man-qua-universe and think likewise.

[It might seem strange that a software architect would spend so much effort on cosmology. But cosmology and software architecture are actually similar: both are about studying a universe. Just as every ordinary cosmologist has failed to gain a proper understanding of the universe, by being unable to accept that he himself cannot be separated from it as an objective observer, ordinary software architects are in the same bind.]

oops reuse misuse – part 2

part 2 of infinity – reuse and replacement

(Continued from part 1 – introduction.) Organizations frequently become enamored with building software infrastructure and frameworks for reuse. They see a great deal of common problems being solved again and again in different ways by different people. They see duplication of effort as a business inefficiency that can be optimized by factoring the commonality into reusable components, so that wheels are not reinvented. Centralized control through corporate standards is another popular activity.

One sign of successful reuse is the proliferation of users. With each new user there is a new dependency. The dependency is manageable so long as the requirements are stable. If new requirements must be satisfied, as is typical of rapidly evolving technology, these dependencies become a serious impediment to productivity.

A fundamental enabler of component reuse is the software contract. The interface that isolates the underlying implementation details from its users must remain stable to allow the two to evolve independently. However, changing requirements often result in interface changes. Any change to the software contract will break its users. As the number of users grows, the impact of this breakage becomes more severe. Thus, the desire to scale software functionality linearly with development effort is not viable because of changing requirements.
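For illustration, here is a hypothetical contract and one of its users; the names are invented for this sketch.

```java
// Version 1 of a reusable component's software contract. The
// interface isolates the implementation details from its users.
interface InventoryStore {
    String find(String id);
}

// A user compiled against version 1 of the contract.
class InventoryClient {
    String lookup(InventoryStore store) {
        return store.find("42");
    }
}

// If a new requirement forces the contract to change, say to
// find(String id, long asOfTime), then InventoryClient and every
// other user break and must change in lock step with the component.
```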

Only with perfect foresight could we escape the impact of changing requirements. If we could employ prescience so that all new requirements resulted in wholly new software components, leaving existing software components intact, we would have a predictable and scalable development process. Unfortunately, human fallibility prevents us from having good visibility into the future, and our software designs suffer.

We must be honest and recognize that software, no matter how well designed, has limited ability to evolve to meet new and changing requirements over time. We must build into our development processes the practices that can accommodate the retirement and eventual replacement of components that have collapsed under the weight of their outdated designs. A fatal flaw in many development organizations is the denial that software can grow old and die. This flawed thinking allows younger and healthier software from the competition to eventually win out.

Therefore, any reuse strategy must anticipate this component lifecycle to ensure that the entire ecosystem of dependent users does not die a horrible death tied to aging components. A reuse strategy without a recurring replacement strategy is a path of certain destruction.

Another thing to recognize is the distinction between infrastructure versus application code. The view that some components are foundational, while others are fundamentally not, is short-sighted. The entire premise of reuse is that software should be built to enable an evolution towards building more functionality using the components already constructed. In effect, every component eventually becomes infrastructure for other components, which are often not anticipated.

oops reuse misuse – part 1

part 1 of infinity – introduction

One of the strongest motivations espoused for promoting object oriented programming is reuse. Object oriented programming languages provide features, such as encapsulation and inheritance, that enable better reuse; even so, OOP has gotten a bad rap, because the discipline required to produce good software must address many concerns aside from reuse in order to be successful.

Reuse is a buzzword software professionals bandy about when they promote certain software practices. It refers to the ability to grow software capability incrementally by building additional functions to augment what was already built previously. Development organizations like this idea, because it implies predictability, which is the holy grail of software methodology (quality process). If the additional effort required to build and test functionality can be made proportional only to the additional code being incrementally developed, then development can be highly predictable. This would be the key to projects being delivered on time and within budget.
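The ideal can be sketched with hypothetical names: a new requirement satisfied purely by additional code, leaving what was built previously untouched, so that the build-and-test effort is proportional only to the addition.

```java
// Already built and tested; the new work does not touch it.
interface Task {
    String run();
}

class ExportTask implements Task {
    public String run() { return "export"; }
}

// A new requirement met entirely by new code. In the ideal, the
// effort to build and test is proportional only to this addition.
class AuditTask implements Task {
    public String run() { return "audit"; }
}
```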

Although reuse in this sense is a good motivation, nobody knows how to make it work. Very intelligent and highly respected individuals will tell you how to apply principles of reuse, but either they don't know what they are talking about, or the people who apply their principles don't understand what they should be doing. Or both. The state of the software industry is a testament to the fact that reuse strategies are being widely applied with a high (and nearly total) degree of failure. No one achieves anywhere close to the linearity of effort, predictability of process, and scalability of outcome that reuse is meant to achieve.

In a series of articles, of which this is the first, I would like to evaluate various reuse strategies, practices, and object oriented programming techniques. Stay tuned for more in this series.

Business vs Software Use Cases

Let us examine how Use Cases are defined in the context of a business that uses a software system and a developer (vendor) that supplies that software.

Use Cases are intended to capture the software requirements. This is done by identifying and describing the units of work (Use Cases) that are performed by the users. The users are known as Actors. In this way, the behavior of the system is captured by analyzing how the Actors interact with the software in achieving their goals. In layman's terms, I sometimes describe Use Case analysis as understanding the problem.

The business wants to describe Use Cases in terms that are familiar to it. Its Actors are named according to the roles within its organization. Its Use Cases are described according to the organization’s business processes. The actions and entities are named according to verbs and nouns that are specific to the business.

Meanwhile, the software developer wants to describe Use Cases in terms that can be generalized to the larger market for such software systems. Its Actors must be generalized to accommodate the many organizations that are typical within its market. Its Use Cases are described according to generalized business processes that are typical within the industry. The actions and entities are named according to abstract verbs and nouns that are not specific to the one particular deployment.

This difference in perspectives causes a communications gap. The business speaks in concretes that are specific to its organization. The developer speaks in abstractions that are generally applicable to the market. There may be a many-to-many relationship between the concretes identified by the business and the abstractions identified by the developer. The perspectives must be reconciled to allow the two parties to communicate effectively.
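As a hypothetical illustration of that reconciliation (all role names here are invented), concrete business roles on one side map onto the developer's generalized Actors on the other; the mapping happens to be many-to-one in this sketch, and many-to-many in general.

```java
import java.util.Map;

// Hypothetical reconciliation table: the business's concrete role
// vocabulary mapped onto the vendor's generalized Actor vocabulary.
class ActorMapping {
    static final Map<String, String> BUSINESS_TO_GENERALIZED = Map.of(
        "Billing Clerk", "Account Administrator",
        "Dispatcher", "Resource Manager",
        "Field Technician", "Resource Manager");
}
```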

The abstractions provided by the developer become design elements to realize the business Use Cases.