Tag Archives: architecture

vertical integration

Applications have been pursuing operational efficiency through vertical integration for years. This is generally understood to mean assembling infrastructure (machine and operating system) with platform components (database, middleware) and application components into an engineered system that is pre-integrated and optimized to work together.

Now, the evolution to cloud services is following the same pattern. IaaS is integrated into PaaS. IaaS and PaaS are integrated with application components to deliver SaaS. However, just as we see in on-premise enterprise information systems, applications do not operate in silos. They are integrated by business processes, and they must collaborate to enforce business policies across business functions and organizations.

Marketing is deeply interwoven with sales. Product configuration, pricing, and quotation are tied to order capture and fulfillment. Fulfillment involves inventory, shipping, provisioning, billing, and financial accounting. Customer service is linked with various service assurance components, billing care, and also quote and order capture. All components need views of accounts, assets (products and services subscribed to), agreements, contracts, and warranties. Service usage and demand all feed analytics to drive marketing campaigns that generate more sales. What a tangled web.

What is clear from this picture is that vertical integration does not end with infrastructure, platform, and a software application. Applications contribute components that combine with business processes and business policies to construct higher level applications. This may continue for many layers of integration according to the self-similar paradigm.

The evolution to cloud should recognize the need for integration of SaaS components with business processes and business policies. However, it does not appear as though cloud services have anticipated the need for vertical integration to continue in layers. To construct assemblies, the platform should provide a means of defining such assemblies, so that they can be replicated by packaging and deploying them on infrastructure at various scales. The platform should provide a consistent programming and configuration model for extending and customizing applications in ways that are natural to being reapplied layer by layer.

Vertical integration is not an elegantly solved problem for on-premise applications. On-premise application integration is notoriously complex due to heterogeneity and vastly inconsistent interfaces and programming models. One component’s notion of customer is another’s notion of party. Two components with notions of customer do not agree on its schema and semantics. A product to one component is an offer to another. System integration projects routinely cost five to ten times the software license cost of the application components, because of the difficulty of overcoming impedance mismatches, gaps in functional capabilities, duct tape, and bubblegum.

Examining today’s cloud platforms and the applications built upon them, it appears we have not learned much from our past mistakes. We are faced with the same costly and clunky integration nightmare with no breakthrough in sight.

encapsulation

Encapsulation is the packing of data and functions into a single component.

Under this definition, encapsulation means that the internal representation of an object is generally hidden from view outside of the object’s definition.
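
As a minimal illustration in Java (the class and field names are invented for this example), the internal representation stays private and callers interact only through methods, so the representation can change without breaking them.

```java
// Minimal illustration of encapsulation: the internal representation
// (a balance in cents) is hidden; callers see only the public methods.
public class Account {

    private long balanceInCents; // internal representation, free to change

    public void deposit(long cents) {
        if (cents <= 0) {
            throw new IllegalArgumentException("deposit must be positive");
        }
        balanceInCents += cents;
    }

    public long balance() {
        return balanceInCents;
    }
}
```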

In 2004, I wrote OOPs, here comes SOA (again) to comment on how Service-Oriented Architecture (SOA) stands in stark contrast to Object-Oriented Programming (OOP).

In my previous article, Going Meta, we can see that beyond the scale of a single object, the architecture of an enterprise application breaks encapsulation by deliberately exposing data representations of entities. If data hiding through encapsulation is a fundamental principle of Object-Oriented Programming, that principle certainly breaks down at several levels. Technical challenges are partly to blame. However, I believe there are non-technical motivations to abandon data hiding.

From a technical perspective, the impedance mismatch between the middle tier and the database tier demands that the database schema be a central design consideration that is agreed upon by all stakeholders. The boundary between the middle tier and the database tier is all about the data and CRUD operations. This boundary may be more service-oriented, if the logic is implemented in the database tier as stored procedures, but the programming language available in the database is seldom expressive enough or natural enough for most developers to embrace. In the middle tier, domain services encapsulate the application logic, but the boundary between the middle tier and its clients (Web browsers and machine-to-machine integration with other applications) again is all about serialized data in the form of request, response, and fault messages for remote procedure calls (SOAP and RESTful services) or event messages (publish-subscribe). The entire data model (with very few exceptions for data that is used only for computations that are private to the logic) is exposed through these interfaces.
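
To make that concrete, here is a hypothetical JAX-RS resource (the class and payload names are invented): whatever data hiding the domain object practices internally, the service boundary is all about the serialized data handed to the caller.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

// Hypothetical JAX-RS resource: the response message is essentially the
// entity's data representation, exposed at the service boundary.
@Path("/customers")
public class CustomerResource {

    // The wire format is just the data; nothing is hidden from the caller.
    public static class CustomerRepresentation {
        public String id;
        public String name;
        public String status;
    }

    @GET
    @Path("/{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public CustomerRepresentation getCustomer(@PathParam("id") String id) {
        CustomerRepresentation rep = new CustomerRepresentation();
        rep.id = id;           // in a real application this would be loaded
        rep.name = "unknown";  // from the domain model and then serialized
        rep.status = "ACTIVE";
        return rep;
    }
}
```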

From a business perspective, it is natural to think in terms of business processes and the data artifacts (e.g., documents, files) that flow between tasks. The application services are merely a means of implementing those tasks; another means of implementing a task may be for a human to perform it either without the aid of software or with the aid of software that lacks the integration to enable the task to be performed automatically. Users do not think of their interactions with the software in terms of data hiding. They are very aware of the data structures that are relevant to the business.

Object-Oriented Programming is relegated to the micro level of an enterprise application. Encapsulation or data hiding is a concept that is relevant only to modules of logic, and these concepts do not extend naturally throughout the architecture for an enterprise application for both technical and business reasons. When developing enterprise applications and systems that involve business processes that integrate across applications, Object-Oriented Programming is a paradigm that sadly has little impact and relevance at a macro level, where SOA fills the vacuum. It seems like the software industry and computer science are still at a very immature stage of evolution, as we go without a programming paradigm that can unify how software is developed at the micro and macro levels.

Reliable Messaging with REST

Marc de Graauw’s article Nobody Needs Reliable Messaging remains as relevant today as when it was first published in 2010. It echoes the principles outlined in Scalable, Reliable, and Secure RESTful services from 2007.

It basically says that you do not need WS-ReliableMessaging style delivery guarantees for REST, because reliable delivery can be accomplished by the business logic through retries, so long as the methods in the REST layer are idempotent (the same request will produce the same result). Let’s examine the implications in more detail.

First, we must design the REST methods to be idempotent. This is no small feat, and it is a huge topic that deserves its own separate examination. But let’s put that topic aside for now and assume that we have designed our REST web services to support idempotence.

If we are developing components that call REST web services for process automation, the above principle says that the caller is responsible for retrying on failure.

The caller must be able to distinguish a failure to deliver the request from a failure by the server to perform the requested method. The former should be retried, expecting that the failure is temporary. The latter is permanent.

The caller must be able to implement retry in an efficient manner. If the request is retried immediately in a tight loop, it is likely to continue to fail for the same reason. Network connectivity issues sometimes take a few minutes to be resolved. However, if the reason for failure is because the server is overloaded, having all clients retry in a tight loop will exacerbate the problem by slamming the server with a flood of requests, when it is least able to process them. It would be helpful if clients would behave better by backing off for some time and retrying after a delay. Relying on clients to behave nicely on their honor is sure to fail, if their retry logic is coded ad hoc without following a standard convention.

The caller must be able to survive crashes and restarts, so that an automated task can be relied upon to reach a terminal state (success or failure) after starting. Therefore, message delivery must be backed by a persistent store. Delivery must be handled asynchronously so that it can be retried across restarts (including service migration to replacement hardware after a hardware failure), and so that the caller is not blocked waiting.

The caller must be able to detect when too many retry attempts have failed, so that it does not get stuck waiting forever for the request to be delivered. Temporary problems that take too long to be resolved need to be escalated for intervention. These requests should be diverted for special handling, and the caller should continue with other work, until someone can troubleshoot the problem. Poison message handling is essential so that retrying does not result in an infinite loop that would gum up the works.
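
A minimal sketch of such a convention in Java (all names are hypothetical, and the request is assumed to target an idempotent PUT): bounded retries with capped exponential back-off, permanent failures surfaced immediately, and exhausted requests diverted to a dead-letter handler. A production version would also persist the pending request so that retries survive crashes and restarts, as described above.

```java
import java.io.IOException;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

// Sketch of client-side reliable delivery over an idempotent REST method.
// All names are hypothetical; a real implementation would also persist the
// pending request so that retries survive a crash or restart.
public class ReliableCaller {

    private final HttpClient http = HttpClient.newHttpClient();

    public void deliver(URI uri, String body, int maxAttempts) throws InterruptedException {
        long delayMillis = 500; // initial back-off
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                HttpRequest request = HttpRequest.newBuilder(uri)
                        .PUT(HttpRequest.BodyPublishers.ofString(body))
                        .timeout(Duration.ofSeconds(10))
                        .build();
                HttpResponse<Void> response =
                        http.send(request, HttpResponse.BodyHandlers.discarding());
                int status = response.statusCode();
                if (status >= 200 && status < 300) {
                    return; // delivered and processed
                }
                if (status < 500) {
                    throw new IllegalStateException("permanent failure: HTTP " + status);
                }
                // 5xx: assume the server is temporarily unavailable or overloaded
            } catch (IOException e) {
                // failure to deliver (network, timeout): assume temporary and retry
            }
            Thread.sleep(delayMillis);
            delayMillis = Math.min(delayMillis * 2, 60_000); // exponential back-off, capped
        }
        divertToDeadLetter(uri, body); // poison message: escalate for intervention
    }

    private void divertToDeadLetter(URI uri, String body) {
        // hypothetical: persist the request for troubleshooting and alert an operator
    }
}
```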

POST methods are not idempotent, so retry must be handled very carefully to account for side-effects. Even if the request is guaranteed to be delivered, and it is processed properly (exactly once) by the server, the caller must be able to reliably determine whether the method succeeded, because the reply can be lost. One approach is to deliver the reply reliably from the server back to the caller. Again, all of the above reliable delivery qualities apply. The interactions to enable this round-trip message exchange certainly look very foreign to the simple HTTP synchronous interaction. Either the caller would poll for the reply, or a callback mechanism would be needed. Another approach is to enable the caller to confirm that the original request was processed. With either approach, the reliable execution requirement needs to alter the methods of the REST web services. To achieve better quality of service in the transport, the definitions of the methods need to be radically redesigned. (If you are having a John McEnroe “you cannot be serious” moment right about now, it is perfectly understandable.)
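
One common way to implement the second approach, offered here only as an illustration (the names are invented), is for the caller to generate a unique request identifier up front: the server executes the operation at most once per identifier, so retried POSTs are harmless, and the caller can query by that identifier to confirm that the original request was processed.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of server-side deduplication keyed by a caller-generated request id,
// which makes a POST-like operation safe to retry. Names are hypothetical.
public class OrderCaptureService {

    // In production this map would live in the database, inside the same
    // transaction as the business operation, not in memory.
    private final Map<String, String> processedRequests = new ConcurrentHashMap<>();

    /** Returns the order id, executing the operation at most once per requestId. */
    public String captureOrder(String requestId, String orderPayload) {
        return processedRequests.computeIfAbsent(requestId, id -> createOrder(orderPayload));
    }

    /** Lets the caller confirm whether its original request was processed. */
    public boolean wasProcessed(String requestId) {
        return processedRequests.containsKey(requestId);
    }

    private String createOrder(String orderPayload) {
        return UUID.randomUUID().toString(); // stand-in for the real order capture
    }
}
```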

Taking these requirements into consideration, it is clear that it is not true that “nobody needs reliable messaging”. Enterprise applications with automated processes that perform mission-critical tasks need the ability to perform those tasks reliably. If reliable message delivery is not handled at the REST layer, the responsibility for retry falls to the message sender. We still need reliable messaging; we must implement the requirement ourselves above REST, and this becomes troublesome without a standard framework that behaves nicely. If we accept that REST can provide only idempotence toward this goal, we must implement a standard framework to handle delivery failures, retry with exponential back-off, and divert poison messages for escalation. That is to say, we need a reliable messaging framework on top of REST.

[Note that when we speak of a “client” above, we are not talking about a user sitting in front of a Web browser. We are talking about one mission-critical enterprise application communicating with another in a choreography to accomplish some business transaction. An example of a choreography is the interplay between a buyer and a seller through the systems for commerce, quote, procurement, and order fulfillment.]

OLTP database requirements

Here is what I want from a database in support of enterprise applications for online transaction processing.

  1. ACID transactions – Enterprise CRM, ERP, and HCM applications manage data that is mission critical. People’s jobs, livelihoods, and businesses rely on this data to be correct. Real money is on the line.
  2. Document oriented – A JSON or XML representation should be the canonical way to think of objects stored in the database.
  3. Schema aware – A document should conform to a schema (JSON Schema or XML Schema). Information has a structure and meaning, and it should have a formal definition.
  4. Schema versioned – A document schema may evolve in a controlled manner. Software is life cycle managed, and its data needs to evolve with it for compatibility, upgrades, and migration.
  5. Relational – A subset of a document schema may be modeled as relational tables with foreign keys and indexes to support SQL queries, which can be optimized for high performance.

The fundamental shift is from a relational to a document paradigm as the primary abstraction. Relational structures continue to play an adjunct role to improve query performance for those parts of the document schema that are heavily involved in query criteria (WHERE clauses). The document paradigm enables the vast majority of data to be stored and retrieved without having to rigidly conform to a relational schema, which cannot evolve as fluidly. That is not to say that data stored outside of relational tables is less important or less meaningful. To the contrary, some of the non-relational data may be the most critical to the business. This approach simply recognizes that information that is not directly involved in query criteria can be treated differently, to take advantage of greater flexibility in schema evolution and life cycle management.

Ideally, the adjunct relational tables and SQL queries would be confined by the database to its internal implementation. When exposing a document abstraction to applications, the database should also present a document-oriented query language, such as XQuery or its equivalent for JSON, which would be translated to SQL where appropriate as an optimization technique.

NoSQL database technology is often cited as supporting a document paradigm. NoSQL technologies as they exist today do not meet the need, because they do not support ACID transactions and they do not support adjunct structures (i.e., relational tables and indexes) to improve query performance in the manner described above.

Perhaps the next best thing would be to provide a Java persistent entity abstraction, much like EJB3/JPA, which would encapsulate the underlying representation in a document part (e.g., as an XMLType or a JSON CLOB column) and a relational part, all stored in a SQL database. This would also provide JAXB-like serialization and deserialization to and from JSON and XML representations. This is not far from what EclipseLink does today.
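
A rough sketch of what such an entity might look like with plain JPA annotations (the entity, table, and column names are invented for illustration): the fields used in query criteria are ordinary relational columns, while the rest of the document rides along as a JSON CLOB with its own schema version.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Lob;
import javax.persistence.Table;

// Sketch of a hybrid entity: fields used in WHERE clauses are relational
// columns (indexable), and the remainder of the document is a JSON CLOB.
// All names are hypothetical.
@Entity
@Table(name = "CUSTOMER_DOC",
       indexes = @Index(name = "IX_CUSTOMER_STATUS", columnList = "STATUS"))
public class CustomerDocument {

    @Id
    @Column(name = "CUSTOMER_ID")
    private String id;

    @Column(name = "STATUS")
    private String status;          // queryable, relational part

    @Lob
    @Column(name = "DOCUMENT")
    private String documentJson;    // the canonical document representation

    @Column(name = "SCHEMA_VERSION")
    private int schemaVersion;      // supports controlled schema evolution

    // getters and setters omitted for brevity
}
```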

Applied Cosmology: The Holographic Principle

The Holographic Principle says that a full description of a volume of space is encoded in the surface that bounds it. This arises from black hole thermodynamics, where the black hole entropy increases with its surface area, not its volume. Everything there is to know about the black hole’s internal content is on its boundary.

A software component has a boundary defined by its interfaces, which encapsulate everything an outsider needs to know to use it. Everything about its interior is represented by its surface at the boundary. It can be treated like a black box.

Applied Cosmology: The Self Similar Paradigm

Robert Oldershaw’s research on The Self Similar Cosmological Paradigm [http://www3.amherst.edu/~rloldershaw/concepts.html] recognizes that nature is organized in a stratified hierarchy, where every level is similar. The shapes and motions of atoms are similar to those of stellar systems. Similarities extend from the stellar scale to the galactic scale, and beyond.

Managing complexity greatly influences software design. Stratified hierarchy is familiar to this discipline.

At the atomic level, we organize our code into units. Each unit is a module with a boundary, which exposes its interface that governs how clients interact with this unit. The unit’s implementation is hidden behind this boundary, enabling it to undergo change independently of other units, as much as possible.

We build upon units by reusing modules and integrating them together into larger units, which themselves are modular and reusable in the same way. Assembly of modules into integrated components is the bread and butter of object-oriented programming. This approach is able to scale up to the level of an application, which exhibits uniformity of platform technologies, programming language, design metaphors, conventions, and development resources (tools, processes, organizations).

The next level of stratification exists because that uniformity cannot be maintained across applications. However, the similarity is unbroken. We remain true to the principles of modular reuse. We continue to define a boundary with interfaces that encapsulate the implementation. We continue to integrate applications as components into larger scale components that themselves can be assembled further.

Enterprises are attempting to enable even higher levels of stratification. They define how an organization functions and how it interfaces with other organizations. This is with respect to protocols for human interaction as well as information systems. Organizations are integrated into business units that are integrated into businesses at local, national, multi-national, and global scales. Warren Buffett’s Berkshire Hathaway has demonstrated how entire enterprises exhibit such modular assembly.

This same pattern manifests itself across enterprises and across industries. A company exposes its products and services through an interface (branding, pricing, customer experience) which encapsulates its internal implementation. Through these protocols, we integrate across industries to create supply chains that provide ever more complex products and services.

Applied Cosmology: Machian Dynamics in Configuration Management

Julian Barbour wrote the book titled “The End of Time: The Next Revolution in Physics” [http://www.platonia.com/ideas.html]. He explains that our failure to unify General Relativity with Quantum Theory stems from our ill-conceived preoccupation with time as a necessary component of such a theory. A proper description of reality is composed of the relationships between real things, not a description with respect to an imaginary background (space and time). Therefore, all you have is a configuration of things, which undergoes changes in arrangement. The path through this configuration-space is what we perceive as the flow of time.

We apply this very model of the universe in configuration management.

Software release management is a configuration management problem, where the things in configuration-space are source files. A path through configuration space captures the versions of these source files relative to each other as releases of software are built. Our notion of time is with respect to these software releases.

Enterprise resource management in the communications industry involves many configuration management problems in various domains. We normally refer to such applications as Operations Support Systems.

In network resource management, the configuration-space includes devices and other resources in the network, their connectivity, and the metadata associated with that connectivity arrangement (what is normally called a “device configuration”, a term best avoided in this discussion for obvious reasons).

In service resource management, the configuration-space includes services, their resource allocations, and the subscription metadata or “design” (what is normally called a “service configuration”, a term best avoided in this discussion for obvious reasons).

We develop such applications using a notion of configuration-space, because such systems cannot operate in a world limited to a fixed background of space and time. We need to be able to travel backward and forward in time arbitrarily to see how the world looked in the past from the perspective of a particular transaction. We need to be able to hypothesize many possible futures, perhaps only one of which is brought into reality through a rigorous process of analysis, design, planning, procurement, construction, and project management. Reality is always from the perspective of the observer, and one’s frame of reference is always somewhere on the path in configuration-space.
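
As a loose illustration only (the types and fields are invented), a configuration can be modeled as an immutable arrangement that knows its predecessor; "time" is just a position along a path of such nodes, and hypothesized futures are alternative successors branching from the same node.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Loose sketch: a configuration is an immutable arrangement of things, and
// "time" is just a position along a path through configuration-space.
// The types and fields are invented; a real system would persist these nodes.
public final class Configuration {

    private final Map<String, String> arrangement;            // things and their relationships
    private final Configuration predecessor;                  // how the world looked before
    private final List<Configuration> successors = new ArrayList<>(); // hypothesized futures

    public Configuration(Map<String, String> arrangement, Configuration predecessor) {
        this.arrangement = Map.copyOf(arrangement);
        this.predecessor = predecessor;
    }

    /** Hypothesize a possible future by deriving a new arrangement from this one. */
    public Configuration branch(Map<String, String> newArrangement) {
        Configuration next = new Configuration(newArrangement, this);
        successors.add(next);
        return next;
    }

    /** Travel backward: an observer's frame of reference is a node on the path. */
    public Configuration previous() {
        return predecessor;
    }

    /** The possible futures branching from this point in configuration-space. */
    public List<Configuration> futures() {
        return List.copyOf(successors);
    }
}
```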

Software engineering is applied cosmology

Engineering is applied science. Some people believe that software engineering is applied computer science. In a limited sense, it is. But software is not entirely separated from hardware. Applications are not entirely separated from processes. Systems are not entirely separated from enterprises. Corporations are not entirely separated from markets. For this reason, I believe what we do is not software engineering at all. It is not limited to applied computer science. Our engineering discipline is actually applied cosmology.

designs are useless

“In preparing for battle I have always found that plans are useless, but planning is indispensable.” -Eisenhower (from Planning Extreme Programming)

I believe this is true, because “no plan survives contact with the enemy”. In software, the enemy is in the form of dark spirits hidden within the code, both legacy and yet to be written. Because plans (the schedule of work to be done) are intimately tied to designs (models of the software), it must also be true that no design survives contact with the enemy. Any programmer who begins writing code based on a preconceived design will almost immediately feel the pain of opposing forces that beg to be resolved through refactoring; programmers who lack this emotional connection with their code are probably experiencing a failure of imagination (to improve the design).

Therefore, I think we can return to the original quote and state its corollary: designs are useless, but designing is indispensable.

All this is to say that, for the above reasons, I think these ideas strongly support the notion that the process artifacts (e.g., Functional Solution Approach, Functional Design, Technical Design) are more or less useless, because as soon as they are written they are obsoleted by improvements discovered contemporaneously with each line of code written, but the act of producing them (the thought invested in designing) is indispensable to producing good software. This leads me to conclude that we should not fuss so much about the actual content of the artifacts, so long as they capture the essence to show a fruitful journey through designing: that problems have been thought through and decisions have been based on good reasoning. Worrying about the content being perfectly precise, comprehensive, and consistent ends up being a waste of effort, since the unrelenting act of designing will have already moved beyond the snapshot captured in the artifact.

Coincidentally, this theme also aligns with the notion of a learning organization espoused by Lean. The value of designing is the facilitation of learning.

transparent persistence

Advantages

Transparent persistence has emerged into the mainstream over the past few years with the popularity of JDO and JPA for enterprise application development. This approach offers the following advantages.

  1. Domain modeling is expressed naturally as plain old Java objects (POJOs) without having to program any of the SQL or JDBC calls that are traditionally coded by hand.
  2. Navigation through relationships – objects are naturally related through references, and navigating a relationship will automatically load the related object on demand.
  3. Modified objects are stored automatically when the transaction is committed.
  4. Persistence by reachability – related objects are automatically stored, if they are reachable from another persistent object.

The programming model is improved by eliminating the tedium that is traditionally associated with object persistence. Loading, storing, and querying are all expressed in terms of the Java class and field names, as opposed to the physical schema names. The programmer is largely insulated from the impedance mismatch between Java objects and the relational database. The software can be expressed purely in terms of the domain model, as represented by Java objects.
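
For a flavor of the programming model, here is a pair of hypothetical JPA entities (the names are invented): there is no SQL or JDBC in sight, relationships are plain object references, and cascading makes related objects persistent by reachability.

```java
import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Entity;
import javax.persistence.FetchType;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;

// Hypothetical POJO domain model: no SQL or JDBC calls, relationships are
// plain references, and related objects are stored by reachability (cascade).
@Entity
public class Customer {

    @Id @GeneratedValue
    private Long id;

    private String name;

    // Navigating this collection loads the rentals on demand (lazy by default);
    // persisting the customer also persists any new rentals reachable from it.
    @OneToMany(mappedBy = "customer", cascade = CascadeType.ALL)
    private List<Rental> rentals = new ArrayList<>();
}

@Entity
class Rental {

    @Id @GeneratedValue
    private Long id;

    // A plain object reference to the related customer; no foreign-key plumbing
    // appears in the domain code.
    @ManyToOne(fetch = FetchType.LAZY)
    private Customer customer;
}
```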

Challenges

When developing domain objects, persistence is only one aspect. The business logic that applies to the graph of related objects is the most important concern. Transparent persistence introduces challenges for executing the business logic that enforces constraints and complex business rules when persistent objects are created, updated, and deleted through reachability.

For example, an equipment rental application may need to enforce the following constraints:

  • When creating equipment, it must be related to a location.
  • When creating a rental, it must be related to a customer, and ensure that the equipment is available for the duration of the rental.
  • When updating a rental, it must ensure that equipment is available for the duration of the rental.

JPA 2.0 does not provide sufficient mechanisms for enforcing these constraints when creating or updating these entities through reachability. The responsibility is placed on a service object to manage these graphs of entities. The constraint checking must be enforced by the service object per transaction. Java EE 5 does not provide any assistance to ensure that the constraint checks (implemented in Java) are deferred until commit, so that they are not repeated when performing a sequence of operations in the same transaction.

Adding a preCommit event to a persistent object would provide a good place for expressing constraints. Allowing this event to be deferred until transaction commit would provide the proper optimization for good performance. Of course, preCommit would need to prevent any further modifications to the persistent objects enlisted in the transaction. This would factor out many of the invariants so that they are expressed per entity, removing the responsibility from every operation on service objects, which is prone to programmer error. The domain model would be greatly improved.
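
JPA today offers only per-operation lifecycle callbacks, not a commit-time event, so the closest approximation is something like the sketch below (the entity names are invented); the callbacks fire on each persist or update rather than once at commit, which is precisely the gap the proposed preCommit event would close.

```java
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.persistence.ManyToOne;
import javax.persistence.PrePersist;
import javax.persistence.PreUpdate;

// Approximation using existing JPA lifecycle callbacks. These run per
// operation, not once at transaction commit, which is the limitation the
// proposed preCommit event is meant to address.
@Entity
public class Equipment {

    @Id
    private Long id;

    @ManyToOne
    private Location location;

    @PrePersist
    @PreUpdate
    void checkInvariants() {
        // Invariant expressed on the entity itself, rather than repeated in
        // every service operation that touches Equipment.
        if (location == null) {
            throw new IllegalStateException("equipment must be related to a location");
        }
    }
}

@Entity
class Location {
    @Id
    private Long id;
}
```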