All posts by Ben Eng

I am a software architect in the Communications Applications Global Business Unit (CAGBU) within Oracle. I currently work on cloud services for Business Support System (BSS) and Operations Support System (OSS) applications for communications service providers. My previous responsibilities include architecture and product management for the RODOD and RSDOD solutions to provide an integrated suite of BSS and OSS applications for communications providers. I founded the Oracle Communications Service & Subscriber Management (S&SM) application and the Oracle Communications Unified Inventory Management (UIM) application. I pioneered the adoption of Object Relational Mapping (ORM) based persistence techniques within OSS applications. I introduced the XML Schema based entity relationship modeling language, which is compiled into the persistent object modeling service. I established the notion of a valid time temporal object model and database schema for life cycle management of entities and the ability to travel through time by querying with a temporal frame of reference or a time window of interest. I established the patterns for resource consumption for capacity management. I championed the development of Web-based user interfaces and Web Services for SOA-based integration of OSS applications. I was responsible for single-handedly developing the entire prototype that formed the foundation of the current generation of the OSS inventory application. I have been engaged in solution architecture with service providers to adopt and deploy Oracle’s OSS applications across the globe. I am responsible for requirements analysis and architectural design for the Order-to-Activate Process Integration Pack (PIP) proposed to integrate the OSS application suite for the communications industry using the Application Integration Architecture (AIA). Any opinions expressed on this site are my own, and do not necessarily reflect the views of Oracle.

going meta – the human-machine interface

Anatomy of an n-tier application

A fully functioning web app involves several layers of software, each with its own technology, patterns, and techniques.

At the bottom of the stack is the database. A schema defines the data structures for storage. A query language is used to operate on the data. Regardless of whether the database is relational, object-relational, NoSQL, or some other type, the programming paradigm at the database tier is distinctly different from, and quite foreign to, the layers above.

Above the database is the middle tier or application server. This is where the server-side business logic, APIs, and Web components reside.

There is usually a set of persistent entities, which provide an object abstraction of the database schema. The database query language (e.g., SQL) may be abstracted into an object query language (e.g., JPQL) for convenience. The majority of CRUD (create, read, update, delete) operations can be done naturally in the programming language without needing to formulate statements in the database query language. This provides a persistent representation of the model of the application.
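To make this concrete, here is a minimal sketch of what such a persistent entity might look like in JPA. The Customer entity, its table, and its columns are hypothetical examples, not taken from any particular product.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

// A hypothetical persistent entity bound to a relational table.
@Entity
@Table(name = "CUSTOMER")
public class Customer {
    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;                  // maps to the primary key column

    @Column(name = "NAME", nullable = false)
    private String name;              // maps to the NAME column

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

A query can then be expressed against the object model rather than the tables, for example em.createQuery("SELECT c FROM Customer c WHERE c.name = :name", Customer.class).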

Above the persistent entities is a layer of domain services. The transactional behavior of the business logic resides in this layer. This provides the API (local) that encapsulates the essence of the application functions.
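A hedged sketch of such a domain service, reusing the hypothetical Customer entity above; with container-managed transactions, each business method runs within a transaction.

import javax.ejb.Stateless;
import javax.persistence.EntityManager;
import javax.persistence.PersistenceContext;

// A hypothetical domain service; the container wraps each method in a transaction.
@Stateless
public class CustomerService {

    @PersistenceContext
    private EntityManager em;

    public Customer createCustomer(String name) {
        Customer customer = new Customer();
        customer.setName(name);
        em.persist(customer);         // becomes an INSERT when the transaction commits
        return customer;
    }

    public Customer findCustomer(long id) {
        return em.find(Customer.class, id);
    }
}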

The domain services are usually exposed as SOAP or RESTful services to remote clients for access from Web browsers and for machine-to-machine integration. This necessitates that JSON and/or XML representations be derived from the persistent entities (e.g., using JAXB). This provides a serialized representation of the model of the application.
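A minimal sketch of that exposure, assuming JAX-RS with a JAXB-annotated binding class; again, the names are hypothetical.

import javax.ejb.EJB;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.xml.bind.annotation.XmlRootElement;

// A hypothetical serialized representation of the Customer entity.
@XmlRootElement
class CustomerRepresentation {
    public Long id;
    public String name;
}

@Path("/customers")
public class CustomerResource {

    @EJB
    private CustomerService service;

    @GET
    @Path("{id}")
    @Produces({MediaType.APPLICATION_JSON, MediaType.APPLICATION_XML})
    public CustomerRepresentation get(@PathParam("id") long id) {
        Customer customer = service.findCustomer(id);
        CustomerRepresentation rep = new CustomerRepresentation();
        rep.id = customer.getId();    // translate from persistent entity to serialized form
        rep.name = customer.getName();
        return rep;
    }
}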

We finally come to the presentation layer, which is divided into server-side components residing in the application server and client-side components that execute in the Web browser. Usually there is a presentation-oriented representation called a view-model, which matches the information rendered on views or input on forms. The views and controls are constructed from HTML, CSS, and JavaScript. The programming paradigm in these technologies is distinctly different from that of the layers below.

Extending the application

Let’s examine what it would take to extend an application with a simple type (e.g., string) property on an entity. The database schema would need to be altered. A persistent entity would need a field, getter and setter methods, and a binding between the field and a column in the database schema. The property may be involved in the logic of the domain services. Next, the JSON and XML binding objects would need to be augmented with the property, and logic would be added to transform between these objects and the persistent entities used by the domain services. At the presentation layer, the view-model would be augmented with the property to expose it to the views. Various views to show an entity’s details and search results would likewise be enhanced to render the property. For editing and searching, a field would need to be added on forms with corresponding validation of any constraints associated with that property and on-submit transaction handling.
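To make the fan-out concrete, here is a hedged sketch of the additions required in just two of those layers, reusing the hypothetical Customer classes sketched earlier; the nickname property is an invented example.

// Persistent entity: a new field, its column binding, and its accessors.
@Column(name = "NICKNAME")
private String nickname;

public String getNickname() { return nickname; }
public void setNickname(String nickname) { this.nickname = nickname; }

// Serialized representation: the same property declared again for JSON/XML binding.
public String nickname;

// Mapping logic in the REST resource: the same property copied yet again.
rep.nickname = customer.getNickname();

The database column, the view-model property, the form field, and its validation still remain to be written, and none of this repetition adds any new meaning to the application.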

That is an awful lot of repetitive work at every layer. There are many technologies and skill sets involved. Much of the work is trivial and tedious. The entire process is far from efficient. It is worse if there is division of labor among multiple developers who require coordination.

A better platform

When confronted with coordinating many concomitant coding activities to accomplish a single well-defined goal, it is natural for an engineer to solve the more general problem rather than doing tedious work repeatedly. The solution is to “go meta”; instead of programming inefficiently, develop a better language to program in. Programming has evolved from machine language to assembly language so that humans could express instructions more intuitively. Assembly evolved into structured languages with a long history of advances in control and data flow. Programming languages have evolved in conjunction with virtualization of the machine (i.e., bytecode) to provide better abstractions of software and hardware capabilities. In the spirit of Guy L. Steele’s Growing a Language talk from OOPSLA ’98, components, libraries, and frameworks have been developed in programming languages that support extending the language itself, within limits. All of these innovations continually raise the level of abstraction to increase human productivity.

We are hitting the limits of what can be expressed efficiently in today’s languages. We have a database storage abstraction that is separate from server-side application logic, which is itself separate from client-side (Web browser) presentation. There is growing support for database and server-side abstractions to scale beyond the confines of individual machines. Clustering enables software to take advantage of multiple machines to distribute load and to provide redundancy in case of failure. However, our abstractions seem to stop at the boundaries between database storage, server-side application logic, and client-side presentation. Hence, we have awkward impedance mismatches when integrating top-to-bottom. We also have impedance mismatches when integrating heterogeneous application components or services: RESTful and SOAP Web Services technologies cross the boundaries between distributed software components, but this style of control and data flow (remote procedure calls) is entirely foreign to the programming language. That is why we must perform inconvenient translations between persistent entities and their bindings to various serialized representations (JSON, XML).

It seems natural that these pain points will be relieved by again raising the level of abstraction so that these inefficiencies will be eliminated. Ease of human expression will better enable programming for non-programmers. We are trying to shape the world so that humans and machines can work together harmoniously. Having languages that facilitate effective communication is a big part of that. To get this right, we need to go meta.

you aren’t gonna need it – short-sighted

“You aren’t gonna need it” (YAGNI) is a principle espoused by extreme programming (XP). It says to implement things only when you actually need them, never when you foresee the need. I see serious problems with this principle; it is short-sighted.

From a product management perspective, the product owner’s job is to define a release roadmap and a backlog of requirements for which they foresee a need in the market. There is a vision and a product strategy that guide what should be developed and how to develop it. Does the YAGNI principle apply equally to this backlog? It can be argued that anything on the future roadmap is subject to change. Therefore, developers should not be building features, infrastructure, and platform capabilities in anticipation of those future needs, no matter how confident they are that the backlog will be implemented.

Doing the minimum to meet the immediate need leads to technical debt. YAGNI causes developers to defer solving more difficult architectural problems until the need becomes critical. Architectural problems (“-ilities”) are often systemic, which means that as the code base grows, the cost to refactor and fix them becomes greater.

Security and error handling are examples of systemic issues that are very difficult to fix later. If the problem grows too large, it becomes too costly to fix, because the work cannot be accomplished within a sprint without leaving the code broken (i.e., failed builds, failed tests), and that is absolutely forbidden. That constraint makes it impractical to fix large-scale architectural problems throughout the code base once the problem has become intractable. The software eventually collapses under the weight of its technical debt because of compounding interest, as the debt is multiplied across a growing code base.

If we take YAGNI too seriously, we are being deliberately short-sighted, ignoring the requirements that we can (and should!) foresee. YAGNI encourages us to discount future requirements, expecting that they may not come to pass. A myopic approach tends to lead to a dead end, if we do not take care to set a course in the right direction, and to ensure that we have equipped ourselves to travel there.

If you allow YAGNI to make the road ahead too difficult to travel, those future requirements certainly will not come to pass, because you’ll be broken down on the side of the road. There won’t be anyone to come rescue you, because although you could foresee the need for roadside assistance, you didn’t pay for it on your auto insurance policy, because “you aren’t gonna need it”.

service and resource – structural model

I have written about TM Forum SID before in What is wrong with TM Forum SID? My criticisms were focused on deficiencies in behavioral modeling. In this article, I turn my attention to the structural model itself.

Let’s start with the concept of Resource. SID defines a model for resources to represent communications network functions. [GB922 Logical and Compound Resource R14.5.0 §1.1.2] This approach seems self-evident. So far, so good. (My intent is not to evaluate how effective the SID resource model is in achieving its goal.)

When we examine the concept of Service, we run into difficulties. The overview of “service” in [GB922 Service Overview R14.5.0 §1.1.3] makes no attempt to provide a precise definition of the term. This section references other standards efforts that have attempted to address the topic. It references various eTOM process areas that apply to service. Finally, it discusses the things that surround and derive from service. All the while, “service” remains undefined, as the document proceeds to a detailed structural decomposition. I don’t consider this a fatal flaw, because even though SID circles around the abstraction without ever nailing down its definition, we can fill the gap ourselves.

I would define “service” as something of value that can be delivered as a subscription by the resources of a communications network. That wasn’t too difficult. In the context of SID, this definition of “service” is not intended to include human activities that are provided to clients; that is an entirely different concept.

SID specializes “service” into two concepts: (1) customer-facing service and (2) resource-facing service. A CFS is a service that may be commercialized (branded, priced, and sold) as a product to customers. An RFS is not commercialized.

Here is where we begin to see things go wrong. When we model services, such as network connectivity, a service may be a CFS under certain circumstances, but an RFS under other circumstances. I think, at this point, SID should have recognized that “service” and “resource” are roles that can be taken on by entities. They are not superclasses to be specialized. Using our example of network connectivity, when it is commercialized, it becomes a service, and when it is used to enable (directly or indirectly) something else to be delivered, it acts as a resource. The concepts of “service” and “resource” should be thought of more like “manager” and “employee”. A person is not intrinsically a manager or an employee; a person may take on one or both of these roles contextually. Because SID does not recognize this pattern, modeling becomes very awkward for many types of communications network technologies, especially layered services and services built from other services (which are treated as resources).
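A hedged sketch of what role-based modeling might look like (the class names are illustrative, not SID entities): the network capability is defined once, and it takes on the service or resource role according to how it is used.

import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: an entity is not intrinsically a service or a resource;
// it takes on these roles contextually, much like manager and employee.
interface Role { }

class ServiceRole implements Role {
    final boolean customerFacing;     // commercialized (branded, priced, sold) or not
    ServiceRole(boolean customerFacing) { this.customerFacing = customerFacing; }
}

class ResourceRole implements Role {
    final String enables;             // what this entity is being used to deliver
    ResourceRole(String enables) { this.enables = enables; }
}

class NetworkConnectivity {
    final String name;
    final List<Role> roles = new ArrayList<>();
    NetworkConnectivity(String name) { this.name = name; }
}

class RoleExample {
    public static void main(String[] args) {
        NetworkConnectivity vpn = new NetworkConnectivity("Ethernet VPN");
        vpn.roles.add(new ServiceRole(true));              // sold to a customer: acts as a CFS
        vpn.roles.add(new ResourceRole("IPTV delivery"));  // enables another service: acts as a resource
        System.out.println(vpn.name + " currently plays " + vpn.roles.size() + " roles");
    }
}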

social applications – in the work place

Social applications like Facebook and Twitter have flourished in our personal lives. But their usefulness for work is limited to advertising and other marketing activities. Engagement is through sharing of status updates, links, photos, likes, and comments. This is a decade-old approach that has not advanced much.

In Mark Zuckerberg’s interview for Startup School, he shows his understanding that Facebook is a social platform for building social apps. However, in my opinion, none of the players in the social networking space has a good vision of the future. Facebook and Twitter treat social interactions as ends in themselves. That is why they present information in a timeline, and why they seek out trending topics. Information is like news that is stale after it is read. Engagement is a vehicle for targeted marketing.

Google has tried to compete with Facebook, but they can’t seem to find a formula for success. The article Why Google+ failed, according to Google insiders outlines their failure to achieve mass adoption and engagement. Providing an alternative to Facebook without a discernible improvement is not competitive, because users have no good reason to migrate away from an established network of friends.

Facebook “friend” relationships are more likely to be friends, family, and casual acquaintances. Facebook “follow” and “like” relationships are more likely to be with public figures, celebrities, and business-to-consumer connections. Facebook is not the platform for professional relationships, work-related interactions, and business associations. LinkedIn is used for professional relationships, with recruiting as its primary function. We should recognize that none of these platforms provides an application platform for actually doing work using social tools. Google failed to recognize this opportunity as it began to integrate G+ with mail, storage, and other services. Providing a wall for posting information and comments is an extremely limited function for social interaction. It seems that no one has bothered to analyze how workers engage with each other to perform their jobs, in order to identify how social tools could make those interactions more productive.

We do see companies like Atlassian developing tools like JIRA and Confluence for assisting teams to work together. These tools recognize how social interactions are embedded into the information and processes that surround business functions. We need this kind of innovation applied across the board throughout the tools that we use in the enterprise.

Productive work relies on effective communication, coordination, and collaboration. These are social functions. Social networking is already mature in project management, wikis (crowdsourcing information), and discussion forums. But these are often peripheral to the tools that many workers use to perform their primary job functions. We need to look at the social interactions that surround these tools and redevelop the tools so that those interactions become more productive.

Let’s explore where social interactions are poor in our work environments today.

As our businesses expand across the globe, our teams are composed of workers who reside in different places and time zones. Remote interactions between non-collocated teams can be extremely challenging and inefficient compared to the regular face-to-face interactions that collocated workers enjoy, with tools like whiteboards and pens. There is a huge opportunity for tablet applications to better support remote workers.

As businesses scale, we may discover that the traditional organizational structures are too rigid to support the ever-accelerating pace of agility that we demand. Perhaps social tools can facilitate innovations in how workers organize themselves. As highly skilled and experienced workers mature, they become more capable of taking the initiative, making good decisions independently, and behaving in a self-motivated manner. Daniel Pink has identified that autonomy, mastery, and purpose are the intrinsic motivators that lead to happy and productive employees. Perhaps with social tooling, it is possible for organizations to evolve to take advantage of spontaneous order among workers instead of relying mostly on top-down management practices for assigning work.

These are two ways in which social networking may apply to enterprises in ways that are not well supported today. All we have to do is examine the pain points in our work environments to identify innovations that may be possible. It is quite surprising to me that we are not already seeing social tools revolutionize the work place, especially in the technology sector where start-ups do not have an entrenched culture and management style.

Reliable Messaging with REST

Marc de Graauw’s article Nobody Needs Reliable Messaging remains as relevant today as when it was first published in 2010. It echoes the principles outlined in Scalable, Reliable, and Secure RESTful services from 2007.

It basically says that you don’t need REST to support WS-ReliableMessaging delivery requirements, because reliable delivery can be accomplished by the business logic through retries, so long as the REST methods are idempotent (the same request will produce the same result). Let’s examine the implications in more detail.

First, we must design the REST methods to be idempotent. This is no small feat; it is a huge topic that deserves its own separate examination. But let’s put this topic aside for now, and assume that we have designed our REST web services to support idempotence.
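As a hedged illustration of what idempotent design can look like (the resource and its storage are invented for this sketch), a PUT to a client-chosen URI converges on the same state no matter how many times the request is replayed.

import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

@Path("/orders")
public class OrderResource {

    // Stands in for the persistent store in this sketch.
    private static final ConcurrentMap<String, String> store = new ConcurrentHashMap<>();

    // PUT is idempotent: retrying the same request leaves the resource in the same state.
    @PUT
    @Path("{id}")
    @Consumes("application/json")
    public Response upsert(@PathParam("id") String id, String body) {
        boolean existed = store.put(id, body) != null;
        return existed
                ? Response.ok().build()
                : Response.status(Response.Status.CREATED).build();
    }
}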

If we are developing components that call REST web services for process automation, the above principle says that the caller is responsible for retrying on failure.

The caller must be able to distinguish a failure to deliver the request from a failure by the server to perform the requested method. The former should be retried, expecting that the failure is temporary. The latter is permanent.

The caller must be able to implement retry in an efficient manner. If the request is retried immediately in a tight loop, it is likely to continue to fail for the same reason. Network connectivity issues sometimes take a few minutes to be resolved. However, if the reason for failure is because the server is overloaded, having all clients retry in a tight loop will exacerbate the problem by slamming the server with a flood of requests, when it is least able to process them. It would be helpful if clients would behave better by backing off for some time and retrying after a delay. Relying on clients to behave nicely on their honor is sure to fail, if their retry logic is coded ad hoc without following a standard convention.

The caller must be able to survive crashes and restarts, so that an automated task can be relied upon to reach a terminal state (success or failure) after starting. Therefore, message delivery must be backed by a persistent store. Delivery must be handled asynchronously so that it can be retried across restarts (including service migration to replacement hardware after a hardware failure), and so that the caller is not blocked waiting.

The caller must be able to detect when too many retry attempts have failed, so that it does not get stuck waiting forever for the request to be delivered. Temporary problems that take too long to be resolved need to be escalated for intervention. These requests should be diverted for special handling, and the caller should continue with other work, until someone can troubleshoot the problem. Poison message handling is essential so that retrying does not result in an infinite loop that would gum up the works.
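A minimal sketch of the client-side behavior described above; the backoff parameters, attempt limit, and escalation hook are illustrative assumptions rather than a standard convention.

import java.util.concurrent.ThreadLocalRandom;

public class RetryingCaller {

    interface Delivery {
        void attempt() throws Exception;
    }

    // Retry a delivery failure with exponential backoff and jitter,
    // then divert the request for intervention after too many attempts.
    static void deliverWithRetry(Delivery delivery, int maxAttempts) throws Exception {
        long delayMillis = 1_000;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                delivery.attempt();
                return;                                    // delivered successfully
            } catch (Exception deliveryFailure) {
                if (attempt == maxAttempts) {
                    escalate(deliveryFailure);             // poison message handling: divert, do not loop forever
                    throw deliveryFailure;
                }
                Thread.sleep(delayMillis + ThreadLocalRandom.current().nextLong(delayMillis));
                delayMillis = Math.min(delayMillis * 2, 60_000);   // back off, up to a cap
            }
        }
    }

    static void escalate(Exception cause) {
        System.err.println("Delivery abandoned; diverted for intervention: " + cause);
    }
}

In practice the pending request and its retry state would live in a persistent store, as described above, so that delivery survives crashes and restarts.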

POST methods are not idempotent, so retry must be handled very carefully to account for side effects. Even if the request is guaranteed to be delivered, and it is processed properly (exactly once) by the server, the caller must be able to reliably determine whether the method succeeded, because the reply can be lost. One approach is to deliver the reply reliably from the server back to the caller. Again, all of the above reliable delivery qualities apply. The interactions needed to enable this round-trip message exchange certainly look very foreign to a simple synchronous HTTP interaction. Either the caller would poll for the reply, or a callback mechanism would be needed. Another approach is to enable the caller to confirm that the original request was processed. With either approach, the reliable execution requirement alters the methods of the REST web services. To achieve better quality of service in the transport, the definitions of the methods need to be radically redesigned. (If you are having a John McEnroe “you cannot be serious” moment right about now, it is perfectly understandable.)
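One common way to support that confirmation, shown here only as a hedged sketch, is a client-generated request identifier: a retried POST carrying the same identifier is recognized as a duplicate and answered with the originally recorded result. The header name and in-memory store below are assumptions for illustration; in practice the record of processed requests must itself be persistent.

import javax.ws.rs.Consumes;
import javax.ws.rs.HeaderParam;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.Response;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

@Path("/payments")
public class PaymentResource {

    // Records the result of each processed request, keyed by the client-supplied identifier.
    private static final ConcurrentMap<String, String> processed = new ConcurrentHashMap<>();

    @POST
    @Consumes("application/json")
    public Response create(@HeaderParam("Request-Id") String requestId, String body) {
        String previous = processed.get(requestId);
        if (previous != null) {
            return Response.ok(previous).build();          // duplicate: confirm the earlier outcome
        }
        String result = process(body);                     // perform the side effect once
        processed.put(requestId, result);
        return Response.status(Response.Status.CREATED).entity(result).build();
    }

    private String process(String body) {
        return "{\"status\":\"accepted\"}";                // placeholder for the real business logic
    }
}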

Taking these requirements into consideration, it is clear that it is not true that “nobody needs reliable messaging”. Enterprise applications with automated processes that perform mission-critical tasks need the ability to perform those tasks reliably. If reliable message delivery is not handled at the REST layer, the responsibility for retry falls to the message sender. We still need reliable messaging; we must implement the requirement ourselves above REST, and this becomes troublesome without a standard framework that behaves nicely. If we accept that REST can provide only idempotence toward this goal, we must implement a standard framework to handle delivery failures, retry with exponential backoff, and divert poison messages for escalation. That is to say, we need a reliable messaging framework on top of REST.

[Note that when we speak of a “client” above, we are not talking about a user sitting in front of a Web browser. We are talking about one mission-critical enterprise application communicating with another in a choreography to accomplish some business transaction. An example of a choreography is the interplay between a buyer and a seller through the systems for commerce, quote, procurement, and order fulfillment.]

OLTP database requirements

Here is what I want from a database in support of enterprise applications for online transaction processing (OLTP).

  1. ACID transactions – Enterprise CRM, ERP, and HCM applications manage data that is mission critical. People’s jobs, livelihoods, and businesses rely on this data to be correct. Real money is on the line.
  2. Document oriented – A JSON or XML representation should be the canonical way to think of objects stored in the database.
  3. Schema aware – A document should conform to a schema (JSON Schema or XML Schema). Information has a structure and meaning, and it should have a formal definition.
  4. Schema versioned – A document schema may evolve in a controlled manner. Software is life cycle managed, and its data needs to evolve with it for compatibility, upgrades, and migration.
  5. Relational – A subset of a document schema may be modeled as relational tables with foreign keys and indexes to support SQL queries, which can be optimized for high performance.

The fundamental shift is from a relational to a document paradigm as the primary abstraction. Relational structures continue to play an adjunct role to improve query performance for those parts of the document schema that are heavily involved in query criteria (WHERE clauses). The document paradigm enables the vast majority of data to be stored and retrieved without having to rigidly conform to a relational schema, which cannot evolve as fluidly. That is not to say that data stored outside of relational tables is less important or less meaningful. To the contrary, some of the non-relational data may be the most critical to the business. This approach simply recognizes that information not directly involved in query criteria can be treated differently, to take advantage of greater flexibility in schema evolution and life cycle management.

Ideally, the adjunct relational tables and SQL queries would be confined by the database to its internal implementation. When exposing a document abstraction to applications, the database should also present a document-oriented query language, such as XQuery or its equivalent for JSON, which would be translated into SQL where appropriate as an optimization technique.

NoSQL database technology is often cited as supporting a document paradigm. NoSQL technologies as they exist today do not meet the need, because they do not support ACID transactions and they do not support adjunct structures (i.e., relational tables and indexes) to improve query performance in the manner described above.

Perhaps the next best thing would be to provide a Java persistent entity abstraction, much like EJB3/JPA, which would encapsulate the underlying representation in a document part (e.g., as an XMLType or a JSON CLOB column) and a relational part, all stored in a SQL database. This would also provide JAXB-like serialization and deserialization to and from JSON and XML representations. This is not far from what EclipseLink does today.
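A hedged sketch of what that might look like with plain JPA annotations (the ServiceOrder entity and its columns are invented for illustration): the canonical document is stored whole, and the few properties that appear in query criteria are promoted to indexed relational columns.

import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Index;
import javax.persistence.Lob;
import javax.persistence.Table;

// Hypothetical entity: the document part is canonical; the relational part exists for query performance.
@Entity
@Table(name = "SERVICE_ORDER",
       indexes = @Index(name = "IX_SERVICE_ORDER_STATUS", columnList = "STATUS"))
public class ServiceOrder {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    private Long id;

    @Column(name = "STATUS")
    private String status;            // relational adjunct: heavily used in WHERE clauses

    @Lob
    @Column(name = "DOCUMENT")
    private String documentJson;      // document part: the canonical JSON representation

    public String getStatus() { return status; }
    public void setStatus(String status) { this.status = status; }
    public String getDocumentJson() { return documentJson; }
    public void setDocumentJson(String documentJson) { this.documentJson = documentJson; }
}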

innovation – a new definition

innovation [noun] – context-violating exaptation.

Ever since I read this tweet in 2012 by Fast Company, I have redefined innovation in this way.

Here is the first definition of exaptation from Dictionary.com.

noun, Biology
1. a process in which a feature acquires a function that was not acquired through natural selection.

By taking something, or a combination of things, and applying it to a purpose for which it was not intended (violating its original context), one may discover that it is well suited to perform a different function. This discovery becomes an innovation.

Ignorance of the law

Ignorance of the law is no excuse. That is the principle we are expected to live by. If we embrace the full implication of this principle, it may merit being adopted as a Constitutional principle that places the most effective constraints on government overreach.

If ignorance of the law is not an allowable excuse, it is imperative for government to enact laws that people can read and comprehend in order to remain in compliance. Moreover, the laws for crimes and misdemeanors, as well as the regulations that every person must comply with, must be readable and comprehensible in their totality by the average person without professional legal counsel. This requires that all crimes, misdemeanors, and regulations together not exceed a certain maximum number of words. That limit should be established as what an average student can read and comprehend by investing one hour per day during four years of high school. The government is forbidden from writing laws and regulations that exceed this limit, so as not to instigate ignorance of the law.

faster than light travel

The article NASA May Have Accidentally Created a Warp Field is getting people excited about faster than light travel.

You don’t need to travel faster than light to go arbitrarily far in an arbitrarily short time of your own. All you need to do is travel closer to the speed of light. As you get closer to c, time dilation and space contraction combine to bring arbitrarily distant destinations within reach. Although the travelers will experience relatively manageable passages of time, it is their friends observing from home who will age much more quickly. Travelers moving at nearly c in space have most of their velocity contributing to movement through the space dimensions and almost none through time. At home, we are moving at c almost entirely in the time dimension, remaining motionless in space. The laws of physics give everything no option but to move at c through spacetime; we can only choose what part of our motion is through the space dimensions, with the remainder through time.
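As a back-of-the-envelope illustration (the speed and distance are my own example, not from the article), consider a trip of 100 light years at v = 0.999c:

\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}, \qquad
v = 0.999\,c \;\Rightarrow\; \gamma \approx 22.4, \qquad
\Delta t_{\mathrm{traveler}} = \frac{\Delta t_{\mathrm{Earth}}}{\gamma}
\approx \frac{100.1\ \mathrm{yr}}{22.4} \approx 4.5\ \mathrm{yr}

So a trip that takes roughly 100.1 years as measured from Earth passes in only about 4.5 years of the travelers’ own time, and pushing v still closer to c shrinks that proper time without bound.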

The benefit imagined from warping space is to alleviate this huge difference in the passage of time, so that travelers can go places and come back without generations dying off at home in the meantime. The “faster than light” travel is about how outside observers perceive the traveler’s motion, so that they can share in the experience within their lifetimes. Given enough acceleration to move at close to c through space, travelers have no need for FTL motion to reach any destination within their own lifetimes. The desire for FTL motion is for the non-travelers, who don’t want to die waiting for the travelers to return.

The search for intelligent life

The search for intelligent life outside of our solar system is a difficult one. We tend to think that expanding the scope of our search to include more galaxies is sufficient. But we must accept that even if we had the technology to examine every galaxy exhaustively in perfect detail, we would only be covering a minuscule part of the search space, which is almost entirely inaccessible to us by the laws of physics.

We can only see something in the current snapshot in time. Let’s try to imagine a search for human radio signals on Earth from the perspective of a distant alien civilization. The Earth is about 4.5 billion years old. Humans started producing radio signals in 1894, so these radio signals have been transmitting for the past 121 years. These signals have only had the opportunity to propagate 121 light years away from Earth in that time. Beyond that distance, no alien civilization would be able to detect these signals. Moreover, an alien civilization would have had to develop, coincidentally, to the point where its technology was advanced enough to detect such signals at exactly the right time, during the tiny window when they arrive. This is a 121-year window out of the 13.82 billion years in which the universe has existed.
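A rough, hedged calculation puts that window in perspective:

\frac{121\ \mathrm{yr}}{1.382 \times 10^{10}\ \mathrm{yr}} \approx 8.8 \times 10^{-9}

And even that minuscule fraction of time only matters for observers who also happen to sit within the 121-light-year sphere that the signals have reached so far.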