All posts by Ben Eng

I am a software architect in the Communications Global Business Unit (CGBU) within Oracle. I currently work on communications solutions that integrate a complete stack of Business Support System (BSS) and Operations Support System (OSS) applications. My previous responsibilities include product management of the Oracle Communications OSS Suite to provide an integrated suite of OSS applications for communications providers.

I founded the Oracle Communications Service & Subscriber Management (S&SM) application and the Oracle Communications Unified Inventory Management (UIM) application. I pioneered the adoption of Object-Relational Mapping (ORM) based persistence techniques within OSS applications. I introduced the XML Schema-based entity relationship modeling language, which is compiled into the persistent object modeling service. I established the notion of a valid-time temporal object model and database schema for life cycle management of entities, along with the ability to travel through time by querying with a temporal frame of reference or a time window of interest. I established the patterns for resource consumption for capacity management. I championed the development of Web-based user interfaces and Web Services for SOA-based integration of OSS applications. I was responsible for single-handedly developing the entire prototype that formed the foundation of the current generation of the OSS inventory application. I have been engaged in solution architecture with service providers across the globe to adopt and deploy Oracle’s OSS applications. I am responsible for requirements analysis and architectural design for the Order-to-Activate Process Integration Pack (PIP), proposed to integrate the OSS application suite for the communications industry using the Application Integration Architecture (AIA).

Any opinions expressed on this site are my own, and do not necessarily reflect the views of Oracle.

Net Neutrality

Whenever government policies are implemented in the name of consumer protection, we can be sure that it is not consumers being protected, but rather crony industry incumbents. It is presented as a false alternative between government regulation or absence of regulation, when the strongest form of regulation with the greatest degree of consumer protection is the free market, where consumers decide how their dollars are spent. Good products from well-behaving businesses are rewarded. Bad products and ill-behaving businesses are punished, often to extinction. Moreover, when consumers are under-serviced, entrepreneurs enter the market to compete against under-performing incumbents by offering innovative new products and business practices to meet the demand for superior goods and services, often disrupting the status quo. Meanwhile, government regulations necessarily entrench the status quo. “Best practices” can only be best until innovations overtake them, at which time they become obsolete. Government regulations often continue to burden an industry with obsolete practices that prevent innovations from flourishing. Thus, incumbents are protected from agile upstarts.

Net Neutrality is promoted ostensibly to protect consumers from Internet Service Providers (ISPs) throttling traffic to disadvantage competitive “over the top” (OTT) content providers (e.g., Netflix) while favoring the ISP’s own content services (e.g., television in the case of a cable ISP). Another hypothetical straw man is for ISPs to charge customers to enable access to various information services. I would argue that no ISP would pursue such goals, because of the backlash and the consequent mass exodus of customers to the embrace of the ISP’s competition. ISPs would also want to avoid anti-trust concerns. Paranoia about ISP misbehavior disregards the lack of a business case. Net Neutrality was enacted despite the fact that no ISP had actually implemented anti-competitive traffic management on any significant scale.

Consumers want to preserve a “free and open Internet”—rightly so. ISPs have the practical capability to throttle traffic by origin (content provider), traffic type (e.g., video), or consumption (e.g., data limits for heavy users). They have no practical (cost-effective) mechanism to understand the meaning of the content to selectively filter it. ISPs have only blunt instruments to wield.

Unlike ISPs, content providers (e.g., Netflix, Google, Facebook, Twitter, Cloudflare, GoDaddy) are responsible for “information” services, which fall outside the scope of Net Neutrality for “transmission” by carriers. While ISPs have not attempted to damage a free and open Internet, we have already seen content providers behave very badly toward free speech, since they have the ability to understand the meaning of their content.

If a “free and open Internet” is what is desired, censorship, bans, de-platforming, and de-monetization by companies, who are the strongest advocates of Net Neutrality, are certainly antithetical to that aim. What is their real motive?

Content providers enjoy having their traffic delivered to customers worldwide. They only pay for the bandwidth to the networks they are directly connected to. They are not charged for their traffic transiting other networks, while routed to their end users. Content providers obviously like this arrangement, and they want to preserve this status quo (protecting their crony interests).

Without Net Neutrality, although ISPs may not have a business case for charging customers (end users) for differentiated services, they would have a strong business case for providing differentiated services (various levels of higher reliability, low latency, low jitter, and guaranteed bandwidth) to content providers. Improvements in high-quality delivery (called “paid prioritization”) would benefit innovative applications that may not be viable today, such as remote surgery. Paid prioritization would motivate content providers to buy connectivity into an ISP’s network to provide higher quality service to their customers who receive Internet access from that ISP, or to otherwise share revenue with the ISP for such favorable treatment of their traffic. The environment becomes much more competitive among content providers, while more revenue is shared with the ISPs. ISPs would then be motivated to invest more heavily in improving their networks to capture more of this revenue opportunity. Consumers benefit from higher quality services, better networks, and increased competition (differentiation based on quality) among content providers.

Personal Assistants

Continuing the series on Revolutionizing the Enterprise, where we left off at Sparking the Revolution, I would like to further emphasize immediate opportunities for productivity improvements, which need not venture into much-hyped speculative technologies like blockchain and artificial intelligence.

In the previous article, I identified communication and negotiation as skills where software agents can contribute superior capabilities to improve human productivity by offloading tedium and toil. Basic elements of this problem can be solved without applying advanced technology like AI. Machine learning can provide additional value by discerning a person’s preferences and priorities. For example, a person may always prefer to reschedule dentist appointments but never reschedule family events to accommodate work. Automating the learning of such rules enables the prioritization of activities to be automated, further offloading cognitive load.
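As a toy illustration (the categories, priorities, and function name here are invented for this sketch, not any real assistant API), the dentist-versus-family preference described above might distill into a rule as simple as:

```python
# Hypothetical sketch of a learned preference rule.
# Categories and priority weights are illustrative assumptions that a
# learning system would infer from a person's observed behavior.
PRIORITY = {"family": 3, "work": 2, "personal-errand": 1}

def may_reschedule(appointment, conflicting):
    """Allow rescheduling only when the appointment has lower
    priority than the event it conflicts with."""
    return PRIORITY[appointment] < PRIORITY[conflicting]

# A dentist visit yields to a family event, but never the reverse.
print(may_reschedule("personal-errand", "family"))  # True
print(may_reschedule("family", "work"))             # False
```

Once such rules are distilled, the assistant can apply them automatically when conflicts arise, only escalating to the human for cases the rules do not cover.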

In my own work, I wish I had a personal assistant that could shadow my every move. I want it to record my activities so I can replay them later. I want these activities captured in the most concise and compact form, not only as audio and video. For example, as I execute commands in a bash shell, I want to record the command line arguments, the inputs, and the outputs, so this textual information can be copied into technical documentation. As I point and click through a graphical user interface, I want these events to be described as instructions (e.g., input “John Doe” in the field labeled “Name” and click on the “Submit” button).
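A minimal sketch of the shell-recording idea (the log file name and record format are assumptions, not an existing tool): run a command, capture its arguments and output, and append a structured record that could later be pasted into documentation or replayed.

```python
# Hypothetical activity recorder: wraps a command invocation and logs
# argv, stdout, stderr, and exit code as one JSON line per activity.
import json
import subprocess
import time

def record_command(argv, log_path="activity-log.jsonl"):
    """Run a command and append a structured record of what happened."""
    result = subprocess.run(argv, capture_output=True, text=True)
    entry = {
        "timestamp": time.time(),
        "argv": argv,
        "stdout": result.stdout,
        "stderr": result.stderr,
        "exit_code": result.returncode,
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

entry = record_command(["echo", "hello"])
```

A real assistant would hook the shell itself rather than wrap each command, but the captured record would look much the same.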

With a history of my work in this form, this information will be useful for a number of purposes.

  • Someone who pioneers a procedure will eventually need to document it for knowledge transfer. Operating procedures teach others how to accomplish the same tasks by showing how they were done.
  • Pair programming is often inconvenient due to team members being remote from each other and separated by time zones. An activity log can enable two remote workers to collaborate more effectively.
  • Context switching between tasks is expensive in terms of organizing one’s thoughts. Remembering what a person was doing, so that they can resume later, would save time and improve effectiveness.

The above would be a good starting point for a personal assistant without applying any form of AI or analytics. Then, imagine what might be possible as future enhancements. Procedures can be optimized. Bad habits can be replaced by better ones. Techniques used by more effective workers can be taught to others. Highly repeatable tasks can be automated to remove that burden from humans.

I truly believe the places to begin innovating to revolutionize the enterprise are the mundane and ordinary, which machines have the patience, discipline, and endurance to perform better than humans. More ambitious technological capabilities are good value-adds, but we should start with the basics to establish personal assistants in the enterprise as participants in ordinary work, not as esoteric tools in obscure niches.

[Image credit – Robotics and the Personal Assistant of the Future]

climate change

While I am not a climate scientist and do not claim to have any special expertise in climate research, my interest is to analyze how a layman should organize his thoughts around this complex topic so that critical thinking can be applied to separate truth from propaganda and political posturing.

We have seen the media and leading voices on the topic of climate change promote the following ideas:

The term “denial” conflates healthy skepticism over extraordinary claims with the non-recognition of scientific facts that are largely beyond dispute. Extraordinary claims are certainly under dispute by experts in their fields, and the disingenuous label of “denial” is intended to chill skepticism, a hallmark of science, with authoritarian dogma (“settled science” is the antithesis of science). Here is a sampling of dissent:

The case being made by the climate change activists contains several elements:

  1. Global temperatures have been rising at an alarming rate since the beginning of the industrial revolution. This is where science investigates the facts through measurements that are indisputable except for accuracy, error, and the methods used to ensure that the data is true. Unfortunately, even at this fundamental level certain scientists have exhibited misbehavior (Climategate), such as data manipulation, lack of transparency about the unaltered data sets and what alterations were made, exclusion of participants from peer review and from publications, and suppression of dissent. While such misbehavior undermines the credibility of the science, on net the evidence does support the position that global temperatures have risen 1.2°C since the pre-industrial era.
  2. Warming is predominantly caused by increased CO2 concentrations in the atmosphere. The strongest argument in favor of attributing the cause of warming to CO2, as opposed to solar activity and any number of variables that affect the phenomena under measurement (e.g., proxies to temperature such as tree rings are sensitive to many factors like sunlight, rainfall, soil conditions, nutrients, and pestilence which cannot be separated from the effects of temperature), is the observation that surface temperatures have risen as lower stratospheric temperatures have dropped, which is predicted if greenhouse warming due to CO2 is the cause.
  3. Warming is unprecedented and unnatural. This is where science can provide insights into historical events and patterns. Articles such as Nature Unbound III: Holocene climate variability (Part A) and Part B give some perspective into natural trends over millennia that show large temperature variations and atmospheric CO2 levels that are natural and uncorrelated.
  4. Rising atmospheric CO2 levels are the result of emissions from burning fossil fuels, and therefore human activity is to blame. There can be little dispute that the post-industrial rise in atmospheric CO2 is primarily attributable to human activity.
  5. Elevated atmospheric CO2 levels and associated warming are bad. Melting glaciers, rising sea levels, increases in extreme weather events, disruptions of ocean currents, ocean acidification, and even mass extinctions are potential hazards that climate alarmists warn of. These claims are strongly disputed [1] [2] [3] [4]. Measuring a global trend and determining the cause are problematic. On the other hand, there is a fair amount of research that suggests that global warming has been beneficial.
  6. Rising atmospheric CO2 levels and warming trend will be catastrophic. Predictions of catastrophic levels of warming are based on climate models, which have had a very poor track record to date. Models have made predictions that do not comport with observations.
  7. Intervention is required to curtail human activity that emits CO2.
  8. Government policies are the proper means of intervention.
  9. The specific policies being advocated are the best solutions to prevent catastrophe and provide the best net benefit.

By the time we reach the final three claims about solutions, we must have already concluded from the previous six that global warming is catastrophic and predominantly caused by CO2 emissions from human activities. Any critical examination of the evidence would not support such a conclusion. The case for climate alarmism falls apart at the third claim. The evidence favors the Lukewarmers, “those who argue that carbon dioxide indeed is warming surface temperatures, but that its effect is modest and that we are inadvertently adapting”. However, let’s roll with the “hotheads” to see where they want to lead us.

When exploring practical solutions, we move beyond scientific research into the realm of engineering, which is applied science. How to solve the problem can either be compatible with liberty – relying on voluntary action; or the solutions can rely on coercion and force through government action. This falls into the political realm.

When evaluating how best to deploy scarce resources (e.g., labor, factors of production, capital investment) among various alternative solutions, we move beyond the physical sciences into the realm of economics, which is a social science. Humans cannot be treated as inanimate objects without free will, rationality, and rights.

Government policies cannot be implemented without expecting people to resist, avoid, or bypass them. Policies cannot anticipate how human ingenuity and innovation may provide better solutions, or how policies may impair such solutions from being developed, as crony regulations that protect incumbents and government “picking winners” have a tendency to do.

Government funding of scientific research has conflicts of interest. You tend to get the results that you pay for, because researchers understand that their funding will only continue if the government’s favored outcomes are achieved and their policy goals are supported.

Government funding of their preferred solutions results in cronyism. Green energy subsidies are a prime example.

If climate change advocates cared about practical solutions to replacing CO2 emitting energy generation, they would support modern and future nuclear power technologies. This topic is explored in the article titled What are some policies that would improve millions of lives, but people still oppose?

What climate alarmists leave unsaid is their aim to scale back human activity to reduce the impact on the environment. In their world view, preserving wilderness is a greater priority than improving the standard of living for humans, and people have no right to exist.

The popular movement against climate change is not primarily about science. Its main aim is political advocacy. That is, scientific arguments are used to support political lobbying for government-mandated economic solutions to future problems that are predicted by models based on the scientific explanations of the physical phenomena that contribute to climate. In the realm of political debate and economics, the physical sciences are just a useful idiot, with cherry-picked results used to promote the preferred policy goals. Popular opinion is driven by the desired political outcome, not by the truth of the science. The goal is to shift power away from individuals seeking to improve their standard of living and to concentrate it in governments that implement collectivist policies, which in turn breed cronyism and corruption.

Rights

This article is a derivation of the concept of rights from first principles. This is the notion of natural rights consistent with The Jeffersonian Perspective. Throughout this article, I shall use the term “man” to refer to a human being, not to gender.

Man exists as a being of a certain nature, which is distinctly different from that of other beings. Man is a living being with the unique ability to reason. Man relies on his ability to reason for survival.

For man to exist qua man, as a living reasoning being according to his nature, certain conditions must exist. Man’s rationality is conditional upon being free to think and act according to his reasoning mind. Freedom is the absence of coercion and force from other men. The mind is impotent when man is threatened or attacked with the force of violence.

Rights are based on the mutual recognition of man’s nature and the conditions required for man’s survival. Rights are the mutual respect for freedom of thought and action so that man can exist according to his nature.

Sparking the Revolution

In my previous article, Revolutionizing the Enterprise, I provided an outlook for how emerging technologies may help to transform how we do work. Now, let’s explore how we might provide the spark that starts the fire to burn down the old and welcome the new. The world does not change in a radical way without a progression of steps that pave a path for getting from here to there. What might the first step be to introducing robots and AIs as personal assistants into the regular work lives of numerous employees?

We need only look to our daily struggles to identify where every person would see the value of machine intelligence. Organizing a meeting among several participants can be challenging. You need to find a convenient time when every participant is available. You need to find a suitable venue that can accommodate everyone. If folks need to travel, the complexity rises enormously, because each traveler’s attendance is then dependent upon successfully booking travel arrangements. The risk of a single unsatisfied requirement making the meeting non-viable rises with each participant and their special needs. If the meeting needs to be moved to accommodate certain participants, this triggers a flurry of renegotiation, exploring how calendars can be readjusted, with a cascade of renegotiations of other appointments, each having its own priority and constraints.

This kind of negotiation among a network of people is virtually impossible for humans to accomplish among themselves, because of the latency of human communication. However, if every human could be represented by an agent that negotiates on their behalf, this kind of activity could become painless. Imagine how many hours of phone tag, email, and travel booking could be saved. Even if an agent were not entrusted to finalize decisions on travel booking, all of the negotiation and arrangements could be prepared and presented for final approval by the human; or the human could be involved at key decision points, presented with a short list of options to guide the way forward for the agent.
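The core step an agent would automate can be sketched without any AI at all (calendars and slot labels here are toy data; a real agent would query calendar services and iterate over constraints like venue and travel): intersect each participant's free time to find mutually viable slots.

```python
# Illustrative sketch: find meeting slots that every participant has free.
# Participant names and slot strings are made-up example data.
def common_free_slots(calendars, candidate_slots):
    """Return candidate slots not blocked on any participant's calendar."""
    viable = set(candidate_slots)
    for busy in calendars.values():
        viable -= set(busy)  # remove slots this participant has booked
    return sorted(viable)

calendars = {
    "alice": ["Mon 09:00", "Mon 10:00"],
    "bob": ["Mon 10:00", "Mon 11:00"],
    "carol": ["Mon 09:00"],
}
slots = ["Mon 09:00", "Mon 10:00", "Mon 11:00", "Mon 13:00"]
print(common_free_slots(calendars, slots))  # ['Mon 13:00']
```

When no slot survives the intersection, the negotiation phase begins: the agent would rank each participant's conflicts by priority and propose reschedules, which is exactly the cascading process described above.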

I believe ordinary mundane problems such as this one, which every person has experienced, will serve as an opportunity to introduce machine intelligence to work alongside us. The off-loading of such unproductive and non-creative toil to an automated personal assistant would be a welcome change, seen as another useful tool rather than a radical development. And that’s how the revolution should begin.

Revolutionizing the Enterprise

It has been over five years since I wrote an article titled Enterprise Collaboration, in which I identified the need for innovations to transform how people do their work. Since then, we have seen no significant advances. Enterprise applications continue to move very slowly to the cloud, driven primarily by cost efficiencies with little noticeable functional improvement except at the margins (big data analytics, social, search, mobile, user experience).

Where can we go from here?

I still firmly believe that a global work force needs to be decoupled in space and time. Mobility and cloud services will continue to provide an improving platform to enable work to be performed at any time from wherever people want. We should enable people to do their work as effectively from the office as from home, in their vehicles, during air travel, at the coffee shop, or anywhere else they happen to be. Advances in computing power, miniaturization, virtual reality, alternative display and input technologies (e.g., electronic skin, heads-up displays, voice recognition, brain-computer interfaces, etc.), and networking will continue to provide an improving platform for inventing better ways of doing work and play. This path does not need too much imagination to foresee.

Recently, we have seen an uptick in applying artificial intelligence. Every major company seems to be embracing AI in some form. Image recognition and natural language are areas that have been researched for decades, and they are now being employed more ubiquitously in everyday applications. These technologies lower the barrier between the virtual world and the real world, where humans want to interact with machine intelligence on their own terms.

However, I believe an area where AI will provide revolutionary benefits is in decision support and autonomous decision-making. So much of what people do at work is tedium that they wish could be automated. Some forms of tedium are drudgery, such as reporting status and time to management, organizing and scheduling meetings among team members, planning work and tracking progress, and keeping people informed. These tasks are routine and time-consuming, not creative and value-producing. Machines can interact among themselves to negotiate on behalf of humans for the most mundane tasks that people don’t really care too much to be involved in. Machines can slog through an Internet full of information to gather, prune, and organize the most relevant set of facts that drive decisions. Machines can carry out tasks on their own time, freeing up humans to work on more important or interesting things.

Personal assistants as computing applications are a new phenomenon. Everyone has heard of Amazon Echo and Google Assistant by now. I can imagine advances in this capability expanding into all areas of work and personal life to help off-load tedium. As AI becomes more capable, we should see them taking over mundane tasks, like research (e.g., comparing products to offer recommendations toward a purchasing decision, comparing providers toward recommending a selection), planning, coordinating, note taking, recalling relevant information from memory, distilling large volumes of information into a concise summary, etc. Eventually, AI will even become capable enough to take over mundane decision-making tasks that a person no longer cares to make (e.g., routinely replenish supplies of consumables from the lowest priced supplier, repetitive tasks).
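One such routine decision mentioned above, replenishing consumables from the lowest-priced supplier, is simple enough to delegate entirely to software today. A minimal sketch (supplier names and prices are invented for illustration):

```python
# Hypothetical sketch of a delegated purchasing rule: reorder from
# whichever supplier currently quotes the lowest price.
def pick_supplier(quotes):
    """Choose the supplier with the lowest quoted price."""
    return min(quotes, key=quotes.get)

quotes = {"acme": 12.50, "globex": 11.95, "initech": 13.10}
print(pick_supplier(quotes))  # globex
```

The point is not the one-liner itself but the delegation: once a person declares the rule, the assistant can execute it indefinitely without further attention, escalating only when quotes change in ways the rule does not cover.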

The other phenomenon that will revolutionize the workplace even more than it has in the past is robotics. Robots have already revolutionized manufacturing for decades by replacing repetitive, error-prone, labor-intensive tasks with perfectly reproducible, error-free automation. We are seeing politics influence businesses to apply robots where human labor sufficed in the past, purely because of the increasing cost of labor. Minimum wage legislation (bans on jobs that pay less than some mandated floor in wages) that raises labor costs above the value produced will force businesses to rethink how to operate profitably. Beyond entry-level jobs, such as fast food service, self-driving cars and trucks are already in trials for ride-sharing and long-haul cargo transport. As robots become more dexterous, mobile, compact, and intelligent, we will see them become personal assistants that perform physical tasks as much as we see them in software form performing computing tasks. We should anticipate that robots will serve in a broad spectrum of capacities, from low-skilled drudgery to highly-skilled artisans and professions.

The future enterprise will involve a work force where humans, AIs, and robots collaborate closely. Humans have a comparative advantage in performing creative and path-finding tasks with ill-defined goals, many unknowns, and little experience to draw upon. Robots and AIs have a comparative advantage in performing repetitive, well-defined, and tedious tasks. Together, they will transform the enterprise in ways that we have never seen before.

black hole and big bang singularities

Here is Ben’s theory about the impossibility of black hole and big bang singularities.

I think once we understand the Higgs mechanism better, we will discover that above a certain temperature, which rises as we work backward in time to make the universe more dense, the opposite of “condensation” happens. The bosons no longer have mass. No mass, no gravity; no longer contributing gravity means the very force that is squeezing things together stops squeezing at the core. I believe this should put a limit on how dense things can be, so it is impossible to form a singularity, if you cannot pass this density limit at the extreme interior.

I wish I knew a lick of math, so I could even comprehend what SU(2) × U(1) means. Sadly, I’ll have no chance of writing a paper and winning a Nobel prize. Math is hard.

cloud services for the enterprise

The Innovator’s Dilemma describes how the choice to sustain an incumbent technology may need to be weighed against pursuing disruptive new technologies. Nascent technologies tend to solve a desirable subset of a problem with greater efficiency. They change the game by making what used to be a costly high-end technology available as a commodity that is affordable to the masses. It turns out that high-end customers can often live without the rich capabilities of the costly solution, and they would rather save on cost. Meanwhile, with the success that the low-end solution is gaining in the market, it can invest in maturing its product to encroach into the high-end market. Eventually, the incumbent product’s market is entirely taken over by the rapidly growing upstart, who was able to establish a foothold in a larger installed base.

That is the situation we find ourselves in today with enterprise applications. Large companies rely on expensive software licenses for Customer Relationship Management, Enterprise Resource Planning, and Human Capital Management applications deployed on-premise. Small and medium sized businesses may not be able to afford the same kinds of feature-rich software, because not only are the software license and annual maintenance costs expensive, but commercial off-the-shelf enterprise software is typically a platform that requires months of after-market solution development, customization, and system integration to tailor the software to the business policies and processes specific to the enterprise. The evolution to cloud services aims to disrupt this situation.

Let us explore the ways that cloud services aim to be disruptive.

As described above, traditional enterprise software products are platforms. An incumbent product that evolves to the cloud without disrupting its code base will merely be operating in a sustaining mode, not achieving significant gains in efficiency. In such a PaaS-like offering, the prohibitive cost and onerous effort of after-market solution development remain a huge barrier to entry for customers. To become SaaS-like, a cloud service must be useful by default, immediately of value to the end users of its enterprise tenants.

Cloud services are disruptive by providing improved user experiences. Of course, this means a friendlier Web user interface that is optimized for users to perform their work more easily and intuitively. User interfaces need to be responsive to device screen size, orientation, locale, and input method. Cloud services also provide advantages for enterprise collaboration by enabling the work force to be mobile. Workers need to become more decoupled in space and time, so they can be more geographically dispersed and global in reach. Cloud services should assist in transforming how employees work together, not just replacing the same old ways of doing our jobs using a Web browser instead of a desktop application. Mobile applications may even enable new ways of interacting that are not recognizable today.

Cloud services are disruptive economically. Subscription pricing replaces perpetual software licensing and annual maintenance costs along with the capital costs of hardware infrastructure, IT staffing to operate an on-premise deployment, and on-going infrastructure maintenance, upgrades, and scaling. Subscription pricing in and of itself is not transformational. It is only superficially different by virtue of amortizing the traditional cost of on-premise deployment over many recurring payments. The main benefit is in eliminating the financial risk associated with huge up-front capital expenditures in case the project fails. Migrating a traditional on-premise application into the cloud is not really financially disruptive unless it can significantly alter the costs involved. In fact, by taking on the capital cost of infrastructure and the operational cost of the deployment, the software vendor has now cannibalized its on-premise application business and replaced it with a lower margin business with high upfront costs and risk—this is a terrible formula for profitability and a healthy business.

Multi-tenancy provides this disruptive benefit. Multi-tenancy enables a cloud service to support users from multiple tenants. This provides significant cost advantages over single-tenant deployments in terms of resource utilization, simplified operations, and economies of scale. Higher deployment density translates directly into higher profit, but by itself multi-tenancy provides no visible benefit to users. The disruption comes when the vendor realizes that at scale multi-tenancy enables a new tenant to be provisioned at near zero cost. This opens up the possibility of offering an entry-level service to new tenants at a low price point, because the marginal cost to the vendor is near zero. Near-zero-cost entry-level pricing is transformational by virtue of making a cloud service available to small enterprises who would never have been able to afford such capabilities in the past. This enables innovation by individual or small-scale entrepreneurs (start-ups), who have the most radical, risky, unconventional, paradigm-shifting ideas.

Elastic scaling provides another disruptive benefit. It enables a cloud service to perform as required as a tenant grows from seeding a proof-of-concept demonstrator to large scale (so-called Web scale) production. The expertise, techniques, and resources needed to scale a deployment are difficult and costly to acquire. When a vendor can provide this capability pain-free, an enormous burden is lifted from the tenant’s shoulders.

Cloud services evolve with the times through DevOps and continuous delivery. Traditional on-premise applications tend to be upgraded rarely due to the risk and high development cost of customization, which tends to suffer from compatibility breakage. Enterprise applications are often not upgraded for years. “If it ain’t broke, don’t fix it.” Even though the software vendor may be investing heavily in feature enhancements, functional and performance improvements, and other innovations, users don’t see the benefits in a timely manner, because the enterprise cannot afford the pain of upgrading. When the vendor operates the software as a SaaS offering, upgrades can be deployed frequently and painlessly for all tenants. Users enjoy the benefit of software improvements immediately, as the cloud service stays up-to-date with the current competitive business environment.

Combining the abilities to provision a tenant to be useful immediately by default, to start at near zero cost, to scale with growth, and to evolve with the times, cloud services provide tools that can enable business agility. A business needs to be able to turn on a dime, changing what it sells and how it operates in order to stay ahead of its competitors. Cloud services are innovative and disruptive in these ways in order to enable their enterprise tenants to be innovative and disruptive.

intent modeling as a programming language

In Tom Nolle’s blog article titled What NFV Needs is “Deep Orchestration”!, he identifies the need for a modernized Business Support System and Operations Support System (BSS/OSS) to improve operations efficiency and service agility by extending service-level automation uniformly downward into the network resources, including the virtual network functions and the NFV infrastructure.

Service orchestration is the automation of processes to fulfill orders for life cycle managed information and communications technology services. Traditionally, this process automation problem has been solved through business process modeling and work flow management, which includes a great deal of system integration to glue together heterogeneous software components that do not naturally work together. The focus of process modeling is on “how” to achieve the desired result, not on “what” that result is. The “what” is the intent; the content of the order captures the intent.

To achieve agility in launching services, we must be able to model services in a manner that allows a service provider to redefine the service to suit the current business need. This modeling must be done by product managers and business analysts, experts in the service provider’s business. Any involvement of software developers and system integrators will necessarily require programming at a level of abstraction that is far below the concepts that are natural to the service provider’s business. The software development life cycle is very costly and risky, because the abstractions are so mismatched with the business. When service modeling directly results in its realization in a completely automated executable runtime without involving other humans in any software development activities, this becomes Programming for Non-programmers.

The key is Going Meta. The “what” metadata is the intent modeling. The “how” metadata is the corresponding fulfillment and provisioning behavior (service-level automation). If the “what” and “how” can be designed as a language that can be expressed in modular packages, which are reusable by assembling higher level intent based on lower level components, this would provide an approach that would facilitate the service agility users are looking for. Bundling services together and utilizing lower level services as resources that support a higher level service are techniques already familiar to users who design services. When users express themselves using this language, they are in fact programming, but because the language is made up entirely of abstractions that are familiar and natural to the business, it does not feel burdensome. General purpose programming languages like Java feel burdensome, because the abstractions are for a low level computational machine, not a high level business-oriented machine for service-level automation of human intent.
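To make the separation concrete, here is a minimal sketch of the idea, in which every name and service type is hypothetical: intents declare the “what” as data, fulfillers registered per intent type supply the “how”, and higher level intents are assembled from lower level components.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: an intent names a desired outcome ("what");
# a fulfiller registered for that intent kind supplies the "how".
@dataclass
class Intent:
    kind: str                                       # e.g. "vpn-tunnel"
    params: dict = field(default_factory=dict)
    components: list = field(default_factory=list)  # lower-level intents

FULFILLERS = {}  # kind -> function that realizes intents of that kind

def fulfiller(kind):
    def register(fn):
        FULFILLERS[kind] = fn
        return fn
    return register

def fulfill(intent):
    # Realize lower-level component intents first, then the composite,
    # so lower-level services become resources for the higher-level one.
    parts = [fulfill(c) for c in intent.components]
    return FULFILLERS[intent.kind](intent.params, parts)

@fulfiller("vpn-tunnel")
def vpn_tunnel(params, _parts):
    return f"tunnel({params['endpoint']})"

@fulfiller("managed-vpn-service")
def managed_vpn(params, parts):
    # Higher-level service assembled from lower-level resources.
    return f"service[{params['customer']}: {', '.join(parts)}]"

order = Intent("managed-vpn-service", {"customer": "acme"},
               [Intent("vpn-tunnel", {"endpoint": "site-a"}),
                Intent("vpn-tunnel", {"endpoint": "site-b"})])
print(fulfill(order))  # service[acme: tunnel(site-a), tunnel(site-b)]
```

The point of the sketch is that the order (the `Intent` tree) contains only business-level abstractions; the fulfillment behavior is packaged separately and resolved by the runtime, which is what would let a product manager redefine a service without touching code.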

Our challenge in developing a modernized BSS/OSS is to invent this language for intent modeling for services. An IETF draft titled YANG Data Models for Intent-based NEtwork MOdel attempts to define a flavor of intent modeling for network resources. An IETF draft titled Intent Common Information Model attempts to define a flavor of intent modeling that is very general, but it is far removed from any executable runtime that can implement it, because it is so imprecise (not machine executable). ETSI NFV MANO defines an approach that captures intent as descriptors for network services as network functions and their connections. However, these abstractions are not expressive enough to extend upward into the service layer, across the entire spectrum of network technologies (physical and virtualized), and into the “how” for automation, to enable the composition of resources into services and the utilization of services as resources to support higher level services that can be commercialized. More thought is needed to design a good language for this purpose and a virtual machine that is capable of executing the code that is produced from it.

vertical integration

Applications have been pursuing operational efficiency through vertical integration for years. This is generally understood to mean assembling infrastructure (machine and operating system) with platform components (database, middleware) and application components into an engineered system that is pre-integrated and optimized to work together.

Now, the evolution to cloud services is following the same pattern. IaaS is integrated into PaaS. IaaS and PaaS are integrated with application components to deliver SaaS. However, just as we see in on-premise enterprise information systems, applications do not operate in silos. They are integrated by business processes, and they must collaborate to enforce business policies across business functions and organizations.

Marketing is deeply interwoven with sales. Product configuration, pricing, and quotation are tied to order capture and fulfillment. Fulfillment involves inventory, shipping, provisioning, billing, and financial accounting. Customer service is linked with various service assurance components, billing care, and also quote and order capture. All components need views of accounts, assets (products and services subscribed to), agreements, contracts, and warranties. Service usage and demand all feed analytics to drive marketing campaigns that generate more sales. What a tangled web.

What is clear from this picture is that vertical integration does not end with infrastructure, platform, and a software application. Applications contribute components that combine with business processes and business policies to construct higher level applications. This may continue for many layers of integration according to the self-similar paradigm.

The evolution to cloud should recognize the need for integration of SaaS components with business processes and business policies. However, it does not appear as though cloud services have anticipated the need for vertical integration to continue in layers. To construct assemblies, the platform should provide a means of defining such assemblies, so that they can be replicated by packaging and deploying them on infrastructure at various scales. The platform should provide a consistent programming and configuration model for extending and customizing applications in ways that are natural to being reapplied layer by layer.
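As a rough illustration of what “defining assemblies so they can be replicated at various scales” might look like, here is a sketch in which the assembly name, component names, and deployment function are all hypothetical: the assembly is declared as data, so one definition can be instantiated at different scales.

```python
# Hypothetical sketch: an assembly of application components declared
# as data, so the same definition can be deployed at different scales.
ASSEMBLY = {
    "name": "order-to-cash",
    "components": ["crm", "order-capture", "billing"],
    "processes": ["capture-order", "fulfill-order", "bill-customer"],
}

def deploy(assembly, scale):
    # Replicate each component `scale` times; return the instance names
    # that would be provisioned onto the infrastructure.
    return [f"{component}-{i}"
            for component in assembly["components"]
            for i in range(scale)]

print(deploy(ASSEMBLY, 2))
```

A declarative definition like this is what would allow the same layered assembly, components plus processes plus policies, to be packaged once and stamped out repeatedly, which is the self-similar layering the paragraph above calls for.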

Vertical integration is not an elegantly solved problem for on-premise applications. On-premise application integration is notoriously complex due to heterogeneity and vastly inconsistent interfaces and programming models. One component’s notion of customer is another’s notion of party. Two components with notions of customer do not agree on its schema and semantics. A product to one component is an offer to another. System integration projects routinely cost five to ten times the software license cost of the application components, because of the difficulty of overcoming impedance mismatches, gaps in functional capabilities, duct tape, and bubblegum.
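The customer-versus-party mismatch can be made concrete with a small sketch; the field names on both sides are invented for illustration. One component’s flat “customer” record must be adapted into another component’s “party” schema, and attributes with no counterpart simply fall on the floor, which is exactly the kind of gap integrators end up papering over.

```python
# Hypothetical sketch: component A models a "customer"; component B
# models a "party". All field names here are illustrative assumptions.
customer_a = {"customerId": "C42", "name": "Acme Corp", "tier": "gold"}

def to_party(customer):
    # Adapter translating A's "customer" into B's "party" schema.
    # Note that "tier" has no counterpart in B and is silently lost:
    # a functional gap the integration must compensate for elsewhere.
    return {
        "partyRef": customer["customerId"],
        "legalName": customer["name"],
        "role": "customer",   # B treats "customer" as a role of a party
    }

print(to_party(customer_a))
```

Every pair of components needs an adapter like this, in both directions, which is one reason integration projects cost multiples of the license fees.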

Examining today’s cloud platforms and the applications built upon them, it looks as though we have not learned much from our past mistakes. We are faced with the same costly and clunky integration nightmare with no breakthrough in sight.