Tag Archives: platform

Decentralization: Be Unstoppable and Ungovernable

The trucker’s freedom convoy in Canada has revealed how vulnerable individuals are to tyrannical (rights-violating) actions. Governments and corporations cooperated with authoritarian diktats across jurisdictional boundaries. Maajid Nawaz warns of totalitarian power over the populace through a social credit system imposed via central bank digital currency (CBDC) regimes that are being developed to eliminate cash. “Programmable” tokens will give the state the power to control who may participate in financial transactions, with whom, when, for what, and how much. Such a regime would enable government tyranny to reign supreme over everyone and everything within its reach. We need decentralization.

Centralized dictatorial power is countered by decentralization. Decentralization is especially effective when designed into technology to be immutable after the technology proliferates. The design principle is known as “Code is law.” The Proof of Work (PoW) consensus algorithm in Bitcoin is one such technology. CBDC is an attempt to prevent Bitcoin from becoming dominant. Criticism of PoW for using too much electricity is another enemy tactic.

National and supranational powers (above nation states) are working against decentralization in order to preserve their dominance. The World Economic Forum (WEF) is installing its people into national legislatures and administrations to enact policies similar to those of the Chinese Communist Party (CCP). They seek to concentrate globalized power for greater centralization of control.

We look toward Web3 and beyond to enable decentralization of digital services. As we explore decentralized applications, we must consider the intent behind distributed architectures for decentralization. What do we want from Web3?

Unstoppable Availability

Traditionally, we think about availability with regard to failure modes in the infrastructure, platform services, and application components. Ordinarily, we do not design for resiliency to the total loss of infrastructure and platform, because we don’t consider our suppliers to be potentially hostile actors. However, multinational corporations are partnering with foreign governments to impose extrajudicial punishments on individuals. This allows governments to extend their reach to those who reside outside their jurisdictions. Global integration and the unholy nexus of governments with corporations put individuals everywhere within the reach of unjust laws and authoritarian diktats. It is clear now that this is one of the greatest threats that must be mitigated.

Web3 technologies, such as blockchain, grew out of recognition that fiat is the enemy of the people. We must decentralize by becoming trustless and disintermediated. Eliminate single points of failure everywhere. Run portably on compute, storage, and networking that are distributed across competitive providers. Choose a diversity of providers in adversarial jurisdictions across the globe. Choose providers that would be uncooperative with government authorities. When totalitarianism comes, Bitcoin is the countermove. Decouple from centralized financial systems, including central banking and fiat currencies. Become unstoppable and ungovernable, resistant to totalitarianism.

To become unstoppable, users need to gain immunity from de-platforming and supply chain disruption. Users need to be able to keep custody of their own data. Users need to self-host the application logic that operates on their data. Users need to compose other users’ data for collaboration without going through intermediaries (service providers who can block or limit access).

To achieve resiliency, users need to be able to migrate their software components to alternative infrastructure and platform providers, while maintaining custody of their data across providers. At a minimum, this migration must be doable by performing a human procedure with some acceptable interruption of service. Ideally, the possible deployment topologies would have been pre-configured to fail-over or switch-over automatically as needed with minimal service disruption. Orchestrating the name resolution, deployment, and configuration of services across multiple heterogeneous (competitive) clouds is a key ingredient.

Custody of data means that the owner must maintain administrative control over its storage and access. The owner must have the option of keeping a copy of it on physical hardware that the owner controls. Self-hosting means that the owner must maintain administrative control over the resources and access for serving the application functions to its owner. That hosting must be unencumbered and technically practical to migrate to alternative resources (computing, financial, and human).

If Venezuela can be blocked from “some Ethereum services”, that is a huge red flag. Service providers should be free to block undesirable users. But if the protocol and platform enable authorities to block users from hosting and accessing their own services, then the technology is worthless for decentralization. Decentralization must enable users to take their business elsewhere.

Ungovernable Privacy

Privacy is a conundrum. Users need a way to identify and authenticate themselves to exert ownership over their data and resources. Simultaneously, a user may have good reason to keep their identity hidden, presenting only a pseudonym or remaining cloaked in anonymity in public, where appropriate. Meanwhile, governments are becoming increasingly overbearing in their imposition of “Know Your Customer” (KYC) regulations on businesses, ostensibly to combat money laundering. This is at odds with the people’s right to privacy and their right to be free from unreasonable searches and surveillance. Moreover, recruiting private citizens to spy on and enforce policy over others is commandeering, which is also problematic.

State actors have opposed strong encryption. They have sought to undermine cryptography by demanding backdoors for government access. Such misguided, technologically ignorant, and morally bankrupt motivations disqualify them from being taken seriously when it comes to designing our future platforms and the policies that should be applied.

Rights are natural (a.k.a. “God-given” or inalienable). They (including privacy) are not subject to anyone’s opinion regardless of their authority or stature. Cryptographic technology should disregard any influence such authorities want to exert. We must design for maximum protection of confidentiality, integrity, and availability. Do not comply. Become ungovernable.

Composability

While the capabilities and qualities of the platform are important, we should also reconsider the paradigm for how we interact with applications. Web2 brought us social applications for human networking (messaging, connecting), media (news, video, music, podcasts), and knowledge (wikis). With anything social, group dynamics invariably also expose us to disharmony. Web2 concentrated power into a few Big Tech platforms; the acronym FAANG was coined to represent Facebook (now Meta), Amazon, Apple, Netflix, and Google (now Alphabet).

With centralized control comes disagreement over how such power should be wielded as well as corruption and abuse of power. It also creates a system that is vulnerable to indirect aggression, where state actors can interfere or collude with private actors to side-step Constitutional protections that prohibit governments from certain behaviors.

David Sacks speaks with Bari Weiss about Big Tech’s assault on free speech and the hazard of financial technologies being used to deny service to individuals, as was done to the political opponents of Justin Trudeau in Canada in response to the freedom convoy protests.

Our lesson, after enduring years of rising tension in the social arena culminating in outright tyranny, is that centralized control must disappear. Social interactions and all forms of transactions must be disintermediated (Big Tech must be removed as the middlemen). The article Mozilla unveils vision for web evolution shows Mozilla’s commitment to an improved experience from a browser perspective. However, we also need a broader vision from an application (hosted services) perspective.

The intent behind my thoughts on Future Distributed Applications and Browser based capabilities is composability. The article Ceramic’s Web3 Composability Resurrects Web 2.0 Mashups talks about how Web2 composability of components enabled mashups, and how Web3 enables composability of data. The focus is shifting from the ease of developing applications from reusable components to satisfying the growing needs of end users.

Composability is how users with custody of their own data can collaborate among each other in a peer-to-peer manner to become social, replacing centralized services with disintermediated transactions among self-hosted services. The next best alternative to self-hosting is enabling users to choose between an unlimited supply of community-led hosted services that can be shared by like-minded mutually supportive users. The key is to disintermediate users from controlling entities run by people who hate them.

State of Technology

The article My First Web3 Webpage is a good introduction to Web3 technologies. This example illustrates some very basic elements, including name resolution, content storage and distribution, and the use of cryptocurrency to pay for resources. It is also revealing of how rudimentary this stuff is relative to the maturity of today’s Web apps. Web3 and distributed apps (dApps) are extremely green. Here is a more complicated example. Everyone is struggling to understand what Web3 is. Even search is something that needs to be rethought.

The article Why decentralization isn’t the ultimate goal of Web3 should give us pause. Moxie Marlinspike, Jack Dorsey, Marc Andreessen, and other industry veterans are warning us that the current crop of Web3 technologies is fraudulent and conflicted. Vitalik Buterin’s own views suggest that the technology may not be going in the right direction. Ethereum’s deficiencies are becoming evident. This demands great caution and high suspicion.

Here is a great analysis of the critiques against today’s Web3 technologies. It is very clarifying. One important point is the ‘mountain man fantasy’ of self-hosting; no one wants to run their own servers. The cost and burden of hosting and operating services today is certainly prohibitive.

Even if the mountain man fantasy is an unrealistic expectation for the vast majority, so long as the threat of deplatforming and unpersoning is real, people will have a critical need for options to be available. When Big Tech censors and bans, when the mob mobilizes to ruin businesses and careers, when tyrannical governments freeze bank accounts and confiscate funds, it is essential for those targeted to have a safe haven that is unassailable. Someone living in the comfort of normal life doesn’t need a cabin in the woods, off-grid power, and a buried arsenal. But when you need to do it, living as a mountain man won’t be fantastic. Prepping for that fall back is what decentralization makes possible.

In the long term, self-hosting should be as easy, effortless, and affordable as installing desktop apps and mobile apps. We definitely need to innovate to make running our apps as cloud services cheap, one-click, and autonomous, before decentralization with self-hosting can become ubiquitous. Until then, our short-term goal should be to at least make decentralization practical, even if it is only accessible initially to highly motivated, technologically savvy early adopters. We need pioneers to blaze the trail in any new endeavor.

As I dive deeper into Web3, it is becoming clear that the technology choices lean toward the Ethereum blockchain to the exclusion of all else. Is Ethereum really the best blockchain on which to form a DAO? In Ethereum, application logic is expected to be written as smart contracts. Look at the programming languages available for smart contracts. Even without examining any of these languages, my immediate reaction is revulsion. Who would want to abandon popular general purpose programming languages and their enormous ecosystems? GTFO.

We need a general purpose Web architecture for dApps that is not confined to a niche. I imagine container images served by IPFS as a registry, with a next-gen Kubernetes-like platform orchestrating container execution across multicloud infrastructures and consuming other decentralized platform services (storage, load balancing, access control, auto-scaling, etc.). If the technology doesn’t provide a natural evolution for existing applications and libraries of software capabilities, there isn’t a path for broad adoption.

We are at the start of a new journey to redesign the Web. There is so much more to understand and invent before we have something usable for developing real-world distributed apps on a decentralized platform. The technology to do so may not exist yet, despite many claims to the contrary. This will certainly be more of a marathon than a sprint.

going meta – the human-machine interface

Anatomy of an n-tier application

A fully functioning web app involves several layers of software, each with its own technology, patterns, and techniques.

At the bottom of the stack is the database. A schema defines the data structures for storage. A query language is used to operate on the data. Regardless of whether the database is relational, object-relational, NoSQL, or some other type, the programming paradigm at the database tier is distinctly different from, and quite foreign to, the layers above.

Above the database is the middle tier or application server. This is where the server-side business logic, APIs, and Web components reside.

There is usually a set of persistent entities, which provide an object abstraction of the database schema. The database query language (e.g., SQL) may be abstracted into an object query language (e.g., JPQL) for convenience. The majority of CRUD (create, read, update, delete) operations can be done naturally in the programming language without needing to formulate statements in the database query language. This provides a persistent representation of the model of the application.
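For illustration, here is a minimal sketch of what such a persistent entity and an object query might look like with JPA. The Equipment entity, its columns, and the repository class are names invented for this example, not taken from any particular application.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import javax.persistence.Table;
import java.util.List;

@Entity
@Table(name = "EQUIPMENT")
public class Equipment {
    @Id
    @GeneratedValue
    private Long id;

    @Column(name = "NAME")
    private String name;

    public Long getId() { return id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
}

// CRUD and queries are written against the entity model, not hand-coded SQL.
class EquipmentRepository {
    @PersistenceContext
    private EntityManager em;

    List<Equipment> findByName(String name) {
        return em.createQuery(
                "SELECT e FROM Equipment e WHERE e.name = :name", Equipment.class)
            .setParameter("name", name)
            .getResultList();
    }
}
```

The point is that loading and querying are expressed in terms of the class and field names (Equipment, name) rather than tables and columns.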

Above the persistent entities is a layer of domain services. The transactional behavior of the business logic resides in this layer. This provides the API (local) that encapsulates the essence of the application functions.

The domain services are usually exposed as SOAP or RESTful services to remote clients for access from Web browsers and for machine-to-machine integration. This necessitates that JSON and/or XML representations be derived from the persistent entities (e.g., using JAXB). This provides a serialized representation of the model of the application.
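As a hedged sketch, assuming JAXB as the binding technology, a serialized representation might be declared as follows; the EquipmentRepresentation class and its fields are invented for this example.

```java
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

// A separate binding class whose fields mirror the persistent entity; a domain
// service copies values into it before returning XML or JSON to a remote client.
@XmlRootElement(name = "equipment")
@XmlAccessorType(XmlAccessType.FIELD)
public class EquipmentRepresentation {
    public Long id;
    public String name;
}
```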

We finally come to the presentation layer, which is divided into server-side components residing in the application server and client-side components that execute in the Web browser. Usually there is a presentation-oriented representation called a view-model, which matches the information rendered on views or input on forms. The view and controls are constructed from HTML, CSS, and JavaScript. The programming paradigm in these technologies is distinctly different from that of the layers below.

Extending the application

Let’s examine what it would take to extend an application with a simple type (e.g., string) property on an entity. The database schema would need to be altered. A persistent entity would need a field, getter and setter methods, and a binding between the field and a column in the database schema. The property may be involved in the logic of the domain services. Next, the JSON and XML binding objects would need to be augmented with the property, and logic would be added to transform between these objects and the persistent entities used by the domain services. At the presentation layer, the view-model would be augmented with the property to expose it to the views. Various views to show an entity’s details and search results would likewise be enhanced to render the property. For editing and searching, a field would need to be added on forms with corresponding validation of any constraints associated with that property and on-submit transaction handling.
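A sketch of just two of the layers touched by adding a single hypothetical serialNumber property is shown below; the ALTER TABLE statement, the view-model, and the form changes are omitted, but each follows the same repetitive pattern. The class names are invented for this example.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.Id;
import javax.xml.bind.annotation.XmlAccessType;
import javax.xml.bind.annotation.XmlAccessorType;
import javax.xml.bind.annotation.XmlRootElement;

@Entity
class RentalItem {                       // 1. persistent entity
    @Id
    Long id;

    @Column(name = "SERIAL_NUMBER")      // new field bound to the new column
    String serialNumber;
}

@XmlRootElement
@XmlAccessorType(XmlAccessType.FIELD)
class RentalItemRepresentation {         // 2. JSON/XML binding object
    Long id;
    String serialNumber;                 // new element, copied to and from the entity
}
```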

That is an awful lot of repetitive work at every layer. There are many technologies and skill sets involved. Much of the work is trivial and tedious. The entire process is far from efficient. It is worse if there is division of labor among multiple developers who require coordination.

A better platform

When confronted with coordinating many concomitant coding activities to accomplish a single well-defined goal, it is natural for an engineer to solve the more general problem rather than doing tedious work repeatedly. The solution is to “go meta”; instead of programming inefficiently, develop a better language to program in. Programming has evolved from machine language to assembly language so that humans can express instructions more intuitively. Assembly evolved to structured languages with a long history of advances in control and data flow. Programming languages have evolved in conjunction with virtualization of the machine (i.e., bytecode) to provide better abstractions of software and hardware capabilities. In the spirit of Guy L. Steele’s Growing a Language talk from OOPSLA ’98, components, libraries, and frameworks have been developed in programming languages that themselves support being extended, within limits. All of these innovations continually raise the level of abstraction to increase human productivity.

We are hitting the limits of what can be expressed efficiently in today’s languages. We have a database storage abstraction that is separate from server-side application logic, which is itself separate from client-side (Web browser) presentation. There is growing support for database and server-side abstractions to scale beyond the confines of individual machines. Clustering enables software to take advantage of multiple machines to distribute load and provide redundancy in case of failure. However, our abstractions seem to stop at the boundaries between database storage, server-side application logic, and client-side presentation. Hence, we have awkward impedance mismatches when integrating top-to-bottom. We also have impedance mismatches when integrating heterogeneous application components or services: RESTful and SOAP Web Services technologies cross the boundaries between distributed software components, but this style of control and data flow (remote procedure calls) is entirely foreign to the programming language. That is why we must perform inconvenient translations between persistent entities and their bindings to various serialized representations (JSON, XML).

It seems natural that these pain points will be relieved by again raising the level of abstraction so that these inefficiencies will be eliminated. Ease of human expression will better enable programming for non-programmers. We are trying to shape the world so that humans and machines can work together harmoniously. Having languages that facilitate effective communication is a big part of that. To get this right, we need to go meta.

Reliable Messaging with REST

Marc de Graauw’s article Nobody Needs Reliable Messaging remains as relevant today as when it was first published in 2010. It echoes the principles outlined in Scalable, Reliable, and Secure RESTful services from 2007.

It basically says that you don’t need REST to support WS-ReliableMessaging delivery requirements, because reliable delivery can be accomplished by the business logic through retries, so long as the REST methods are idempotent (the same request will produce the same result). Let’s examine the implications in more detail.

First, we must design the REST methods to be idempotent. This is no small feat. This is a huge topic that deserves its own separate examination. But let’s put this topic aside for now, and assume that we have designed our REST web services to support idempotence.
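As an illustrative sketch only (the OrderResource class, the Order type, and the URI layout are invented for this example), one common way to get idempotence is to let the client choose the resource identifier and use PUT rather than POST:

```java
import java.util.concurrent.ConcurrentHashMap;

import javax.ws.rs.Consumes;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Response;

@Path("/orders")
public class OrderResource {

    // In-memory store standing in for the real persistence layer of this sketch.
    private static final ConcurrentHashMap<String, Order> STORE = new ConcurrentHashMap<>();

    // The client chooses the order identifier, so repeating the same PUT after a
    // lost response simply overwrites the resource with identical content.
    @PUT
    @Path("{orderId}")
    @Consumes("application/json")
    public Response createOrReplace(@PathParam("orderId") String orderId, Order order) {
        boolean created = STORE.put(orderId, order) == null;
        return created
                ? Response.status(Response.Status.CREATED).build()
                : Response.noContent().build();
    }
}

class Order {
    public String item;
    public int quantity;
}
```

Retrying the same PUT yields the same server state, which is exactly the property the retry logic discussed below depends on.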

If we are developing components that call REST web services for process automation, the above principle says that the caller is responsible for retrying on failure.

The caller must be able to distinguish a failure to deliver the request from a failure by the server to perform the requested method. The former should be retried, expecting that the failure is temporary. The latter is permanent.

The caller must be able to implement retry in an efficient manner. If the request is retried immediately in a tight loop, it is likely to continue to fail for the same reason. Network connectivity issues sometimes take a few minutes to be resolved. However, if the reason for failure is because the server is overloaded, having all clients retry in a tight loop will exacerbate the problem by slamming the server with a flood of requests, when it is least able to process them. It would be helpful if clients would behave better by backing off for some time and retrying after a delay. Relying on clients to behave nicely on their honor is sure to fail, if their retry logic is coded ad hoc without following a standard convention.

The caller must be able to survive crashes and restarts, so that an automated task can be relied upon to reach a terminal state (success or failure) after starting. Therefore, message delivery must be backed by a persistent store. Delivery must be handled asynchronously so that it can be retried across restarts (including service migration to replacement hardware after a hardware failure), and so that the caller is not blocked waiting.

The caller must be able to detect when too many retry attempts have failed, so that it does not get stuck waiting forever for the request to be delivered. Temporary problems that take too long to be resolved need to be escalated for intervention. These requests should be diverted for special handling, and the caller should continue with other work, until someone can troubleshoot the problem. Poison message handling is essential so that retrying does not result in an infinite loop that would gum up the works.
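Pulling the last few requirements together, here is a minimal sketch of bounded retry with exponential backoff and poison-message diversion. The Transport, DeadLetterQueue, Request, and exception types are placeholders invented for this example, not part of any standard framework.

```java
import java.util.concurrent.TimeUnit;

class ReliableSender {
    private static final int MAX_ATTEMPTS = 5;
    private static final long BASE_DELAY_MILLIS = 500;

    private final Transport transport;
    private final DeadLetterQueue deadLetters;

    ReliableSender(Transport transport, DeadLetterQueue deadLetters) {
        this.transport = transport;
        this.deadLetters = deadLetters;
    }

    void send(Request request) throws InterruptedException {
        for (int attempt = 1; attempt <= MAX_ATTEMPTS; attempt++) {
            try {
                transport.deliver(request);      // idempotent REST call underneath
                return;                          // delivered; we are done
            } catch (TransientDeliveryException e) {
                // Back off exponentially: 500ms, 1s, 2s, 4s, ...
                long delay = BASE_DELAY_MILLIS << (attempt - 1);
                TimeUnit.MILLISECONDS.sleep(delay);
            }
        }
        // Too many failures: divert for human troubleshooting and move on.
        deadLetters.divert(request);
    }
}

interface Transport { void deliver(Request request) throws TransientDeliveryException; }
interface DeadLetterQueue { void divert(Request request); }
class Request { String body; }
class TransientDeliveryException extends Exception { }
```

In a real system the request and its attempt count would live in a persistent store so that retries survive crashes and restarts, as described above.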

POST methods are not idempotent, so retry must be handled very carefully to account for side-effects. Even if the request is guaranteed to be delivered, and it is processed properly (exactly once) by the server, the caller must still be able to reliably determine whether the method succeeded, because the reply can be lost. One approach is to deliver the reply reliably from the server back to the caller. Again, all of the above reliable delivery qualities apply. The interactions to enable this round trip message exchange certainly look very foreign to the simple HTTP synchronous interaction. Either the caller would poll for the reply, or a callback mechanism would be needed. Another approach is to enable the caller to confirm that the original request was processed. With either approach, the reliable execution requirement alters the methods of the REST web services. To achieve better quality of service in the transport, the definitions of the methods need to be radically redesigned. (If you are having a John McEnroe “you cannot be serious” moment right about now, it is perfectly understandable.)

Taking these requirements into consideration, it is clear that it is not true that “nobody needs reliable messaging”. Enterprise applications with automated processes that perform mission-critical tasks need the ability to perform those tasks reliably. If reliable message delivery is not handled at the REST layer, the responsibility for retry falls to the message sender. We still need reliable messaging; we must implement the requirement ourselves above REST, and this becomes troublesome without a standard framework that behaves nicely. If we accept that REST can provide only idempotence toward this goal, we must implement a standard framework to handle delivery failures, retry with exponential back off, and divert poison messages for escalation. That is to say, we need a reliable messaging framework on top of REST.

[Note that when we speak of a “client” above, we are not talking about a user sitting in front of a Web browser. We are talking about one mission-critical enterprise application communicating with another in a choreography to accomplish some business transaction. An example of a choreography is the interplay between a buyer and a seller through the systems for commerce, quote, procurement, and order fulfillment.]

OLTP database requirements

Here is what I want from a database in support of enterprise applications for online transaction processing (OLTP).

  1. ACID transactions – Enterprise CRM, ERP, and HCM applications manage data that is mission critical. People’s jobs, livelihoods, and businesses rely on this data to be correct. Real money is on the line.
  2. Document oriented – A JSON or XML representation should be the canonical way to think of objects stored in the database.
  3. Schema aware – A document should conform to a schema (JSON Schema or XML Schema). Information has a structure and meaning, and it should have a formal definition.
  4. Schema versioned – A document schema may evolve in a controlled manner. Software is life cycle managed, and its data needs to evolve with it for compatibility, upgrades, and migration.
  5. Relational – A subset of a document schema may be modeled as relational tables with foreign keys and indexes to support SQL queries, which can be optimized for high performance.

The fundamental shift is from a relational to a document paradigm as the primary abstraction. Relational structures continue to play an adjunct role to improve query performance for those parts of the document schema that are heavily involved in query criteria (WHERE clauses). The document paradigm enables the vast majority of data to be stored and retrieved without having to rigidly conform to a relational schema, which cannot evolve as fluidly. That is not to say that data stored outside of relational tables is less important or less meaningful. To the contrary, some of the non-relational data may be the most critical to the business. This approach simply recognizes that information not directly involved in query criteria can be treated differently to take advantage of greater flexibility in schema evolution and life cycle management.

Ideally, the adjunct relational tables and SQL queries would be confined by the database to its internal implementation. When exposing a document abstraction to applications, the database should also present a document-oriented query language, such as XQuery or its equivalent for JSON, which would be implemented as SQL, where appropriate as an optimization technique.

NoSQL database technology is often cited as supporting a document paradigm. NoSQL technologies as they exist today do not meet the need, because they do not support ACID transactions and they do not support adjunct structures (i.e., relational tables and indexes) to improve query performance in the manner described above.

Perhaps the next best thing would be to provide a Java persistent entity abstraction, much like EJB3/JPA, which would encapsulate the underlying representation in a document part (e.g., an XMLType or a JSON CLOB column) and a relational part, all stored in a SQL database. It would also provide JAXB-like serialization and deserialization to and from JSON and XML representations. This is not far from what EclipseLink does today.
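A rough sketch of such a hybrid entity follows, with invented names (PurchaseOrder, CUSTOMER_ID, DOCUMENT) and the document stored as a JSON CLOB purely for illustration; this is one way it could look, not a prescription.

```java
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.Lob;
import javax.persistence.Table;

@Entity
@Table(name = "PURCHASE_ORDER")
public class PurchaseOrder {
    @Id
    @GeneratedValue
    private Long id;

    // Adjunct relational part: columns that appear in WHERE clauses, indexed in
    // the schema and kept in sync with the document on every write.
    @Column(name = "CUSTOMER_ID")
    private Long customerId;

    @Column(name = "STATUS")
    private String status;

    // Document part: the canonical JSON representation of the whole order,
    // stored as a CLOB so its schema can evolve without ALTER TABLE.
    @Lob
    @Column(name = "DOCUMENT")
    private String documentJson;
}
```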

transparent persistence

Advantages

Transparent persistence has emerged into the mainstream over the past few years with the popularity of JDO and JPA for enterprise application development. This approach offers the following advantages.

  1. Domain modeling is expressed naturally as plain old Java objects (POJOs) without having to program any of the SQL or JDBC calls that are traditionally coded by hand.
  2. Navigation through relationships – objects are naturally related through references, and navigating a relationship will automatically load the related object on demand.
  3. Modified objects are stored automatically when the transaction is committed.
  4. Persistence by reachability – related objects are automatically stored, if they are reachable from another persistent object.

The programming model is improved by eliminating the tedium that is traditionally associated with object persistence. Loading, storing, and querying are all expressed in terms of the Java class and field names, as opposed to the physical schema names. The programmer is largely insulated from the impedance mismatch between Java objects and the relational database. The software can be expressed purely in terms of the domain model, as represented by Java objects.

Challenges

When developing domain objects, persistence is only one aspect. The business logic that applies to the graph of related objects is the most important concern. Transparent persistence introduces challenges to executing business logic to enforce constraints and complex business rules when creating, updating, and deleting persistent objects through reachability.

For example, an equipment rental application may need to enforce the following constraints:

  • When creating equipment, it must be related to a location.
  • When creating a rental, it must be related to a customer, and ensure that the equipment is available for the duration of the rental.
  • When updating a rental, it must ensure that equipment is available for the duration of the rental.

JPA 2.0 does not provide sufficient mechanisms for enforcing these constraints when creating or updating these entities through reachability. The responsibility is placed on a service object to manage these graphs of entities. The constraint checking must be enforced by the service object per transaction. Java EE 5 does not provide any assistance to ensure that the constraint checks (implemented in Java) are deferred until commit, so that they are not repeated when performing a sequence of operations in the same transaction.
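To make the burden concrete, here is a sketch of the status quo described above, with invented names (RentalService, AvailabilityChecker); every service operation must remember to re-run the same check within the transaction.

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;

public class RentalService {
    @PersistenceContext
    private EntityManager em;

    private final AvailabilityChecker checker = new AvailabilityChecker();

    public void createRental(Rental rental) {
        checker.assertAvailable(rental);   // repeated here ...
        em.persist(rental);
    }

    public void updateRental(Rental rental) {
        checker.assertAvailable(rental);   // ... and here, for every operation
        em.merge(rental);
    }
}

class AvailabilityChecker {
    void assertAvailable(Rental rental) {
        // query for overlapping rentals of the same equipment and fail if found
    }
}

@Entity
class Rental {
    @Id
    Long id;
}
```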

Adding a preCommit event to a persistent object would provide a good place for expressing constraints. Allowing this event to be deferred until transaction commit would provide the proper optimization for good performance. Of course, preCommit would need to prevent any further modifications to the persistent objects enlisted in the transaction. This would factor out many of the invariants so that they are expressed per entity, removing the responsibility from every operation on service objects, which is prone to programmer error. The domain model would be greatly improved.

java christmas wish list 2008

JPA preCommit

Similar to the other events (PrePersist, PreRemove, PostPersist, PostRemove, PreUpdate, PostUpdate, and PostLoad), JPA needs to add a preCommit event. This would be useful for enforcing constraints (invariants) using Java logic, similar to how less expressive deferred constraints can be enforced in SQL.
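Here is a purely hypothetical sketch of how such a callback might be declared; the @PreCommit annotation does not exist in JPA and is defined locally only to show the idea.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

import javax.persistence.Entity;
import javax.persistence.Id;

// Hypothetical annotation: JPA does not define this; it is sketched here only
// to show how the proposed callback might be declared.
@Target(ElementType.METHOD)
@Retention(RetentionPolicy.RUNTIME)
@interface PreCommit { }

@Entity
class Rental {
    @Id
    Long id;

    java.util.Date start;
    java.util.Date end;

    // Would run once per dirty entity at commit, analogous to @PreUpdate but deferred.
    @PreCommit
    void checkInvariants() {
        if (start == null || end == null || !start.before(end)) {
            throw new IllegalStateException("rental period must be a valid interval");
        }
        // Equipment availability for the whole period would also be verified
        // here, once at commit, instead of after each intermediate change.
    }
}
```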

read-only transaction

javax.transaction.UserTransaction needs the ability to begin a transaction with an awareness of whether the transaction will be read-only or read-write. A read-only transaction would prevent writes (inserts, updates, and deletes) from being done.

dynamic immutability

It would be helpful if an instance of an object could be mutable when used by some classes (e.g., builder, factory, repository, deserializer), and immutable when used by others. This would make it possible to load persistent objects from a data store, derive transient fields from persistent fields, and then mark the instance as immutable if the transaction is read-only. I do not want to develop entities that have both a mutable class and an immutable class; and access control (private, protected) is not sufficient if the mutability depends on context (e.g., a read-only transaction).
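One possible shape for this, sketched with invented names and no claim that it is the right design: a single class whose mutators respect a seal flag flipped by the loading context.

```java
public class Account {
    private String owner;
    private boolean sealed;          // once true, all setters refuse to mutate

    public String getOwner() { return owner; }

    public void setOwner(String owner) {
        if (sealed) {
            throw new IllegalStateException("immutable in this context");
        }
        this.owner = owner;
    }

    /** Called by the repository after loading within a read-only transaction. */
    void seal() { this.sealed = true; }
}
```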

OOPs, here comes SOA (again)

Last week, I posted the following facetious comment about Service Oriented Architecture (SOA).

Posted By: Ben Eng on May 24, 2004 @ 09:27 PM in response to Message #123260

Strip all the behavior off of our domain objects. Define data structures to pass around. Provide sets of operations as stateless services, which perform functions on data, and return data to its callers. That sounds revolutionary.

All we have to do now is eliminate the object-oriented programmers, and the revolution will be complete.

There is a serious side to the issue. The recent popularity of SOA should give object oriented programmers much pause. Where do objects fit in the world of services? Are objects passé?

In the book Bitter EJB (p. 57), the distinction is made between service-driven and domain-driven approaches. I am a strong proponent of domain-driven approaches. However, maybe three years ago, I began feeling that a purely domain-driven approach was awkward for modeling object graphs with very sophisticated global constraints. This is a natural topic of interest, considering that I model telecommunications networks for a living.

Factoring out the global constraints, instead of embedding them within the domain objects, seemed natural. I started to develop stateless session local interfaces (using plain old Java objects, not EJB), which use domain objects as parameters and return values. Then I found myself adding conversational state to the session objects. I’m not sure where I’m going with this yet. We’ll see after a few more weeks or months of experimentation.

After having developers fumble around with EJBs for the past four years, I am becoming disillusioned with the technology. XML and Web Services are going even further down the road of coarse-grained distributed components, which are a consequence of the high latency caused by expensive data marshaling and remote communications. Whenever EJB or Web Services are involved, the client code looks like garbage – calling functions on data, rather than methods on objects. Optimizing the interface to reduce client-server round trips pollutes the interface until it also looks like garbage; objects become so polluted with denormalized data that the concepts are no longer recognizable. When faced with sophisticated object graphs, rather than simple services, the approach ultimately leads to garbage and more garbage. I’m tempted to call the approach Garbage Oriented Architecture, rather than SOA.

proprietary J2EE

I am less than impressed with this article.

The author is arguing that proprietary extensions threaten the promise of standards based application portability—that Java’s promise of “write once, run anywhere” is dead, and was never a reality anyway. This demonstrates to me a tremendous lack of vision.

The J2EE platform is growing as a language. This means starting small and collecting the best contributions over time into a larger, more capable language. The language evolves with the community that uses it. A standards-based approach to application development suggests that if you constrain yourself to using only the standard features and interfaces, then you should be portable to any implementation. Generally, that is true of J2EE; standards-based portability is certainly more of a reality with J2EE than on any equivalent platform (e.g., Microsoft .NET).

Proprietary extensions by application server vendors are expected. Applications written to the standard will still be portable, because they do not depend on proprietary extensions. In time, the scope of the standards will expand as technologies mature and become commonplace. The industry’s best practices become encoded into standards, because the market demands it for portability (reducing costs). Innovation (proprietary extensions) will happen further out on the leading edge, where technologies are unproven. Vendors can differentiate their products based on quality of implementation and value-added features. The latter requires extending beyond the standard features.

The article paints a picture of a static standard that is being eroded. In reality, the J2EE standard is constantly evolving to expand its capabilities. Application server vendors are moving in lock step to embrace the newest standard; often, their products have already implemented the standard by the time it is officially published. In reality, J2EE application portability is continually improving rather than being threatened by this natural evolution.