I am a software architect in the Communications Applications Global Business Unit (CAGBU) within Oracle. I currently work on cloud services for Business Support System (BSS) and Operations Support System (OSS) applications for communications service providers. My previous responsibilities include architecture and product management for the RODOD and RSDOD solutions, which provide an integrated suite of BSS and OSS applications for communications providers.
I founded the Oracle Communications Service & Subscriber Management (S&SM) application and the Oracle Communications Unified Inventory Management (UIM) application. I pioneered the adoption of Object Relational Mapping (ORM) based persistence techniques within OSS applications. I introduced the XML Schema based entity relationship modeling language, which is compiled into the persistent object modeling service. I established the notion of a valid time temporal object model and database schema for life cycle management of entities and the ability to travel through time by querying with a temporal frame of reference or a time window of interest. I established the patterns for resource consumption for capacity management. I championed the development of Web based user interfaces and Web Services for SOA based integration of OSS applications. I was responsible for single handedly developing the entire prototype that formed the foundation of the current generation of the OSS inventory application. I have been engaged in solution architecture with service providers to adopt and deploy Oracle's OSS applications across the globe. I am responsible for requirements analysis and architectural design for the Order-to-Activate Process Integration Pack (PIP) proposed to integrate the OSS application suite for the communications industry using the Application Integration Architecture (AIA).
Any opinions expressed on this site are my own, and do not necessarily reflect the views of Oracle.
Here is my childhood story about dichlorodiphenyltrichloroethane.
I grew up on a 30-acre vegetable farm north of Toronto. When I was a toddler, my parents would leave me to play by myself as they and my older sisters worked the fields. I’d climb on the mountain of stacked bags of fertilizer and DDT.
My family would fill burlap sacks full of DDT and walk among the vegetable fields, shaking the powder into a fog. Womp! Womp! Womp! I don’t remember if they bothered to wear respirators.
One day, my mom returned to the DDT mountain to find me digging into a bag with my hands elbow deep. I had smeared the white powder all over my face. I told her I was wearing it as makeup. She tells everyone that is why I grew up to be healthy and strong.
The moral of the story is: DDT was so safe that farmers used it without any protection and children played among it without being harmed. After DDT was banned, we switched to toxic chemicals like Malathion, Parathion, Captan, Diazinon, and others.
Unlike DDT, which harmed no one and nothing, these potent chemicals would sometimes leave the field littered with dead birds when we returned the day after spraying. No one sprays these without wearing a respirator. Deadly stuff.
A black hole’s charge and angular momentum would generate a magnetic field (per Maxwell’s equations) that accelerates accreting matter into relativistic jets, as we can observe. Matter spirals around, orbiting the black hole’s equator, and falls inward toward the event horizon. Some or all of this in-falling matter is redirected into jets firing out from the poles of the black hole, and this matter is accelerated to relativistic speeds (near the speed of light). This involves a huge transfer of energy from the black hole to the matter from the accretion disc.
Could that transfer of energy carry information away from the black hole? Maybe this hypothesis isn’t sufficient by itself to explain how a black hole’s mass shrinks — only how its angular momentum slows.
We can also hypothesize the magnetic field is strong enough to produce pairs of particles that add to the jets. Photons of the magnetic field collide to create electron-positron pairs. Now we have a mechanism, pair production, for mass to escape the black hole in a manner that is distinct from Hawking (thermal) radiation. That would be how information flows out, so it is not lost.
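The pair-production step above can be made quantitative. Two photons can create an electron-positron pair only if their combined energy in the center-of-momentum frame exceeds the pair’s rest energy; this is the standard two-photon threshold condition (not specific to black holes):

```latex
\gamma + \gamma \;\to\; e^- + e^+,
\qquad
E_1 E_2 \,(1 - \cos\theta) \;\ge\; 2\,(m_e c^2)^2 ,
```

where $\theta$ is the angle between the photon momenta. For a head-on collision ($\theta = \pi$) this reduces to $E_1 E_2 \ge (m_e c^2)^2$, with $m_e c^2 \approx 0.511\ \mathrm{MeV}$, so the hypothesized magnetic field would need to source photons of roughly MeV energies or above for this channel to operate.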
The trucker’s freedom convoy in Canada has revealed how individuals are vulnerable to tyrannical (rights violating) actions by governments and corporations cooperating with authoritarian diktats across jurisdictional boundaries. Maajid Nawaz warns of totalitarian power over the populace using a social credit system imposed via central bank digital currency (CBDC) regimes being developed to eliminate cash. “Programmable” tokens will give the state power to control who may participate in financial transactions, with whom, when, for what, and how much. Such a regime would enable government tyranny to reign supreme over everyone and across everything within its reach.
Centralized dictatorial power is countered by decentralization, especially that which is designed into technology to become unchangeable by humans after the technology proliferates. The design principle is known as Code is law. The Proof of Work (PoW) consensus algorithm in Bitcoin is one such technology. CBDC is an attempt to prevent Bitcoin from becoming dominant. Criticism of PoW using too much electricity is another tactic. National and supranational powers (above nation states) are working against decentralization in order to preserve their dominance. The World Economic Forum (WEF) is installing its people into national legislatures and administrations to enact policies similar to those of the Chinese Communist Party (CCP) to concentrate globalized power for greater centralization of control.
We look toward Web3 and beyond to enable decentralization of digital services. As we explore decentralized applications, we must consider the intent behind distributed architectures for decentralization. What do we want from Web3?
Traditionally, we think about availability with regard to failure modes in the infrastructure, platform services, and application components. Ordinarily, we do not design for resiliency to the total loss of infrastructure and platform, because we don’t consider our suppliers to be potentially hostile actors. However, global integration, and the unholy nexus of multinational corporations with foreign governments imposing extrajudicial punishments on individuals who reside outside their jurisdiction, puts those individuals within reach of the unjust laws and tyrannical diktats of authoritarians hostile to their cause. It is clear now that this is one of the greatest threats that must be mitigated.
Web3 technologies, such as blockchain, grew out of recognition that fiat is the enemy of the people, and we must decentralize by becoming trustless and disintermediated. Eliminate single points of failure everywhere. Run portably on compute, storage, and networking that are distributed across competitive providers in adversarial jurisdictions across the globe without cooperation. When totalitarianism comes, Bitcoin is the countermove. Decouple from centralized financial systems, including central banking and fiat currencies. Become unstoppable and ungovernable, resistant to totalitarianism.
To become unstoppable, users need to gain immunity from de-platforming and supply chain disruption through decentralization. Users need to be able to keep custody of their own data. Users need to self-host the application logic that operates on their data. Users need to compose other users’ data for collaboration without going through intermediaries (service providers who can block or limit access). Then, to achieve resiliency, users need to be able to migrate their software components to alternative infrastructure and platform providers, while maintaining custody of their data across providers. At a minimum, this migration must be doable by performing a human procedure with some acceptable interruption of service. Ideally, the possible deployment topologies would have been pre-configured to fail-over or switch-over automatically as needed with minimal service disruption. Orchestrating the name resolution, deployment, and configuration of services across multiple heterogeneous (competitive) clouds is a key ingredient.
Custody of data means that the owner must maintain administrative control over its storage and access, as well as having the option of keeping a copy of it on physical hardware that the owner controls. Self-hosting means that the owner must maintain administrative control over the resources and access for serving the application functions to its owner, and for that hosting to be unencumbered and technically practical to migrate to alternative resources (computing, financial, and human).
If Venezuela can be blocked from “some Ethereum services”, that is a huge red flag. Service providers should be free to block undesirable users. But if the protocol and platform enables authorities to block users from hosting and accessing their own services, then the technology is worthless for decentralization. Decentralization must enable users to take their business elsewhere.
Privacy is a conundrum. Users need a way to identify themselves and authenticate themselves to exert ownership over their data and resources. Simultaneously, a user may have good reason to keep their identity hidden, presenting only a pseudonym or remaining cloaked in anonymity in public, where appropriate. Meanwhile, governments are becoming increasingly overbearing in their imposition of “Know Your Customer” (KYC) regulations on businesses ostensibly to combat money laundering. This is at odds with the people’s right to privacy and being free from unreasonable searches and surveillance. Moreover, recruiting private citizens to spy on and enforce policy over others is commandeering, which is also problematic.
State actors have opposed strong encryption, and they have sought to undermine cryptography by demanding government access to backdoors. Such misguided, technologically ignorant, and morally bankrupt motivations disqualify them from being taken seriously, when it comes to designing our future platforms and the policies that should be applied. Rights are natural (a.k.a. “God-given” or inalienable), and therefore they (including privacy) are not subject to anyone’s opinion regardless of their authority or stature. Cryptographic technology should disregard any influence such authorities want to exert, and design for maximum protection of confidentiality, integrity, and availability. Do not comply. Become ungovernable.
While the capabilities and qualities of the platform are important, we should also reconsider the paradigm for how we interact with applications. Web2 brought us social applications for human networking (messaging, connecting), media (news, video, music, podcasts), and knowledge (wikis). With anything social, group dynamics invariably also expose us to disharmony. Web2 concentrated power into a few Big Tech platforms; the acronym FAANG was coined to represent Facebook (now Meta), Amazon, Apple, Netflix, and Google (now Alphabet). With centralized control comes disagreement over how such power should be wielded as well as corruption and abuse of power. It also creates a system that is vulnerable to indirect aggression, where state actors can interfere or collude with private actors to side-step Constitutional protections that prohibit governments from certain behaviors.
David Sacks speaks with Bari Weiss about Big Tech’s assault on free speech and the hazard of financial technologies being used to deny service to individuals, as was done to the political opponents of Justin Trudeau in Canada in response to the freedom convoy protests.
Our lesson, after enduring years of rising tension in the social arena and culminating in outright tyranny, is that centralized control must disappear. Social interactions and all forms of transactions must be disintermediated (Big Tech must be removed as the middlemen). The article Mozilla unveils vision for web evolution shows Mozilla’s commitment to an improved experience from a browser perspective. However, we also need a broader vision from an application (hosted services) perspective.
The intent behind my thoughts on Future Distributed Applications and Browser based capabilities is composability. The article Ceramic’s Web3 Composability Resurrects Web 2.0 Mashups talks about how Web2 composability of components enabled mashups, and it talks about Web3 enabling composability of data. The focus is shifting from the ease of developing applications from reusable components to satisfying the growing needs of end users. Composability is how users with custody of their own data can collaborate among each other in a peer-to-peer manner to become social, replacing centralized services with disintermediated transactions among self-hosted services. The next best alternative to self-hosting is enabling users to choose between an unlimited supply of community-led hosted services that can be shared by like-minded mutually supportive users. The key is to disintermediate users from controlling entities run by people who hate them.
Here is a great analysis of the critiques against today’s Web3 technologies. It is very clarifying. One important point is the ‘mountain man fantasy’ of self-hosting; no one wants to run their own servers. The cost and burden of hosting and operating services today is certainly prohibitive.
Even if the mountain man fantasy is an unrealistic expectation for the vast majority, so long as the threat of deplatforming and unpersoning is real, people will have a critical need for options to be available. When Big Tech censors and bans, when the mob mobilizes to ruin businesses and careers, when tyrannical governments freeze bank accounts and confiscate funds, it is essential for those targeted to have a safe haven that is unassailable. Someone living in the comfort of normal life doesn’t need a cabin in the woods, off-grid power, and a buried arsenal. But when you need to do it, living as a mountain man won’t be fantastic. Prepping for that fall back is what decentralization makes possible.
In the long term, self-hosting should be as easy, effortless, and affordable as installing desktop apps and mobile apps. We definitely need to innovate to make running our apps as cloud services cheap, one-click, and autonomous, before decentralization with self-hosting can become ubiquitous. Until then, our short-term goal should be to at least make decentralization practical, even if it is only accessible initially to highly motivated, technologically savvy early adopters. We need pioneers to blaze the trail in any new endeavor.
We need a general purpose Web architecture for dApps that are not confined to a niche. I imagine container images served by IPFS as a registry, and having a next-gen Kubernetes-like platform to orchestrate container execution across multicloud infrastructures and consuming other decentralized platform services (storage, load balancing, access control, auto-scaling, etc.). If the technology doesn’t provide a natural evolution for existing applications and libraries of software capabilities, there isn’t a path for broad adoption.
We are at the start of a new journey in redesigning the Web. There is so much more to understand and invent, before we have something usable for developing real-world distributed apps on a decentralized platform. The technology may not exist yet to do so, despite the many claims to the contrary. This will certainly be more of a marathon than a sprint.
In the journey to developing Web3, we must understand what is motivating decentralization. We are attempting to reinvent the Web to address deficiencies that have placed individuals in jeopardy of censorship, cancellation, and political persecution at the hands of Big Tech platforms, state actors, and adversarial groups intent on harm. Historical ideals to preserve the “free and open Internet” have been abandoned. If a “free and open Internet” is to be preserved, it cannot rely on the honor and voluntary cooperation of humans. Technologies must become permissionless, trustless, and unassailable, so that dishonorable and uncooperative humans can coexist.
Protecting a user’s right to free speech by having the user take custody of their own data, and ensuring that data cannot be made inaccessible.
Protecting a user’s right to free association by ensuring that data in the user’s custody can be published to whatever audience the owner wishes to reach.
Protecting an audience’s right to free association by ensuring access to data published by others, and ensuring that applications can compose that data for the intended use, including for social collaboration.
Protecting a user’s access to platform capabilities for providing the application services that process that data.
Protecting a user’s ability to transact business with others without being subject to third party intermediaries cancelling them.
Protecting a user’s privacy by ensuring the user can share their data only with others who are granted authorization. In some circumstances, a user may want to remain anonymous, so that their real-world identity cannot be exposed for doxxing. Hostile detractors often try to cancel people by targeting their business, sources of income, reputation, relationships, sensitive information, even their personal safety.
Let’s keep these requirements in mind as we explore technologies that can help realize Web3 in restoring the ideal of a free and open Internet in the face of large factions of society who are hostile to (or wobbly on) freedom and openness.
According to Diffusion of Innovations theory, crypto-currencies like Bitcoin are in the early adopter phase. How might we develop technologies to bring crypto-currencies into the early majority phase, where their use becomes mainstream?
The obvious place to start is digital services, since crypto-currency transactions are necessarily digital. Digital services will begin accepting Bitcoin or other popular crypto-currencies as payment, as acceptance grows among the general population. Most services rely on payment gateways to interface to the payment card industry, but this assumes that payment transactions are denominated in fiat currencies. We must consider whether adoption of crypto-currency will be advanced by the payment card industry (Visa, MasterCard, American Express, Discover). PCI is extremely stodgy, being in bed with central banking and the financial services sector (along with their crony ties to the political establishment through regulatory agencies and the Federal Reserve), so you can pretty much count them out as a trustworthy partner in any radical anti-establishment endeavor. PayPal, Square, Stripe, and other more progressive payment processors may come around, but their ties to fiat currency and PCI may hinder them. A more promising place to begin is with crypto-currency exchanges, which can already provide conversion services between fiat and crypto. The problems with crypto have been high transaction fees, slow transaction settlement, opposition from the regulatory establishment, and lack of integration with payment systems for retail transactions.
An unexplored opportunity is to enable digital service providers to use other forms of pseudo-currency that have low transaction fees. Whereas Bitcoin remains obscure for conducting business between ordinary people, consumers are already quite comfortable with vouchers, coupons, rewards, loyalty points, or gift cards. Pseudo-currencies suffer from a closed network of vendors (limited fungibility) and non-convertibility, but the end user doesn’t incur transaction fees, because the vendors eat the cost of operating the network to benefit from the cross-selling opportunities generated within the network. Perhaps this vector for adoption can provide crypto an acceptable way of infiltrating the mainstream economy without raising regulatory ire, since loyalty reward programs are already well-established. Adding convertibility between a loyalty pseudo-currency and crypto would provide backdoor access into retail transactions within closed loyalty networks. It’s a beachhead.
The value proposition is that service providers can be given the option to join a network of vendors who accept the same pseudo-currency (as a proxy to crypto). This allows customers who earn loyalty rewards from one vendor to spend them at another within the network. This is the same concept as how airlines, hotels, and rental car companies can join in alliances, like Star Alliance, Oneworld, or SkyTeam. The difference a pseudo-currency system would make is that it would provide the infrastructure that would allow merchants throughout the world to join together into alliances, and to create such alliances arbitrarily between themselves. This opens up this valuable capability to small and medium sized businesses, who would otherwise not be able to afford the global infrastructure such a loyalty reward system would require.
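To make the alliance idea concrete, here is a minimal sketch of a shared loyalty ledger with optional convertibility into crypto at a published rate. All class names, the conversion rate, and the satoshi denomination are illustrative assumptions, not a description of any existing system:

```python
from dataclasses import dataclass

@dataclass
class Member:
    points: int = 0        # loyalty points earned across the alliance
    crypto_sats: int = 0   # converted balance, denominated in satoshis

class AllianceLedger:
    """Hypothetical ledger shared by an alliance of vendors."""

    SATS_PER_POINT = 10    # assumed published conversion rate

    def __init__(self):
        self.members: dict[str, Member] = {}

    def earn(self, member_id: str, points: int) -> None:
        # Points earned at any vendor accrue to one alliance-wide balance.
        self.members.setdefault(member_id, Member()).points += points

    def redeem(self, member_id: str, points: int) -> None:
        # Points are spendable at any vendor in the network (fungible within it).
        m = self.members[member_id]
        if points > m.points:
            raise ValueError("insufficient points")
        m.points -= points

    def convert(self, member_id: str, points: int) -> int:
        # The convertibility "beachhead": points exit the closed network as crypto.
        m = self.members[member_id]
        if points > m.points:
            raise ValueError("insufficient points")
        m.points -= points
        sats = points * self.SATS_PER_POINT
        m.crypto_sats += sats
        return sats
```

The design choice worth noting is that earning and redeeming stay inside the closed network (no fees to the user), while conversion is the single, auditable gateway between the pseudo-currency and crypto.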
Telecom carriers are in a good position to provide a loyalty reward system for partners who offer digital services using the carrier’s network and infrastructure.
Today, content publishing platforms, such as Substack, Locals, and traditional corporate media sites, offer subscription services. The subscriber is charged on a monthly recurring basis to gain access to paywalled content. Users who follow a link to read an article must sign up for a subscription even when they only want to read a single article without being obliged to a long-term commitment.
Moreover, users are loath to authorize many online services to take automatic payments from their payment cards, especially less well-known brands of unknown reputation and with no established relationship. Naturally, users are reluctant to provide their payment card information indiscriminately. Fraud and hacking are legitimate concerns. All of these concerns, which are critical barriers to converting clicks into revenue for content creators, can be ameliorated by offering digital services in partnership with a trusted telecom carrier who can charge the subscriber through carrier billing. This would provide a better user experience for accessing content, and it would improve revenue conversion by removing friction.
One of my friends on Facebook had this thought.
Magazines and newspapers: You know we’d be happy to pay you by the article, right? That if you offered that option instead of slamming a monthly-subscription paywall in front of us, we’d pay for a few articles a month and our micropayments would add up to the equivalent of many monthly subscribers. Maybe more than you’d lose, since those who subscribe are happy to do that and the rest of us would be posting and linking, bringing you micropayers who just navigate away from your paywall now? Yeah, just saying.
I’m guessing micro-payments are not offered today in part because payment processors charge transaction fees with some minimum that make this unprofitable, and also inputting payment card information from a customer to make a one-off micro-payment would seem like way too much work to collect a few pennies.
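A quick back-of-the-envelope calculation shows why per-transaction card fees kill one-off micro-payments. The fee structure below (2.9% plus a $0.30 fixed fee) is a typical card-processor rate used purely as an illustrative assumption:

```python
def net_to_publisher(price: float, pct_fee: float = 0.029,
                     fixed_fee: float = 0.30) -> float:
    """Amount the content creator keeps after a typical card-processing fee."""
    return price - (price * pct_fee + fixed_fee)

# A 5-cent article loses money on every sale: the fixed fee alone is 6x the price.
print(round(net_to_publisher(0.05), 4))
# A $5.00 bundle keeps most of the price, which is why subscriptions dominate.
print(round(net_to_publisher(5.00), 4))
```

The fixed fee, not the percentage, is the barrier: it forces sellers to bundle content into subscriptions large enough to amortize it.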
If we could solve the micro-payments problem for cloud services, that would open up huge opportunities throughout the digital economy. We do see hints of technology emerging to enable this, such as “super chats” in YouTube, where viewers of a live video stream can tip small amounts of money to support the presenter. But the real need is for this capability to be generalized to enable arbitrary micro-payment transactions in every context on the Internet, and for this to become prolific everywhere. The revenue opportunity is enormous — equal or larger in scale to Google’s ad revenues, as this change in paradigm is precisely the replacement for ad-based revenues. The ad model supports “free” content, but it relies on users to tolerate the clutter of ads, while also luring some users to convert ad impressions into clicks and purchases. The ad model is known to create perverse incentives for Big Tech platforms to implement algorithms that place users into information bubbles, manufacture outrage to increase engagement, and keep users occupied on the platform for longer durations (promoting addiction). Micro-payments supported content would ameliorate the harms of an ad-supported model.
Ad monetization is like a micro-payments platform. Each click is charged a few cents, and these charges are accumulated over a billing period at which point the bill is settled. Because of the threat to ad revenues, you will not see Google blaze the trail for micro-payments.
We should look to carrier billing to solve this problem through aggregation of charges, as carriers normally do for service usage. Separating how customers are billed and invoiced from how money flows decouples the payment flow from the buying flow. This means that the carrier acts somewhat like a “bank”, of sorts. That is, postpaid purchases are aggregated into itemized charges on a bill. The bill collects all the charges together for settlement on a monthly basis.
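The aggregation idea is simple enough to sketch: individual micro-charges are recorded as account entries with no card transaction at purchase time, then rolled up into one itemized bill for monthly settlement. Names below are illustrative, not a real billing API:

```python
from collections import defaultdict

class CarrierBilling:
    """Minimal sketch of carrier-style charge aggregation."""

    def __init__(self):
        # subscriber -> list of (merchant, amount_in_cents)
        self.charges = defaultdict(list)

    def charge(self, subscriber: str, merchant: str, cents: int) -> None:
        # No payment-card transaction here; just an entry on the account.
        self.charges[subscriber].append((merchant, cents))

    def monthly_bill(self, subscriber: str) -> dict:
        # Roll up the period's micro-charges into one itemized bill,
        # settled once, so the fixed per-transaction fee is paid once.
        items = self.charges.pop(subscriber, [])
        per_merchant = defaultdict(int)
        for merchant, cents in items:
            per_merchant[merchant] += cents
        return {"items": dict(per_merchant),
                "total_cents": sum(per_merchant.values())}
```

Settlement with each merchant would similarly aggregate across all subscribers, mirroring how carriers already settle roaming charges.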
Then, look at how African carriers enable unbanking for poorer people by leveraging the subscriber’s account balance to become positive or negative (like a bank account), and to enable money transfers between subscriber accounts to facilitate financial transactions. This exact paradigm should be seen as an opportunity to expand the two features (carrier billing with account balances that work like bank accounts) to innovate in the area of enabling a micro-payments platform that revolutionizes both the online world and commerce between individuals.
What I envision is the following. What Zelle is to banking — an integration between banks to do money transfers between users via email or other methods of communicating a transaction between users — the Internet needs a general purpose “money flow” protocol that facilitates integration between web sites and entities that can facilitate money flow — be they carriers or other commercial entities that can handle the charging, billing, invoicing, and settlement functions. The key is to enable arbitrary web sites to integrate to participate in money flow (easy setup like Zelle), and for the end user interaction with these web sites to enable one-click confirmation of a micro-payment (“do you want to pay 5c to read this article?”). And of course, for these integrations to fall outside of PCI-DSS compliance; otherwise, it is not viable from an ease-of-integration and cost perspective.
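The “money flow” handshake described above can be sketched as three plain messages: a site requests a micro-payment, the user gives one-click confirmation, and the billing provider (a carrier or other settlement entity) posts the charge to the user’s account. This is a hypothetical shape for such a protocol, not a description of any existing one:

```python
import uuid
from dataclasses import dataclass

@dataclass
class PaymentRequest:
    merchant: str
    description: str
    amount_cents: int
    request_id: str   # lets the provider de-duplicate retries

def request_payment(merchant: str, description: str,
                    amount_cents: int) -> PaymentRequest:
    """A web site asks the user's billing provider for a micro-payment."""
    return PaymentRequest(merchant, description, amount_cents, uuid.uuid4().hex)

def confirm_and_charge(req: PaymentRequest, user_confirmed: bool,
                       account: dict) -> bool:
    """Post the charge to the user's account only on one-click confirmation.

    The merchant never sees card details; it only learns whether the
    charge was accepted, keeping the integration outside PCI-DSS scope.
    """
    if not user_confirmed:
        return False
    account.setdefault("pending_charges", []).append(
        (req.request_id, req.merchant, req.amount_cents))
    return True

# "Do you want to pay 5c to read this article?"
account = {}
req = request_payment("example-news.example", "single article", 5)
accepted = confirm_and_charge(req, user_confirmed=True, account=account)
```

Pending charges would then feed the monthly aggregation and settlement described earlier, so no card transaction ever occurs at the moment of purchase.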
Opportunity for Carriers
Whereas the imagined killer apps for 5G (augmented reality, IoT) still have no compelling implementations in the market, the revenue opportunities in carrier billing, micro-payments, and unbanking are more immediate, realistic, and obviously under-served.
This strategy is synergistic with the roll-out and adoption of 5G capabilities to develop the killer apps of the future. Carriers can offer to host partners’ digital services without charging for utilization of the carrier’s network and infrastructure resources. Instead, use Apple’s successful revenue model of taking a fixed percentage of the partner’s revenue from selling their digital services. With carrier billing, the carrier handles the revenue sharing and settlement, as they are adept at doing today for roaming.
Then, a network of digital service vendors would join this micropayment ecosystem. As these purchasing transactions are performed by users, the carrier records these charges. On a monthly basis, each vendor gets paid an aggregate amount from all users. It’s equivalent to an ad network micro-payment platform, except products are paid for directly, and there are no ads. People hate ads.
Using this strategy, carriers can package together their network infrastructure, their platform services, their monetization system, and their loyal customer base to offer digital (over-the-top) service providers privileged access as revenue sharing partners. By doing so, a carrier would then be able to hitch their wagon to high margin revenue opportunities created by innovative new digital services, instead of being relegated to the ever-decreasing profit-per-bit dumb pipes business.
One approach to better empowering users and upstart services to avoid Big Tech censorship, suppression, and control is to build capabilities into the browser for mashing up and mixing in complementary services. This would provide a client-side (browser based) approach for third party complementary services to extend incumbent services without needing the incumbent’s authorization or cooperation. This would be one element of building Future Distributed Applications.
Using this approach, social media sites (Facebook, Instagram, Twitter, YouTube, Reddit, etc.) that enforce authoritarian content moderation policies can be complemented by alternative services, where prohibited users and comments can be linked. Users could see the conversation with content merged from every desired source beyond Big Tech control. This approach for distributing comments that form a single conversation would be applicable to many services.
Comment on content where a user’s comments would be suppressed.
Annotate or review an article where commenting is not enabled. Allow an annotation to link precisely to a specific range of text, so it can be presented inline.
Add links to relevant content not referenced by the original.
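The precise-anchoring requirement in the list above has a natural data shape. The record below is loosely modeled on the W3C Web Annotation Data Model’s TextQuoteSelector; the URLs and field values are illustrative. A complementary service would serve such records, and the browser add-on would locate the quoted span in the page and render the annotation inline:

```python
# Illustrative annotation record, anchored to a specific range of text.
annotation = {
    "target": {
        "source": "https://example.com/articles/42",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "the disputed claim being annotated",
            "prefix": "text just before ",   # context to disambiguate repeats
            "suffix": " text just after",
        },
    },
    "body": {"type": "TextualBody",
             "value": "A correction, with links to sources."},
}

def locate(page_text: str, selector: dict) -> int:
    """Return the start offset of the quoted span, or -1 if not found.

    Matching prefix + exact together keeps the anchor stable even when
    the quoted phrase appears more than once on the page.
    """
    anchor = selector["prefix"] + selector["exact"]
    i = page_text.find(anchor)
    return -1 if i == -1 else i + len(selector["prefix"])
```

Quote-based anchors survive minor page edits better than byte offsets do, which matters when the annotated site has no reason to cooperate with the annotation service.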
This paradigm would enable end users to control how content is consumed, so that Web sites cannot censor or bias what information is presented about controversial topics.
Applying browser add-ons that mix-in complementary services would also enable end users to take information and process it in personalized ways, such as for fact-checking, reputation, rating, gaining insights through analytics, and discovering related (or contrarian) information. Complementary content could be presented by injecting HTML, or by rendering additional layers, frames, tabs, or windows, as appropriate.
Browser add-ons are only supported on the desktop, not mobile devices. Mobile devices would need to be supported for this paradigm to become broadly useful.
There is friction between a microservices architecture and life cycle management goals for application releases. One significant motivation for microservices is independent life cycle management, so that capabilities with well-defined boundaries can be developed and operated by self-contained, self-directed teams. This allows for more efficient workflows, so that a fast-moving code base is not held back by other slower-moving code bases.
Typically, an application (a collection of services that form an integrated whole and are offered together as a product to users) is rolled out with major and minor releases on some cadence. Major releases include large feature enhancements and some degree of compatibility breakage, so these may happen on an annual or semi-annual basis. Minor releases or patches may happen quarterly, monthly, or even more frequently. With microservices, the expectation is that each service may release on its own schedule without coordination with all others even within the scope of an integrated application. A rapid release cadence is conducive to responsiveness for bug fixes and security fixes, which protect against exposing vulnerabilities to exploits.
One advantage of applications on the cloud is that a single release of software can be rolled out to all users in short order. This removes the substantial burden on developers to maintain multiple code branches, as they had to do in the past for on-premises deployments. Unfortunately, the burden is not entirely lifted, because as software under development graduates toward production use, various pre-release versions must be made available for pre-production staging, testing, and quality assurance.
Development is already complex: feature work toward a future release must proceed in parallel with bug fixes for the release that is already in production (assuming all users are on the latest). These parallel streams of development will be in various phases of pre-production testing toward release, and in various phases of integration testing against a longer-term release schedule. Bugs vary in severity, so the urgency of fixes varies too. Emergency fixes for exploitable security vulnerabilities must be released as a patch to production immediately, whereas fixes for functional defects may wait for the next release on the regular cadence. Cherry-picking and merging fixes across code branches is tedium that every developer dreads. Independent life cycle management of source code organized along microservice boundaries is seen as a way to decouple coordination across the development teams organized along those same boundaries.
Independent life cycle management of services relies on both backward compatibility and forward compatibility. Integration between services needs to be tolerant of mismatched versions to be resilient to independent release timing, including upgrades, rollbacks due to failed upgrades, and rerunning an upgrade after a prior failure. Backward compatibility enables a new version of a service to interoperate with an older client. Forward compatibility enables the current version of a service—soon to be upgraded—to interoperate with a newer client, especially during the span of time (brief or lengthy) in which one may be upgraded before the other. In my article about system integration, I explained the numerous problems that make compatibility difficult to achieve. Verification of API compatibility through contract testing is the best practice, but test coverage is seldom perfect. Moreover, no contract language specifies everything that impacts compatibility, and mocking will never be representative of non-functional qualities. This is one of many reasons why confidence in verification cannot be achieved without a fully integrated system, and it is how the desire for independent life cycles for microservices is thwarted. The struggle is more real than most people realize. As software professionals, we enter into every new project with fresh optimism that this time we will do things properly to achieve utopia (well, at least independent life cycle management would be a small victory), and at every turn we are confronted by this one insurmountable obstacle.
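The tolerant-reader pattern is one concrete way to achieve this version tolerance on the client side. Here is a minimal sketch, with hypothetical field names, of a client that ignores unknown fields (forward compatibility) and supplies defaults for missing ones (backward compatibility):

```python
import json

# Hypothetical v1 client parsing an order record from a v2 service.
# Tolerant-reader rules: ignore fields this client does not understand,
# and apply defaults for fields the sender omitted.
KNOWN_FIELDS = {"order_id": None, "status": "UNKNOWN", "quantity": 1}

def parse_order(payload: str) -> dict:
    raw = json.loads(payload)
    # Keep only known fields; default any that are absent.
    return {field: raw.get(field, default) for field, default in KNOWN_FIELDS.items()}

# A newer service added a "priority" field; the older client safely ignores it,
# and it defaults "quantity", which this payload omits.
v2_payload = '{"order_id": "42", "status": "OPEN", "priority": "HIGH"}'
order = parse_order(v2_payload)
```

This only addresses structural tolerance; as noted above, no such pattern covers non-functional qualities, which is why integrated verification remains necessary.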
Application features involve workflows that span two or more collaborating microservices. For example, a design-time component provides the product modeling for a runtime component for selling and ordering those products. Selling and ordering cannot function without the product model, so the collaboration between those services must integrate properly for features to work. Most features rely on collaborations involving several services. Often, the work resulting from one service is needed to drive the processing in other services, as was the case in the selling and ordering example above. This pattern is repeated broadly in most applications. Once all collaborations are accounted for across the supported use cases, the integrations across services would naturally cover every service. The desire for an independent life cycle for each service that composes the application faces the interoperability challenges across this entire scope. There goes our independence.
Given the need to certify that a snapshot of all services composing an application works properly together, we need a mechanism to correlate versions of source code to versions of binaries (container images) for deployment. Source code can be tagged with a release. This includes Helm charts, Kubernetes YAML files, Ansible playbooks, and whatever other artifacts support the control plane and operations pipelines for the application. A snapshot must be taken of the Helm chart versions and their corresponding container image versions, so that the complete deployment can be reproduced.
This identifies an application release as a set of releases of services deployed together. This information aids in troubleshooting, bug reporting, and reproducing a build of those container images and artifacts from source code, each at the same version as what was released for deployment. This is software release management 101, nothing out of the ordinary. What is noteworthy is our inability to extricate ourselves from the monolithic approach to life cycle management despite adopting a modern microservices architecture.
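As a sketch of what such a snapshot might contain, the following fragment pins each service to the source tag, Helm chart version, and container image reference that were certified together. All names and versions are illustrative, not any particular product's scheme:

```python
import json

# Hypothetical application release snapshot: each service pinned to the
# exact artifacts certified for this release, so the deployment can be
# reproduced and troubleshooting can reference precise versions.
application_release = {
    "application": "order-management",
    "release": "24.3.1",
    "services": [
        {"name": "catalog", "source_tag": "catalog/24.3.1",
         "chart": "catalog-1.8.2", "image": "catalog@sha256:ab12"},
        {"name": "ordering", "source_tag": "ordering/24.3.1",
         "chart": "ordering-2.1.0", "image": "ordering@sha256:cd34"},
    ],
}

def images_for(release: dict) -> dict:
    """Map each service to the exact image certified for this release."""
    return {svc["name"]: svc["image"] for svc in release["services"]}

# The serialized manifest is the artifact to archive alongside the release tag.
manifest = json.dumps(application_release, indent=2)
```

Archiving this manifest with the release tag is what makes a bug report against "release 24.3.1" traceable back to exact source and binary versions.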
Worse still, if our application is integrated into a suite of applications, as enterprise applications tend to be, the system integration nightmare broadens the scope to the entire suite. The desire for an independent life cycle even for each application that composes the suite faces interoperability challenges across this entire scope. What a debacle this has turned out to be. The system integration nightmare is the challenge that modern software engineering continues to fail at solving across the industry.
Coordinating human activities across organizations and disciplines is fundamental to DevOps. This requires having documented procedures to handle any situation, tools that enable participants to collaborate effectively, and a shared understanding of what information needs to be captured and communicated to work — especially when the actors are likely to be separated by space and time.
A DevOps procedure is initiated for a reason: scheduled maintenance, a response to an alert (detection of a condition that deserves attention), or a response to a request for support. In each case, there should be a ‘ticket’ (a record in a tool) that notifies a responsible DevOps engineer to work the issue. The procedure that applies to a ticket should be obvious and, ideally, referenced explicitly.
Responding to an alert or a support request (usually a complaint about a service malfunction) typically begins by confirming the reported condition. This requires gathering information about the context and collecting diagnostics to aid in troubleshooting. Ideally, the ticket clearly identifies the problem; otherwise, interactions are needed to gain such clarity. Humans routinely assume, wrongly, that the recipient of a request has all the necessary context to understand what is being asked of them and why. To mitigate this inefficiency, tooling and procedural documentation are usually provided to guide how tickets should be written, so that many rounds of questions and answers are not needed afterward to satisfy the request.
The engineer who works a ticket should capture the service configuration, relevant logs to determine the failure mode, and any other data for the context associated with the problem for analysis toward an operational fix or for submitting a bug, if applicable.
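A sketch of the kind of ticket record this implies, with field names that are assumptions rather than any particular tool's schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical ticket record illustrating the diagnostic context an engineer
# should capture while working an incident.
@dataclass
class IncidentTicket:
    ticket_id: str
    summary: str
    procedure: str                                      # documented procedure being followed
    service_config: dict = field(default_factory=dict)  # captured configuration
    log_excerpts: list = field(default_factory=list)    # logs showing the failure mode
    actions_taken: list = field(default_factory=list)   # real-time action log

    def record_action(self, actor: str, action: str) -> None:
        """Append a timestamped entry so collaborators stay informed."""
        stamp = datetime.now(timezone.utc).isoformat()
        self.actions_taken.append(f"{stamp} {actor}: {action}")

ticket = IncidentTicket("OPS-1234", "Checkout latency spike",
                        procedure="runbook/latency-triage")
ticket.record_action("alice", "captured service config and GC logs")
```

The timestamped action log is what later supports the post-mortem and audit uses described below.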
A designated channel should be used for engineers to collaborate. Each engineer must record in real time the actions taken to troubleshoot, analyze, and correct the problem. This aids coordination when multiple individuals in different roles across organizations work together. Good communication enables everyone involved to stay informed and avoid taking actions that interfere with one another. Moreover, an accurate record can be reviewed later in a post-mortem.
Severe incidents, such as those that cause a service outage, demand a root cause analysis toward preventative actions, such as process improvements, procedure documentation, operational tooling, or developing a permanent fix (for a software bug). This depends on the ticket capturing the necessary information to trace how the problem originated, such as the transaction processing in progress, events or metrics indicating resource utilization out of bounds (e.g., out of memory, insufficient CPU, I/O bandwidth limits, storage exhaustion) or performance impairment (e.g., lock contention, queue length, latency), or anything else that appears out of the ordinary.
One of the biggest impediments to verifying that a problem is resolved is that DevOps normally does not have access to the service features being exercised by end users. When a functional problem is reported by users, it may not be possible for DevOps to confirm from the user’s perspective that the problem is fixed. Communication with users may need to be mediated by customer support staff, and the information on the ticket would need to facilitate this interaction.
Accurate record-keeping also enables later audits in case there are subsequent problems. The record of actions taken can be used to analyze whether these actions (such as configuration changes or software patches) are contributing to other problems. Troubleshooting procedures should include data mining of past incidents (have we seen this problem before? how did we fix it previously?) and auditing what procedures may have impacted the service under investigation (what could have broken this?).
Big Tech censorship and cancel culture are becoming intolerable. Politicization of business is destroying the fabric of society. Corporate oligarchs are implementing partisan agendas to shape public discourse by applying so-called “community standards” for social media content moderation. They de-platform personalities who express opinions that run counter to approved narratives. They silence dissent. Free speech and freedom of association are under threat, as private companies are coerced by state regulatory action, looming threats of state intervention, and mob rule through heckler’s veto, bullying, harassment, doxxing, and cancel culture. Concentration of power and control in a few dominant platforms, such as Google, YouTube, Facebook, Twitter, Wikipedia, and their peers has harmed consumer choice. Anti-competitive behavior, such as collusion among platform and infrastructure services to deny service to competitive upstarts and undesirable non-conformists, has suppressed alternatives like Parler, Gab, and BitChute.
The current generation of dominant platforms does not allow editorial control to be retained by content creators. The platform is viewed as the ultimate authority, and users are limited in their ability to assert control to form self-moderated communities and to set their own community standards. Control is asserted by the central platform authorities.
Control needs to be decoupled from centralized platform authorities and put back in the hands of content creators (authors, podcasters, video makers) and end users (content consumers and social participants). Editorial control over legal content does not belong with Big Tech. What constitutes legal content is dependent on the user’s jurisdiction, not Big Tech’s harmonization of globalist attitudes. To Americans, hate speech is protected speech, and it needs to be freely expressible. Similarly, users in other jurisdictions should be governed according to their own standards.
We need to develop apps with peer-to-peer protocols and end-to-end encryption to cut out the middlemen, which would exterminate today’s generation of social media companies. Better yet, application logic itself should be deployable on user-controlled compute with user-controlled encrypted storage on any choice of infrastructure providers (providing a real impetus for the adoption of Edge Computing), so that centralized technology monopolies cannot dominate as they do today. This approach needs to be applied to decentralize all apps, including video, audio podcasts, music, messaging, news, and other content distribution.
I believe the next frontier for the Internet will be the development of a generalized approach on top of HTTP or as an adjunct to HTTP (like bittorrent) to enable distributed apps that put app logic and data storage at end-points controlled by users. This would eliminate control by middlemen over what content can be created and shared.
Applications must be distributed in a topology where a node is dedicated to each user, so that the user maintains control over the processing and data storage associated with their own content. Applications must be portable across cloud infrastructures available from multiple providers. A user should be able to deploy an application node on any choice of infrastructure provider. This would enable users to be immune from being de-platformed.
With an application whose logic and data are distributed in topology and administrative control, the content should be digitally signed so that it can be authenticated (verified to be produced by the user who owns it). This is necessary, so that a user’s application node can be moved to an alternative infrastructure (compute and storage) without other application nodes needing to establish any form of trust. Consumers (the audience with whom the owner shares content) and processors (other computational services that may operate on the content) of the information would be able to verify that the information is authentic, not forged or tampered with. The relationship between users and among application nodes, as well as processors, is based on zero trust.
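A minimal sketch of such a self-authenticating envelope follows, using only a content digest for brevity. This is an assumption-laden illustration: the digest detects tampering, while a real deployment would additionally carry a public-key signature (e.g., Ed25519) over the digest so that authorship can be verified without trusting the hosting infrastructure.

```python
import hashlib
import json

# Hypothetical content envelope: the body is canonicalized JSON, and the
# digest lets any consumer or processor detect tampering. A production
# system would sign the digest with the owner's private key; verification
# here covers integrity only, not authorship.
def make_envelope(author: str, content: str) -> dict:
    body = json.dumps({"author": author, "content": content}, sort_keys=True)
    return {"body": body, "digest": hashlib.sha256(body.encode()).hexdigest()}

def verify_envelope(envelope: dict) -> bool:
    """True if the body has not been altered since the envelope was sealed."""
    return hashlib.sha256(envelope["body"].encode()).hexdigest() == envelope["digest"]

env = make_envelope("alice", "original post")
assert verify_envelope(env)
env["body"] = env["body"].replace("original", "forged")
assert not verify_envelope(env)   # tampering is detected
```

Because verification depends only on the envelope itself (plus the owner's public key in the signed variant), the owner's node can move to any infrastructure without other nodes needing to re-establish trust.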
Processing of information often involves mirroring and syndication. Mirroring with locality for low latency access gives certain types of transaction processing, such as search indexing, the performance characteristics they need. Authorizing a search engine to index one’s content does not automatically grant users of the search engine access to the content. Perhaps only an excerpt is presented by the search engine along with the owner’s address, where the user may request access. A standard protocol is needed to enable this negotiation to be efficient and automated, if the content owner chooses to forego human review and approval.
We need to change how social applications control the relationship between content producers and content consumers. First, for original source content, the root of a new discussion thread, the owner must control how broadly it is published. Second, consumers of content must control what sources of information they consume and how it is presented. Equally important, consumers of an article become producers of reviews and comments when interacting in a social network. The same principle must apply universally to the follow-on interactions: the article’s author should not be able to block haters from commenting, but the author is not obligated to read them. Similarly, readers are not obligated to see hateful commenters whom they choose to exclude from their network. The intent should be to enable each person to control their own content and experience, ceding no control to others.
Social applications need self-managed communities with member-administered access control and content moderation. Community membership tends to be fluid, with subgroups merging and splitting regularly. Each member’s access and content should follow their own memberships rather than being administered by others in those communities. The intent is to prevent a blacklisted individual from being cancelled by mobs. If a cancelled individual can form their own community and move their allies there with ease, cancel culture becomes powerless as a tool of suppression with global reach; its reach is limited to communities that quarantine themselves.
This notion of social network or community is decentralized. A social application may support a registry of members, which would serve as a superset of potential relationships for content distribution. This would enable a new member to join a social network and request access to their content. Presumably, most members would enable automatic authorization of new members to see their content, if the new member has not been blocked previously. That is, enable a community to default to a public square with open participation. However, honor freedom of association, so that no one is forced to interact with those with whom there is no desire to associate, and no one can be banned from forming their own mutually agreed relationships.
We need software innovations to address this urgent need to counter the censors, the cancellers, the de-platformers, the prohibitionists, the silencers of dissent, and the government oppressors. We don’t yet have a good understanding of the requirements, which I’ve touched upon above; I have only scratched the surface. We need an architecture to enable the unstoppable Open Internet that we failed to preserve from the early days. We need to develop a platform that realizes this vision to restore a healthy social fabric for our online communities.