
metaverse: the good, bad, and ugly

What is the metaverse?

Facebook’s renaming as Meta brought attention to the rise of the metaverse. I would like to explore how to think about the metaverse and related technologies.

the “metaverse” is a hypothetical iteration of the Internet as a single, universal, and immersive virtual world that is facilitated by the use of virtual reality (VR) and augmented reality (AR) headsets.

https://en.wikipedia.org/wiki/Metaverse

This description identifies VR and AR headsets as an essential feature. However, I believe headsets are extraneous. I contend that headsets are not worth pursuing for the general public. What is essential is the evolution to the next generation of the Internet to become a single, universal, and immersive virtual world.

Next Generation of the World Wide Web

Today’s most important Internet technologies are known as the World Wide Web (WWW). The next generation of the Internet would be the third. The first generation delivered mostly static hypertext content from content producers to consumers. Today’s second generation services and platforms enable users to create content and distribute that content to an audience.

Ethereum co-founder Gavin Wood coined the term “Web3” for the third generation. It envisions a WWW decentralized through blockchain technologies, enabling token-based economics.

I have a different perspective on Web3. To expand on this perspective, I wrote What do we want from Web3? In particular, I do not see Ethereum being a suitable basis for Web3. To fully realize the goals, Web3 will require additional innovations and decentralized technologies that are general-purpose and not vulnerable to capture.

WEF interest in the metaverse

The metaverse has even attracted the attention of the World Economic Forum (WEF). They have published a document called Demystifying the Consumer Metaverse.

The World Economic Forum has assembled a global, multi-sector working group of over 150 experts to co-design and guide this initiative. The hope is that this will lead to cross-sector global cooperation and the creation of a human-first metaverse. The metaverse has the potential to be a game-changer, but it must be developed in a way that is inclusive, equitable and safe for everyone.

We can see from their own language that they intend to appoint their own people to co-opt the technology. They wish to set the direction for the technological innovations toward advancing their own agenda. That agenda seeks to design a better world, one in which liberty is curtailed, autonomy is surrendered, choices are restricted, and power is concentrated in anointed leaders.

Metaverse Agenda

Digital Identity

For the WWW to be more universal and immersive, a user should not have to log in separately with distinct account credentials when navigating to each site. The user should have a smooth and seamless experience. Digital identity is an essential element of continuity: a user can specify preferences and localization once, and have sites be personalized everywhere.

In today’s Web2, ad networks use crude techniques to attach an identity to users. IP addresses, tracking cookies, and browser fingerprinting are typical approaches.

Surveillance

Once users have a digital identity that is universally recognized, users can then be tracked. Ad tracking is a creepy annoyance. However, the most serious danger will be surveillance tied to authoritarian control.

The US government regulates economic activity by controlling the money and its flow through financial institutions and payment systems. Know Your Customer (KYC) and Anti-Money Laundering (AML) are policies for this control. Currently, such policies rely on verifying a person’s identity using some form of state issued ID, such as a driver’s license or passport.

We can expect the government to seize the opportunity to co-opt digital identity in Web3. A state-issued digital identity would provide a key element for the government to exert authoritarian control. This topic will be addressed later in this article, once we explore other requisite elements.

Digital identity would lead to the loss of anonymity in transactions. KYC and AML policies apply to financial transactions, but every type of transaction (i.e., any online action taken by the user) could be subject to surveillance. Surveillance of consumer behavior has commercial value to corporations. However, the unholy collusion between government and corporations is a hazard for individual rights, as we will expand on below.

Self-sovereignty

Just as crypto-currency holders must maintain self-custody of their private keys, a person’s digital identity should be protected in the same way. This is known as self-sovereign identity. The owner must be able to control the revocation and re-issuance of their own digital identity. This may be necessary to counter targeted harassment and cancel culture.

More generally, self-sovereign data refers to users maintaining custody and ownership over their own data. Without self-sovereign data, Fourth Amendment rights against unreasonable search and seizure have been eroded. Law enforcement requests to communications service providers for customer data have been allowed without warrants, because courts have ruled that customers have no reasonable expectation of privacy in records about them kept by service providers. We recover these rights by reorganizing services and Web3 to become decentralized and by allowing customers to take custody of their data.

Moreover, if users can take custody over their own services through self-hosting, they would gain sovereignty over the applications that implement functions against their data. The combination of self-sovereign identity, self-sovereign data, and self-sovereign services protects against deplatforming and third party policy abuse.

Physical Asset Tokenization

A digital representation of a physical object is termed a digital twin. Applications in the metaverse will rely on digital twins to accomplish many things, such as enabling physical objects to be explored and manipulated virtually in ways that are impractical in the real world.

Physical assets will need to be tokenized to identify them. Web3 includes the Non-Fungible Token (NFT). An NFT is a digital identifier denoting authenticity or ownership.

Once assets have digital identity, it becomes easier to track them for the purpose of monetizing them. One way is to attach digital services to those physical assets. Subscribing for support, maintenance, and warranty repairs is an example of a service that can be monetized online for physical assets.

The Internet of Things (IoT) goes further by connecting physical assets to the Internet. This enables digital services to make use of or add value to those physical assets. Sensors, cameras, and control systems come to mind as obvious use cases. However, everything imaginable could be enhanced with connectivity to digital services.

Intellectual Property Rights

Physical asset tokenization leads to the erosion of ownership. Intellectual property rights, such as those attached to embedded software components, are retained by the manufacturer. The consumer is granted only a license to use with limited rights. The consumer has no right to copy that software to other hardware. This protects the software vendor from loss of revenue.

Does the consumer have a right to sell the physical asset to transfer ownership? One would expect that the hardware and its embedded software are considered to be an integrated whole or bundle. Hopefully, the software license is consistent with that.

A digital service connected to the physical asset is remote and distinct. A boundary clearly separates that remote software from the physical asset. There is no presumption of integration. The terms of service would need to be consulted.

Modern software is maintained with bug fixes and enhancements over time. Increasingly common, the vendor charges the consumer a subscription fee for maintenance and support. Does the consumer have the right to continue using the asset without subscribing? Can ownership of the asset be transferred along with its software subscription?

These questions go to the erosion of ownership. Physical things, such as vehicles and farm equipment, are becoming useless hulks without subscriptions to connected digital services and software maintenance. Instead of owning assets, people are subscribing to a license to use. It’s like renting or leasing. According to the WEF’s Agenda 2030, you will own nothing, and you’ll be happy.

Right to Repair

Software capabilities are essential to the functioning of equipment and devices. The traditional ownership model for farm equipment, vehicles (i.e., cars and trucks), and mobile phones is evolving into a model where the end user has a license to use. License terms may restrict the user’s rights to modify, maintain, and transfer that asset.

Traditionally, an owner of an asset expects to be able to repair, modify, or build upon the asset. He can do it himself, or he can contract work out to others. Manufacturers are eroding these rights. They don’t want their software to be tampered with.

Many legislatures are passing Right To Repair acts to preserve some semblance of ownership control over physical assets. However, remote connectivity to digital services may never be brought into the fold.

The World is Virtual and Physical

The relationship between physical assets and digital services, including the metaverse, is fraught. We should think of the metaverse as the landscape in which all future digital services reside. Increasingly, physical assets are connected inextricably to digital services. Thus, the physical world and the metaverse become tied.

Programmable Currency

Monetization of digital services will be integral to the metaverse. Crypto-currency will be a key technology in Web3 to enable the digital economy. As novel cryptos gain consumer acceptance, you can be certain that governments will take notice.

Government has an interest in controlling money. Fiat money is manipulated by the central bank and their monetary policy. The supply is increased by making loans, which is a means of counterfeiting money. Money-printing dilutes the purchasing power resulting in inflation. The biggest loans are to finance government deficit spending.

Government seeks to gain control by making money programmable. Central Bank Digital Currency (CBDC) is programmable fiat money for this purpose. Control includes regulating who may spend money, when, where, on what, and how much. Authoritarian control will be total.

  • Who – digital identity
  • When and where – surveillance over services
  • What – physical asset tokenization
  • How much – programmable currency
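To make the danger concrete, the controls listed above can be sketched as a single policy check. The field and function names below are hypothetical, a minimal illustration of how a programmable token could gate every spend along each of these dimensions:

```python
from dataclasses import dataclass, field

@dataclass
class SpendPolicy:
    """Hypothetical policy attached to a CBDC wallet (illustrative only)."""
    allowed_payees: set = field(default_factory=set)      # who
    allowed_hours: range = range(0, 24)                   # when
    allowed_regions: set = field(default_factory=set)     # where
    allowed_categories: set = field(default_factory=set)  # what
    daily_limit: int = 0                                  # how much

def authorize(policy, payee, hour, region, category, amount, spent_today):
    """Every dimension of the transaction is checked against state policy."""
    return (payee in policy.allowed_payees
            and hour in policy.allowed_hours
            and region in policy.allowed_regions
            and category in policy.allowed_categories
            and spent_today + amount <= policy.daily_limit)
```

Note that nothing in this sketch requires the payer’s consent: the policy is evaluated by the issuer of the money, not by its holder.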

Authoritarian Control

The Chinese Communist Party (CCP) has implemented a social credit system of authoritarian control. Individuals are assigned a social credit score based on surveillance of their behaviors. Privileges, travel authorization, and access to services may be restricted based on social credit score.

Programmable digital currency will be the perfect tool for authoritarian regimes to control public behavior. With hard cash replaced, no economic activity would escape government surveillance and control. Discrimination and cancel culture will be institutionalized.

Alternative Reality

To avoid the dystopian future that would follow from a metaverse built on flawed Web3 platforms, we must proceed with caution. Every technology must be scrutinized for vulnerabilities to capture and corruption by centralized powers. Decentralization and self-sovereignty must be paramount.

Avoid crypto-currencies that are not Bitcoin. Shitcoins are all scams: corruptible, already corrupted, or corrupt by design. This is especially true of CBDCs (and of any crypto that is a candidate to become one).

Disregard the hype. Headsets will never gain broad adoption. People will not tolerate being detached from reality for extended periods. Immersive experiences are valuable. However, people need to be able to multitask. Visiting places in the metaverse must be possible while still remaining engaged with normal activities of daily life and work.

Protect your identity and data. Self-sovereign identity and self-sovereign data are essential. True decentralization is essential. Any Web3 platform that does not honor these principles should be rejected.

The future will be bright, if we refuse to accept technologies that leave us vulnerable. It is early enough in the development of Web3 and the metaverse to reject poor technology choices. As consumers become better informed, they can have an enormous influence on what technologies are developed and adopted. Ethereum’s first-mover advantage in Web3 and Meta’s first-mover advantage in the metaverse should prove as consequential as MySpace’s and Friendster’s were to Web2.

Digital Economy of Social Cohesion

This Web2 era of the Internet has culminated in the concentration of economic power in a few of the largest corporations, a phenomenon termed Big Tech. Facebook (Meta), Amazon, Apple, Netflix, and Google (Alphabet), known as FAANG, are the dominant Big Tech players. Centralization of control and concentration of power go hand in hand. This control is being used for social engineering, which is divisive and is destroying social cohesion.

Web2 is described by Britannica as:

a term devised to differentiate the post-dotcom bubble World Wide Web with its emphasis on social networking, content generated by users, and cloud computing from that which came before.

https://www.britannica.com/topic/Web-20

Digital Economy

The digital economy that has emerged from Web2 is based on extracting fees from users (as Netflix does with subscriptions and Amazon does with Prime), on selling goods (as Amazon and Apple do), or on selling ads (as Facebook and Google do). In each case, the business model relies on positioning the Big Tech company as the dominant supplier in the supply chain.

If you produce movies, you have to go through Netflix to reach your audience. If you produce goods, you have to go through Amazon to sell to your customers. If you produce iPhone apps, you have to go through Apple’s App Store to offer apps to users. If you want to advertise, you have to go through Facebook and Google to reach your audience. In every case, Big Tech is an intermediary that gets rich as the middleman.

Crypto Payments

One feature of Web3 is the incorporation of digital currencies (crypto). This would disintermediate payments by potentially eliminating banks, credit card companies, and payment processors. The payer and the payee would transfer funds directly with a transaction on a blockchain, which itself has no controlling entity and is therefore decentralized (assuming we are talking about Bitcoin, not some shitcoin). Financial transactions paid in crypto require no middlemen. Digital transactions have concentrated power into Big Tech because integration with the fiat financial system is expensive and subject to onerous regulation.

Integrating a crypto payment protocol natively into the Web is a game changer. Not only would it begin to decouple commerce from the fiat financial system, it should also begin to alter the relationship that users have with service providers and each other. Fiat payment processors impose an asymmetric relationship between participants: merchant and consumer. Crypto eliminates that asymmetry by enabling anyone to send funds to anyone with an address who can receive them.
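The symmetry described here can be illustrated with a toy ledger in which any address can both pay and be paid, with no merchant/consumer distinction. This is only a sketch; a real blockchain adds signatures, consensus, and much more:

```python
class Ledger:
    """Toy symmetric ledger: every participant is just an address that
    can both send and receive funds. Illustrative only."""

    def __init__(self):
        self.balances = {}

    def fund(self, addr, amount):
        """Credit an address (stands in for mining or an inbound payment)."""
        self.balances[addr] = self.balances.get(addr, 0) + amount

    def transfer(self, payer, payee, amount):
        """Move funds directly between two addresses, no intermediary."""
        if self.balances.get(payer, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[payer] -= amount
        self.balances[payee] = self.balances.get(payee, 0) + amount
```

The point of the sketch is structural: `transfer` treats payer and payee identically, whereas a fiat payment processor would require the payee to be an onboarded merchant.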

Monetizing with ads destroys Social Cohesion

Google and Facebook have thrived on advertising dollars because of the asymmetrical relationship imposed by the fiat payment system. The Social Dilemma is a Netflix documentary that explains how the ad revenue model provides social media companies perverse incentives to design systems that encourage harmful behavior among the user base. Engagement becomes divisive. Information bubbles form. Users become addicted to dopamine hits. All to lure more eyeballs and clicks so that advertisers can be charged for more impressions and conversions. Users hate seeing ads, but it is the price they pay to receive free services, as their engagement is monetized. The users become the product that is sold to advertisers.

Monetizing without ads brings Social Cohesion

How does eliminating fiat asymmetry fix this? Users on social media are content creators. Their opinions are an organic source of reviews, endorsements, and complaints. Every day the most compelling content goes viral because the audience is won over and engages enthusiastically.

What if a decentralized social media platform, instead of directing advertising dollars to Big Tech, rewarded users for content creation and promotion?

Users could be paid to post quality content with their compensation being proportional to the positive engagement they receive from others. This could be achieved through tips from the audience and from promotion fees charged for boosting content. The key is rewarding users for positive contributions. This institutes an incentive structure that increases personal fulfillment and social cohesion. This is what we want to enable with Web3.
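A minimal sketch of this incentive structure, assuming a hypothetical promotion-fee pool that is split proportionally to each creator’s positive engagement, with direct tips added on top:

```python
def distribute_rewards(pool, engagement, tips):
    """Split a promotion-fee pool among creators proportionally to
    positive engagement, then add direct tips from the audience.
    All names and the proportional rule are illustrative assumptions."""
    total = sum(engagement.values())
    payouts = {}
    for creator, score in engagement.items():
        share = pool * score / total if total else 0
        payouts[creator] = share + tips.get(creator, 0)
    return payouts
```

With a 100-unit pool, a creator earning three quarters of the positive engagement receives three quarters of the pool, plus whatever tips were sent directly.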

Web3 versus Web5 – the future Web

In December, Jack Dorsey (founder of Twitter) disparaged Web3, claiming that the technology is owned by venture capitalists like a16z, co-founded by Marc Andreessen. Jack is critical of the current crop of Web3 protocols and implementations as being deficient in decentralization. Others like Moxie Marlinspike have also been critical of Web3. I have offered my own thoughts on this topic in Decentralization: Be Unstoppable and Ungovernable.

Jack is backing an alternative to the Web3 vision called Web5 (Web2 + Web3) through an organization named TBD, with open source projects for decentralized identity and decentralized web nodes for personal data storage that work with distributed web apps. On the surface (I have not yet had an opportunity to look deeper into this Web5 initiative), the overview seems closely aligned with what I have independently imagined. I feel a potential kinship with what this project is doing.

The Web5 project appears to be directionally attractive with regard to how the problem space is framed. However, we need to examine the solution approach to see if the protocols and architecture being proposed are also attractive. The Web3 approach is decidedly not attractive for the reasons that Jack has highlighted. I credit him for being a vocal dissenter, when honest dissent is warranted.

I will dig deeper into Web5 and offer some insights from my own perspective. If this turns out to be closely aligned to my own passions, I may volunteer to contribute.

Reputation – scoring digital identities

Is it possible to build tech to track reputation for a digital identity across services without having it gamed or turned into a social credit system of institutionalized cancel culture?

The topic of reputation came to mind, prompted by this tweet. Saifedean Ammous (The Bitcoin Standard) had an idea for how Twitter could be improved. It is about reputation. To determine whether someone should be distrusted, a person can look to their own social network to see how often the counterparty is distrusted. Apply the Web of Trust pattern toward distrust: the same system for tracking certification (a form of trust) can be leveraged to track distrust as well. This would be a very valuable service for reputation tracking and analysis.

Web of Trust

Reputation may not be an absolute measure. Saifedean’s observation suggests to me that it should work more like Web of Trust, except that in the case of blocking it would be a measure of distrust. Each block is an attestation of distrust of the blocked user. The users whom Saifedean follows form his web of trust. To a lesser degree, the connections reached through his follow list deserve some trust as a transitive relationship. Those blocked by these trusted users can each be assigned a distrust score based on the attestations from Saifedean’s web of trust.

From your perspective, the reputation of someone else would be based on your own web of trust. It is subjective, relative to your social network and the attestations of each member. Attestations of trust contribute to a score for how much a person deserves to be trusted. Attestations of distrust contribute to a score for how much a person deserves to be distrusted.
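One way such subjective scoring might be computed is sketched below. The weighting scheme (direct follows at full weight, follows-of-follows at a reduced weight) is an assumption chosen for illustration, not a fixed design:

```python
def distrust_score(viewer, target, follows, blocks, depth_weight=0.5):
    """Subjective distrust of `target` as seen from `viewer`'s web of trust.
    `follows` maps user -> set of followed users; `blocks` maps user ->
    set of blocked users. Each block is an attestation of distrust."""
    score = 0.0
    direct = follows.get(viewer, set())
    # attestations from the viewer's direct web of trust count fully
    for member in direct:
        if target in blocks.get(member, set()):
            score += 1.0
    # second-degree connections deserve some lesser, transitive trust
    second_degree = set()
    for member in direct:
        second_degree |= follows.get(member, set())
    second_degree -= {viewer} | direct
    for second in second_degree:
        if target in blocks.get(second, set()):
            score += depth_weight
    return score
```

Because the score is computed from the viewer’s own follow graph, two different viewers can legitimately arrive at different reputations for the same target, which is exactly the subjectivity described above.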

Calculating reputation scores based on attestations on social networks raises questions and concerns. Scoring can be gamed and abused. Bot nets can be deployed to skew scores by contributing many bogus attestations. This can be used to smear a hated enemy or to fraudulently raise someone’s public stature. We see such gaming in search engine optimization, in likes and dislikes of content on social networks, and in product rating and review sites. Personal and professional reputation is a high stakes affair.

Reputation is not Social Credit Score

Everyone is aware of China’s social credit system. It enables the government to track every citizen’s activities. Individuals are scored according to the government’s preferred behaviors. The government uses the scoring to punish citizens by barring low-scoring individuals from participating in society (i.e., economic transactions, travel, etc.). If we introduce reputation scoring, such abuse cannot be permitted.

A reputation system exists to help individuals freely associate with others or decline such associations. This system must prevent the government or other powerful entities from coercing others into or out of association with individuals targeted for punishment; such coercion would be tyrannical.

Is it possible for such a reputation system to be deployed prolifically without enabling tyrannical regimes or angry mobs to exploit it in violation of individual rights? Lives are destroyed in this way, as people are cast out of society. Ideally, the system’s protocols would make it impossible for such power to be abused. Reputation is tied to a person’s digital identity. If a person is shackled to a single identity, they are vulnerable to that identity being smeared and targeted for cancellation by adversaries.

There must be some recourse. If a person can change identities and recertify their verified credentials for their replacement identity, this could effectively renounce any references to the old identity. Attestations of distrust of the old identity would have no effect on the new identity. However, attestations of trust for the new identity would need to be earned and reestablished. This makes changing identity a costly migration, only worthwhile if shedding a highly disreputable identity to start fresh.
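The migration described above might look roughly like the following. The data shapes are hypothetical; the point is that re-issued verified credentials carry over to the new identity, while attestations, whether of trust or distrust, stay bound to the old one:

```python
def migrate_identity(old_id, new_id, credentials, attestations):
    """Replace `old_id` with `new_id`. `credentials` maps identity ->
    set of (issuer, claim) pairs; issuers must re-certify the claims for
    the new identity (modeled here as a simple copy). Attestations are
    deliberately NOT migrated: trust must be re-earned, and attestations
    of distrust on the old identity no longer apply."""
    reissued = set(credentials.get(old_id, set()))
    credentials[new_id] = reissued
    credentials.pop(old_id, None)
    # the new identity starts with whatever attestations it has: none
    return attestations.get(new_id, [])
```

This captures the cost of the migration: the verified credentials survive, but every earned attestation of trust is forfeited along with the distrust.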

It is also important for the identity protocols to be decentralized. The system should be open to many providers with no coordination, shared storage, or policy enforcement. There must not be a single source of truth tying human identity to digital identities; otherwise a person can be cancelled by tyranny or abuse of power.

We want to enable Alice to assess Bob based on Bob’s positive and negative attestations among the people connected between them. That allows Alice and Bob to associate or not based on informed consent. We do NOT want voluntary associations to be interfered with by tyrannical powers. Tyrants wish to exert control over collectives to target individuals regardless of consent. Freedom of association must be protected.

Revisiting personal assistant

In 2017, I wrote:

I find myself being more diligent taking notes and then distilling knowledge from my notes into articles for others. I used to be able to complete tasks serially, more or less. Occasionally, I noticed that follow-on tasks were hard to accomplish, because I couldn’t remember what I was thinking originally. So I tended to capture essential (summary and instructional) information into README, comments in code, and docs. I never really took detailed notes to log my step-by-step activities until recently. I started doing this logging, because I find myself multi-tasking and distracted so much by so many concurrent activities, I can hardly focus on getting anything done. The context-switching was so frustrating, and I didn’t have enough stack space in my head to keep anything straight. So now I’m employing some rudimentary techniques to apply computer-assistance by just taking notes. That is why my mind wanders to the topic of personal assistants in the workplace as being the next frontier. Personal Assistants

The Loretta Super Bowl ad by Google was a very compelling story about the benefits of memory assistance when the human-computer interface is frictionless enough to integrate into normal experience.

Google’s commercial was extremely compelling, except that Google is failing to apply that foresight to its products (Android, Keep): enable quickly clipping ANY content while using ANY app to save as notes, then apply machine intelligence and analytics to provide reminders and personal assistance based on computed insights.

The reason I was trying to recall the notes-related insight was actually because I’ve been finding myself buried in unrelated tasks and things to do coming from so many directions every day, it’s hard to keep track of what and when. The obvious solution is to use some kind of task reminder application. The problem with those is that I have to take time out of what I’m doing to go into such an app and create a task, which is distracting and time consuming in itself. What I really want is while I’m in any other application, whether I am reading email, reading Slack messages, attending a Zoom meeting, working a JIRA issue, responding to PagerDuty, writing on Confluence, reviewing code in GitLab, or literally anything, I want a one-click mechanism to “remember this”, so that an app will keep a note, maybe translate it into a task on my to-do list, remind me later if necessary, help me prioritize, and help me remember things. It is the one-click action in the context of the application that I’m already working in that is the key feature to interface to all these other personal assistance capabilities.

Android does support highlighting content and using Share to Keep Notes. However, this prompts for additional information, and it creates a new Keep document. This is close, but not sufficient. From a user experience perspective, it is fine to highlight and share, but the aim should be to “remember this” without prompting for anything more, so that focus is not taken away from the original context.

The app for keeping notes should be lightweight. Notes should be more like items on a list: timestamped and available for subsequent labelling, re-organization, searching, and use (integration with related apps, such as task management). It needs to be a clipboard with unlimited history, so that snippets of information can be remembered on the fly for later review, classification, and movement to other applications. Life is a stream of notable events, many of which we want to queue for later recall in a different context, as we are preoccupied with what we are busy with in the moment.
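The shape of such a lightweight note log might be as simple as the sketch below; the class and method names are mine, chosen for illustration:

```python
import time
from dataclasses import dataclass, field

@dataclass
class Note:
    text: str
    stamp: float                       # captured timestamp
    labels: set = field(default_factory=set)

class NoteLog:
    """Append-only clipboard with unlimited history (illustrative sketch)."""

    def __init__(self):
        self.notes = []

    def remember(self, text):
        """The one-click capture: no prompts, no extra context switch."""
        note = Note(text, time.time())
        self.notes.append(note)
        return note

    def label(self, note, tag):
        """Later re-organization, after the moment has passed."""
        note.labels.add(tag)

    def search(self, term):
        """Later review: match against text or labels."""
        return [n for n in self.notes if term in n.text or term in n.labels]
```

The capture path does nothing but append; everything else (labelling, searching, promotion to a to-do item) is deferred, which is what keeps the original context undisturbed.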

The idea of a personal assistant is that without the user needing to click at each moment to clip something at each interesting moment, an agent can shadow the user’s activity and automatically track what is attracting the user’s attention. It can keep a history of bookmarks and actions across applications based on the user’s activity. Later, this record will help to answer questions about what the user had done earlier and what information was encountered. An assistant would ensure that everything is committed to memory continually without interrupting the user’s normal flow of work. When the user does highlight and share, it indicates something of particular interest that becomes a priority.

Having a recorded history of work can then be analyzed to document procedures, teach others the same workflow, and automate procedures for repeatability. Repetitive tasks are a machine’s specialty. Reducing tedium is what mechanization is meant for.

Decentralization: Be Unstoppable and Ungovernable

The truckers’ freedom convoy in Canada has revealed how individuals are vulnerable to tyrannical (rights-violating) actions. Governments and corporations cooperated with authoritarian diktats across jurisdictional boundaries. Maajid Nawaz warns of totalitarian power over the populace using a social credit system imposed via central bank digital currency (CBDC) regimes being developed to eliminate cash. “Programmable” tokens will give the state power to control who may participate in financial transactions, with whom, when, for what, and how much. Such a regime would enable government tyranny to reign supreme over everyone and across everything within its reach. We need decentralization.

Centralized dictatorial power is countered by decentralization. Decentralization is especially effective when designed into technology to be immutable after the technology proliferates. The design principle is known as Code is law. The Proof of Work (PoW) consensus algorithm in Bitcoin is one such technology. CBDC is an attempt to prevent Bitcoin from becoming dominant. Criticizing PoW for using too much electricity is another enemy tactic.

National and supranational powers (above nation states) are working against decentralization in order to preserve their dominance. The World Economic Forum (WEF) is installing its people into national legislatures and administrations to enact policies similar to those of the Chinese Communist Party (CCP). They seek to concentrate globalized power for greater centralization of control.

We look toward Web3 and beyond to enable decentralization of digital services. As we explore decentralized applications, we must consider the intent behind distributed architectures for decentralization. What do we want from Web3?

Unstoppable Availability

Traditionally, we think about availability with regard to failure modes in the infrastructure, platform services, and application components. Ordinarily, we do not design for resiliency to the total loss of infrastructure and platform, because we don’t consider our suppliers to be potentially hostile actors. However, multinational corporations are partnering with foreign governments to impose extrajudicial punishments on individuals. This allows governments to extend their reach to those who reside outside their jurisdictions. Global integration and the unholy nexus of governments with corporations put individuals everywhere within the reach of unjust laws and authoritarian diktats. It is clear now that this is one of the greatest threats that must be mitigated.

Web3 technologies, such as blockchain, grew out of recognition that fiat is the enemy of the people. We must decentralize by becoming trustless and disintermediated. Eliminate single points of failure everywhere. Run portably on compute, storage, and networking that are distributed across competitive providers. Choose a diversity of providers in adversarial jurisdictions across the globe. Choose providers that would be uncooperative with government authorities. When totalitarianism comes, Bitcoin is the countermove. Decouple from centralized financial systems, including central banking and fiat currencies. Become unstoppable and ungovernable, resistant to totalitarianism.

To become unstoppable, users need to gain immunity from de-platforming and supply chain disruption. Users need to be able to keep custody of their own data. Users need to self-host the application logic that operates on their data. Users need to compose other users’ data for collaboration without going through intermediaries (service providers who can block or limit access).

To achieve resiliency, users need to be able to migrate their software components to alternative infrastructure and platform providers, while maintaining custody of their data across providers. At a minimum, this migration must be doable by performing a human procedure with some acceptable interruption of service. Ideally, the possible deployment topologies would have been pre-configured to fail-over or switch-over automatically as needed with minimal service disruption. Orchestrating the name resolution, deployment, and configuration of services across multiple heterogeneous (competitive) clouds is a key ingredient.
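As a minimal sketch of the automatic switch-over idea, the following Python checks providers in priority order and selects the first healthy one. The endpoints, names, and health-check convention are all hypothetical; a real orchestrator would also handle DNS updates, data replication, and state reconciliation.

```python
import urllib.request
from typing import Optional

# Hypothetical provider endpoints, in priority order. The URLs and the
# "/healthz" convention are illustrative, not from any real deployment.
PROVIDERS = [
    "https://primary.example-cloud-a.net/healthz",
    "https://standby.example-cloud-b.net/healthz",
]

def is_healthy(url: str, timeout: float = 2.0) -> bool:
    """Return True if the endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # DNS failure, refused connection, timeout, etc.
        return False

def pick_active(providers: list) -> Optional[str]:
    """Select the first healthy provider; None means total outage,
    which should trigger the documented manual migration procedure."""
    for url in providers:
        if is_healthy(url):
            return url
    return None
```

The point of the sketch is that the fail-over decision lives with the user, not with any single provider: adding a provider in another jurisdiction is just another entry in the list.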

Custody of data means that the owner must maintain administrative control over its storage and access. The owner must have the option of keeping a copy of it on physical hardware that the owner controls. Self-hosting means that the owner must maintain administrative control over the resources and access for serving the application functions to its owner. That hosting must be unencumbered and technically practical to migrate to alternative resources (computing, financial, and human).

If Venezuela can be blocked from “some Ethereum services”, that is a huge red flag. Service providers should be free to block undesirable users. But if the protocol and platform enable authorities to block users from hosting and accessing their own services, then the technology is worthless for decentralization. Decentralization must enable users to take their business elsewhere.

Ungovernable Privacy

Privacy is a conundrum. Users need a way to identify and authenticate themselves to exert ownership over their data and resources. Simultaneously, a user may have good reason to keep their identity hidden, presenting only a pseudonym or remaining cloaked in anonymity in public, where appropriate. Meanwhile, governments are becoming increasingly overbearing in their imposition of “Know Your Customer” (KYC) regulations on businesses, ostensibly to combat money laundering. This is at odds with the people’s right to privacy and to being free from unreasonable searches and surveillance. Moreover, recruiting private citizens to spy on and enforce policy over others is commandeering, which is also problematic.

State actors have opposed strong encryption. They have sought to undermine cryptography by demanding backdoors for government access. Such misguided, technologically ignorant, and morally bankrupt motivations disqualify them from being taken seriously when it comes to designing our future platforms and the policies that should govern them.

Rights are natural (a.k.a. “God-given” or inalienable). They (including privacy) are not subject to anyone’s opinion regardless of their authority or stature. Cryptographic technology should disregard any influence such authorities want to exert. We must design for maximum protection of confidentiality, integrity, and availability. Do not comply. Become ungovernable.

Composability

While the capabilities and qualities of the platform are important, we should also reconsider the paradigm for how we interact with applications. Web2 brought us social applications for human networking (messaging, connecting), media (news, video, music, podcasts), and knowledge (wikis). With anything social, group dynamics invariably also expose us to disharmony. Web2 concentrated power into a few Big Tech platforms; the acronym FAANG was coined to represent Facebook (now Meta), Amazon, Apple, Netflix, and Google (now Alphabet).

With centralized control comes disagreement over how such power should be wielded as well as corruption and abuse of power. It also creates a system that is vulnerable to indirect aggression, where state actors can interfere or collude with private actors to side-step Constitutional protections that prohibit governments from certain behaviors.

David Sacks speaks with Bari Weiss about Big Tech’s assault on free speech and the hazard of financial technologies being used to deny service to individuals, as was done to the political opponents of Justin Trudeau in Canada in response to the freedom convoy protests.

Our lesson, after enduring years of rising tension in the social arena and culminating in outright tyranny, is that centralized control must disappear. Social interactions and all forms of transactions must be disintermediated (Big Tech must be removed as the middlemen). The article Mozilla unveils vision for web evolution shows Mozilla’s commitment to an improved experience from a browser perspective. However, we also need a broader vision from an application (hosted services) perspective.

The intent behind my thoughts on Future Distributed Applications and Browser based capabilities is composability. The article Ceramic’s Web3 Composability Resurrects Web 2.0 Mashups talks about how Web2 composability of components enabled mashups, and it talks about Web3 enabling composability of data. The focus is shifting from the ease of developing applications from reusable components to satisfying the growing needs of end users.

Composability is how users with custody of their own data can collaborate with one another in a peer-to-peer manner to become social, replacing centralized services with disintermediated transactions among self-hosted services. The next best alternative to self-hosting is enabling users to choose among an unlimited supply of community-led hosted services that can be shared by like-minded, mutually supportive users. The key is to disintermediate users from controlling entities run by people who hate them.

State of Technology

The article My First Web3 Webpage is a good introduction to Web3 technologies. This example illustrates some very basic elements, including name resolution, content storage and distribution, and the use of cryptocurrency to pay for resources. It is also revealing of how rudimentary this stuff is relative to the maturity of today’s Web apps. Web3 and distributed apps (dApps) are extremely green. Here is a more complicated example. Everyone is struggling to understand what Web3 is. Even search is something that needs to be rethought.

The article Why decentralization isn’t the ultimate goal of Web3 should give us pause. Moxie Marlinspike, Jack Dorsey, Marc Andreessen, and other industry veterans are warning us that the current crop of Web3 technologies is fraudulent and conflicted. Vitalik Buterin’s own views concede that the technology may not be going in the right direction. Ethereum’s deficiencies are becoming evident. This demands great caution and high suspicion.

Here is a great analysis of the critiques against today’s Web3 technologies. It is very clarifying. One important point is the ‘mountain man fantasy’ of self-hosting; no one wants to run their own servers. The cost and burden of hosting and operating services today is certainly prohibitive.

Even if the mountain man fantasy is an unrealistic expectation for the vast majority, so long as the threat of deplatforming and unpersoning is real, people will have a critical need for options to be available. When Big Tech censors and bans, when the mob mobilizes to ruin businesses and careers, when tyrannical governments freeze bank accounts and confiscate funds, it is essential for those targeted to have a safe haven that is unassailable. Someone living in the comfort of normal life doesn’t need a cabin in the woods, off-grid power, and a buried arsenal. But when you need to do it, living as a mountain man won’t be fantastic. Prepping for that fall back is what decentralization makes possible.

In the long term, self-hosting should be as easy, effortless, and affordable as installing desktop apps and mobile apps. We definitely need to innovate to make running our apps as cloud services cheap, one-click, and autonomous, before decentralization with self-hosting can become ubiquitous. Until then, our short-term goal should be to at least make decentralization practical, even if it is only accessible initially to highly motivated, technologically savvy early adopters. We need pioneers to blaze the trail in any new endeavor.

As I dive deeper into Web3, it is becoming clear that the technology choices lean toward the Ethereum blockchain to the exclusion of all else. Is Ethereum really the best blockchain for forming a DAO? In Ethereum, application logic is expected to be written as smart contracts. Look at the programming languages available for smart contracts. Even without examining any of these languages, my immediate reaction is revulsion. Who would want to abandon popular general-purpose programming languages and their enormous ecosystems? GTFO.

We need a general purpose Web architecture for dApps that are not confined to a niche. I imagine container images served by IPFS as a registry, and having a next-gen Kubernetes-like platform to orchestrate container execution across multicloud infrastructures and consuming other decentralized platform services (storage, load balancing, access control, auto-scaling, etc.). If the technology doesn’t provide a natural evolution for existing applications and libraries of software capabilities, there isn’t a path for broad adoption.
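To illustrate the principle that would make an IPFS-style registry trustless, here is a toy sketch of content addressing in Python. This is not the real IPFS or OCI registry API; actual CIDs use multihash and multibase encodings, so a bare SHA-256 digest stands in purely for illustration.

```python
import hashlib

def content_address(blob: bytes) -> str:
    """Derive a content address for an artifact, in the spirit of IPFS CIDs
    or OCI image digests. A raw SHA-256 hex digest is a simplification."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def verify(blob: bytes, expected: str) -> bool:
    """Any untrusted node can serve the blob; the address itself proves
    integrity, so no trusted registry operator is required."""
    return content_address(blob) == expected

layer = b"example image layer"
addr = content_address(layer)
```

Because the address is derived from the bytes, a peer-to-peer network of mutually distrusting hosts can serve container layers without any of them being able to tamper with the content, which is the property a decentralized registry needs.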

We are at the start of a new journey to redesign the Web. There is so much more to understand and invent before we have something usable for developing real-world distributed apps on a decentralized platform. The technology to do so may not exist yet, despite many claims to the contrary. This will certainly be a marathon rather than a sprint.

What do we want from Web3?

In the journey to developing Web3, we must understand what is motivating decentralization. We are attempting to reinvent the Web to address deficiencies. These deficiencies put individuals in jeopardy of censorship, cancellation, and political persecution. They are vulnerable at the hands of Big Tech platforms, state actors, and adversarial groups intent on harm. Historical ideals to preserve the “free and open Internet” have been abandoned. If a “free and open Internet” is to be preserved, it cannot rely on the honor and voluntary cooperation of humans. Technologies must become permissionless, trustless, and unassailable, so that dishonorable and uncooperative humans can coexist.

  1. Protect a user’s right to free speech by having the user take custody of their own data. Ensure that the user’s data cannot be made inaccessible.
  2. Protect a user’s right to free association. Ensure that the data in the user’s custody can be published to whatever audience the owner wishes to reach.
  3. Protect an audience’s right to free association. Ensure access to data published by others. Ensure that applications can compose that data for the intended use, including for social collaboration.
  4. Protect a user’s access to platform capabilities for providing the application services that process that data.
  5. Protect a user’s ability to transact business with others without being subject to third party intermediaries cancelling them.
  6. Protect a user’s privacy. Ensure the user can share their data only with others who are granted authorization. In some circumstances, a user may want to remain anonymous, so that their real-world identity cannot be exposed for doxxing. Hostile detractors often try to cancel people by targeting their business, sources of income, reputation, relationships, sensitive information, even their personal safety.

Let’s keep these requirements in mind as we explore technologies that can help realize Web3. Restore the ideal of a free and open Internet in the face of large factions of society who are hostile to (or wobbly on) freedom and openness.

Browser based composition

One approach to better empowering users and upstart services to avoid Big Tech censorship, suppression, and control is to build capabilities into the browser for mashing up and mixing in complementary services. This would provide a client-side (browser based) approach for third party complementary services to extend incumbent services without needing the incumbent’s authorization or cooperation. This would be one element of building Future Distributed Applications. Enhance the browser to support composition of services and data.

Using this approach, social media sites (Facebook, Instagram, Twitter, YouTube, Reddit, etc.) that enforce authoritarian content moderation policies can be complemented by alternative services. Content from prohibited users can be composed into the Big Tech content. Users could see the conversation with content merged from every desired source beyond Big Tech control. This approach for distributing comments that form a single conversation would be applicable to many services.
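As a sketch of how a browser-side composer might merge a platform’s comments with comments hosted elsewhere into one chronological conversation (all names, sources, and field layouts below are invented):

```python
from dataclasses import dataclass
import heapq

@dataclass(frozen=True)
class Comment:
    timestamp: int   # e.g., epoch seconds
    author: str
    body: str
    source: str      # which service the comment came from

def merge_conversation(*streams) -> list:
    """Merge comment streams from multiple independent services into one
    chronological conversation, entirely client-side and beyond any single
    platform's control. Assumes each stream is already sorted by timestamp."""
    return list(heapq.merge(*streams, key=lambda c: c.timestamp))

platform = [Comment(100, "alice", "original post", "bigtech")]
alt = [Comment(110, "bob", "reply hosted elsewhere", "alt-service")]
conversation = merge_conversation(platform, alt)
```

The design choice worth noting: because the merge happens in the browser, the incumbent service needs neither to authorize nor even to know about the complementary sources.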

  • Comment on content where a user’s comments would be suppressed.
  • Annotate or review an article where commenting is not enabled. Allow an annotation to link precisely to a specific range of text, so it can be presented inline.
  • Add links to relevant content not referenced by the original.
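The precise-range annotation in the second bullet can be modeled loosely on the W3C Web Annotation notion of a text quote selector. The sketch below is a simplified rendition, not a conformant implementation; the quoted text and URL are invented.

```python
# A minimal annotation targeting an exact text range. Modeled loosely on the
# W3C Web Annotation data model's TextQuoteSelector; values are illustrative.
annotation = {
    "type": "Annotation",
    "body": {"type": "TextualBody", "value": "Disputed; see primary source."},
    "target": {
        "source": "https://example.com/article",
        "selector": {
            "type": "TextQuoteSelector",
            "exact": "the study proves",
            "prefix": "critics say ",
            "suffix": " the opposite",
        },
    },
}

def locate(page_text: str, sel: dict) -> int:
    """Find where the annotated range begins, using surrounding context to
    disambiguate repeated phrases. Returns -1 if the page has changed."""
    needle = sel["prefix"] + sel["exact"] + sel["suffix"]
    i = page_text.find(needle)
    return -1 if i == -1 else i + len(sel["prefix"])

page = "As critics say the study proves the opposite, caution is warranted."
start = locate(page, annotation["target"]["selector"])
```

Anchoring by quoted text plus context, rather than by byte offset, keeps the annotation attached even when the page is re-rendered, and fails safely (returns -1) when the underlying text changes.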

This paradigm would enable end users to control how content is consumed, so that Web sites cannot censor or bias what information is presented about controversial topics.

Applying browser add-ons that mix-in complementary services would also enable end users to take information and process it in personalized ways, such as for fact-checking, reputation, rating, gaining insights through analytics, and discovering related (or contrarian) information. Complementary content could be presented by injecting HTML, or by rendering additional layers, frames, tabs, or windows, as appropriate.

Browser add-ons are currently supported only on desktop browsers, not on mobile devices. Mobile support would be needed for this paradigm to become broadly useful.

Microservices Life Cycle

There is friction between a microservices architecture and life cycle management goals for application releases. One significant motivation for microservices is independent life cycle management, so that capabilities with well-defined boundaries can be developed and operated by self-contained, self-directed teams. This allows for more efficient workflows, so that a fast-moving code base is not held back by other slower-moving code bases.

Typically, an application (a collection of services that form an integrated whole and are offered together as a product to users) is rolled out with major and minor releases on some cadence. Major releases include large feature enhancements and some degree of compatibility breakage, so these may happen on an annual or semi-annual basis. Minor releases or patches may happen quarterly, monthly, or even more frequently. With microservices, the expectation is that each service may release on its own schedule without coordination with all others even within the scope of an integrated application. A rapid release cadence is conducive to responsiveness for bug fixes and security fixes, which protect against exposing vulnerabilities to exploits.

One advantage of applications on the cloud is that a single release of software can be rolled out to all users in short order. This removes the substantial burden on developers to maintain multiple code branches, as they had to do in the past for on-premises deployments. Unfortunately, the burden is not entirely lifted, because as software under development graduates toward production use, various pre-release versions must be made available for pre-production staging, testing, and quality assurance.

Development is already complex: feature work toward a future release must proceed in parallel with bug fixes for the release already in production (assuming all users are on the latest). These parallel streams of development will be in various phases of pre-production testing toward being released to production, and in various phases of integration testing against a longer-term release schedule. Varying levels of bug severity mean that the urgency of fixes varies. For example, emergency fixes for exploitable security vulnerabilities must be released as a patch to production immediately, whereas fixes for functional defects may wait for the next release on the regular cadence. Cherry-picking and merging fixes across code branches is tedium that every developer dreads. Independent life cycle management of source code organized according to microservices is seen as a way to decouple coordination across development teams, which are organized according to microservice boundaries.

Independent life cycle management of services relies on both backward compatibility and forward compatibility. Integration between services needs to be tolerant of mismatched versions to be resilient to independent release timing, including upgrades, rollbacks due to failed upgrades, and rerunning an upgrade after a prior failure. Backward compatibility enables a new version of a service to interoperate with an older client. Forward compatibility enables the current version of a service, soon to be upgraded, to interoperate with a newer client, especially during the span of time (brief or lengthy) in which one may be upgraded before the other. In my article about system integration, I explained the numerous problems that make compatibility difficult to achieve. Verification of API compatibility through contract testing is the best practice, but test coverage is seldom perfect. Moreover, no contract language specifies everything that impacts compatibility. Mocking will never be representative of non-functional qualities. This is one of many reasons why confidence in verification cannot be achieved without a fully integrated system. This is how the desire for independent life cycles for microservices is thwarted. The struggle is more real than most people realize. As software professionals, we enter into every new project with fresh optimism that this time we will do things properly to achieve utopia (well, at least an independent life cycle would be a small victory), and at every turn we are confronted by this one insurmountable obstacle.
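A common way to get this tolerance in practice is the tolerant-reader pattern: default what is missing, ignore what is unknown. A minimal Python sketch, with invented field names:

```python
from dataclasses import dataclass

# Tolerant-reader sketch: the client keeps working whether the peer service
# is older (known field missing -> use a default) or newer (unknown fields
# present -> silently ignored). Field names are illustrative only.

@dataclass
class OrderView:
    order_id: str
    status: str = "UNKNOWN"   # default covers older peers that omit it

    @classmethod
    def from_payload(cls, payload: dict) -> "OrderView":
        known = {f: payload[f] for f in ("order_id", "status") if f in payload}
        return cls(**known)   # drops fields added by newer versions

old_peer = OrderView.from_payload({"order_id": "42"})
new_peer = OrderView.from_payload(
    {"order_id": "42", "status": "OPEN", "priority": "HIGH"}
)
```

The same discipline applies on the wire format itself (e.g., never reusing or repurposing a field), which is exactly the part that contract tests struggle to cover exhaustively.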

Application features involve workflows that span two or more collaborating microservices. For example, a design-time component provides the product modeling for a runtime component for selling and ordering those products. Selling and ordering cannot function without the product model, so the collaboration between those services must integrate properly for features to work. Most features rely on collaborations involving several services. Often, the work resulting from one service is needed to drive the processing in other services, as was the case in the selling and ordering example above. This pattern is repeated broadly in most applications. Once all collaborations are accounted for across the supported use cases, the integrations across services would naturally cover every service. The desire for an independent life cycle for each service that composes the application faces the interoperability challenges across this entire scope. There goes our independence.

Given the need to certify a snapshot of all services that compose an application to work properly together, we need a mechanism to correlate the versioning of source code to versions of binaries (container images) for deployment. Source code can be tagged with a release. This includes Helm charts, Kubernetes YAML files, Ansible playbooks, and whatever other artifacts support the control plane and operations pipelines for the application. A snapshot must be taken of the Helm chart versions and their corresponding container image versions, so that the complete deployment can be reproduced.
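Such a snapshot might look like the following sketch: a manifest pinning each service’s chart version and image digest, itself hashed so the release can be identified and reproduced. All names, versions, and digests below are illustrative.

```python
import hashlib
import json

# Sketch of an application release manifest. Service names, versions, and
# digests are invented for illustration.
release = {
    "application": "shop",
    "release": "2024.1.3",
    "services": {
        "catalog":  {"chart": "catalog-1.8.2",  "image": "catalog@sha256:ab12cd34"},
        "ordering": {"chart": "ordering-2.3.0", "image": "ordering@sha256:ef56ab78"},
    },
    "source_tag": "release/2024.1.3",   # tag applied across all source repos
}

# A canonical serialization gives the snapshot itself a stable identity,
# so a bug report can cite exactly which combination of services was running.
manifest = json.dumps(release, sort_keys=True).encode()
manifest_digest = hashlib.sha256(manifest).hexdigest()
```

Pinning digests rather than mutable tags is the detail that makes the deployment reproducible: a tag can be repointed, a digest cannot.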

This identifies an application release as a set of releases of services deployed together. This information aids in troubleshooting, bug reporting, and reproducing a build of those container images and artifacts from source code, each at the same version as what was released for deployment. This is software release management 101, nothing out of the ordinary. What is noteworthy is our inability to extricate ourselves from the monolithic approach to life cycle management despite adopting a modern microservices architecture.

Worse still, if our application is integrated into a suite of applications, as enterprise applications tend to be, the system integration nightmare broadens the scope to the entire suite. The desire for an independent life cycle even for each application that composes the suite faces interoperability challenges across this entire scope. What a debacle this has turned out to be. The system integration nightmare is the challenge that modern software engineering continues to fail at solving across the industry.

DevOps Transparency and Coordination

Coordinating human activities across organizations and disciplines is fundamental to DevOps. This requires documented procedures to handle any situation, tools that enable participants to collaborate effectively, and a shared understanding of what information needs to be captured and communicated, especially when the actors are separated by space and time.

A DevOps procedure is initiated for a reason. These situations include scheduled maintenance, a response to an alert (detection of a condition that deserves attention), or a response to a request for support. In each case, there should be a ‘ticket’ (a record in a tool) that notifies a responsible DevOps engineer to work the issue. Ideally, the relevant procedure that applies to a ticket should be obvious — ideally, referenced explicitly.

A response to an alert or a support request (usually a complaint about a service malfunction) typically begins by confirming the reported condition. This requires gathering information about the context and collecting diagnostics to aid in troubleshooting. Ideally, the ticket clearly identifies the problem; otherwise, further interactions are needed to gain that clarity. Humans routinely assume the recipient of a request has all the necessary context to understand what is being asked of them and why. To mitigate this inefficiency, tooling and procedural documentation are usually provided to guide how tickets should be written, so that fewer rounds of questions and answers are needed to satisfy the request.

The engineer who works a ticket should capture the service configuration, relevant logs to determine the failure mode, and any other data for the context associated with the problem for analysis toward an operational fix or for submitting a bug, if applicable.

A designated channel should be used for engineers to collaborate. Each engineer must record in real time the actions taken to troubleshoot, analyze, and correct the problem. This enables multiple individuals in different roles across organizations to coordinate their work. Good communication keeps everyone involved informed and prevents actions that interfere with one another. Moreover, an accurate record can be reviewed later in a post-mortem.

Severe incidents, such as those that cause a service outage, demand a root cause analysis toward preventative actions, such as process improvements, procedure documentation, operational tooling, or developing a permanent fix (for a software bug). This depends on the ticket capturing the necessary information to trace how the problem originated, such as the transaction processing in progress, events or metrics indicating resource utilization out of bounds (e.g., out of memory, insufficient cpu, I/O bandwidth limited, storage exhaustion) or performance impairment (e.g., lock contention, queue length, latency), or anything else that appears out of the ordinary.
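To make the record-keeping concrete, here is a sketch of the minimal shape such a ticket record might take, capturing who did what and when. The fields and values are illustrative, not taken from any real ticketing tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """Minimal incident ticket: enough structure for a later post-mortem
    to trace what happened and in what order. Field names are illustrative."""
    ticket_id: str
    summary: str
    severity: str                      # e.g., "SEV1" for a full outage
    actions: list = field(default_factory=list)

    def log_action(self, engineer: str, action: str) -> None:
        """Record each troubleshooting step in real time: who, what, when."""
        self.actions.append({
            "at": datetime.now(timezone.utc).isoformat(),
            "engineer": engineer,
            "action": action,
        })

incident = IncidentRecord("INC-1", "checkout latency spike", "SEV2")
incident.log_action("alice", "captured service config and recent logs")
incident.log_action("bob", "rolled back the most recent config change")
```

An append-only action log like this is what later enables the audits described below: correlating a new problem with the configuration changes and patches that preceded it.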

One of the biggest impediments to verifying that a problem is resolved is that DevOps normally does not have access to the service features being exercised by end users. When a functional problem is reported by users, it may not be possible for DevOps to confirm that the problem is fixed from the user’s perspective. Communication with users may need to be mediated by customer support staff. The information on the ticket would need to facilitate this interaction.

Accurate record-keeping also enables later audits in case there are subsequent problems. The record of actions taken can be used to analyze whether these actions (such as configuration changes or software patches) are contributing to other problems. Troubleshooting procedures should include data mining of past incidents (have we seen this problem before? how did we fix it previously?) and auditing what procedures may have impacted the service under investigation (what could have broken this?).

The above guidance can be summarized as follows.

  • Say what you do
  • Do what you say
  • Write it down