social applications – in the workplace

Social applications like Facebook and Twitter have flourished in our personal lives. But their usefulness for work is limited to advertising and other marketing activities. Engagement is through sharing of status updates, links, photos, likes, and comments. This is a decade-old approach that has not advanced much.

In Mark Zuckerberg’s interview for Startup School, he shows that he understands Facebook to be a social platform for building social apps. However, it is my opinion that none of the players in the social networking space has a good vision of the future. Facebook and Twitter treat social interactions as ends in themselves. That is why they present information in a timeline, and they seek out trending topics. Information is treated like news that goes stale after it is read. Engagement is a vehicle for targeted marketing.

Google has tried to compete with Facebook, but they can’t seem to find a formula for success. The article Why Google+ failed, according to Google insiders outlines their failure to achieve mass adoption and engagement. Providing an alternative to Facebook without a discernible improvement is not competitive, because users have no good reason to migrate away from an established network of friends.

Facebook “friend” relationships are more likely to be friends, family, and casual acquaintances. Facebook “follow” and “like” relationships are more likely to be public figures, celebrities, and business-to-consumer connections. Facebook is not the platform for professional relationships, work-related interactions, and business associations. LinkedIn is used for professional relationships, with recruiting as its primary function. We should recognize that none of these platforms provides an application platform for actually doing work using social tools. Google failed to recognize this opportunity as it began to integrate G+ with mail, storage, and other services. Providing a wall for posting information and comments is an extremely limited form of social interaction. It seems that no one has bothered to analyze how workers engage with each other to perform their jobs, and so to identify how social tools could make those interactions more productive.

We do see companies like Atlassian developing tools like JIRA and Confluence for assisting teams to work together. These tools recognize how social interactions are embedded into the information and processes that surround business functions. We need this kind of innovation applied across the board throughout the tools that we use in the enterprise.

Productive work relies on effective communication, coordination, and collaboration. These are social functions. Social networking is already mature in project management, wikis (crowdsourcing information), and discussion forums. But these are often peripheral to the tools that many workers use to perform their primary job functions. We need to examine the social interactions that surround these tools and redevelop the tools to improve those interactions.

Let’s explore where social interactions are poor in our work environments today.

As our businesses expand across the globe, our teams are composed of workers who reside in different places and time zones. Remote interactions between non-collocated teams can be extremely challenging and inefficient compared to the regular face-to-face interactions of collocated workers with tools like whiteboards and pens. There is a huge opportunity for tablet applications to better support remote workers.

As businesses scale, we may discover that the traditional organizational structures are too rigid to support the ever-accelerating pace of agility that we demand. Perhaps social tools can facilitate innovations in how workers organize themselves. As highly skilled and experienced workers mature, they become more capable of taking the initiative, making good decisions independently, and behaving in a self-motivated manner. Daniel Pink has identified that autonomy, mastery, and purpose are the intrinsic motivators that lead to happy and productive employees. Perhaps with social tooling, it is possible for organizations to evolve to take advantage of spontaneous order among workers instead of relying mostly on top-down management practices for assigning work.

These are two ways in which social networking may apply to enterprises that are not well supported today. All we have to do is examine the pain points in our work environments to identify innovations that may be possible. It is quite surprising to me that we are not already seeing social tools revolutionize the workplace, especially in the technology sector, where start-ups do not have an entrenched culture and management style.

Reliable Messaging with REST

Marc de Graauw’s article Nobody Needs Reliable Messaging remains as relevant today as when it was first published in 2010. It echoes the principles outlined in Scalable, Reliable, and Secure RESTful services from 2007.

It basically says that you don’t need REST to support WS-ReliableMessaging delivery requirements, because reliable delivery can be accomplished by the business logic through retries, so long as the REST methods are idempotent (the same request will produce the same result). Let’s examine the implications in more detail.

First, we must design the REST methods to be idempotent. This is no small feat; it is a huge topic that deserves its own separate examination. But let’s put it aside for now, and assume that we have designed our REST web services to support idempotence.
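As a concrete illustration of the idempotence assumption, compare a PUT keyed by a client-generated ID with a naive POST. This is a minimal in-memory sketch; the class and method names are invented for illustration, and no real framework is implied.

```python
import uuid

# Minimal in-memory "server" illustrating idempotent PUT semantics.
# All names here are illustrative, not part of any real framework.
class OrderStore:
    def __init__(self):
        self.orders = {}

    def put_order(self, order_id, body):
        """PUT /orders/{order_id}: create-or-replace keyed by a
        client-generated ID. Repeating the same request yields the
        same state, so a retry after a lost response is harmless."""
        self.orders[order_id] = body
        return self.orders[order_id]

    def post_order(self, body):
        """POST /orders: the server assigns the ID, so a blind retry
        after a lost response creates a duplicate order."""
        order_id = str(uuid.uuid4())
        self.orders[order_id] = body
        return order_id

store = OrderStore()
oid = str(uuid.uuid4())          # client generates the ID up front
store.put_order(oid, {"sku": "A1", "qty": 2})
store.put_order(oid, {"sku": "A1", "qty": 2})   # retry: no duplicate
assert len(store.orders) == 1

store.post_order({"sku": "A1", "qty": 2})
store.post_order({"sku": "A1", "qty": 2})       # retry: duplicate!
assert len(store.orders) == 3
```

Designing the client, rather than the server, to choose the identifier is what makes the retry safe.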

If we are developing components that call REST web services for process automation, the above principle says that the caller is responsible for retrying on failure.

The caller must be able to distinguish a failure to deliver the request from a failure by the server to perform the requested method. The former should be retried, expecting that the failure is temporary. The latter is permanent.

The caller must be able to implement retry in an efficient manner. If the request is retried immediately in a tight loop, it is likely to continue to fail for the same reason. Network connectivity issues sometimes take a few minutes to be resolved. However, if the reason for failure is because the server is overloaded, having all clients retry in a tight loop will exacerbate the problem by slamming the server with a flood of requests, when it is least able to process them. It would be helpful if clients would behave better by backing off for some time and retrying after a delay. Relying on clients to behave nicely on their honor is sure to fail, if their retry logic is coded ad hoc without following a standard convention.

The caller must be able to survive crashes and restarts, so that an automated task can be relied upon to reach a terminal state (success or failure) after starting. Therefore, message delivery must be backed by a persistent store. Delivery must be handled asynchronously so that it can be retried across restarts (including service migration to replacement hardware after a hardware failure), and so that the caller is not blocked waiting.

The caller must be able to detect when too many retry attempts have failed, so that it does not get stuck waiting forever for the request to be delivered. Temporary problems that take too long to be resolved need to be escalated for intervention. These requests should be diverted for special handling, and the caller should continue with other work, until someone can troubleshoot the problem. Poison message handling is essential so that retrying does not result in an infinite loop that would gum up the works.
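The delivery requirements above (retry on temporary failure, exponential backoff with jitter, a retry limit, and diverting poison messages for escalation) can be sketched in a few lines. This is an illustrative sketch, not a production framework; the function names and the use of ConnectionError to signal a delivery failure are assumptions.

```python
import random
import time

def deliver_with_retry(send, request, max_attempts=5, base_delay=0.5,
                       dead_letter=None, sleep=time.sleep):
    """Retry send(request) with exponential backoff and jitter.
    After max_attempts failures the request is diverted to a
    dead-letter handler for troubleshooting, so a poison message
    cannot loop forever. Illustrative sketch only."""
    for attempt in range(max_attempts):
        try:
            return send(request)
        except ConnectionError:
            # Back off exponentially, with jitter so that many clients
            # do not retry in lockstep against an overloaded server.
            delay = base_delay * (2 ** attempt) * (0.5 + random.random())
            sleep(delay)
    if dead_letter is not None:
        dead_letter(request)     # escalate; the caller moves on
    return None

# Usage: a fake transport that fails twice, then succeeds.
attempts = {"n": 0}
def flaky_send(req):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("temporary network failure")
    return "ok"

result = deliver_with_retry(flaky_send, {"id": 1}, sleep=lambda s: None)
assert result == "ok" and attempts["n"] == 3
```

In a real system the request would also be written to a persistent store before the first attempt, so that retries survive crashes and restarts as described above.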

POST methods are not idempotent, so retry must be handled very carefully to account for side effects. Even if the request is guaranteed to be delivered, and it is processed properly (exactly once) by the server, the caller must be able to determine reliably whether the method succeeded, because the reply can be lost. One approach is to deliver the reply reliably from the server back to the caller; all of the above reliable delivery qualities apply again. The interactions that enable this round-trip message exchange certainly look very foreign to a simple synchronous HTTP interaction: either the caller would poll for the reply, or a callback mechanism would be needed. Another approach is to enable the caller to confirm that the original request was processed. With either approach, the reliable execution requirement alters the methods of the REST web services. To achieve better quality of service in the transport, the definition of the methods needs to be radically redesigned. (If you are having a John McEnroe “you cannot be serious” moment right about now, it is perfectly understandable.)
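One common way to realize the second approach is for the client to attach a unique request ID and for the server to remember the reply for each ID it has already processed (often called the idempotency-key pattern). A retried POST with the same ID replays the stored reply instead of re-executing the side effect. The class and field names below are hypothetical.

```python
# Sketch of the "confirm the original request was processed" approach:
# the server deduplicates POSTs by a client-supplied request ID and
# replays the stored reply on a duplicate. Names are illustrative.
class Server:
    def __init__(self):
        self.replies = {}    # request_id -> stored reply
        self.executed = 0    # counts real side effects

    def post(self, request_id, body):
        if request_id in self.replies:       # duplicate delivery
            return self.replies[request_id]  # replay stored reply
        self.executed += 1                   # side effect runs once
        reply = {"status": "created", "total": body["qty"] * 10}
        self.replies[request_id] = reply
        return reply

server = Server()
first = server.post("req-42", {"qty": 3})
# The reply to the first call is "lost"; the client simply retries.
second = server.post("req-42", {"qty": 3})
assert first == second
assert server.executed == 1   # the side effect happened exactly once
```

Note that this effectively makes POST idempotent from the caller's point of view, which is exactly the alteration to the method definitions described above.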

Taking these requirements into consideration, it is clearly not true that “nobody needs reliable messaging”. Enterprise applications with automated processes that perform mission-critical tasks need the ability to perform those tasks reliably. If reliable message delivery is not handled at the REST layer, the responsibility for retry falls to the message sender. We still need reliable messaging; we must implement the requirement ourselves above REST, and this becomes troublesome without a standard framework that behaves nicely. If we accept that REST can provide only idempotence toward this goal, we must implement a standard framework to handle delivery failures, retry with exponential backoff, and divert poison messages for escalation. That is to say, we need a reliable messaging framework on top of REST.

[Note that when we speak of a “client” above, we are not talking about a user sitting in front of a Web browser. We are talking about one mission-critical enterprise application communicating with another in a choreography to accomplish some business transaction. An example of a choreography is the interplay between a buyer and a seller through the systems for commerce, quote, procurement, and order fulfillment.]

OLTP database requirements

Here is what I want from a database in support of enterprise applications for online transaction processing (OLTP).

  1. ACID transactions – Enterprise CRM, ERP, and HCM applications manage data that is mission critical. People’s jobs, livelihoods, and businesses rely on this data to be correct. Real money is on the line.
  2. Document oriented – A JSON or XML representation should be the canonical way to think of objects stored in the database.
  3. Schema aware – A document should conform to a schema (JSON Schema or XML Schema). Information has a structure and meaning, and it should have a formal definition.
  4. Schema versioned – A document schema may evolve in a controlled manner. Software is life cycle managed, and its data needs to evolve with it for compatibility, upgrades, and migration.
  5. Relational – A subset of a document schema may be modeled as relational tables with foreign keys and indexes to support SQL queries, which can be optimized for high performance.

The fundamental shift is from a relational to a document paradigm as the primary abstraction. Relational structures continue to play an adjunct role to improve query performance for those parts of the document schema that are heavily involved in query criteria (WHERE clauses). The document paradigm enables the vast majority of data to be stored and retrieved without having to rigidly conform to a relational schema, which cannot evolve as fluidly. That is not to say that data stored outside of relational tables is less important or less meaningful. To the contrary, some of the non-relational data may be the most critical to the business. This approach simply recognizes that information not directly involved in query criteria can be treated differently, to take advantage of greater flexibility in schema evolution and life cycle management.
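This paradigm can be sketched concretely. The following uses SQLite purely for illustration (any SQL database would do); the table, columns, and sample documents are invented for this example. The full JSON document is the canonical record, while a few fields that appear in WHERE clauses are copied into indexed adjunct columns.

```python
import json
import sqlite3

# Document-primary, relational-adjunct storage: the JSON document is
# canonical; "customer" and "total" are adjunct columns maintained
# only to support indexed queries. Names are illustrative.
db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE orders (
        id       TEXT PRIMARY KEY,
        customer TEXT,            -- adjunct column for queries
        total    REAL,            -- adjunct column for queries
        doc      TEXT             -- canonical JSON document
    )""")
db.execute("CREATE INDEX idx_orders_customer ON orders(customer)")

def save(doc):
    db.execute(
        "INSERT INTO orders (id, customer, total, doc) VALUES (?,?,?,?)",
        (doc["id"], doc["customer"], doc["total"], json.dumps(doc)))

# Documents may carry fields the relational schema knows nothing
# about; no DDL change is needed when the document schema evolves.
save({"id": "o1", "customer": "acme", "total": 99.5,
      "lines": [{"sku": "A1", "qty": 2}]})
save({"id": "o2", "customer": "globex", "total": 10.0, "notes": "rush"})

# Query via the indexed adjunct column; return the full document.
row = db.execute(
    "SELECT doc FROM orders WHERE customer = ?", ("acme",)).fetchone()
assert json.loads(row[0])["lines"][0]["qty"] == 2
```

In the ideal described below, the database itself would maintain the adjunct columns and hide them behind a document-oriented query language, rather than leaving that bookkeeping to the application.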

Ideally, the adjunct relational tables and SQL queries would be confined by the database to its internal implementation. When exposing a document abstraction to applications, the database should also present a document-oriented query language, such as XQuery or its equivalent for JSON, which would be implemented in terms of SQL where appropriate as an optimization technique.

NoSQL database technology is often cited as supporting a document paradigm. But NoSQL technologies as they exist today do not meet the need, because they do not support ACID transactions, nor the adjunct structures (i.e., relational tables and indexes) that improve query performance in the manner described above.

Perhaps the next best thing would be to provide a Java persistent entity abstraction, much like EJB3/JPA, which would encapsulate the underlying representation in a document part (e.g., as an XMLType or a JSON CLOB column) and a relational part, all stored in a SQL database. This would also provide JAXB-like serialization and deserialization to and from JSON and XML representations. This is not far from what EclipseLink does today.

innovation – a new definition

innovation [noun] – context-violating exaptation.

Ever since I read this tweet in 2012 by Fast Company, I have redefined innovation in this way.

Here is the first definition of exaptation from Dictionary.com.

noun, Biology
1. a process in which a feature acquires a function that was not acquired through natural selection.

By taking something, or a combination of things, and applying it to a purpose for which it was not intended (violating its original context), one may discover that it is well suited to performing a different function. This discovery becomes an innovation.

Ignorance of the law

Ignorance of the law is no excuse. That is the principle we are expected to live by. If we embrace the full implication of this principle, it may merit being adopted as a Constitutional principle that places the most effective constraints on government overreach.

If ignorance of the law is not an allowable excuse, it is imperative for government to enact laws that people can read and comprehend in order to remain in compliance. Moreover, the laws for crimes and misdemeanors, as well as the regulations that every person must comply with, must be readable and comprehensible in their totality by the average person without professional legal counsel. This requires that all crimes, misdemeanors, and regulations together must not exceed a certain maximum number of words. That limit should be set at what an average student can read and comprehend by investing one hour per day during four years of high school. The government would be forbidden from writing laws and regulations that exceed this limit, so as not to instigate ignorance of the law.
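As a back-of-envelope check on what such a limit might be: assuming an average reading speed of 200 words per minute and a 180-day school year (both my assumptions, not figures from any statute), the budget works out as follows.

```python
# Back-of-envelope estimate of the proposed word limit. The reading
# speed (200 words per minute for careful comprehension) and the
# school calendar (180 instructional days per year) are assumptions.
words_per_minute = 200
minutes_per_day = 60
school_days_per_year = 180
years = 4

word_limit = (words_per_minute * minutes_per_day
              * school_days_per_year * years)
print(word_limit)  # prints 8640000
```

Under these assumptions the entire body of criminal law and regulation would have to fit in roughly 8.6 million words, on the order of a hundred long novels.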

faster than light travel

The article NASA May Have Accidentally Created a Warp Field is getting people excited about faster than light travel.

You don’t need to travel faster than light to go arbitrarily far in arbitrarily little time. All you need to do is travel closer to the speed of light. As you get closer to c, time dilation and length contraction combine to bring arbitrarily distant destinations within reach. Although the travelers will experience relatively manageable passages of time, it is their friends observing from home who will age much more quickly. Travelers moving at nearly c through space have most of their velocity contributing to movement through the space dimensions and almost none through time. At home, we are moving at c almost entirely in the time dimension, remaining motionless in space. The laws of physics give everything no option but to move at c through spacetime; we can only choose how much of our motion is through the space dimensions, with the remainder through time.
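A worked example of this claim: the traveler's (proper) time for a trip shrinks without bound as v approaches c, via the Lorentz factor. The distances and speeds below are chosen only for illustration.

```python
import math

# Proper (traveler) time for a trip at constant speed, ignoring the
# acceleration and deceleration phases for simplicity.
def traveler_years(distance_ly, v_over_c):
    """Years elapsed for the traveler on a trip of distance_ly
    light-years at speed v = v_over_c * c."""
    earth_years = distance_ly / v_over_c          # time in Earth's frame
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)  # Lorentz factor
    return earth_years / gamma                    # time dilation

# A 1,000 light-year trip: over a millennium passes at home, but the
# traveler's clock records far less as v approaches c.
print(round(traveler_years(1000, 0.99), 1))     # prints 142.5
print(round(traveler_years(1000, 0.9999), 1))   # prints 14.1
```

Pushing v still closer to c drives the traveler's time toward zero, which is the sense in which any destination is reachable within a lifetime.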

The benefits imagined from warping space are to alleviate this huge difference in the passage of time, so that travelers can go places and return without generations dying off before they return home. The “faster than light” travel is about how outside observers perceive the traveler’s motion, so that they can share in the experience within their lifetimes. Travelers have no need for FTL motion to reach any destination within their own lifetime, with enough acceleration to move at close to c through space. The desire for FTL motion is for non-travelers who don’t want to die waiting for the travelers to return.

The search for intelligent life

The search for intelligent life outside of our solar system is a difficult one. We tend to think that if we expand the scope of our search to include more galaxies, this is sufficient. But we must accept that even if we had the technology to examine every galaxy exhaustively in perfect detail, we are only covering a minuscule part of the search space, which is almost entirely inaccessible to us by the laws of physics.

We can only see something in the current snapshot in time. Let’s try to imagine a search for human radio signals on Earth from the perspective of a distant alien civilization. The Earth is about 4.5 billion years old. Humans started producing radio signals in 1894, so these signals have been transmitting for the past 121 years. They have only had the opportunity to propagate 121 light-years away from Earth in that time. Beyond that distance, no alien civilization would be able to detect them. Moreover, an alien civilization would have had to develop, coincidentally, at a pace such that its technology was sufficiently advanced at exactly the right time to detect these signals during the tiny window of their arrival. That is a 121-year window out of the 13.82 billion years in which the universe has existed.

my position on abortion

Since the topic of abortion is in the news again, I will once again restate what I believe to be a reasonable position on this issue.

I agree with Rand Paul that abortion is personally offensive. However, a legislator’s personal feelings ought to have nothing to do with public policy and the protection of rights.

I agree with Rand Paul that a seven pound baby in the uterus has rights. Finding the best way of protecting the rights of both the mother and child is not easy.

I agree with Rand Paul that certain exemptions should be permitted by law to perform a late term abortion. This debate is not about abortions before 24 weeks of pregnancy, the limit of viability.

I believe a fetus’ right to life can be asserted when the baby is viable outside the womb. This may be facilitated through modern medical technology at an early stage of gestation, as evidenced by many premature births. Therefore, it is perfectly reasonable to protect a baby’s right to life once it develops into a viable person who can exist apart from the mother’s body. At that point in a pregnancy, killing the fetus should be disallowed, and the baby should either be allowed to develop to full term in the womb, at the mother’s option, or be delivered prematurely. Sure, there are dangers to a baby in being delivered prematurely, but certainly less danger than homicide. If the parents do not want the baby, it can be adopted by another family. The demand for babies far exceeds the supply.

See also: on the rights of a fetus.

Universal rest frame

I wonder whether there is a preferred rest frame in the universe. What are the implications, if there isn’t one? I have questions.

Sometimes we see stories about searching for the origins of high energy particles called cosmic rays. These are massive particles like protons, which have been accelerated by something in deep space to nearly the speed of light. The usual suspects are black holes, neutron stars, supernovae, and other exotic phenomena. The puzzling thing is that some of these particles seem to have traveled great distances, farther than thought possible without losing momentum (slowing down by bumping into things like photons).

What I wonder is whether the human perspective on Earth is far too biased. Einstein’s theory of special relativity says that there is no preferred rest frame in the universe. A fast moving particle is moving fast relative to us, but it is equally valid to say that the particle is at rest, and it is we who are moving fast relative to it.

If indeed there is no preferred rest frame in the universe, shouldn’t there be a uniform distribution of velocities for distant galaxy clusters? Because of the strong influence of gravity, galaxies within a cluster would be bound to move together. But galaxies that are not close enough together will move independently. Wouldn’t one expect that two galaxies separated in space and time by 12 billion light years have an equal probability of moving at any speed between zero and c relative to each other?

And yet our picture of the universe seems to be of a relatively organized structure, like a web of filaments, possibly with a flow in a particular direction. It is far more accurate to describe the structure as static than as randomly moving with a uniform distribution of velocities. This implies a definite bias for a rest frame: the one in which the relative motion of the large-scale structure of the universe is minimized. Am I wrong?

Modular home construction

I wonder if one day we will build homes like we do the space station—in prefabricated modules. Modular construction seems like it could offer incredible advantages.

Perhaps rooms can be built to standard dimensions, with standard interconnections to adjacent rooms for electricity, networking, coaxial cable, HVAC, hot and cold water, natural gas, etc. Each room would be somewhat over-engineered, but this extra cost is offset by savings from the economies of scale of mass production. A home builder would simply assemble a chosen configuration of modules and provide some finishing touches, such as the exterior facing, roofing, and utility hookups.

This approach would offer guaranteed quality of workmanship, rapid construction, and the replacement of skilled labor (e.g., carpenters) with robots and 3D printing. Moreover, the big innovation comes years later. As technology improves, and the homeowner wants to adopt improvements, it becomes a simple matter of replacing modules, and possibly reassembling them in a different configuration.

Insights into innovation