rise of terrorism due to UN

New Scientist: You can’t fight violence with violence

This does not explain how the unconditional surrender of Imperial Japan and Nazi Germany was achieved in WW2. I believe the rise of terrorism as a successful strategy for insurgency is due primarily to the rules of war promoted by the UN, which forbid the targeting of civilians. Surrender in war is not a military outcome; it is a political one. Without the civilian population being faced with total annihilation, there is no reason to surrender, and resistance by this protected class can continue indefinitely.

currency of goodwill

Success and failure in life and our relationships—personal and professional—rely in large part on goodwill. Goodwill is measurable. We maintain an account for every interpersonal relationship. We trade in goodwill. It has a currency.

People wonder what makes them liked or respected or appreciated. We hold others in esteem in proportion to the amount of goodwill they’ve accumulated in their account. If someone has been kind and thoughtful in the past, their account carries a higher balance. If someone has done many favors and has called in very few of them, they have earned a wealth of goodwill.

designs are useless – like planning

“In preparing for battle I have always found that plans are useless, but planning is indispensable.” -Eisenhower (from Planning Extreme Programming)

I believe this is true, because “no plan survives contact with the enemy”. In software, the enemy takes the form of dark spirits hidden within the code, both legacy and yet to be written. Because plans (the schedule of work to be done) are intimately tied to designs (models of the software), it must also be true that no design survives contact with the enemy. Any programmer who begins writing code based on a preconceived design will almost immediately feel the pain of opposing forces that beg to be resolved through refactoring; programmers who lack this emotional connection with their code are probably experiencing a failure of imagination (to improve the design).

Therefore, I think we can return to the original quote and state its corollary: designs are useless, but designing is indispensable.

All this is to say that the process artifacts (e.g., Functional Solution Approach, Functional Design, Technical Design) are more or less useless, because as soon as they are written they are obsoleted by improvements discovered as each line of code is written; but the act of producing them (the thought invested in designing) is indispensable to producing good software.

This leads me to conclude that we should not fuss so much about the actual content of the artifacts, so long as they capture the essence to show a fruitful journey through designing—that problems have been thought through and decisions have been based on good reasoning. Worrying about the content being perfectly precise, comprehensive, and consistent ends up being a waste of effort, since the unrelenting act of designing will have already moved beyond the snapshot captured in the artifact.

Coincidentally, this theme also aligns with the notion of a learning organization espoused by Lean. The value of designing is the facilitation of learning.

quantum gravity unsolved

Modern physics is fundamentally broken. Quantum theory explains how the universe works at small scales, such as atoms, and it is remarkably accurate, in excellent agreement with experiment. The theory of general relativity explains how the universe works at large scales, such as stars and galaxies, and it is also remarkably accurate, in excellent agreement with experiment. However, the two theories disagree with each other. Both are mostly correct, but also wrong in some ways. After decades of trying to reconcile the two theories of the universe, physicists are not close to formulating a theory of everything. There are several promising avenues of research, such as string theory and loop quantum gravity, but none of these has so far been successful.

It makes me wonder about where the error in thinking could be to have misled physicists throughout the world onto roads that are probably dead ends.

It’s been pretty obvious to many where the flaw in quantum theory lies. It treats the time dimension as independent of the three space dimensions, in complete contradiction to general relativity, which states that space and time form a single manifold. Quantum theory treats space and time as an absolute background, as if they exist independently from the particles and fields that make up reality. However, only particles and fields are real. Space and time as a background do not actually exist; they are mathematical constructs reflecting the geometry of some physical theory.

However, where is general relativity flawed? This is not as obvious. The most obvious conflict with quantum theory has always been gravity, which general relativity claims is the curvature of space-time. Quantum theory has never had a good explanation of gravity. The source of gravity is mass-energy.

Something else we should be aware of is the Standard Model of particle physics. It too represents our current best understanding of the universe. It too is flawed: it does not include gravity, among other phenomena. And, interestingly enough, it predicts a particle called the Higgs boson, which is responsible for mass. However, the Higgs boson has never been experimentally observed. The Large Hadron Collider (LHC) is supposed to confirm or refute the existence of the Higgs boson when it becomes operational.

Maybe it is more than coincidence that the Standard Model, which does not include gravity, also predicts the origin of mass, which has eluded observation. Given that gravity is where the conflict lies between quantum theory and general relativity, perhaps we should take a closer look at mass-energy.

Let’s take a look at the Schrödinger equation from quantum theory:

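In its time-dependent form it reads iħ ∂Ψ/∂t = HΨ.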
The H is the Hamiltonian, which is the total energy of the system (kinetic + potential). The kinetic energy depends on mass. Special relativity tells us that mass is equivalent to a whole lot of energy: E = mc².

Special relativity also tells us that mass is relative: the mass of something depends on how fast it is moving relative to the observer. The observer and the relative motion are two more things that the Schrödinger equation does not account for.

We keep encountering mass in these equations. Maybe we don’t understand mass as well as we think we do, which would lead us to a misunderstanding of gravity and, consequently, of the geometry of space-time. I wonder if all of this points to mass as the culprit, the place where we have been getting it wrong. Mass is where I suspect the problem lies. If the LHC does not find the Higgs boson, I think that will tell us just how wrong we are about mass.

transparent persistence

Advantages

Transparent persistence has moved into the mainstream over the past few years with the popularity of JDO and JPA for enterprise application development. This approach offers the following advantages.

  1. Domain modeling – the domain is expressed naturally as plain old Java objects (POJOs), without any of the SQL or JDBC calls that are traditionally coded by hand.
  2. Navigation through relationships – objects are naturally related through references, and navigating a relationship will automatically load the related object on demand.
  3. Automatic storage of changes – modified objects are stored automatically when the transaction is committed.
  4. Persistence by reachability – related objects are automatically stored if they are reachable from another persistent object.

The programming model is improved by eliminating the tedium that is traditionally associated with object persistence. Loading, storing, and querying are all expressed in terms of the Java class and field names, as opposed to the physical schema names. The programmer is largely insulated from the impedance mismatch between Java objects and the relational database. The software can be expressed purely in terms of the domain model, as represented by Java objects.
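
For instance, a persistent domain class in this style might look like the following minimal sketch. The Equipment and Location classes, their fields, and the mapping choices are hypothetical; only the defaults plus an explicit fetch and cascade setting are shown.

    import javax.persistence.CascadeType;
    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;

    // A plain old Java object made persistent through annotations; no SQL or JDBC calls.
    @Entity
    public class Equipment {

        @Id
        @GeneratedValue
        private Long id;

        private String name;

        // Navigating getLocation() on a managed instance loads the related Location on
        // demand; the cascade setting lets a new Location that is reachable from a
        // persisted Equipment be stored along with it.
        @ManyToOne(fetch = FetchType.LAZY, cascade = CascadeType.PERSIST)
        private Location location;

        public Location getLocation() { return location; }

        public void setLocation(Location location) { this.location = location; }
    }

    @Entity
    class Location {

        @Id
        @GeneratedValue
        private Long id;

        private String name;
    }

Within a transaction, mutating a managed Equipment through its setters is enough; the provider flushes the change when the transaction commits. Note that in JPA, storing related objects by reachability is opt-in through cascade settings such as the one above, whereas JDO makes it the default.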

Challenges

When developing domain objects, persistence is only one aspect. The business logic that applies to the graph of related objects is the most important concern. Transparent persistence introduces challenges for executing the business logic that enforces constraints and complex business rules when persistent objects are created, updated, and deleted through reachability.

For example, an equipment rental application may need to enforce the following constraints:

  • When creating equipment, it must be related to a location.
  • When creating a rental, it must be related to a customer, and the equipment must be available for the duration of the rental.
  • When updating a rental, the equipment must remain available for the duration of the rental.

JPA 2.0 does not provide sufficient mechanisms for enforcing these constraints when creating or updating these entities through reachability. The responsibility falls on a service object to manage these graphs of entities, and the constraint checking must be enforced by the service object per transaction. Java EE 5 provides no assistance to ensure that the constraint checks (implemented in Java) are deferred until commit, so that they are not repeated when a sequence of operations is performed in the same transaction.
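
As things stand, each service operation has to carry those checks itself. Here is a minimal sketch using the first constraint above; the service and entity names are hypothetical:

    import javax.persistence.EntityManager;
    import javax.persistence.PersistenceContext;

    // The service object, not the entity, must remember to enforce the invariant in
    // every operation that creates or updates the entity within a transaction.
    public class EquipmentService {

        @PersistenceContext
        private EntityManager em;

        public void createEquipment(Equipment equipment) {
            // "Equipment must be related to a location" lives here, and the rental
            // availability checks must likewise be repeated in every rental operation.
            if (equipment.getLocation() == null) {
                throw new IllegalArgumentException("equipment must be related to a location");
            }
            em.persist(equipment);
        }
    }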

Adding a preCommit event to a persistent object would provide a good place for expressing constraints. Allowing this event to be deferred until transaction commit would provide the optimization needed for good performance. Of course, preCommit would need to prevent any further modifications to the persistent objects enlisted in the transaction. This would factor many of the invariants out so that they are expressed per entity, removing that responsibility from every operation on the service objects, where it is prone to programmer error. The domain model would be greatly improved.
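
A sketch of what that could look like, with a loud caveat: JPA defines no preCommit callback, so the @PreCommit annotation and its deferred-until-commit semantics below are purely imagined, as are the Rental and Customer classes.

    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.ManyToOne;

    @Entity
    public class Rental {

        @Id
        @GeneratedValue
        private Long id;

        @ManyToOne
        private Customer customer;

        @ManyToOne
        private Equipment equipment;

        // Imagined callback: invoked once per modified entity just before the transaction
        // commits, after which the provider would reject further modifications.
        @PreCommit
        void checkInvariants() {
            if (customer == null) {
                throw new IllegalStateException("rental must be related to a customer");
            }
            // the equipment-availability check for the rental period would also go here
        }
    }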

java christmas wish list 2008

JPA preCommit

Similar to the other events (PrePersist, PreRemove, PostPersist, PostRemove, PreUpdate, PostUpdate, and PostLoad), JPA needs to add a preCommit event. This would be useful for enforcing constraints (invariants) using Java logic, similar to how less expressive deferred constraints can be enforced in SQL.

read-only transaction

javax.transaction.UserTransaction needs the ability to begin a transaction with an awareness of whether the transaction will be read-only or read-write. A read-only transaction would prevent writes (inserts, updates, and deletes) from being done.
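
One possible shape for this, sketched as a hypothetical extension (no such method exists in javax.transaction today):

    import javax.transaction.NotSupportedException;
    import javax.transaction.SystemException;
    import javax.transaction.UserTransaction;

    // Hypothetical: the caller declares read-only intent up front, so the transaction
    // manager and the persistence provider can reject inserts, updates, and deletes,
    // and skip dirty checking and flushing altogether.
    public interface ReadAwareUserTransaction extends UserTransaction {

        void beginReadOnly() throws NotSupportedException, SystemException;
    }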

dynamic immutability

It would be helpful if an instance of an object could be mutable when used by some classes (e.g., builder, factory, repository, deserializer) and immutable when used by others. This would make it possible to load persistent objects from a data store, derive transient fields from persistent fields, and mark the instance as immutable if the transaction is read-only. I do not want to develop entities that have both a mutable class and an immutable class, and access control (private, protected) is not sufficient when mutability depends on context (e.g., a read-only transaction).
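
Lacking such support, the closest workaround I can sketch is an instance-level flag that trusted collaborators set; the Account class and its fields are hypothetical, and this is exactly the boilerplate I would rather not write:

    // Per-instance, context-dependent immutability enforced by hand.
    public class Account {

        private String owner;
        private boolean frozen;  // true once the instance is published as immutable

        public String getOwner() { return owner; }

        public void setOwner(String owner) {
            checkMutable();
            this.owner = owner;
        }

        /** Called by the repository after loading this instance in a read-only transaction. */
        void freeze() { this.frozen = true; }

        private void checkMutable() {
            if (frozen) {
                throw new IllegalStateException("instance is immutable in this context");
            }
        }
    }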

Java technology challenges

Here is my Java wish list.

  1. module deployment – Java has done well to define an archive format and class loading system that enables code to be organized into libraries. However, a modular application needs more than the deployment of Java classes. It also needs to accommodate the following:
    • SQL DDL for initial database schema creation and on-going evolution
    • SQL DML and possibly Java code for upgrading data as the schema evolves and the application features are upgraded
    • XML documents and binary resources containing data that needs to be processed by the application and possibly loaded into the database
    • HTML pages or templates (e.g., Tapestry) that can be dynamically added to a Web application
    • Scripts (e.g., Groovy), rules, or other forms of code that can be dynamically executed.
  2. module dependency – Java needs a better way to dynamically enable or disable behaviors that depend on whether modules of code are available, similar to how function_exists(f) works in PHP, without having to resort to invoking methods through Java reflection. A static programming model should be possible that still guards a block of code so it executes only if another class is loadable (the current workaround is sketched below).
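
The availability check itself is easy enough today; it is everything guarded by it that ends up behind reflection. A sketch of the current idiom, using a Groovy class purely as an example of an optional module:

    // Probe for one of the optional module's classes without initializing it,
    // analogous to PHP's function_exists(f).
    public final class OptionalModules {

        public static final boolean GROOVY_AVAILABLE = isPresent("groovy.lang.GroovyShell");

        private OptionalModules() {}

        public static boolean isPresent(String className) {
            try {
                Class.forName(className, false, OptionalModules.class.getClassLoader());
                return true;
            } catch (ClassNotFoundException e) {
                return false;
            }
        }
    }

The wish is for the code guarded by such a check to be written statically against the optional classes, instead of through reflective method invocation.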

universe of events – cosmology in software

On my second reading of Three Roads to Quantum Gravity by Lee Smolin, the concept of a relational universe stands out as something fundamentally important.

Each measurement is supposed to reveal the state of the particle, frozen at some moment of time. A series of measurements is like a series of movie stills — they are all frozen moments.

The idea of a state in Newtonian physics shares with classical sculpture and painting the illusion of the frozen moment. This gives rise to the illusion that the world is composed of objects. (p.53)

In object oriented programming, the objects correspond to the particles. The focus is on capturing the state of the object, frozen at some moment of time. As methods are called on the object, changes to its state (variables) are like a series of movie stills.

Lee Smolin goes on to write:

If this were really the way the world is, then the primary description of something would be how it is, and change in it would be secondary. Change would be nothing but alterations in how something is. But relativity and quantum theory each tell us that this is not how the world is. They tell us — no, better they scream at us — that our world is a history of processes. Motion and change are primary. Nothing is, except in a very approximate and temporary sense. How something is, or what its state is, is an illusion. It may be a useful illusion for some purposes, but if we want to think fundamentally we must not lose sight of the essential fact that ‘is’ is an illusion. So to speak the language of the new physics we must learn a vocabulary in which process is more important than, and prior to, stasis. Actually, there is already available a suitable and very simple language which you will have no trouble understanding.

From this new point of view, the universe consists of a large number of events. An event may be thought of as the smallest part of a process, a smallest unit of change. But do not think of an event happening to an otherwise static object. It is just a change, no more than that.

The universe of events is a relational universe. That is, all its properties are described in terms of relationships between the events. The most important relationship that two events can have is causality. This is the same notion of causality that we found was essential to make sense of stories.

If objects are merely an illusion, and it is really causal events that are fundamental to modeling a universe that is relational and dynamical, then perhaps we should re-examine how effective object oriented programming is at producing software that models real world processes. Classes of objects definitely focus on the static structure of the universe. The methods on these classes can be considered to correspond to events, which carry information in, perform some computation, and carry information out. However, the causal relationships between events are buried in the procedural code within each method; they are not expressed in a first-class manner.

Personal productivity applications like spreadsheets and word processors model objects (e.g., documents) and relationships that undergo relatively simple processes involving only a few actors. The causal history of events is not as important, because there is only one set of objects in a document whose integrity must be maintained, and the series-of-frozen-moments model of the universe works rather well. Enterprise applications such as Enterprise Resource Planning (ERP) facilitate a multitude of parallel business processes that involve many actors and sophisticated collaborations. Each actor performs transactions against some subset of objects, each of which is progressing through a distinct life cycle. Maintaining integrity among the objects changed by these many concurrent events is incredibly complicated. It becomes important to keep a causal history of events in addition to the current state of the universe, as well as a schedule of future events (for planning) that have not yet come to pass. A series of frozen moments becomes less appealing, whereas a set of processes and events seems like a better description of the universe.
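
As a thought experiment, making events and their causal relationships first class might look something like this minimal sketch (all names are hypothetical):

    import java.util.Date;
    import java.util.List;

    /** The smallest unit of change: what happened, when, and which earlier events caused it. */
    public class Event {

        private final String type;        // e.g., "RentalCreated", "EquipmentReturned"
        private final Date occurredAt;
        private final List<Event> causes; // the causal relationships, stored explicitly

        public Event(String type, Date occurredAt, List<Event> causes) {
            this.type = type;
            this.occurredAt = occurredAt;
            this.causes = causes;
        }

        public String getType() { return type; }

        public Date getOccurredAt() { return occurredAt; }

        public List<Event> getCauses() { return causes; }
    }

The current state of an object then becomes a derived view, computed by replaying its causal history of events, rather than the primary thing that is stored.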

cosmological constant

The energy density of empty space is called the cosmological constant. It accounts for the accelerating expansion of the universe. Its value, expressed as a mass density, is approximately 10^-29 g/cm^3. This is an incredibly tiny positive number. They call this stuff dark energy.

As the universe expands, the density of ordinary matter like stars and rocks decreases because new matter is not magically appearing to fill in the space. The incredible thing about the cosmological constant is that the energy density of vacuum does not decrease as the universe expands with time. If this does not surprise you, then let me explore this a little deeper.

mass = density * volume

If the universe is expanding, then the volume is growing larger with time. If the density remains constant, then this would mean that the mass-energy of the universe is ever increasing.
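
Spelling out the arithmetic with the same formula:

vacuum mass-energy in a region = (constant vacuum density) * (growing volume of that region)

Double the volume and the vacuum's mass-energy doubles, with nothing flowing in to supply it.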

Reconcile that with the First Law of Thermodynamics.

In any process, the total energy of the universe remains the same.

Are we to believe that the universe itself violates the First Law of Thermodynamics?

Insights into innovation