Sunday, February 24, 2013

Algorithmic discovery

Recent research (1) from USC demonstrating how predictive algorithms can propel discovery in many areas, including medicine and astronomy, is further indication that applied mathematics is coming of age. Misguided attempts at rules-based Artificial Intelligence kept a lid on more productive empiricism – something engineers and economists have known for many decades. The life sciences have been notoriously backward in applying mathematics to predictions and decisions. Many have argued that biological systems represent complex interacting uncertainties and hence are not amenable to modeling. Nothing could be further from the truth.

Determinism and normality statistics have played havoc with many fields, including the life sciences, for decades. Specialists in this industry have followed standardized processes of discovery with rigid and prescriptive expectations of outcomes. The constant and nearly predictable failures have not yet forced significant changes. The long cycles of research and development have allowed the perpetuation of the status quo. The USC approach of taking well-known mathematical principles and applying them differently, with an eye toward practical applications, is refreshing. More research of this ilk is needed if the industry is to pull out of the rut it is currently in.

Predictive and decision analytics – supported by established mathematics – can wake up the slumbering life sciences industry. The tools have been available for over a decade, but not many have been willing to take the plunge.
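As a small, hypothetical illustration of what such established mathematics looks like in practice (the numbers below are invented for the sketch): Bayes' rule turns a test result into a decision-ready probability rather than a deterministic verdict – exactly the kind of reasoning that rigid, normality-obsessed processes skip.

```python
# Illustrative sketch with invented numbers: Bayes' rule applied to a
# diagnostic test, yielding a probability to decide on, not a yes/no.

def posterior_probability(prevalence, sensitivity, specificity):
    """P(condition | positive test) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# A 99%-accurate test for a rare (0.1% prevalence) condition:
p = posterior_probability(prevalence=0.001, sensitivity=0.99, specificity=0.99)
print(round(p, 3))  # ~0.09 -- a "positive" still means only ~9% probability
```

The deterministic reading of a "99% accurate, positive" result is wildly wrong; the probabilistic reading is the one a decision should rest on.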

(1) Taking the gamble out of DNA sequencing, Published: Sunday, February 24, 2013 - 15:32 in Biology & Nature, e!Science News.

Thursday, February 14, 2013

Outside-in innovation

Recent research from the University of Illinois and NC State seems to confirm what companies intuitively knew: the value of an innovation is driven by the uncertainty in customer acceptance. Running separate processes – one for product innovation and the other for customer evaluation – sub-optimizes both. Integrating these disparate processes, run by different departments with differing cultures, does make sense.

However, it is not that simple. A process view of product innovation implicitly assumes that the R&D machine sits idle, waiting to serve up the desires of the “downstream customers.” Equally important is the assumption that customer expectations are stagnant. Customer expectations of products and new features are dynamic, and just as local weather is affected by small and possibly unrelated changes in a complex system, they change quickly and unpredictably. The researchers seem to take a rigid engineering view of the integration of the product innovation and customer evaluation processes. History shows that not many companies became great merely by satisfying the expected and measured desires of downstream customers, who are notoriously fickle.

From a decision perspective, companies in high-innovation industries are missing a true portfolio view of their product innovation investments. Granted, it is important to understand the status quo expectations of the customer – but truly great companies anticipate the future. More importantly, they design products incorporating flexibility, with a tacit acceptance that the future is uncertain. They anticipate, not precisely, but on uncertain terms. Those who solve this problem systematically – not by rigid integration but by taking a holistic portfolio view of investment choices – will win.
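The value of designed-in flexibility can be made concrete with a toy Monte Carlo sketch (my own illustration, with invented costs and payoffs, not the researchers' model): a staged investment that can be abandoned when customer acceptance disappoints beats a rigid all-in commitment in expectation.

```python
# Toy Monte Carlo: rigid commitment vs. a staged, flexible investment
# under uncertain customer acceptance. All numbers are illustrative.
import random

random.seed(7)

COST, PAYOFF = 50.0, 200.0  # invested cost and payoff if customers accept

def rigid(p_accept):
    # Commit the full cost up front; payoff only if customers accept.
    return (PAYOFF if random.random() < p_accept else 0.0) - COST

def flexible(p_accept):
    # Spend half, observe acceptance, then finish or abandon.
    stage1 = -COST / 2
    if random.random() < p_accept:
        return stage1 - COST / 2 + PAYOFF  # finish the product
    return stage1                          # abandon: lose only stage one

def expected(strategy, p_accept, n=100_000):
    return sum(strategy(p_accept) for _ in range(n)) / n

print(expected(rigid, 0.3), expected(flexible, 0.3))
```

With a 30% chance of acceptance, the analytical expectations are 10 for the rigid strategy and 27.5 for the flexible one – the gap is the value of keeping the choice open.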

Saturday, February 9, 2013


Recent attempts at using imaging techniques to establish a neural basis for irrational economic decisions are likely headed in the wrong direction. Seeking a hardware deficiency to justify behaviors that cannot be explained by established theory is a bit like hitting your computer – or worse, taking an MRI scan of the chip inside – when your spreadsheet produces unexpected results. Perhaps simpler explanations are already available.

The human mind has never been good at making consistent and rational decisions in the presence of uncertainty. Early in human evolution, the left brain had to take firm command of the proceedings, imposing a deterministic and process-oriented decision methodology on a specialized serial processor. After all, early humans had to hunt systematically over known routes and gather water from familiar waterholes. The parallel-processing right brain remained a silent witness, and it continues to be so in the modern world. Economics is no different: linear thinking and well-behaved normality statistics have consistently pushed humans to irrational decisions. But there is little need to take pictures of the brain, or cut it open, to understand this phenomenon.

Simpler explanations for known behaviors should always dominate.

Wednesday, February 6, 2013

Step and Wait

Recent research from USC points out that technology does not grow as predicted by Moore’s law and its variants. Instead, it moves in a process the researchers call “Step and Wait.” Although the focus was investment decisions in technology, the idea is applicable across the entire economy.

It should be intuitively clear to anybody observing the system-wide effects of technology. Airplanes, computers and the Internet ushered in step-function changes in productivity that took a decade or two to bleed through the entire economy. Once such a step is taken, however, the wait can be long. Today’s world is yearning for a step, but unfortunately none has come, and it is possible that the long drought in technology innovation could continue. Many have mistaken incremental product improvement for technology advancement. For investment managers and policy makers, such ideas are disastrous. An iPhone 10 is unlikely to save the world; making device-based communications obsolete may be in the right direction.
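The contrast can be sketched in a few lines (my own toy illustration, not the USC model): smooth exponential growth in the style of Moore’s law versus the same capability arriving in abrupt jumps separated by long plateaus.

```python
# Toy contrast: smooth exponential growth vs. "step and wait" growth,
# where capability is flat for long stretches and then jumps.

def exponential(t, rate=0.2):
    # Moore's-law-style compounding: a steady 20% gain per period.
    return (1 + rate) ** t

def step_and_wait(t, step=2.0, wait=8):
    # Capability doubles once every `wait` periods, flat in between.
    return step ** (t // wait)

path = [step_and_wait(t) for t in range(17)]
print(path)  # eight 1.0s, eight 2.0s, then 4.0 -- long plateaus, sudden jumps
```

Both curves can end at similar levels over a long horizon; what differs – and what matters to an investor or policy maker – is the shape of the path in between.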

Some technologists have fallen into the trap of extrapolation – forecasting using exponential growth curves. The singularity, feared by many, came from such a thought process. But if one has to wait for decades after taking a step, a technology singularity may be the last thing we have to worry about.

Sunday, February 3, 2013

Modular intelligence

Recent research from Cornell illustrates why modularity is fundamental to design, including in biological systems. Cost minimization in networked systems naturally surfaces modularity as an important building block. This makes intuitive sense, as modularity leads to complexity and ultimately intelligence.

Why, then, do humans systematically fail to consider modularity in the design of complex systems? Of course, every brick in the wall is expressed, but not every wall in the city. Is modularity boring for those with intelligence, or a mere convenience? Does modularity hide a regression from knowledge, or does it break new frontiers? Is modularity limiting, or not? Does modularity evolve to the extent that it is able to obscure itself?

At the lowest level, one has to consider minimization of cost as the fundamental driver of modularity. The complication here is the definition of cost itself. First-order societies, unable to feed, clothe and house their populations, may define cost differently among available alternatives in a limited choice set. Those who have gone further may define cost as ignorance, the minimization of which requires a break from modularity.
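The cost-minimization intuition can be made concrete with a toy wiring example (my own illustration, not the Cornell model): placing nodes on a line, a modular layout of mostly-local connections is far cheaper in total wire length than the same number of random long-range connections.

```python
# Toy wiring-cost comparison: modular vs. random connectivity for
# 12 nodes placed on a line. Distances stand in for connection cost.
import itertools
import random

random.seed(1)
nodes = list(range(12))

def wire_cost(edges):
    # Total wire length, with node positions 0..11 on a line.
    return sum(abs(a - b) for a, b in edges)

# Modular: three modules of four, fully wired inside, one bridge between
# each pair of adjacent modules.
modules = [nodes[i:i + 4] for i in range(0, 12, 4)]
modular_edges = [e for m in modules for e in itertools.combinations(m, 2)]
modular_edges += [(3, 4), (7, 8)]  # the two inter-module bridges

# Non-modular: the same number of edges, endpoints chosen at random.
random_edges = [tuple(random.sample(nodes, 2)) for _ in modular_edges]

print(wire_cost(modular_edges), wire_cost(random_edges))
```

Same number of connections, very different bills: the modular wiring costs 32 units, while random wiring of the same density typically costs two to three times as much – a minimizer left to itself will find the modules.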

Nothing is for sure.