Saturday, March 9, 2013

Quantum necessity

Conventional computing has run its course. Attempts at packing transistors ever more closely to improve speed are succumbing to the most basic problem, heat, with power densities at the heart of the latest microprocessors rivaling those of nuclear reactors. This was expected, but many had hoped for a path dramatically different from the doubling of speed every eighteen months. Anything less is disastrous.

Engineering has been a decade-long yawn, with little to show. As more and more brain cells migrated to less value-added activities in financial services and bureaucratic corporations, they left a huge gap in fundamental innovation, and humanity will likely pay a price for it. The ambitions of the creative could not have been satisfied by prescriptive increases in the speed of computing – incremental improvements to the status quo. Demand for computing, in every field from biology to predictive analytics, has been increasing at a rate that cannot be satisfied by conventional ideas.

The question is why humanity finds itself in such an unenviable position. With asteroids flying around the blue planet at will and biological systems demonstrating ever-increasing complexity, the members of the young and timid species Homo sapiens run for cover. They were not expected to be here, and a twist of fate has them ruling over a highly tolerant planet. It is ironic that humans are left with observations of calamity but no capability to struggle against them.

Quantum computing is a necessity – not a luxury.

Sunday, February 24, 2013

Algorithmic discovery

Recent research (1) from USC, demonstrating how predictive algorithms can propel discovery in areas from medicine to astronomy, is further indication that applied mathematics is coming of age. Misguided attempts at rules-based Artificial Intelligence kept a lid on more productive empiricism – something engineers and economists have known for many decades. The life sciences have been notoriously backward in applying mathematics to predictions and decisions. Many have argued that biological systems represent complex interacting uncertainties and hence are not amenable to modeling. Nothing could be further from the truth.

Determinism and the statistics of normality have played havoc with many fields, including the life sciences, for decades. Specialists in the industry have followed standardized processes of discovery with rigid and prescriptive expectations of outcomes. The constant and nearly predictable failures have not yet forced significant changes; the long cycles of research and development have allowed the status quo to persist. The USC approach of taking well-known mathematical principles and applying them differently, with an eye toward practical applications, is refreshing. More research of this ilk is needed if the industry is to pull out of its current rut.

Predictive and decision analytics, supported by established mathematics, can wake up the slumbering life sciences industry. The tools have been available for over a decade – but not many have been willing to take the plunge.
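As an illustration only – not the USC method, and using a purely synthetic dataset with made-up dimensions – here is a minimal sketch of the kind of off-the-shelf predictive analytics those tools have long supported: ordinary logistic regression predicting a binary biological outcome from a panel of numeric features.

```python
# A minimal sketch, not the USC approach: logistic regression on a synthetic
# "biomarker panel" dataset. All names and dimensions below are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples, n_features = 500, 20            # hypothetical: 20 assay readings per sample
X = rng.normal(size=(n_samples, n_features))
true_weights = rng.normal(size=n_features)
y = (X @ true_weights + rng.normal(scale=2.0, size=n_samples) > 0).astype(int)

model = LogisticRegression(max_iter=1000)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.3f}")   # a decision-relevant summary, not a p-value
```

The point is simply that the mathematics is standard and the tooling is freely available; the barrier has been willingness, not capability.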

(1) "Taking the gamble out of DNA sequencing," e!Science News, Biology & Nature, February 24, 2013.

Thursday, February 14, 2013

Outside-in innovation

Recent research from the University of Illinois and NC State seems to confirm what companies intuitively knew: the value of an innovation is driven by the uncertainty in customer acceptance. Running separate processes – one for product innovation and the other for customer evaluation – sub-optimizes both. Integrating these disparate processes, run by different departments with differing cultures, does make sense.

However, it is not that simple. A process view of product innovation implicitly assumes that the R&D machine is sitting idle, waiting to serve up the desires of the “downstream customers.” Equally important is the assumption that customer expectations are static. Customer expectations of products and new features are dynamic; like local weather, which is affected by small and possibly unrelated changes in a complex system, they change quickly and unpredictably. The researchers seem to take a rigid engineering view of the integration of the product innovation and customer evaluation processes. History shows that not many companies became great by merely satisfying the expected and measured desires of downstream customers, who are notoriously fickle.

From a decision perspective, companies in high-innovation industries are missing a true portfolio view of their product innovation investments. Granted, it is important to understand the status quo expectations of the customer – but truly great companies anticipate the future. More importantly, they design products that incorporate flexibility, with a tacit acceptance that the future is uncertain. They anticipate, not precisely, but on uncertain terms. Those who solve this problem systematically – not by rigid integration but by taking a holistic portfolio view of investment choices – will win.

Saturday, February 9, 2013

Neuro-economics

Recent attempts at using imaging techniques to establish a neural basis for irrational economic decisions are likely a step in the wrong direction. Seeking a hardware deficiency to justify behaviors that cannot be explained by established theory is a bit like hitting your computer – or worse, taking an MRI scan of the chip inside – when your spreadsheet produces unexpected results. Perhaps simpler explanations are already available.

The human mind has never been good at making consistent and rational decisions in the presence of uncertainty. Early in human evolution, the left brain had to take firm command of the proceedings, imposing a deterministic, process-oriented decision methodology on a specialized serial processor. After all, early humans had to hunt systematically over known routes and gather water from familiar waterholes. The parallel-processing right brain remained a silent witness, and continues to be so in the modern world. Economics is no different: linear thinking and well-behaved normality statistics have consistently pushed humans toward irrational decisions. But there is little need to take pictures of the brain, or cut it open, to understand this phenomenon.

Seeking simpler explanations for known behaviors is always the dominant strategy.

Wednesday, February 6, 2013

Step and Wait

Recent research from USC points out that technology does not grow as predicted by Moore’s law and its variants. Instead, it moves in a process the researchers call “Step and Wait.” Although the focus was investment decisions in technology, the finding is applicable across the entire economy.

It should be intuitively clear to anybody observing the system-wide effects of technology. Airplanes, computers and the Internet ushered in step-function changes in productivity that took a decade or two to bleed through the entire economy. Once such a step is taken, however, the wait can be long. Today’s world is yearning for a step, but unfortunately none has come, and it is possible that the long drought in technology innovation could continue. Many have mistaken incremental product improvement for technology advancement; for investment managers and policy makers, such ideas are disastrous. An iPhone 10 is unlikely to save the world, while making device-based communications obsolete may be a step in the right direction.

Some technologists have fallen into the trap of extrapolation, forecasting with exponential growth curves. The singularity, feared by many, came from exactly such a thought process. But if one has to wait decades after taking a step, a technology singularity may be the last thing we have to worry about.
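To make the contrast concrete, here is a toy comparison with entirely made-up parameters: a smooth eighteen-month-doubling extrapolation versus a step-and-wait path that delivers capability only in occasional jumps.

```python
# Toy comparison, made-up parameters: smooth exponential extrapolation versus
# a "step and wait" path that jumps occasionally and then plateaus.
def exponential(t_years, doubling_years=1.5):
    return 2 ** (t_years / doubling_years)            # Moore's-law-style projection

def step_and_wait(t_years, step_factor=8.0, wait_years=4.5):
    return step_factor ** int(t_years // wait_years)  # one jump, then a long plateau

print(f"{'year':>4} {'exponential':>12} {'step-and-wait':>14}")
for year in range(0, 19, 3):
    print(f"{year:>4} {exponential(year):>12.0f} {step_and_wait(year):>14.0f}")
```

Both curves land in the same place after eighteen years; the difference that matters to an investor or policy maker is the long plateau between steps.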

Sunday, February 3, 2013

Modular intelligence

Recent research from Cornell illustrates why modularity is fundamental to design, including the design of biological systems. Cost minimization in networked systems naturally surfaces modularity as an important building block. This makes intuitive sense, as modularity leads to complexity and, ultimately, intelligence.

Why, then, do humans fail to consider modularity systematically in the design of complex systems? Of course, every brick in the wall is expressed, but not every wall in the city. Is modularity boring for those with intelligence, or a mere convenience? Does modularity hide a regression from knowledge, or does it break new frontiers? Is modularity limiting or not? Does modularity evolve to the point that it is able to obscure itself?

At the lowest level, one has to consider cost minimization as the fundamental driver of modularity. The complication here is the definition of cost itself. First-order societies, unable to feed, clothe and house their populations, may weigh cost differently among the alternatives in a limited choice set. Those who have gone further may define cost as ignorance, the minimization of which requires a break from modularity.

Nothing is for sure.

Wednesday, January 23, 2013

Depressing altruism

Recent research from Princeton argues that altruism originates from sinister and self-serving tendencies. Much of what has been attributed to the kindness of the human heart can be traced back to local competition and the quest for control over zones of influence. This further reinforces the idea that humans are ordinary animals, differing little from other members of the ecosystem, with tightly controlled objective functions.

It is, indeed, depressing. Almost every action, thought and idea can be traced back to fairly crude attempts at utility maximization, at the individual or group level. A few philosophers in the past have argued about the futility of it all, only to be swept aside by the mobs of ignorance. The question is whether humanity is able to move to another level of societal development. Such a transition, however, appears to be a quantum process. At the status quo energy level, improvements are not possible, and any observed abnormalities (such as altruism) can be easily explained away. More importantly, a quantum energy-state transition is never possible from within – humanity appears to be permanently sealed where it started and where it is likely to end, maximizing food and reproduction.

Only some kind of extra-terrestrial jolt can possibly move society to another state, with different goals.

Monday, January 21, 2013

Policy optimization

As the United States institutes another policy regime – one that may be a continuation of the status quo but could equally prove to be different – it is important to fully internalize the challenges ahead. With the planet hoarding over seven billion copies of Homo sapiens, each sporting an organ – the brain – that is likely the pride of the Milky Way in this slice of space-time, the problems to solve are trivial. With a well-behaved fusion furnace, the Sun, ideally situated 150 million kilometers away, and the big brother, Jupiter, efficiently sweeping up space debris at five times that distance, Earth has gotten lucky. It quickly invented a method to accelerate entropy – life – which found many ways to evolve and grow, exponentially amplifying the original intent. It has culminated in a system fully dominated by a singular species, able to think, travel and destroy at will. They have spread like wildfire, segmented themselves neatly into buckets along many dimensions – geography, belief and observable trivialities – and even threaten to extinguish the whole idea.

Policy, then, is about constrained optimization. It has to consider the space: is it local or global? It has to consider the time: is it for now or for later? And it has to consider the scope: is it for me or for others? Who is in a position to make policy, and who will adhere to it? Can policy change the promises made by the past, or can it create yet-to-be-understood effects in the future? Is policy good, bad, or a mere waste of time? Can policy be based on what was written down in the past, or should it be about future possibilities? Can policy be blind to ideas and to where ideas originate? Who is accountable for making policies, and who may take accountability for them? Who moves, dies and starts all over again because of policies, and who counts, measures and reports the outcomes? Is the best policy the same at every space-time coordinate, or should it differ – and if so, could it accommodate differences in perception, ideas and culture?

It is a challenge for humanity.

Friday, January 18, 2013

“Too big to” syndrome

The recent announcement by the computing behemoths that they will not pursue exascale computing without government backing is symptomatic of the “too big to” syndrome. The current crop of policy makers, engineers and financiers are all suffering from the same basic disease: they will pursue profits at tremendous risk only if they know the government will bail them out if they fail.

If private enterprises with excess cash will not pursue technologies that could fundamentally change the productivity equation without government backing, it simply means that such technologies are not worth pursuing. The government has no obligation or role to support technology innovation in private companies if the benefits accrue to the shareholders of these firms. This is basic economics: there can be no more private profits coupled with socialized costs. The only difference is that the wily financiers asked for ransom ex post, and the somewhat gentler engineers are asking for it ex ante.

The “too big to” syndrome is eating into the core of free markets with disastrous consequences for capital allocation across the entire economy.

Thursday, January 17, 2013

Returning language to the wild

Recent research from the University of Warwick shows that most people remember Facebook and Twitter updates better than faces and polished writing. This confirms that modern technology has been able to simplify the human mind and return it to its origins. When the first humans stood up in the African savannah, it was clear that simple communication was not far behind. The origin of language, though hotly debated, had only one purpose – organizing to achieve common objectives. From its inception, language was simple: the faster one could communicate the limited set of available information, the better.

Later, modern humans took the simple construct of language and used it as an art form, to kill time and nourish the inner mind. Literature and philosophy moved language into realms it was never meant for. Both created complexity and stretched the concept in pleasurable ways, but still only for a limited subset of the population. Much later, computers ushered in the first step back in language evolution, as machines, being efficient and systematic, were not fond of ornamentation. Computer languages provided the first glimpse of the original language, packing maximum information into simple constructs. Now, Facebook and Twitter complete the circle, returning language to its origins. After all, a constraint on characters is exactly what the original languages faced: they had to communicate limited information efficiently.

For some, however, this is a sad story, ample evidence that the human psyche will regress to its origins at the first opportunity provided by technology or society.

Wednesday, January 16, 2013

Spooky action, closer

A recent article in Physical Review Letters discusses the possibility of practical teleportation using the known phenomenon of entanglement. By proving the possibility of serial and bulk teleportation of qubits, the authors open up endless possibilities in both quantum computing and space travel.
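The protocol underneath is standard single-qubit teleportation. Below is a minimal state-vector sketch of that textbook scheme in plain NumPy – an illustration only, not the serial or bulk protocols discussed in the paper.

```python
# Minimal sketch of textbook single-qubit teleportation (not the serial/bulk
# schemes in the paper). Qubit order: q0 = the unknown state, q1 = Alice's half
# of the Bell pair, q2 = Bob's half. Basis index = 4*q0 + 2*q1 + q2.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],   # control = first qubit of the pair it acts on
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

def kron(*ops):                  # Kronecker product of a list of operators
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

psi = np.array([0.6, 0.8], dtype=complex)         # arbitrary state to teleport
zero = np.array([1, 0], dtype=complex)
state = np.kron(np.kron(psi, zero), zero)         # |psi> (x) |0> (x) |0>

state = kron(I2, H, I2) @ state                   # entangle q1,q2: H on q1...
state = kron(I2, CNOT) @ state                    # ...then CNOT q1 -> q2
state = kron(CNOT, I2) @ state                    # Alice: CNOT q0 -> q1
state = kron(H, I2, I2) @ state                   # Alice: H on q0

# Measure q0 and q1: sample one outcome, project, renormalize.
rng = np.random.default_rng(7)
probs = [abs(state[4*a + 2*b]) ** 2 + abs(state[4*a + 2*b + 1]) ** 2
         for a in (0, 1) for b in (0, 1)]
m0, m1 = divmod(int(rng.choice(4, p=probs)), 2)
keep = np.zeros(8, dtype=complex)
keep[4*m0 + 2*m1: 4*m0 + 2*m1 + 2] = state[4*m0 + 2*m1: 4*m0 + 2*m1 + 2]
state = keep / np.linalg.norm(keep)

# Bob applies the classically communicated corrections: X^m1, then Z^m0.
if m1: state = kron(I2, I2, X) @ state
if m0: state = kron(I2, I2, Z) @ state

bob = state[4*m0 + 2*m1: 4*m0 + 2*m1 + 2]         # Bob's qubit amplitudes
print("measured bits:", m0, m1, "| Bob holds:", np.round(bob, 3))  # ~ [0.6, 0.8]
```

Only two classical bits cross the channel between Alice and Bob; the quantum state itself is never measured in transit.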

Ever since entanglement was shown to be real, it has been a constant source of inspiration or annoyance, depending on where one sits on the quantum divide. Classical and traditional physics had overwhelmed the real but counterintuitive possibilities offered up by a theory that exposed a fundamental human weakness – a bias toward one’s own scale and a preference for the status quo. For over fifty thousand years, humans have gotten used to object permanence, and even babies are born with firmware that fully accommodates such expectations. Research has provided tantalizing evidence that the brain is indeed a quantum computer, but, ironically, most brains are unable to reprogram themselves away from the stupor of evolution.

Extra-terrestrial life at different scales and levels of sophistication may be able to accommodate such broader frameworks faster, with an exponential increase in knowledge.

Sunday, January 13, 2013

Hundred year yawn

Recent research from the University of Arizona illustrates that the parameter space that still “fits” the “dark energy” explanation for the observed expansion of the universe is running out. The study shows that the remaining relevant patch for dark energy enthusiasts is a two-inch square in a football field of available ranges. The original explanation, proposed a hundred years ago, still seems more relevant than the “theories” that have been doled out since then.

Theoretical physics – a hundred-year yawn – has accomplished little since the remarkable insights of Einstein, Planck and Heisenberg. More importantly, it has opened so many dead ends that it has drained the intelligence and creativity of two generations of physicists, with nothing to show. Granted, in the last few decades engineers have made telescopes, atom smashers, space vehicles and other remarkable things to see, listen and measure that would have made Einstein blush. But what they have delivered is noise in an expanding ocean of particles, confusion and empiricism.

Three primary tendencies have held the field back. First, educational institutions around the world appear to have given theory lower priority than experimentation. They have led budding physicists into rabbit holes of such size and scope that they get lost in them for entire careers. Second, space exploration has become an activity of pride and prejudice, with entire nations competing to send people and machines to nearby planets, asteroids and moons. Now, seeking extra-terrestrial life by looking for “Earth-like planets,” or finding water, is considered research of the highest order. Finally, scientists seem to have lost the pure joy of seeking the truth – instead, they appear happier if they can publish more papers or grab a Nobel Prize.

Let’s hope the hundred-year yawn in theoretical physics ends soon – but that will take the emergence of beautiful brains once again.

Saturday, January 5, 2013

Research computer

Recent work by Emory mathematician Ken Ono finds further explanation and meaning in the work of Ramanujan, who died nearly a century ago. The applications stretch in many directions, including explaining the behavior of certain types of black holes. This illustrates that groundbreaking insights can originate from pure imagination, even in the absence of formal education and proven tools. More importantly, it is possible that the systematization of education, tools and processes is at the heart of declining productivity in true insight generation.

Most of today’s science is geared toward proving a stated hypothesis. This is akin to manufacturing – the execution of a defined and specified process that includes design of experiments, data gathering, statistical analysis and hypothesis testing. Indeed, humanity is fast approaching a “singularity” in which computers can be trained to conduct research better than humans do, for the research process is more systematized and programmable than anything else. It is possible that the first type of Artificial Intelligence to become practical is a research computer, able to design studies, collect data, analyze, test and prove a hypothesis given to it. Such a machine, however, will be incapable of imagination and insight generation. So it may walk, move objects and conduct research, but it will never sleep, weep or dream.
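As a minimal sketch of how programmable that loop already is, consider a hypothetical two-arm experiment in which the machine simply executes a pre-specified protocol – simulate the data gathering, run the statistical test, report the verdict – and contributes no imagination of its own.

```python
# A minimal sketch of the "research computer" loop: execute a pre-specified
# protocol end to end. The experiment is hypothetical (two simulated arms);
# the program tests the hypothesis it is given, it does not imagine one.
import numpy as np
from scipy import stats

def run_study(effect_size, n_per_arm=50, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    control = rng.normal(loc=0.0, scale=1.0, size=n_per_arm)    # data gathering
    treated = rng.normal(loc=effect_size, scale=1.0, size=n_per_arm)
    t_stat, p_value = stats.ttest_ind(treated, control)         # statistical analysis
    return {"t": round(float(t_stat), 2),
            "p": round(float(p_value), 4),
            "reject_null": bool(p_value < alpha)}                # hypothesis test

print(run_study(effect_size=0.5))   # the "design of experiments" reduced to parameters
```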

Perhaps the “singularity” that many look forward to amounts to the complete elimination of imagination from the human psyche.

Thursday, January 3, 2013

Size matters

Recent research from Sweden demonstrates that a larger brain is expensive for biological entities, compromising the size of the gut and the ability to reproduce. Using a study on guppies, the researchers show that the increased cognitive abilities of a large brain come with significant energy demands. Similarly, in humans, the brain, which makes up only 2% of total body mass, consumes over 20% of the available energy, starving other organs.
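A back-of-the-envelope check of that 2% versus 20% figure, using assumed round numbers for body mass and resting metabolic power, shows the brain drawing on the order of ten times more power per kilogram than the rest of the body.

```python
# Back-of-the-envelope check of the 2% mass / 20% energy claim. Body mass and
# resting metabolic power are assumed round numbers, not measurements.
body_mass_kg, body_power_w = 70.0, 80.0
brain_mass_kg = 0.02 * body_mass_kg     # ~1.4 kg, about 2% of body mass
brain_power_w = 0.20 * body_power_w     # ~16 W, about 20% of the energy budget

brain_w_per_kg = brain_power_w / brain_mass_kg
rest_w_per_kg = (body_power_w - brain_power_w) / (body_mass_kg - brain_mass_kg)
print(f"brain: {brain_w_per_kg:.1f} W/kg, rest of body: {rest_w_per_kg:.1f} W/kg, "
      f"ratio: {brain_w_per_kg / rest_w_per_kg:.0f}x")
```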

Given these data, it is unclear why evolution would push in the direction of growing the most inefficient organ in the body, especially if it substantially affects the ability to reproduce. Large brains have unambiguously pushed humans, other primates, dolphins and whales toward smaller family structures, increasing the risk of extinction. Humans, for example, were nearly wiped out, with only a few thousand individuals surviving an incredible bottleneck.

Many have considered natural selection an effective way to optimize systems. The data show this may not be true. Optimizing locally – allowing certain participants to gain advantage over others on short time horizons – has long-term deleterious effects on the survival of the species. In the case of humans, the large brain seems to have done further damage: individuals, attempting to maximize utility locally and within their own life spans, have accelerated the destruction of the environment, paving the way to extinction.

Size matters – an evolutionary accident and myopic natural selection have aided the development of an inefficient and ineffective organ that will drive the species to extinction.

Wednesday, January 2, 2013

Bonobo Facebook

Recent research from Duke University shows that bonobos are much more sophisticated in social networking than humans. The past several thousand years of clan behavior have left humans with distinct disadvantages in the modern world. In every realm, it has been shown that humans prefer to interact and share with those they know. Discrimination and racism – somewhat crude manifestations of the same deficiency, a proclivity to share with known individuals rather than with strangers – may have a significant negative impact on the world economy.

Bonobos appear to be different. They prefer to share (food, in this case) with strangers rather than with known friends and family, which allows them to extend their social networks faster than they otherwise could. This is highly sophisticated behavior that maximizes long-term utility – something most humans appear incapable of. It has also been shown that bonobos are less likely than humans to share anonymously, which further suggests that the behavior is driven by strategy, with an understanding of long-term and system-wide effects. The behavior of humans in social networking channels further illustrates this handicap. Most connections clump closely together, paralleling clan behavior, indicating that it will be difficult for humans to shed this baggage. Xenophobia and the prevalent resistance to free markets, trade and immigration are manifestations of the same issue.

Accelerating social software has not improved the human firmware, which is likely to remain rigid for the foreseeable future, capping economic growth and welfare.