Scientific Sense Podcast

Sunday, January 29, 2017

Strategy vs. tactics

For many hundreds of thousands of years, the human brain specialized exclusively in tactics. Managing a simple objective function with two dimensions - food and sex - meant maximizing utility over short horizons. Only very recently have human societies begun treading on strategy to maximize the long-term viability of their species, something that is not well understood by a great swath of them. Strategy is inherently inefficient, especially if viewed through a tactical lens, for strategy almost always shows negative benefits over short horizons. Human societies, led by those less mentally equipped to understand strategic implications, could deteriorate fast as their leaders attempt to implement tactics that benefit themselves.

Humans may be entering the most dangerous phase of their short existence. Sporting a badly designed infrastructure, thanks to a highly inefficient CPU and a physical frame designed to last less than half of their expected lifespan, they appear to be in really bad shape. Thoughts do not come naturally to them, but actions do, as the males of the species went out every day, hunting and pillaging to their own satisfaction. It was never profitable to think, and it has always been beneficial to show power, as their opponents were either from a different clan or animals substantially better built than themselves. Dislike and hate come to them without effort when they see a specimen that does not exhibit the attributes they are familiar with. Ironically, little do they know that the surface features or the origin of their fellow human beings are noise of such irrelevance that they are as likely to be related to anybody in the world as to their neighbor. These ideas are indeed abstract and strategic, something the politicians and policy-makers around the world are not able to analyze.

As the extraterrestrial-seeking engineers and scientists lament the absence of the travelling kind, they have to ask why an intelligent entity would even make contact with them. After all, humans do not seem to understand biological entities less endowed than themselves. It seems hopeless, as the potent and deadly combination of ego and ignorance appears to be leading this inefficient species to its certain extinction.

Monday, January 23, 2017

Depressing - but there is hope

It is depressing to think that nearly 100 people in the US, and ten times as many across the world, succumb to suicide every day. That is roughly one person lost every minute or two, somewhere in the world. With nearly 10% of the US population suffering from depressive disorders, we may not be paying sufficient attention to a disease that kills silently. Nearly 10% of this cohort - almost 3 million people - suffer from Bipolar Depression, a condition that carries a 20-30 times higher probability of attempting suicide than the general population (1). This horrible disease sometimes leads to Acute Suicidal Ideation/Behavior (ASIB) - a condition that combines suicidal thoughts with planning. And it grew by 24% between 1999 and 2014, especially among young adults.

The human brain, an evolutionary quirk, is a complex and fragile organ, prone to malfunctioning in many different ways. Initially designed to monitor and manage routine systems of the body, its massive excess capacity predictably led to thoughts and emotions not typically seen in other biological entities. That was the beginning of trouble for humans, as they struggled to understand and cope with the energy hog they carry on their shoulders. Primitive humans equated diseases of the brain with malign acts of spirits and set out to ferret out the miscreants through unthinkable interventions. Ironically, the contemporary treatment regimen for ASIB is not substantially different, as patients are mostly locked in psychiatric hospitals and receive electroconvulsive therapy (ECT). With no available medication for this indication, patients require many sessions of ECT, resulting in memory loss and confusion (1).

Large pharmaceutical companies have enjoyed a profitable franchise of Central Nervous System (CNS) treatments, including SSRI/SNRI-based antidepressants that carry an increased risk of suicide. These widely available therapies are not effective in ASIB, a costly condition to treat, as patients often progress to inpatient settings and ECT to tactically reduce the risk of catastrophic loss. The challenges of developing new medicines in neuroscience, and particularly in psychiatry, are so large that many big pharmaceutical companies have abandoned psychiatry after their antidepressants went generic. However, ASIB appears to be a druggable target (1), as Bipolar Depression, and in particular suicidal thoughts, may be modulated by the brain's NMDA receptors.


Treating this horrible disease, with its high risk of death, with a medicine is potentially a great prospect for patients, but it will require investment and concerted effort. Recent publications indicate that ketamine - an anesthetic and potent NMDA blocker - reduces the impulse for suicide and for depression, which, though related, seem not to be the same (1). However, its effect seems to be short lived, 4-7 days, and it is administered through an IV infusion, making it important to find options that can extend its effect and allow patients to be treated on an outpatient basis once they are no longer a danger to themselves.

Saturday, January 21, 2017

It's time for universal basic income

Finland's grand experiment, albeit on a small scale, in providing a Universal Basic Income (UBI) without preconditions ushers in a new dawn in modern societal design. The idea is already late for many countries, as accelerating technology makes routine jobs irrelevant and any education short of college nearly valueless. It is a regime change arriving so fast that it disallows gradual adjustment, and it affects large swaths of populations across the world. Finely tuned welfare programs that create a disincentive for the poor to seek work, and policies such as minimum wages that curb opportunities for the young to gain experience, have been creating stress in the social fabric for many decades. UBI will not only correct such disincentives but also remove the cost and inefficiencies associated with the bureaucracies that manage such programs.

The objective function for a modern society is clear - maximize aggregate happiness. Most research on happiness indicates an inverted-U relationship, with significant disutility in the absence of basic necessities or the fear of not having them in the future. UBI will remove such fear while avoiding any disincentive effects. More importantly, UBI could provide optionality for each individual, with her own private utility function, to select optimal pathways to maximize her own happiness. If each individual has the flexibility to design such pathways, then society will unambiguously maximize aggregate happiness. What's missing from the status quo of a myriad of centrally administered welfare programs is flexibility for the individual to maximize her own utility, unencumbered by the lack of basic necessities - food, shelter, health and information. UBI could provide that at a lower cost than current programs.
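
A toy numerical sketch of that argument follows; the functional forms and numbers are assumptions made for illustration, not anything from Finland's experiment. It gives every individual the same per-capita transfer and compares aggregate happiness when each person allocates it to her own taste (the UBI case) versus when a central program hands everyone the same bundle.

# A toy sketch (assumed functional forms and numbers): heterogeneous
# individuals receive the same per-capita budget, either as unconditional
# cash they allocate themselves (UBI) or as a one-size-fits-all bundle
# chosen centrally.
import numpy as np

rng = np.random.default_rng(0)
n, budget = 1000, 1.0                      # people, per-capita transfer
alpha = rng.uniform(0.2, 0.8, size=n)      # private taste for good A vs. good B

def utility(a, b, alpha):
    # Concave (Cobb-Douglas) utility: steep disutility near zero of either good.
    return (a ** alpha) * (b ** (1 - alpha))

# UBI: each person splits the cash according to her own taste; for
# Cobb-Douglas the optimal split is a share alpha on A and 1-alpha on B.
ubi_happiness = utility(alpha * budget, (1 - alpha) * budget, alpha).sum()

# Central in-kind program: everyone receives the same 50/50 bundle.
inkind_happiness = utility(0.5 * budget, 0.5 * budget, alpha).sum()

print(f"aggregate happiness, UBI:     {ubi_happiness:.1f}")
print(f"aggregate happiness, in-kind: {inkind_happiness:.1f}")

With the same budget, the self-chosen allocation is at least as good for every individual, so the aggregate is higher under UBI in this toy model - which is one way to read the flexibility argument above.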

Universal Basic Income is conceptually and practically elegant. But to implement it, politicians have to acquire a desire to do something good during the course of their long and uninterrupted careers.

Monday, January 16, 2017

Change resistant society

As India copes with demonetization - a minor perturbation to a cash-dominated society - it is clear that its long history and comparatively low diversity may be contributing to significant resistance to change - any change. A democratic system, largely driven by a few personalities since independence, has been trying to break out forever, but India remains a "country with unrealized but great potential." In spite of the legend, not many tortoises win races; most succumb to watching the hares zoom past them. To propel the country to the next level, it has to substantially change its attitude, and that is unlikely with the current generation, steeped in pride, history and an unwavering ability to cling to the status quo.

Even though it is one of the largest economies on a purchasing power parity basis, it is highly insular, thanks to "strategic policies" pursued by its non-capitalist leaders since inception. It does not figure in the top 100 countries for exports or imports measured as a percentage of GDP. This is also symptomatic of a lack of understanding of how economies grow. Specializing in one's own comparative advantages and freely trading with others who do other things better is a simple economic principle, something the leaders in India never seem to have understood. No country is good at everything, India included, but this could be anathema to the Indian diaspora, let alone its leaders.

Blindly following what seems to be successful elsewhere is an equally dangerous proposition, especially when such ideas are pushed by policy-makers enamoured of what they see on trips abroad. And India's leaders appear to be very prone to this disease. The wisdom of a few has never been shown to be effective in understanding and implementing optimal policies. What India needs is globalization, not transplantation - and the confidence to trade freely and implement free markets. With that, there appears to be no stopping it, but it is a tall order.


Saturday, December 31, 2016

A new spin on Artificial Intelligence


New research from Tohoku University (1), demonstrating pattern finding using low-energy solid-state devices that represent synapses (spintronics), has the potential to reduce the hype around contemporary artificial intelligence and move the field forward incrementally. Computer scientists have been wasting time with conventional computers and inefficient software solutions on what they hope to be a replication of intelligence. However, it has been clear from the inception of the field that engineering processes and know-how fall significantly short of its intended goals. The problem has always been hardware design, and the fact that there are more software engineers in the world than those who focus on hardware has acted as a brake on progress.

The brain has always been a bad model for artificial intelligence. A massive energy hog that has to prop itself up on a large, fat-storing gut just to survive has always been an inefficient design for creating intelligence. Largely designed to keep track of routine systems, the brain accidentally took on a foreign role that allowed abstract thinking. The over-design of the system meant that it could do so at relatively small incremental cost. Computer scientists' attempts to replicate this energy-inefficient organ, designed primarily for routine and repetitive tasks, on the promise of intelligence have left many skeletons on the long and unsuccessful path to artificial intelligence. The fact that there is unabated noise in the universe of millennials about artificial intelligence is symptomatic of a lack of understanding of what could be possible.

Practical mathematicians and engineers are a bad combination for effecting groundbreaking innovation. In the 60s, this potent combination of technologists designed neural nets to simulate what they felt was happening inside the funny-looking organ. For decades, their attempts to "train" their nets met with failure, with the artificial constructs taking too long to learn anything or spontaneously becoming unstable. They continued with brute-force methods as the cost of computers and memory started to decline rapidly. Lately, they have found some shortcuts that allow faster training. However, natural language processing, clever video games and autonomous cars are not examples of artificial intelligence by any stretch of the imagination.
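
For readers who have not seen one, the sketch below shows roughly the kind of artificial "neuron" those pioneers built - a single perceptron whose weights are nudged after every mistake. It is a minimal illustration only; it is not the Tohoku device, and modern deep networks stack millions of such units and train them with gradient-based shortcuts.

# A minimal perceptron sketch: a weighted sum of inputs, a firing threshold,
# and the classic error-driven weight update. Illustrative only.
import numpy as np

# Tiny toy task: learn the logical AND of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])

w = np.zeros(2)      # "synaptic" weights
b = 0.0              # bias (firing threshold)
lr = 0.1             # learning rate

for epoch in range(20):                        # repeated passes over the data
    for xi, target in zip(X, y):
        fired = 1 if xi @ w + b > 0 else 0     # fire or stay silent
        err = target - fired
        w += lr * err * xi                     # nudge weights toward the target
        b += lr * err

print("learned weights:", w, "bias:", b)
print("predictions:", [1 if xi @ w + b > 0 else 0 for xi in X])

On this trivially separable task the loop converges in a handful of passes; the slowness and instability described above show up when many such units are stacked and the data is less accommodating.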

To make artificial intelligence happen, technologists have to turn to fundamental innovation in hardware. And they may be well advised to lose some ego and seek help from very different disciplines such as philosophy, economics and music. After all, the massive development of the human brain came when humans started to think abstractly, not when they could create fire and stone tools at will.


1. William A. Borders, Hisanao Akima, Shunsuke Fukami, Satoshi Moriya, Shouta Kurihara, Yoshihiko Horio, Shigeo Sato, Hideo Ohno. Analogue spin–orbit torque device for artificial-neural-network-based associative memory operation. Applied Physics Express, 2017; 10 (1): 013007. DOI: 10.7567/APEX.10.013007

Thursday, December 29, 2016

Coding errors

A recent publication in Nature Communications (1) seems to confirm that DNA damage due to ionizing radiation is a cause of cancer in humans. The coding engine in humans has always been fragile, prone to mistakes even in the absence of such exogenous effects. As humans attempt interplanetary travel, their biggest challenge is going to be keeping their biological machinery error-free. Perhaps what humans need is an error correction mechanism that implicitly assumes errors are going to be a way of life. Rather than attempting to avoid errors, they have to correct them optimally.

Error detection and correction have been important aspects of electronic communication. Humans do have some experience with them, albeit in crude electronic systems. The human system appears to be a haphazard accumulation of mistakes made over a few million years. They have been selected for horrible and debilitating diseases, and every time they step out into the sunlight, their hardware appears to be at risk. It is an ironic outcome for Homo sapiens, who spent most of their history naked under the tropical sun. Now ionizing radiation from beyond the heavens renders them paralyzed and ephemeral.
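
For a sense of what that crude electronic experience looks like, here is a sketch of the classic Hamming(7,4) code, in which three parity bits protect four data bits so that any single flipped bit can be located and repaired. It is offered only as an illustration of the principle invoked here, not as anything proposed for molecular biology.

# Hamming(7,4): 4 data bits protected by 3 parity bits; any single bit flip
# can be located and corrected. A sketch of electronic-era error correction.

def hamming_encode(d):
    # d is a list of 4 data bits; returns 7 bits [p1, p2, d1, p3, d2, d3, d4].
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming_correct(c):
    # Locates and flips a single-bit error, then returns the 4 data bits.
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]            # parity over positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]            # parity over positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]            # parity over positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3           # 0 means clean, else the error position
    if syndrome:
        c = c.copy()
        c[syndrome - 1] ^= 1                  # flip the offending bit back
    return [c[2], c[4], c[5], c[6]]

codeword = hamming_encode([1, 0, 1, 1])
codeword[5] ^= 1                              # simulate radiation flipping one bit
print(hamming_correct(codeword))              # -> [1, 0, 1, 1], damage repaired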

Perhaps it is time we took a mechanistic and computing view of humans. The clever arrangement of $26 worth of chemicals seems to last a very short period of time, stricken down by powerful bugs or its own immune system. Now that the bugs have been kept at a safe distance, it is really about whether the system can code and replicate optimally. The immediate challenge is error detection and correction at the molecular level. If some of the top minds, engaged in such pointless activities as investing, death-curing and artificial intelligence, could focus on more practical matters, humans could certainly come out ahead.


(1) http://esciencenews.com/articles/2016/09/13/study.reveals.how.ionising.radiation.damages.dna.and.causes.cancer

Saturday, December 17, 2016

Does life matter?

Philosophical, ethical and religious considerations have prevented humans from defining the value of life. Short-sighted financial analysis that defines the value of life as the NPV of the future utility stream is faulty. Additionally, there is a distinct difference between personal utility and societal utility, which do not coincide. The more important deficiency in the approach is that it does not account for uncertainty in future possibilities and the flexibility held by the individual in altering future decisions. And in a regime of accelerating technologies that could substantially change the expected life horizon, the value of life is increasing every day, provided expected aggregate personal or societal utility is non-negative.
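
A toy calculation makes the gap concrete; every number below is invented for illustration. It contrasts a static NPV of an expected utility stream with the same stream valued when the individual can abandon a bad path after observing it - the option-like flexibility that a plain discounting exercise ignores.

# Toy contrast (invented numbers): static NPV of expected utility versus a
# valuation that recognizes the flexibility to change course later.

r = 0.03                      # discount rate applied to utility
years = 30
u_good, u_bad = 1.0, -0.5     # annual utility on a good path vs. a bad path
p_good = 0.5                  # chance the future turns out well

def npv(flow, years, r):
    return sum(flow / (1 + r) ** t for t in range(1, years + 1))

# Static view: discount the expected annual utility, locked in forever.
static_value = npv(p_good * u_good + (1 - p_good) * u_bad, years, r)

# Flexible view: after year 1 the individual sees which path she is on and
# can switch away from the bad one (here, to a neutral zero-utility course).
year_one = npv(p_good * u_good + (1 - p_good) * u_bad, 1, r)
later = (p_good * npv(u_good, years - 1, r) + (1 - p_good) * 0.0) / (1 + r)
flexible_value = year_one + later

print(f"static NPV of expected utility: {static_value:.2f}")
print(f"value with flexibility        : {flexible_value:.2f}")
# Flexibility truncates the downside, so the second number is larger; a
# static NPV therefore systematically understates the value of life.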

The present value of human life is an important metric for policy. It is certainly not infinite, and there is a distinct trade-off between the cost of sustenance and expected future benefits, both to the individual and to society. A natural end to life - a random and catastrophic outcome imposed by exogenous factors - is highly unlikely to be optimal. The individual has the most information to assess the trade-off between the cost of sustenance and future benefits. If one is able to ignore the excited technologists attempting to cure death with silicon, data and an abundance of ignorance, one could find that there is a subtle and gentle upward slope in the human's ability to perpetuate her badly designed infrastructure. The cost of sustaining the human body, regardless of the expanding time-span of use, is not trivial. One complication in this trade-off decision is that the individual may perceive personal (and possibly societal) utility as higher than it truly is. Society, prevented on philosophical grounds from forcefully terminating the individual, yields the decision to the individual, who may not be equipped to make it.

Humans are entering a tricky transition period. It is conceivable that creative manipulation of genes may allow them to sustain copies of themselves for a time-span perhaps a factor of 10 longer within less than 100 years. In the transition, however, they will struggle to bridge the status quo with what is possible. This is an optimization problem that may have to expand beyond the individual if humanity is to perpetuate itself. On the other hand, there appear to be no compelling reasons to do so.

Wednesday, December 14, 2016

Milking data

Milk, a new computer language created by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), promises a four-fold increase in the speed of analytics on big data problems. True big data problems are still rare - the term is freely used for anything from large Excel sheets to relational data tables - but Milk is a step in the right direction. Computer chip architecture designs have been stagnant, still looking to double speed every 18 months by packing silicon ever closer, with little innovation.

Efficient use of memory has been a perennial problem for analytics, which deals with sparse and noisy data. Rigid hardware designs shuttle unwanted information around based on archaic design concepts, never asking whether the data transport is necessary or timely. With hardware and even memory costs in precipitous decline, there has not been sufficient force behind changing age-old designs. Now that exponentially increasing data is beginning to challenge available hardware again, and the need to sift quickly through the proverbial haystack of noise to find the golden needle is back, we may need to innovate again. And Milk paves the path for possible software solutions.
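
Milk itself is an extension of C/C++ and OpenMP, and the sketch below is not its syntax; it only illustrates, in ordinary Python, the access-pattern idea behind it as I understand it: defer scattered, data-dependent references, group them by where they land, and then sweep memory in order instead of jumping around.

# Illustration only (not Milk's actual API): batch scattered references and
# visit memory in address order instead of arrival order.
import numpy as np

def histogram_naive(refs, table_size):
    # Chase each reference in arrival order: random jumps across the table.
    counts = np.zeros(table_size, dtype=np.int64)
    for r in refs:
        counts[r] += 1
    return counts

def histogram_batched(refs, table_size):
    # Collect references first, group them by target location, and apply the
    # updates in address order - the cache-friendly sweep Milk aims to recover.
    counts = np.zeros(table_size, dtype=np.int64)
    locations, hits = np.unique(refs, return_counts=True)   # sorted locations
    counts[locations] += hits
    return counts

refs = np.random.default_rng(1).integers(0, 1_000_000, size=2_000_000)
same = (histogram_naive(refs, 1_000_000) == histogram_batched(refs, 1_000_000)).all()
print("identical results:", same)

Both produce the same answer; the difference is purely in how memory is touched, which is exactly the kind of reorganization a compiler is better placed to make than a hard-wired memory hierarchy.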

Using just enough data at the right time to make decisions is a good habit, not only in computing but in every other arena. In the past two decades, computer companies and database vendors sought to sell the biggest iron to all their customers on the future promise of analytics, once all the garbage had been collected and stored in warehouses. Now that analytics has "arrived," reducing the garbage into usable insights has become a major problem for companies.

Extracting insights from sparse and noisy data is not easy. Perhaps academic institutions can lend a helping hand to jump-start innovation at the computer behemoths as they get stuck in the status quo.