Scientific Sense Podcast

Saturday, December 17, 2016

Does life matter?

Philosophical, ethical and religious considerations have prevented humans from defining the value of life. A short-sighted financial analysis that defines the value of life as the NPV of the future utility stream is faulty. Additionally, personal utility and societal utility are distinct and do not coincide. The more important deficiency in the approach is that it does not account for uncertainty in future possibilities and the flexibility held by the individual to alter future decisions. And in a regime of accelerating technologies that could substantially extend the expected life horizon, the value of life is increasing every day, provided expected aggregate personal or societal utility is non-negative.

The present value of human life is an important metric for policy. It is certainly not infinite, and there is a distinct trade-off between the cost of sustenance and expected future benefits, both to the individual and to society. A natural end to life, a random and catastrophic outcome imposed by exogenous factors, is highly unlikely to be optimal. The individual has the most information to assess the trade-off between the cost of sustenance and future benefits. If one is able to ignore the excited technologists attempting to cure death with silicon, data and an abundance of ignorance, one could find a subtle and gentle upward slope in the human's ability to perpetuate her badly designed infrastructure. The cost of sustaining the human body, regardless of the expanding time-span of use, is not trivial. One complication in this trade-off decision is that the individual may perceive personal (and possibly societal) utility to be higher than it truly is. Society, prevented on philosophical grounds from forcefully terminating the individual, yields the decision to the individual, who may not be adept enough to make it.
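The deficiency in a rigid NPV approach can be made concrete with a toy model. The sketch below (in Python, with purely hypothetical numbers) compares the NPV of a constant net-utility stream with a Monte Carlo valuation in which utility is uncertain and the individual retains the flexibility to decline negative-utility periods; the option value of that flexibility makes the second figure larger.

```python
import random

def static_npv(utility, cost, horizon, rate):
    # Deterministic NPV of a constant net-utility stream.
    return sum((utility - cost) / (1 + rate) ** t for t in range(1, horizon + 1))

def flexible_npv(utility, cost, horizon, rate, sigma, trials=5000):
    # Monte Carlo sketch: utility drifts randomly over time, and the
    # individual re-optimizes each period, declining any year in which
    # expected net utility would be negative.
    total = 0.0
    for _ in range(trials):
        u, pv = float(utility), 0.0
        for t in range(1, horizon + 1):
            u += random.gauss(0, sigma)
            pv += max(u - cost, 0.0) / (1 + rate) ** t  # flexibility floors losses at zero
        total += pv
    return total / trials

random.seed(0)
print(static_npv(10, 6, 40, 0.03))         # rigid NPV, no uncertainty
print(flexible_npv(10, 6, 40, 0.03, 3.0))  # with uncertainty and flexibility
```

The gap between the two numbers is exactly the value of the flexibility that the static analysis ignores.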

Humans are entering a tricky transition period. It is conceivable that creative manipulation of genes may allow them to sustain copies of themselves for a time-span perhaps ten times longer, in less than 100 years. In the transition, however, they will struggle to bridge the status quo with what is possible. This is an optimization problem that may have to expand beyond the individual if humanity is to perpetuate itself. On the other hand, there appears to be no compelling reason to do so.

Wednesday, December 14, 2016

Milking data

Milk, a new computer language created by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), promises a fourfold increase in the speed of analytics on big data problems. Although true big data problems are still rare, even as the term is freely applied to anything from large Excel sheets to relational data tables, Milk is a step in the right direction. Computer chip architecture has been stagnant, still looking to double speed every 18 months by packing silicon ever closer, with little innovation.

Efficient use of memory has been a perennial problem for analytics dealing with sparse and noisy data. Rigid hardware designs shuttle unwanted information around based on archaic design concepts, never asking whether the data transport is necessary or timely. With hardware and even memory costs in precipitous decline, there has not been sufficient force behind changing these age-old designs. Now that exponentially increasing data is beginning to challenge available hardware again, and the need for speed to sift through the proverbial haystack of noise to find the golden needle is in demand, we may need to innovate once more. Milk paves the path for possible software solutions.
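Published descriptions of Milk center on deferring and batching scattered memory accesses so that each region of memory is touched once rather than repeatedly. The Python sketch below is not Milk's API, only an illustration of that principle: a naive gather jumps around memory in index order, while a batched gather groups indices by block and sweeps memory sequentially.

```python
BLOCK = 8  # a stand-in for a cache line, measured in elements

def scattered_sum(data, indices):
    # Naive gather: touches memory in whatever order the indices arrive.
    return sum(data[i] for i in indices)

def batched_sum(data, indices):
    # Milk-style idea: bucket indices by block, then visit each block once,
    # turning random access into a near-sequential sweep.
    buckets = {}
    for i in indices:
        buckets.setdefault(i // BLOCK, []).append(i)
    total = 0
    for block in sorted(buckets):
        for i in buckets[block]:
            total += data[i]
    return total

data = list(range(100))
indices = [93, 5, 7, 91, 2, 95]
print(scattered_sum(data, indices) == batched_sum(data, indices))  # same answer, friendlier access pattern
```

In pure Python the locality gain is invisible, but on real hardware the batched order is what keeps cache lines from being fetched twice.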

Using just enough data at the right time to make decisions is a good habit, not only in computing but in every other arena. In the past two decades, computer companies and database vendors sought to sell the biggest iron to all their customers on the future promise of analytics, once all the garbage was collected and stored in warehouses. Now that analytics has "arrived," reducing that garbage into usable insights has become a major problem for companies.

Extracting insights from sparse and noisy data is not easy. Perhaps academic institutions can lend a helping hand to jump start innovation at computer behemoths, as they get stuck in the status-quo.

Monday, December 12, 2016

Democracy's event horizon

Recent results from a survey (1) of 2,200 Americans, showing that over 1 in 4 believe the sun goes around the earth, are problematic for democracy. The system, which reflects the aggregate opinion of all participants, has served humanity well in recent years. However, the same characteristic could be its Achilles' heel, as its leaders will have to reflect its population. If the aggregate knowledge present in a democratic society falls below a threshold value, it can act like the event horizon of a black hole. Once through it, there is no turning back, as society will spiral down to a singularity.

There have been telltale signs in many democratic societies for some time. In the world's largest democracy, elections were decided by last names and not policy choices. In Southern Europe, star power has been more dominant. More recently, powerful democratic countries have opted for less optimal outcomes. All of this may imply that democracy, as a system, is straying from its originally intended use - to assure optimal outcomes for society in the long run. Instead, it is now more likely to reinforce low knowledge content, if that is dominant.

One democracy appears to have resisted the race to the bottom. Down under, where penalties are imposed on those who do not bother to vote, high turnout has kept the knowledge content of the voters above the democratic event horizon. It appears that the prescription for ailing democracies returning sub-optimal results is to enhance voter turnout, possibly by imposing penalties. The biased selection in the non-voter cohort may be just enough to keep the system from the plunge into the unknown.


Monday, November 28, 2016

The irony of Artificial Intelligence

The indiscriminate use of the term "Artificial Intelligence" by analysts and consultants demeans what it was originally meant to be and is still supposed to imply. Those who worked in this area for decades, well before the internet, search and autonomous vehicles existed, know full well that there is nothing "artificial" about it. The lack of a definition for "consciousness" has constrained computer scientists and philosophers alike from conceptualizing intelligence. Faster computers, more memory and unlimited capital are unlikely to advance the field any further unless we return to thoughtfully studying the underlying issues. Let's do that before a few gamers think they have accidentally invented the "artificial mind."

Intelligence, attributed solely to humans until very recently, is now shown to be present in a plethora of biological systems. Biological systems, however, are not an infinite array of on-off switches, nor do they process information as computer systems do. So the idea that one can network a large number of silicon-based idiot boxes, using mathematics from the '60s, to replicate the brain is fraught with issues. The recent fad of "artificial general intelligence" - the ability to teach a computer virtually anything, such that it can not only replicate a human but become a lot better - is a nice fantasy. What the technologists may be missing is that it is not a software problem. The millennials have become good at software, but projecting what one is good at onto hard problems may not be the best path forward.

Nature had nearly 3.8 billion years to perfect the underlying hardware that delivers intelligence. It was a slow and laborious process, and it was never a software problem. Once the hardware was perfected, it was able to run virtually any software. This may give a clue to those plunging head first into teaching machines to "think" using software and age-old mathematics. More importantly, the current architecture of computing, essentially a calculator, is not amenable to modeling intelligence. Just as there is a distinct difference between arithmetic and mathematics, conventional computers differ significantly from a true quantum computer. Adding and subtracting numbers fast is one thing; conceptualizing what it means is quite another.

The unfortunate term "Artificial Intelligence" has led many mediocre lives astray. And God, with an immense sense of humor, seems to still lead them down blind alleys.

Friday, November 25, 2016

Expert complexity

A new tool rolled out by a research institution (1) allows decision-makers to reach better decisions by surveying experts. Specifically, a related paper published in Nature Energy concludes that wind energy costs will fall by 24-30% by 2030 (2). To understand which method of alternative energy production is most viable, one has to disaggregate the problem into two distinct questions:

1. What is the status-quo economics - i.e., what is the total cost of production per unit of energy for all available modalities, including solar, wind, tidal, geothermal, nuclear, fossil and others? It is important to understand total costs, however "green" the manufacturers of the turbines and solar cells claim to be.

2. How is this cost likely to decline by scale or newer technologies or both?

The first question has significant available data and does not require any expert opinion. The second question has two parts - scale-based decline in cost and the probability of the arrival of a technology discontinuity. The former is also an empirical and engineering question that does not require experts to open up their infinite wisdom; the latter is mere speculation, to which experts could certainly contribute.

More generally, alternative energy production techniques have comparable status-quo metrics that inform which method is dominant. The idea that "we should try everything" is fundamentally faulty, as there is only one best design. The scale question requires more analysis and thought - part of it relates to total available quantity (i.e., how much energy the world could produce if it were to harness the source with 100% efficiency) and the rest to efficiency gains that accrue from scale and learning effects. Both of these questions are well explored in many fields, and thus we can easily create forecasted metrics of cost per unit of production across production modalities, without troubling the experts.
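The scale-based part of the cost question can be handled with a standard learning-curve model, Wright's law: each doubling of cumulative production cuts unit cost by a fixed fraction. The sketch below uses entirely hypothetical inputs (a 13% learning rate and a quadrupling of cumulative wind capacity by 2030), chosen only to show that a simple empirical model lands in the same neighborhood as the 24-30% figure, no expert panel required.

```python
import math

def wrights_law(cost0, cumulative0, cumulative1, learning_rate):
    # Wright's law: each doubling of cumulative output reduces unit cost
    # by `learning_rate`; doublings may be fractional.
    doublings = math.log2(cumulative1 / cumulative0)
    return cost0 * (1 - learning_rate) ** doublings

# Hypothetical inputs: cost indexed to 100 today, cumulative capacity
# quadrupling (two doublings) by 2030, 13% learning rate.
cost_2030 = wrights_law(100.0, 500.0, 2000.0, 0.13)
print(round(100.0 - cost_2030, 1))  # percent decline in unit cost
```

The learning rate itself is an empirical quantity, estimable from historical deployment and cost data rather than from expert opinion.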

Scientists and policy-makers have a tendency to complicate simple problems. Doing so typically does not add much value, even with software, experts or research. Numbers are more reliable predictors of the performance of engineering systems than experts are.


Tuesday, November 22, 2016

Eaten alive

Recent research from the University of Texas (1) sets the timing of the genus Homo's excursion out of Africa to an earlier time and a harsher reality. They "mingled" with saber-toothed cats, wolves and hyenas, and they ate about as often as they were eaten. With a brain less than half the size of the modern human's, their tools were primitive and their processes less compelling. They were effectively scavengers, with a badly designed and fragile infrastructure that was no match for the mighty cats.

Human ego may have overlooked some information in the past, as humans had imagined their ancestors tall, dark and sturdy, leaving Africa with a highly sophisticated technology, the hand ax. The UT team, based on a large sample from a 4-hectare area, argues that this is not the case. The finding has implications not only for history but also for the biases that systematically creep into academic studies. In a contemporary world where possibly half the population is unaware that its seven billion inhabitants are closely related, regardless of their outward appearances, scientists may need to do a better job of education. For without education, entire countries could walk back decades of progress and entire continents could move toward punishing tactical conflict. And that would not be significantly different from what existed 1.8 million years ago, in architecture and context.

Humans have paid a big penalty for their incessant curiosity, exploring foreign lands before they were possibly ready, but what they brought to humanity is valuable knowledge. It would be a shame if modern humans lost it.


Sunday, November 20, 2016

The great money experiment

Cash, an ancient form of monetary value facilitating transactions, is still dominant in most parts of the world. Initially, cash represented precious objects, an idea that forms the foundation of the gold standard, yet another archaic concept pursued by those who do not know that the world has changed. More recently, the English East India Company, which minted coins to send to the East to buy spices and borrowed the term "cash" from Sanskrit, may have been solely responsible for the adoption of cash across the empire. It is only apt that India is getting rid of "excess cash," the source of corruption, tax evasion and terrorism.

Paper money has substantially expanded monetary incompetence across the globe. If electrons can count and fly across the globe instantly to account for credits and debits, it is unclear why humans carry metal and paper in their pockets to transact. Bad habits die slowly, and in a world that does not barter, cash is a crutch that is no longer needed. Those who say politicians lack vision should look East, albeit at a singular experiment that breaks open new possibilities. Physical forms of cash have been inefficient, archaic and problematic for centuries, but no one had the guts to get rid of them. To time such a catastrophe, when the eyes of the world are transfixed on the other side of the globe, is pure genius.

Yes, the unexpected policy change is going to bring tactical strife to a billion people, except those who had early warnings to move their "excess cash" to Switzerland. But it may still be utility-maximizing for a country that is becoming the largest in the world, with a population languishing without technology and information. The prime minister appears forward-looking, but it is going to take more than a few trips to Google, Microsoft and Facebook to prop up a country that seems to have lost its way. Sure, its engineers and doctors are sought after, but the soul of the country, struggling to find its place, has failed to instill the confidence to propel it beyond the third-world status it has been afforded.

Getting rid of corruption and excess cash is the first step. But to go further, India has to provide technology and information to its masses in an architecture based on free markets and free trade, not what its socialist leaders proposed from inception.

Saturday, November 12, 2016

Light Medicine

Recent research from the University of Bonn (1), which demonstrates that light pulses are effective in jump-starting a dying heart, opens up a long-neglected pathway, the electromagnetic spectrum, in the treatment of diseases. Medicine, dominated by chemistry for many centuries, has been languishing. The complexity in the biology of humans, a haphazard combination of mistakes and happenstance, has given those with an engineering mindset hope. But they have been slow to recognize that the beneficial effects they find by introducing finely tuned chemicals are often followed by unknown, unanticipated and uncertain toxicity. Such was the domination of chemistry in medicine that physics was relegated to mere diagnostics. However, ancient cultures were more aware of the effect of magnetism and light on the human body, albeit through unsystematic and unproven guesswork.

A new dawn is close at hand in which physics and computing will propel human health to hitherto unknown levels. We may finally recognize that the toxicity that so often accompanies specific chemical interventions is absent in physics. In fact, it is just the opposite. The particles that propagate light and magnetism can be finely tuned and directed to the compartment of the human body that is not performing as expected. Once humans master the quantum effects exhibited by these particles, they may be in a position to hold and affect microscopic parts of the human body without pain or loss of control. The light defibrillator is exactly in this vein: it spares the patient any discomfort as it restarts an organ attempting to take a break.

The convergence of medicine and physics is a welcome trend. As theoretical physicists struggle with beautiful but unprovable fantasies such as string theory, or spend most of their careers measuring things they have no clue about, such as dark matter and dark energy, perhaps they can devote a few hours of their time to something more practical. If they do, it could change medicine and dethrone ideas we have been pursuing since the Middle Ages.