
Saturday, December 31, 2016

A new spin on Artificial Intelligence


New research from Tohoku University (1), demonstrating pattern finding with low-energy solid-state devices that represent synapses (spintronics), has the potential to deflate the hype around contemporary artificial intelligence and move the field forward incrementally. Computer scientists have been wasting time with conventional computers and inefficient software solutions on what they hope will be a replication of intelligence. However, it has been clear from the inception of the field that engineering processes and know-how fall significantly short of its intended goals. The problem has always been hardware design, and the fact that there are more software engineers in the world than those who focus on hardware has acted as a brake on progress.

The brain has always been a bad model for artificial intelligence. A massive energy hog that has to prop itself up on a large, fat-storing gut just to survive, it has always been an inefficient design for creating intelligence. Largely designed to keep track of routine systems, the brain accidentally took on a foreign role that allowed abstract thinking. The over-design of the system meant that it could do so at relatively small incremental cost. Computer scientists' attempts to replicate this energy-inefficient organ, designed primarily for routine and repetitive tasks, on the promise of intelligence have left many skeletons along the long and unsuccessful path to artificial intelligence. The unabated noise about artificial intelligence in the universe of millennials is symptomatic of a lack of understanding of what could be possible.

Practical mathematicians and engineers are a bad combination for effecting ground-breaking innovation. In the 60s, this potent combination of technologists designed neural nets to simulate what they felt was happening inside the funny-looking organ. For decades, their attempts to "train" their nets met with failure, with the artificial constructs taking too long to learn anything or spontaneously becoming unstable. They continued with the brute-force method as the cost of computers and memory started to decline rapidly. Lately, they have found some shortcuts that allow faster training. However, natural language processing, clever video games and autonomous cars are not examples of artificial intelligence by any stretch of the imagination.
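A toy sketch of why those early nets stalled: the single-layer perceptron of that era cannot represent a function as simple as XOR, so no amount of training converges. This is an illustration of the classic limitation, not code from any historical system.

```python
# A single-layer perceptron (the 1960s construct referred to above)
# cannot learn XOR: the problem is not linearly separable, so the
# learning rule never settles on a correct answer.

def train_perceptron(samples, epochs=1000, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred               # classic perceptron update
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

xor = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w, b = train_perceptron(xor)
hits = sum((1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t for x, t in xor)
print(f"XOR accuracy: {hits}/4")  # stays below 4/4, no matter the epochs
```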

To make artificial intelligence happen, technologists have to turn to fundamental innovation in hardware. And they may be well advised to lose some ego and seek help from very different disciplines such as philosophy, economics and music. After all, the massive development of the human brain came when humans started to think abstractly, not when they could create fire and stone tools at will.


  1. William A. Borders, Hisanao Akima, Shunsuke Fukami, Satoshi Moriya, Shouta Kurihara, Yoshihiko Horio, Shigeo Sato, Hideo Ohno. Analogue spin–orbit torque device for artificial-neural-network-based associative memory operation. Applied Physics Express, 2017; 10 (1): 013007. DOI: 10.7567/APEX.10.013007

Thursday, December 29, 2016

Coding errors

A recent publication in Nature Communications (1) seems to confirm that DNA damage due to ionizing radiation is a cause of cancer in humans. The coding engine in humans has always been fragile, prone to mistakes even in the absence of such exogenous effects. As humans attempt interplanetary travel, their biggest challenge is going to be keeping their biological machinery error-free. Perhaps what humans need is an error-correction mechanism that implicitly assumes errors are going to be a way of life. Rather than attempting to avoid errors, they have to correct them optimally.

Error detection and correction have been important aspects of electronic communication. Humans do have some experience with them, albeit in crude electronic systems. The human system appears to be a haphazard accumulation of mistakes made over a few million years. They have been selected for horrible and debilitating diseases, and every time they step out into the sunlight, their hardware appears to be at risk. It is an ironic outcome for Homo sapiens, who spent most of their history naked under the tropical sun. Now ionizing radiation from beyond the heavens renders them paralyzed and ephemeral.
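For concreteness, the redundancy-based correction the post borrows from communications can be sketched in a few lines. Hamming(7,4) below is the textbook scheme, offered only as an analogy for what a molecular error-corrector would have to do.

```python
# Minimal Hamming(7,4) sketch: 4 data bits protected by 3 parity bits,
# any single bit flip ("mutation") is located and repaired.

def hamming_encode(d):            # d: 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming_correct(c):           # c: 7-bit codeword
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3    # syndrome = 1-based error position, 0 = clean
    if pos:
        c[pos - 1] ^= 1
    return c

word = hamming_encode([1, 0, 1, 1])
word[4] ^= 1                       # simulate a single-bit "mutation"
print(hamming_correct(word))       # the flip is found and undone
```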

Perhaps it is time we took a mechanistic and computing view of humans. The clever arrangement of $26 worth of chemicals seems to last a very short period of time, stricken down by powerful bugs or its own immune system. Now that bugs have been kept at a safe distance, it is really about whether the system can code and replicate optimally. The immediate challenge is error detection and correction at the molecular level. If some of the top minds, engaged in such pointless activities as investing, death curing and artificial intelligence, could focus on more practical matters, humans can certainly come out ahead.


(1) http://esciencenews.com/articles/2016/09/13/study.reveals.how.ionising.radiation.damages.dna.and.causes.cancer

Saturday, December 17, 2016

Does life matter?

Philosophical, ethical and religious considerations have prevented humans from defining the value of life. Short-sighted financial analysis that defines the value of life as the NPV of a future utility stream is faulty. Additionally, personal utility and societal utility are distinct and do not coincide. The more important deficiency in the approach is that it does not account for uncertainty in future possibilities and the flexibility held by the individual in altering future decisions. And in a regime of accelerating technologies that could substantially change the expected life horizon, the value of life is increasing every day, provided expected aggregate personal or societal utility is non-negative.
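In assumed notation (the post gives none): with utility stream u_t, discount rate r and expected horizon T, the criticized calculation and the term it omits can be written as a minimal sketch:

```latex
% Naive NPV of a future utility stream (notation assumed for illustration):
V_{\text{NPV}} = \sum_{t=1}^{T} \frac{\mathbb{E}[u_t]}{(1+r)^{t}}
% The post's objection: the individual holds flexibility over future
% decisions, so the value carries a non-negative option premium:
V = V_{\text{NPV}} + V_{\text{flex}}, \qquad V_{\text{flex}} \ge 0
```

Here V_flex stands for the real-option value of the individual's flexibility to alter future decisions, the component the post argues the naive NPV leaves out.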

The present value of human life is an important metric for policy. It is certainly not infinite, and there is a distinct trade-off between the cost of sustenance and expected future benefits, both to the individual and to society. A natural end to life, a random and catastrophic outcome imposed by exogenous factors, is highly unlikely to be optimal. The individual has the most information to assess the trade-off between the cost of sustenance and future benefits. If one is able to ignore the excited technologists attempting to cure death by silicon, data and an abundance of ignorance, one could find that there is a subtle and gentle upward slope in the human's ability to perpetuate her badly designed infrastructure. The cost of sustenance of the human body, regardless of the expanding time-span of use, is not trivial. One complication in this trade-off decision is that the individual may perceive personal (and possibly societal) utility to be higher than it truly is. Society, prevented from the forceful termination of the individual on philosophical grounds, yields the decision to the individual, who may not be adept enough to make it.

Humans are entering a tricky transition period. It is conceivable that creative manipulation of genes may allow them to sustain copies of themselves for a time-span longer perhaps by a factor of 10, in less than 100 years. However, in transition, they will struggle, trying to bridge the status quo with what is possible. This is an optimization problem that may have to expand beyond the individual, if humanity were to perpetuate itself. On the other hand, there appear to be no compelling reasons to do so.

Wednesday, December 14, 2016

Milking data

Milk, a new computer language created by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), promises a four-fold increase in the speed of analytics on big data problems. Although true big data problems are still rare - the term is freely used for anything from large Excel sheets to relational data tables - Milk is a step in the right direction. Computer chip architecture has been stagnant, still looking to double speed every 18 months by packing silicon ever closer, with little innovation.

Efficient use of memory has been a perennial problem for analytics, which deals with sparse and noisy data. Rigid hardware designs shuttle unwanted information around based on archaic design concepts, never asking whether the data transport is necessary or timely. With hardware and even memory costs in precipitous decline, there has not been sufficient force behind seeking changes to age-old designs. Now that exponentially increasing data is beginning to challenge available hardware again, and the need for speed to sift through the proverbial haystack of noise to find the golden needle is in demand, we may need to innovate again. And Milk paves the path for possible software solutions.
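The underlying idea, as a generic sketch: when accesses into a large table are sparse and random, grouping them by locality before touching memory cuts wasted transfers. The Python below is a hypothetical illustration of that principle only; it does not reflect Milk's actual syntax or API.

```python
# Generic illustration of batching sparse accesses: instead of
# dereferencing random indices one by one (poor locality), group the
# requests by "cache line" so each memory region is visited once.
from collections import defaultdict

CACHE_LINE = 8  # elements per line -- an assumed toy figure

def gather_batched(table, indices):
    by_line = defaultdict(list)
    for pos, idx in enumerate(indices):      # group requests by line
        by_line[idx // CACHE_LINE].append((pos, idx))
    out = [None] * len(indices)
    for line in sorted(by_line):             # visit each region once
        for pos, idx in by_line[line]:
            out[pos] = table[idx]
    return out

table = list(range(1000))
print(gather_batched(table, [901, 3, 904, 7, 2, 900]))
```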

Using just enough data at the right time to make decisions is a good habit, not only in computing but also in every other arena. In the past two decades, computer companies and database vendors sought to sell the biggest iron to all their customers on the future promise of analytics, once they collected all the garbage and stored it in warehouses. Now that analytics has "arrived," reducing the garbage into usable insights has become a major problem for companies.

Extracting insights from sparse and noisy data is not easy. Perhaps academic institutions can lend a helping hand to jump-start innovation at the computer behemoths, as they get stuck in the status quo.

Monday, December 12, 2016

Democracy's event horizon

Recent results from a survey (1) of 2200 Americans, showing that over 1 in 4 believe the sun goes around the earth, are problematic for democracy. The system, which reflects the aggregate opinion of all participants, has served humanity well in recent years. However, the same characteristic could be its Achilles' heel, as its leaders will have to reflect its population. If the aggregate knowledge present in a democratic society falls below a threshold value, it can act like the event horizon of a black hole. Once through it, there is no turning back, as the society will spiral down to a singularity.

There have been telltale signs in many democratic societies for some time. In the world's largest democracy, elections have been decided by last names and not policy choices. In Southern Europe, star power has been more dominant. More recently, powerful democratic countries have opted for less optimal outcomes. All of this may imply that democracy, as a system, is drifting from its originally intended use - assuring optimal outcomes for society in the long run. Instead, it is now more likely to reinforce low knowledge content, if that is dominant.

One democracy appears to have resisted the race to the bottom. Down under, where penalties are imposed on those not bothering to vote, high turnout has kept the knowledge content of the electorate above the democratic event horizon. It appears that the prescription for ailing democracies returning sub-optimal results is to enhance voter turnout, possibly by the imposition of penalties. The biased selection in the non-voter cohort may be just enough to keep the system from the plunge into the unknown.

(1) http://www.npr.org/sections/thetwo-way/2014/02/14/277058739/1-in-4-americans-think-the-sun-goes-around-the-earth-survey-says

Monday, November 28, 2016

The irony of Artificial Intelligence

The indiscriminate use of the term "Artificial Intelligence" by analysts and consultants demeans what it was originally meant to be and is still supposed to imply. Those who worked in this area for decades, well before the internet, search and autonomous vehicles existed, know full well that there is nothing "artificial" about it. A lack of definition for "consciousness" has constrained computer scientists and philosophers alike from conceptualizing intelligence. Faster computers, memory and unlimited capital are unlikely to advance the field any further unless we return to thoughtfully studying the underlying issues. Let's do that before a few gamers think they have accidentally invented the "artificial mind."

Intelligence, something attributed solely to humans till very recently, is now shown to be present in a plethora of biological systems. Biological systems, however, are not an infinite array of on-off switches, nor do they process information as computer systems do. So the idea that one can network a large number of silicon-based idiot boxes, using mathematics from the 60s, to replicate the brain is fraught with issues. The recent fad of "artificial general intelligence" - the ability to teach a computer virtually anything such that it can not only replicate a human but also become a lot better - is a nice fantasy. What the technologists may be missing is that it is not a software problem. The millennials have become good at software, but projecting what one is good at onto hard problems may not be the best path forward.

Nature had nearly 3.8 billion years to perfect the underlying hardware that delivers intelligence. It was a slow and laborious process, and it was never a software problem. Once the hardware was perfected, it was able to run virtually any software. This may give a clue to those plunging head first into teaching machines to "think" using software and age-old mathematics. More importantly, the current architecture of computing, essentially that of a calculator, is not amenable to modeling intelligence. Just as there is a distinct difference between arithmetic and mathematics, conventional computers differ significantly from a true quantum computer. Adding and subtracting numbers fast is one thing, but conceptualizing what it means is quite another.

The unfortunate term "Artificial Intelligence" has led many mediocre lives astray. And God, with an immense sense of humor, seems to still lead them down blind alleys.

Friday, November 25, 2016

Expert complexity

A new tool rolled out by a research institution (1) allows decision-makers to reach better decisions by surveying experts. Specifically, a related paper published in Nature Energy concludes that wind energy costs will fall by 24-30% by 2030 (2). To understand which method of alternative energy production is most viable, one has to disaggregate the problem into two distinct questions:

1. What are the status-quo economics - i.e. what is the total cost of production per unit of energy for all available modalities, including solar, wind, tidal, geothermal, nuclear, fossil and others? It is important to understand total costs, however "green" the manufacturers of the turbines and solar cells claim to be.

2. How is this cost likely to decline, through scale, newer technologies, or both?

The first question has significant available data and does not require any expert opinion. The second question has two parts to it - scale-based decline in cost and the probability of the arrival of a technology discontinuity. The former is also an empirical and engineering question that does not require experts to open up their infinite wisdom; the latter is mere speculation that experts could certainly contribute to.

More generally, alternative energy production techniques have comparable status-quo metrics that inform which method is dominant. The idea that "we should try everything" is fundamentally faulty, as there is only one best design. The scale question requires more analysis and thought - part of it is related to total available quantity (i.e. how much energy the world could produce if it were to harness the source with 100% efficiency) and the other part to efficiency gains that accrue from scale and learning effects, as the sketch below illustrates. Both of these questions are well explored in many fields, and thus we can easily create forecasted metrics of cost per unit of production across production modalities, without troubling the experts.
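A minimal sketch of the learning-effects arithmetic uses Wright's law: unit cost falls by a fixed fraction each time cumulative production doubles. The 20% learning rate and cost figures below are assumptions for illustration, not numbers from the cited papers.

```python
import math

def wrights_law(c0, cum0, cum, learning_rate=0.20):
    """Unit cost after cumulative production grows from cum0 to cum."""
    b = -math.log2(1.0 - learning_rate)   # progress exponent
    return c0 * (cum / cum0) ** (-b)

# e.g. an assumed $60/MWh source after an 8x (three-doubling) scale-up:
print(round(wrights_law(60.0, 1.0, 8.0), 2))   # 60 * 0.8**3 = 30.72
```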

Scientists and policy-makers have a tendency to complicate simple problems. The complication typically does not add much value, even with software, experts or research. Numbers are more reliable predictors of the performance of engineering systems than experts.

(1) https://carnegiescience.edu/news/new-tools-assess-future-wind-power
(2) http://esciencenews.com/articles/2016/09/13/new.tools.assess.future.wind.power




Tuesday, November 22, 2016

Eaten alive

Recent research from the University of Texas (1) sets the timing of the genus Homo's excursion out of Africa to an earlier time and a harsher reality. They "mingled" with saber-toothed cats, wolves and hyenas, and they ate only about as often as they were eaten. With a brain less than half the size of the modern human's, their tools were primitive and their processes less compelling. They were effectively scavengers, with a badly designed and fragile infrastructure that was no match for the mighty cats.

Human ego may have overlooked some information in the past, as humans had imagined their ancestors tall, dark and sturdy, leaving Africa with a highly sophisticated technology: the hand ax. The UT team, based on a large sample over a 4-hectare area, argues this is not the case. The finding has implications not only for history but also for the biases that systematically creep into academic studies. In a contemporary world where possibly half the population is likely unaware that its seven billion inhabitants are closely related, regardless of their outward appearances, scientists may need to do a better job in education. For without education, entire countries could walk back decades of progress and entire continents could move toward punishing tactical conflict. And that would not be significantly different, in architecture and context, from what existed 1.8 million years ago.

Humans have paid a big penalty for their incessant curiosity, exploring foreign lands before they were possibly ready, but what they brought back to humanity is valuable knowledge. It will be a shame if modern humans lose it.

(1) http://www.sciencemag.org/news/2016/11/meet-frail-small-brained-people-who-first-trekked-out-africa

Sunday, November 20, 2016

The great money experiment

Cash, an ancient form of monetary value facilitating transactions, is still dominant in most parts of the world. Initially, cash represented precious objects, an idea that forms the foundation of the gold standard, yet another archaic concept pursued by those who do not know that the world has changed. More recently, the English East India Company, which minted coins to send to the East to buy spices and borrowed the term "cash" from Sanskrit, may have been solely responsible for the adoption of cash across the empire. It is only apt that India is getting rid of "excess cash," the source of corruption, tax evasion and terrorism.

Paper money has substantially expanded monetary incompetence across the globe. If electrons can count and fly across the globe instantly to account for credits and debits, it is unclear why humans carry metal and paper in their pockets to transact. Bad habits die slowly, and in a world that does not barter, cash was a crutch that is no longer needed. Those who say politicians lack vision should look East, albeit at a singular experiment that breaks open new possibilities. Physical forms of cash have been inefficient, archaic and problematic for centuries, but no one had the guts to get rid of them. To time such a catastrophe, when the eyes of the world are transfixed on the other side of the globe, is pure genius.

Yes, the unexpected policy change is going to bring tactical strife to a billion people, except those who had some early warning to move their "excess cash" to Switzerland. But it may still be utility-maximizing for a country that is becoming the largest in the world, with a population languishing without technology and information. The prime minister appears forward-looking, but it is going to take more than a few trips to Google, Microsoft and Facebook to prop up a country that seems to have lost its way. Sure, its engineers and doctors are sought after, but the soul of the country, struggling to find its place, has failed to instill the confidence to propel it beyond the third-world status it has been afforded.

Getting rid of corruption and excess cash is the first step. But to go further, India has to provide technology and information to its masses in an architecture that is based on free markets and free trade, and not what its socialist leaders proposed from inception.



Saturday, November 12, 2016

Light Medicine

Recent research from the University of Bonn (1), demonstrating that light pulses are effective in jump-starting a dying heart, opens up a long-neglected pathway - the electromagnetic spectrum - in the treatment of diseases. Medicine, dominated by chemistry for many centuries, has been languishing. The complexity in the biology of humans, a haphazard combination of mistakes and happenstances, has given those with an engineering mindset hope. But they have been slow to recognize that the beneficial effects they find from the introduction of finely tuned chemicals are often followed by unknown, unanticipated and uncertain toxicity. Such was the domination of chemistry in medicine that it relegated physics to mere diagnostics. However, ancient cultures have been more aware of the effect of magnetism and light on the human body, albeit through unsystematic and unproven guesswork.

A new dawn is close at hand, in which physics and computing will propel human health to hitherto unknown levels. We may finally recognize that the toxicity that so often accompanies chemical intervention, for want of specificity, need not arise in physics. In fact, it is just the opposite. The particles that propagate light and magnetism can be finely tuned and directed to the compartment of the human body that is not performing as expected. Once humans master the quantum effects exhibited by these particles, they may be in a position to hold and affect microscopic parts of the human body without pain or loss of control. The light defibrillator is exactly in this vein: it spares the patient any discomfort as it restarts the organ attempting to take a break.

The convergence of medicine and physics is a welcome trend. As theoretical physicists struggle with beautiful but unprovable fantasies such as string theory, or spend most of their careers measuring things they have no clue about, such as dark matter and dark energy, perhaps they can devote a few hours of their time to something more practical. If they do, it can change medicine and dethrone ideas we have been pursuing since the Middle Ages.

(1) http://esciencenews.com/articles/2016/09/13/termination.lethal.arrhythmia.with.light

Monday, October 31, 2016

Missed by a whisker

As the string theorists work out the mathematics of the "theory of everything," as the particle zookeeper adds yet another particle to the compendium of knowledge, as the space agencies and private companies clamor to dominate space and send humans to Mars, as the environmentalists lament the impending gloom and doom and as the activists burn and pillage to redirect policies, a large asteroid passed by the Earth peacefully last night, missing it by a mere 300 thousand kilometers - closer than the Moon. And more, a lot more, is in the channel heading for the blue planet. I wonder why there was nothing in the news about it.

Humans habitually worry about tactics and forget the big picture. There is an evolutionary basis for this, as most of their history has been about tactical survival. The African savannah was not a friendly place, and as she descended from the trees, the human exposed herself to danger all around. The human psyche, thus, is programmed to worry about tomorrow and not next year, for the probability of the latter having any meaning for the individual was low. As the "Nobel laureates" attempt to mend the environment, they have to at least understand that the likelihood of Earth surviving a hit by a reasonably sized asteroid is small. And there are an unimaginable number of objects in the splintered neighborhood of the solar system. The failed star, Jupiter, does her best, but sweeping up all the dangerous objects that shower down from beyond her is an almost impossible task.

Those with a few billion dollars to invest in creating a "legacy" may be best advised to analyze the risks and rewards of their investment choices. Sure, creating a "space colony" and "curing death" are indeed great ideas, but if they want humanity to survive (without which their "legacy" will have no value), they may need to focus on something entirely different. Those who have been chasing "singularity" may need to consider that even "Artificial General Intelligence" will be dead and gone by the physics of impact. Creating robots of immense talent is good, but protecting a fragile environment built over five billion years may be more valuable.

It will be ironic indeed if the "advanced humans" are wiped out by an incident similar to the one that lent mammals an opening to dominate.

Saturday, October 29, 2016

Live forever?

Recent findings that appear to indicate that aging can be stopped in mammals by starvation and the possible introduction of simple chemicals are encouraging for those who may want to live longer. Humans, a plumbing system designed by bacteria to consume and dispel nutrients favorable to them, have found it difficult to sustain life beyond thresholds established at their inception. Anti-bacterial agents did extend life tactically by a decade or so, but it appears that humans are hitting against some hard constraints. Engineering systems specializing in plumbing tasks have limited life, and humans appear to fall into this category.

Although humans carry a specialized quantum computer on their shoulders, the action has largely been in their gut, thus far. A crude tubular structure, able to break down a variety of materials to feed its occupants, has been the main design feature. But just as in any other such mechanical design, the scaffolding deteriorates over time, and the structure itself loses flexibility and crumbles after a certain number of cycles. So it is not surprising to imagine that life can be extended by starvation, as that will reduce the number of cycles imparted on the most important aspect of a human - her gut.

The more important question for humans is what they are likely to do with a possible extension of life. Would they use the extra time to seek knowledge or fight with those who do not look or think like themselves? Would they use the extra time to advance society or segregate themselves into neatly fractured boxes? Would they look forward or be encumbered by their past, driven by simple objective functions? Would they attempt to create and leave a legacy or understand that legacy is meaningless in an advanced society? Would they attempt to understand themselves or be misunderstood forever?

Monday, October 24, 2016

Extreme skew

Over 100 billion humans have lived on the Earth since they arrived a hundred thousand years ago. However, most of the fundamental knowledge they have gained over this time period, from fire to quantum mechanics, came from an incomprehensibly small number of people, perhaps as few as 100. This is a hard constraint on humans' ability to advance knowledge, as it depends very much on the arrival of a unique individual, perhaps one in a few generations. Increase in abstract and fundamental knowledge does not seem to depend on anything else except the presence of that special individual who propels humanity across a discontinuity. Forecasting knowledge progression in such a regime - a smooth stochastic evolution with an extremely small probability of a jump - is a futile exercise.

Why is this the case? The variation in intelligence across humans appears very small, and their physical abilities are as uniform as if perfectly cloned. But a single person out of a billion appears to possess special powers, albeit with all measurable characteristics of that individual within expected norms. If the probability of this error depended solely on quantity and not on time, then there could be as many as seven such individuals present in today's world. This does not seem to be the case, as there has been little advancement in fundamental knowledge for close to a hundred years.

Systems that exhibit time-based errors are similar to electrical systems such as computers, and not to mechanical systems that can move. The hundred errors seen over a hundred thousand years appear evenly spaced in time and show no acceleration in the presence of a much larger number of contemporary experiments, as the toy model below illustrates. If these errors were part of a natural process such as evolution, humanity would have produced a dozen such fundamental leaps in a single generation by now. However, all indications are that there is no difference in the arrival rate of the genius who changes the human mind.
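The contrast the post draws can be made concrete with a toy calculation: if genius arrivals scaled with the number of people alive ("quantity"), today's seven billion should produce far more per century than the small bands of the past; if arrivals follow a time-homogeneous process (roughly 100 events per 100,000 years), the rate per century stays flat. All figures below are illustrative assumptions, not estimates.

```python
# Toy comparison of the two arrival models discussed above.
RATE_PER_CENTURY_TIME = 100 / 1000          # 100 events over 1,000 centuries

def expected_geniuses_quantity_based(pop_now=7e9, pop_hist_avg=15e6,
                                     events=100, centuries=1000):
    # events per person-century under the "quantity" hypothesis,
    # scaled up to today's population
    per_person_century = events / (pop_hist_avg * centuries)
    return per_person_century * pop_now

print(f"time-based model:     {RATE_PER_CENTURY_TIME:.2f} per century")
print(f"quantity-based model: {expected_geniuses_quantity_based():.0f} per century")
# ~0.10 vs ~47: the quantity model predicts dozens per century,
# which is not what history shows -- the post's point.
```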

If the universe were a rules-based simulation or driven from a random initial state, then larger experiments would unambiguously lead to a larger number of errors. It appears to be programmed to shift regimes only with time, implying a game with possibly prescribed and controlled end outcomes.

Thursday, October 13, 2016

Space contamination

Space agencies the world over have been on a quest to send a feeble human out of the world, most likely to Mars. Now that the president has also joined the game, more investments will be going in that direction. And many private companies are on the same track as well. But is there any reason, other than nourishing one's own ego, to attempt a trip to the red planet?

Humans have ably demonstrated that they are incapable of taking care of the environment they live and breed in. To send a specimen of humankind to another planet would be the first step toward an irreversible catastrophe. Some have argued that the sterilization of robotic equipment travelling to other planets is overdone, and have even suggested that humans should seed any rock they can find with bacteria and viruses. Sure, the single-celled cousins may jump at the suggestion, but is that wise?

Exploration has been integral to the human experience. But history also shows that their explorations have resulted in irreversible damage to what they found. They have been unable to learn by observation, and the bad habits picked up in the past hundred thousand years have invariably led to scorched-earth policies. Now that the horizons are expanding beyond the struggling blue planet, there is no indication that humans have learned from their mistakes.

God does seem to have a sense of humor, as she appears to have imposed daunting space-time constraints on a species that acquired a quantum computer by sheer accident. To make matters worse, the canopy humans can observe is infinitely bigger than what they could ever experience, and, akin to an ironic twist in a horror movie, they find that even such observations are fleeting, as the whole universe runs away from them at an accelerating rate.

Those who sit back in awe of the abundance of ignorance surrounding them often have no desire to drive the truck to the next planet. But the engineers among us live for such meaningless tactics without ever thinking about what it really means.

Monday, October 10, 2016

How long?

A recent survey from Columbia (1) shows some interesting results. The researchers asked 1600 participants between the ages of 18 and 64 how long they would prefer to live. It appears that close to half of the surveyed population would not want to live beyond the average life expectancy, and almost 1/6 indicate a preference lower than that. Beyond the obvious correlation with expectations of the quality of life in old age, the data also show a significant relationship to race, with African Americans and Hispanics wanting to live a lot longer than Whites/Caucasians.

This is an important question and a rich area for further research. Till recently, humans had conquered most pathogens, but they now succumb to auto-immune diseases in which the body attacks itself. Heart disease, cancer and diabetic complications top the list, all of which portend lower quality of life in later years. This, coupled with a brain that is bored, could be a potent combination for humans, whose infrastructure was never designed to go beyond a few decades. In spite of the billions going down the drain to "cure all diseases," it is clear that Homo sapiens may require outside help to get over their genetic deficiencies.

How long does one want to live? I think it should depend on whether one is able to add value to society.

  
(1) http://esciencenews.com/articles/2016/08/25/how.long.do.you.want.live.your.expectations.old.age.matter

Sunday, October 2, 2016

Sugar & Salt

For nearly a hundred thousand years, humans were without two ingredients that would substantially cut their lives short. In the last few thousand years, sugar and salt rose to prominence, and they are now considered responsible for a large spectrum of human diseases, including heart disease and the metabolic syndrome that includes diabetes. The economics of these chemicals has been attractive from inception. Recent evidence that the sugar lobby (1) has been involved in suppressing evidence that sugar played a major role in coronary heart disease is symptomatic of the powerful forces behind these industries. Little did Gandhi know that his salt march in India, nearly a century ago, could have been deleterious to the health of Indians.

The taste buds of Homo sapiens are finely tuned to pick up these toxins, for small quantities are absolutely essential for health. The evolutionary forces could not anticipate that these chemicals would be manufactured and consumed by modern humans in such frequency and quantities that their organs simply give up, unable to cope. These materials became a sign of wealth and power, and nations warred to take control of their sources and production. More recently, they have been dispersed in colored and famous waters the world over, fundamentally transforming human health.

Two toxins, salt and sugar, are responsible for possibly half the health care costs in the world. However, there are no signs of any abatement in their production and consumption.

(1) http://esciencenews.com/articles/2016/09/13/historical.analysis.examines.sugar.industry.role.heart.disease.research


Friday, September 16, 2016

They are already here

As a chunk of investments goes down the drain seeking extra-terrestrials, drug-resistant bacteria, which cause two million illnesses and 23 thousand deaths in the US alone (1), have gotten little attention. These true extra-terrestrials, who got here hitching a ride on an asteroid, have been powerful and robust for a few billion years. They transformed the planet, oxygenated it and prepared it for larger organisms that would fall prey to them. They could dance in unison, divide and multiply, and evolve over short horizons, aided by almost countless experiments. They would create organisms that could walk and eat, to satisfy their every whim, with an intelligent control system that "advanced life" calls the microbiome. They could trigger feelings, switch their occupants' brains on and off, get them to eat what they would like and even decide on an expiry date for the host. Such is the power of the single-celled organism that they have dominated the blue planet since their arrival, perhaps by chance.

ET is already here, and has been for over four billion years. As the greatest scientists fear, they are indeed more intelligent, for they have not only designed an environment for themselves but also populated it with organisms they could control for their own benefit. As the chemists seek soil from far and wide to conquer the fully optimized machines they can only observe under a microscope, as the pharmaceutical companies run away from the declining economics of third-world diseases, as the NIH struggles to figure out how to advance thought and action, and as the foundations that proclaim to lift humanity from disease and strife stop at mosquito nets and commercials, the single-celled organism organizes further to consolidate control.

The worst idea of human history is the thought that they are "advanced."

(1) http://esciencenews.com/articles/2016/09/09/how.fight.drug.resistant.bacteria

Monday, September 5, 2016

ROE - Return on Education

As the political season heats up, politicians appear to be mired in tactics, symptomatic of a lack of vision and strategic intent. A society deserves the politicians it is afforded, but increasingly, policies that affect the long-term health and prosperity of the nation are unlikely to be affected by the appointed leaders, whoever they may be. Some want walls and others Wall Street, but neither can lift the spirit and ambitions of 320 million beating hearts across the continent. As space enthusiasts search for extra-terrestrials, technologists build artificially intelligent bots and medical experts seek methods to ameliorate pain and defeat micro-organisms, politicians, caught in a time warp, struggle to stay relevant. The younger generation has left them, for they have seen the light at the end of the tunnel, where they could imagine and build a world for the future.

As the blue planet spins and rotates to stay alive from its perpetual plunge into the center of the sun, protected from countless debris by its venerable cousin, Jupiter, its inhabitants, blinded by ignorance, appear to be ready to fight. They fight with ideas, races, countries, localities, genders and the gender-less, as if their whole life depends on it. They shout on TV and against those on the idiot box, they soften and harden policies at will, they own and disown coal, make speeches to the top bankers for money, refuse to release tax returns and proclaim wealth without understanding there is a liability side to the balance sheet. They visit churches of color, dance to show approval only to break down a few minutes later. They seek money for charities funded by the straight and the crooked, they meet lobbyists and people, shake hands and hold babies as if the whole country is melting down at the sheer sight of them.

Politicians, possibly a new species, could also be an endangered one, especially if the world educates itself with freely available information. Education has the highest return of any investment by an individual or society. It can help sort out fake politicians who assert their past and your future, confidence tricksters, non-replicable academic studies, drugs that lose efficacy over time, financial advisers who do not know finance, advertisements that promise the impossible, religious fanatics who perform magic, insurance policies that do not pay, charlatans who shout buy and sell as markets close, and space agencies that spawn manned space missions for their own sake.

Such is the state of misinformation that education has higher returns now than it ever had.

Friday, September 2, 2016

The stock and flow of IQ

A recent publication in Royal Society Open Science (1) hypothesizes an intriguing idea: the explosion in hominin intelligence a few million years ago was aided by a significant increase in blood flow to the brain, and not necessarily by an increase in its size. Moreover, the increase in the flow of blood may have resulted in the growth in brain size. If so, a lucky change in the diameter of the arteries carrying blood to the brain may be responsible for humans dominating the blue planet.

Plumbing appears to be central to health. It has been speculated that most autoimmune diseases, including even Alzheimer's disease, could be related to the body's declining ability to eliminate waste through the plumbing system it has been afforded. Age and use of these systems appear negatively correlated with their efficiency in clearance. If the recent finding is true, plumbing alone plays a central role in the evolution of intelligence, providing the energy-hungry organ with ample fuel and cooling, and nourishing it from relatively modest beginnings in Australopithecus to arguably better modern humans.

Plumbing appears to be central to modern medicine. In a regime with little threat from microorganisms, humans may perish by the clogging of their own aging infrastructure.

(1) http://esciencenews.com/articles/2016/08/31/blood.thirsty.brains


Wednesday, August 24, 2016

Proxima test

As the year 2020 draws near, those who emphatically professed that conclusive evidence for extra-terrestrial life would be found by then are getting a bit nervous. But there is good news - a rocky planet, Proxima-b, has been found in the "habitable zone," a mere 4 light-years away. All that remains now is the "contact" - or perhaps spectral evidence of oxygen, water and methane - before uncorking the champagne bottles. We are now months away from the declaration that life on Earth is not unique, albeit by proxy evidence. The space agency, as usual, is ahead of schedule.

Boring statisticians have devised constructs such as prior and posterior probabilities - something ET enthusiasts appear to have little respect for. They have left Enceladus, Saturn's beautiful moon and the most promising venue for life in the neighborhood, in the dust with the news that Proxima-b has been found, for conjecture is more powerful than facts, and spectra are more beautiful than actual measurements. Before the space agency devises methods to test for life on Proxima-b, it has to ask two important questions.

1. What is the a priori probability of life on Proxima-b?
2. If they do not find life there, does the posterior change in any way?

If the answer to question 1 is zero, or the answer to question 2 is no, then there is no logical reason to explore the rocky cousin; the toy Bayes update below makes the point concrete. We already know there is no "advanced life" there, because if there were, we would already have been at war with them.
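The prior and likelihoods in this sketch are made-up placeholders; the point is only that an informative search must move the posterior, and a zero prior can never move at all.

```python
# Minimal Bayes update for "is there life on Proxima-b?".
def posterior_life(prior, p_detect_if_life=0.8, p_false_positive=0.01,
                   detected=False):
    if detected:
        num = p_detect_if_life * prior
        den = num + p_false_positive * (1 - prior)
    else:
        num = (1 - p_detect_if_life) * prior
        den = num + (1 - p_false_positive) * (1 - prior)
    return num / den

prior = 0.05                                   # assumed a priori probability
print(posterior_life(prior, detected=False))   # ~0.010: a null result informs
print(posterior_life(0.0, detected=False))     # 0.0: with a zero prior,
# no observation changes anything -- the search would be pointless.
```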

Tuesday, August 16, 2016

The fifth force

Recent research from the University of California, Irvine (1) speculates on the existence of a fifth fundamental force, apart from gravitation, electromagnetism and the weak and strong nuclear forces. This is possibly a newer direction that counters the tendency of contemporary physicists to spawn yet another "fundamental particle" any time they do not understand a phenomenon. Speculating on the existence of a new force, at the very least, shows some creativity in high energy physics, which has been resting on the laurels of its recent discovery of the God particle in the noise generated in Geneva.

It appears that we are in a stalemate. Physics, dominated by technicians, has been running amok, plunking billions down the tunnel, proving particles of fantasy. Few are asking whether proving fantastic particles advances the field in any way. Recently, the engineers hung mirrors to measure gravitational waves to the tune of the diameter of a proton, to prove that two black holes merged nearly a billion years ago. Could somebody prove otherwise? Physics has reached a plateau, in which physicists can prove anything with the help of engineers.

Since we already have over a hundred "fundamental particles," perhaps defining a new "fundamental force field" is a step in a better direction.

(1) http://esciencenews.com/articles/2016/08/15/uci.physicists.confirm.possible.discovery.fifth.force.nature





Thursday, August 11, 2016

Deeper learning

Recently, statistical and mathematical techniques that were tried and abandoned decades ago in teaching computers to be smarter have surfaced again with better-sounding names. With nearly zero-cost computing power in the cloud, some companies have been plunging head first into the deep abyss. Deep learning is stylish - some even try "deep mind" - in an attempt to replicate the complex human. What the younger scientists may not know is that most of what they have found has been known for a long time, and one should not take advancements due only to the recent availability of cheap computing power and memory as "ground-breaking." Disappointments may be in store for those highly optimistic about "solving" human intelligence. The convergence of neuroscience and computer science is a welcome trend, but a dose of realism may be apt medicine for those modeling the mind, downloading the brain and possibly curing physical death.

Ever since she stood up in the African savannah a few hundred thousand years ago, the human has been puzzled by her own mind. She searched the skies, climbed mountains and dived into the oceans, seeking the object she could not herself define. The theory of consciousness has eluded her, for what she was seeking obviously interfered with the tools she was using. It was the ultimate prize. If she could understand herself, then a whole new world could open up. The body could be separated from the mind, and the latter could then be rendered immortal. The mind could be replicated and networked and perpetuated across space and time. She could create and capture imagination at will. She could solve problems of infinite complexity, travel into interstellar space or even to another universe. If only she could understand the mind and concoct a theory of consciousness. But alas, it is not to be. Whatever one calls it, the "neural network" has failed to show signs of consciousness. Yet another technology is substantially sub-optimized by engineers and scientists, most comfortable with deterministic answers to complex questions.

Ignorance is highly scalable in the presence of infinite computing power.

Monday, August 8, 2016

Premature existence

Recent ideas from Harvard (1) speculate on the lack of maturity of life as we find it on Earth. Contemporary theory, which appears highly unlikely to be true, suggests an origin of the universe about 14 billion years ago and its ultimate demise in about ten trillion years, placing it proportionally far closer to birth than death. Life, if it is a property of the system, has had only a short moment in this unimaginably long time horizon. So prototypical observations of life, such as what we find in this uninteresting corner of the Milky Way, a rather commonplace galaxy, are premature by any stretch of the imagination. Even after the Sun balloons into a red giant in less than five billion years and consumes the blue planet, with all its ignorance locked up in a small space, the universe will continue and create life forms of much greater capability and interest.

Life on Earth appears to show severe limitations and to indicate prototypical experimentation, with infinitesimal life spans of biological units that appear incapable of perpetuating information and knowledge across time. Such a design can only be the result of an unsuccessful trial among a large number of possible simulations. The fact that "advanced life" is battling the microscopic ones on Earth can only imply very early and unsophisticated experiments to gather raw data. If there is any learning present in this prototypical system, it has to be about what does not work rather than the other way around. The fact that the blue planet, at the exact distance from a medium-sized star, with such abundant resources and stable climate, has been unable to produce intelligence may instruct future experiments in what not to attempt.

It is early but known experiments appear to be in the wrong direction.

(1) http://esciencenews.com/articles/2016/08/01/is.earthly.life.premature.a.cosmic.perspective

Tuesday, August 2, 2016

Biology meets computing

The ease of biological manipulation by chemical means has, for over a century, held life sciences companies back from exploring a system that is equally receptive to electromagnetic interventions. Recent news that GSK has partnered with Alphabet to advance the field of bioelectronics is encouraging. The idea is to impart electrical signals to neurons via micro implants to cure chronic diseases. Although it is a profitable path to pursue, some caution may be in order.

There is a long history of engineering experts attempting prescriptive interventions on biological systems with an expectation of deterministic outcomes. Humans are generally good at engineering and traditional computing, but biological systems do not behave in such a predictable fashion. Their engineering competence comes from selection, in which simple tools based on Newtonian physics let them survive the harsh environment they were in for nearly a hundred thousand years. The modern game is distinctly different, and it is unlikely that they possess the skills to fast-track with the hardware they have built up. With "Artificial Intelligence" soaking up the airwaves, it is important for technology giants to bias toward humility. As projects such as the "death cure" prove so slow to come to fruition as to annoy the technologists behind them, perhaps not getting too excited about curing diabetes through neuron stimulation is a good idea. After all, nature took four billion years to get here, albeit without a computer farm that soaks up a high share of the world's electricity production. Competing with nature is likely to take a bit more than that.

The convergence of biology and computing is unavoidable. However, it is unlikely to be advanced by those with stagnant notions of either field.

Sunday, July 31, 2016

The inverted U

The value individuals add to society appears to be in the shape of an inverted U. On the high end - philosophers, scientists and technologists, who seem to know everything, appear to add very little of practical value. Recently, a scientist remarked, "when we found the gravity waves, the whole world stopped," unaware of the fact that most of the world did not know or care. On the low end, ignorance is as dangerous as cyanide, as ably demonstrated by contemporary politicians. In this scheme, the highest value to society appears to come from those in the middle - not too ignorant and not too arrogant. On both ends, there is a common factor - a brain that is idle, because of either stupor or conceit.

To maximize the value that individuals contribute to society, it appears that they have to be lifted from the abyss of ignorance or shown a practical path down from the mountains. The perception of knowledge is as dangerous as a lack of awareness of ignorance, for both result in a loss of value to society through a lack of contribution. The former, locked behind ivy walls, lament a world of ignorance; the latter, aided and abetted by their leaders, scorn knowledge. In the middle, humans of blood and flesh are caught in a time warp they simply cannot escape. Apathetic and lethargic, the middle withdraw from media events and elections and watch from the sidelines. This is a movie with a sad ending - dominated either by the ignorant or the arrogant, with little to differentiate between them.

Neither abundance nor lack of knowledge adds societal value. Those who are aware of their limitations do.


Wednesday, July 27, 2016

Simplified diagnostics

Research from Columbia (1) indicates that impairment in odor identification portends cognitive decline and possibly Alzheimer's disease. This simple test appears to dominate the more prevalent PET and MRI scans for the diagnosis of these ailments. It is a constant reminder that humans are far from mastering, by engineering and scientific means, the workings of the organ they carry on their shoulders, and that simpler measurements of inputs and outputs dominate such mechanics.

The human brain has been an enigma. It consumes vast amounts of energy and often produces little utility, either to the immediate host or to society at large. It effectively delegates routine affairs to the vast central nervous system attached to it by a thread and maintains a small garbage-collection mechanism within itself to keep track, automatically, of the estate it manages. Often, it gets bored and shows signs of over-design, a quirk of evolution that endowed it with vast processing power by incorporating quantum computing in a small and efficient space. Lack of large memory has pushed it to venture into the production of heuristics and to show signs of intelligence. But acceleration in technology and society has stretched it, rendering some functions of importance irrelevant and elevating the irrelevant to importance.

Its decline is painful to close observers, but not necessarily so for the host herself. In a regime that disallows rolling back time, the objective function has to be minimizing pain, physical or emotional. A system that pragmatically allows such an outcome by its own deterioration can only be considered good. Diagnostics that do not require cutting open the delicate organ or injecting it with radioactive materials may be the best, for the recognition of the start of decline of this beautiful machine is not necessarily good for the host or for close observers.

(1) http://esciencenews.com/articles/2016/07/27/smell.test.may.predict.early.stages.alzheimers.disease

Tuesday, July 19, 2016

Shining a light on a black hole

Research from the Moscow Institute of Physics and Technology (MIPT) (1) hypothesizes a measurable and elegant difference between a black hole and a compact object. The event horizon of the black hole, defined by the Schwarzschild radius, is significant: anything slightly bigger shows fundamental differences in behavior. A beam of scattered particles shows discrete spectra in the presence of a compact object that has escaped collapsing into a black hole. If it were a black hole, it would be in a constant process of collapse, with a complete stoppage of time for an external observer, resulting in continuous and smooth spectra.
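For reference, the Schwarzschild radius invoked here is the standard formula, with G the gravitational constant, c the speed of light and M the mass:

```latex
r_s = \frac{2GM}{c^2}
```

For a solar mass this comes to about 3 km; the MIPT distinction concerns compact objects only slightly larger than r_s, whose scattering spectra stay discrete rather than smoothing out.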

The concept of a black hole has been an enigmatic thought experiment for physicists and amateurs alike. Contemporary theory fails at the singularity and speculates a stoppage of time inside the event horizon, something that cannot be fully envisioned by humans trained in the practical regime of Newtonian mechanics. A black hole will never stop collapsing from an external perspective, and so there cannot be any ex-post question about a black hole. Theories that attempt more detailed explanations beyond the event horizon are fantasy, just as the mathematically elegant string theory cannot be tested. In spite of all the engineering progress of the last hundred years, fundamental understanding has remained akin to a black hole - in suspended animation. A handful of men and women from the turn of the last century remain responsible for most of the abstract knowledge that humans have accumulated. The reasons for this are unclear, but lack of imagination appears to be the prime suspect.

Fooling around with mathematics may give contemporary scientists satisfaction but explaining the stoppage of time will require more than that.

(1) http://esciencenews.com/articles/2016/07/01/the.energy.spectrum.particles.will.help.make.out.black.holes




Thursday, July 14, 2016

You are what you learn

Recent research from MIT (1) shows that the underlying attributes of music - consonance and dissonance - are not hard-wired. Contrasting the preferences of tribes with little exposure to Western music, such as some Amazon tribes, with those of groups with gradually increasing exposure, culminating in accomplished American musicians, the researchers show that the preference for consonance over dissonance is learned. Music, thus, appears to be personal, with preferences largely generated by experience rather than by an innate mechanism in the brain.

In the contemporary regime of accelerating data, the brain is bombarded with an information stream it was never designed to tackle. An intricate quantum computer, specialized in pattern finding but with rather limited memory, the brain has been stretched to the point of undervaluing its advantages, struggling to keep large swaths of data in its limited memory banks. The learning processor, however, has been able to efficiently design and store information as heuristics and dump the underlying raw data as fast as it can. As it loses history, the stored heuristics drive function and generate preferences, as if they were part of the original operating system.

The finding has implications for many areas, not the least of which is the treatment of central nervous system (CNS) diseases such as racism, alcoholism and ego. Fast discarding of underlying information, due to a lack of storage capacity, prevents back-testing of learned heuristics. A limited training set of underlying data could have irreversible and dramatic influences on end outcomes. More importantly, a brain that is trained with misguided heuristics cannot easily be retrained, as the neurons become rigid with incoherent cycles.

You are what you listen to, you are what you eat and more importantly, you are what you learn.

(1) http://esciencenews.com/articles/2016/07/13/why.we.music.we.do

Tuesday, July 5, 2016

The failure of finite elements

Engineers and mathematicians, with a core competence in building complex structures from elemental and standardized components, have had a tough time with domains not amenable to prescriptive and deterministic logic. These include high energy physics, biology, economics and artificial intelligence. The idea that the behavior of a system cannot be predicted from its components is foreign to most hard sciences and to their applications, supported by engineering and technology.

In complex organisms such as companies, it has long been recognized that outcomes cannot be predicted by an analysis of their components, however standardized they may be. The "rules of engagement," if not defined in elegant and closed-form mathematics, appear to be less relevant for those seeking precision. However, there is almost nothing in today's world that can be defined so precisely, and the recognition of this is possibly the first positive step toward embracing reality.

The interplay between physicists wanting to prove century-old predictions and engineers standing ready to prove anything with heavy and complex machines has been costly to society. The interplay between biologists and chemists wanting to influence systems with precise and targeted therapy and engineers standing ready to do so has been costly to society. The interplay between economists looking to apply statistical precision to the unknown and engineers ready to build models for whatever is needed has been costly to society.

Complex systems cannot be broken down into finite elements, for the behavior of the system does not emanate from its components.

Thursday, June 30, 2016

Cognitive monopoly

Major discontinuities in human history have often led to monopoly positions in subsequent markets, driven by winner-takes-all characteristics. In the modern economy - automobiles, airplanes and computers certainly fit this view. In the case of the internet, invented with taxpayer investments, attempts by a few to monopolize the flow of electrons have been averted thus far. But "net neutrality" is not something that rent-seeking behemoths are likely to accept in the long run, even if they did not pay for it.

The nascent wave - machine cognition - has the monopolists scrambling to get the upper hand. In this wave, capital, as measured by megaflops and terabytes, has a significant advantage. The leaders, flush with computing power, seem to believe that there is nothing that may challenge their positions. Their expectations of technology acceleration appear optimistic, but nonetheless we appear to be progressing on an interesting enough trajectory. Although many, including the world's leading scientist, are worried about runaway artificial intelligence, one could argue that there are more prosaic worries for the 7 billion around the world.

Monopolies generally destroy societal value. Even those with a charitable frame acquire the disease of the "God complex" as the money begins to flow in. Humans are simple, driven by ego and an objective function biased either toward basic necessities or toward irrational attributes that are difficult to tease out. Contemporary humans can be easily classified by intuition, without even the need for the simplest of algorithms - nearest neighbors - into those with access to information and those without. Politicians and policy makers have been perplexed by the fact that such a simple segmentation scheme seems to work in every part of the world's population, from countries to counties and cities. Cognition monopolists will make it infinitely worse.
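For readers unfamiliar with the algorithm name-dropped above, a minimal nearest-neighbor sketch follows; the two features and all data values are invented purely for illustration.

```python
import numpy as np

# Minimal 1-nearest-neighbor classifier: a new point takes the label of
# its closest training point (Euclidean distance). Features and values
# below are invented purely for illustration.
X_train = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y_train = np.array(["has_access", "has_access", "no_access", "no_access"])

def classify(x_new):
    distances = np.linalg.norm(X_train - x_new, axis=1)  # distance to each training point
    return y_train[np.argmin(distances)]                 # label of the closest one

print(classify(np.array([0.7, 0.6])))  # -> has_access
```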

Can mathematics be monopolized? The simple answer is yes. In a regime of brute force over computing power highly available to a few, answers could be found by the blind and the dumb. Perhaps there is still hope for the rest, as we have seen this movie before.



Saturday, June 25, 2016

Clans continue

It appears clear that habits formed over a hundred thousand years cannot be changed in a mere hundred. As homo sapiens ventured out of the African savannah, they were still tightly organized as small clans, less than a hundred in strength, each with its own unique language, culture, religion and morality. In Europe and Asia, within hundreds of years of arrival, they erased their close relatives with astounding efficiency. They also successfully navigated disease and climatic change that reduced them to a few thousand - emerging out of the bottleneck with even tighter clan relationships.

Technology - aircraft, computers and the internet - opened up the modern economy in the blink of an eye. Economists, excited by the possibilities, argued for the opening up of countries, continents and economies, but they did not account for the behavior patterns integrated deeply into the human psyche. Countries cling to their languages and apparent cultural nuances, aided by politicians who, in autocratic and socialistic regimes, seem to have convinced the populace that they can implement strategic policies that will make their countries "great again." In advanced democracies, a larger percentage of the population seems to have taught itself the same ideas, and in some rare cases they have found surrogates who will sing the same tune as the autocrats, even though they do not know the words to the music. A dangerous trend has emerged in clans that profess to be democratic and sophisticated. The question is whether learning from mistakes is possible - something that made humans successful in the past. Ironically, in the complex modern economy, the outcomes are not clearly observable and often have long cycles. Getting mauled by a tiger is immediate feedback, but a stagnant and deteriorating economy provides little feedback for the larger population.

The modern economy, still largely driven by the clan instincts of the seven billion who occupy the Earth, cannot be shocked out of its stupor by logic. Perhaps photographs from space that show the little blue spot in the midst of chaos may appeal to the artistic side of humans. A little closer, they will find none of the demarcations depicted on maps and globes. After all, humans have shown great capabilities to think abstractly, albeit such thoughts are not often tested by logic.


Saturday, May 28, 2016

Redefining Intelligence

Intelligence, natural or artificial, has been a difficult concept to define and understand. Methods of measuring intelligence seem to favor speed and efficiency in pattern finding. "IQ tests" certainly measure the ability to find patterns, and artificial intelligence aficionados have spent three decades teaching computers to get better at the same. Standardized tests, following the same template, appear to measure the same attribute but couch the results as "aptitude" - perhaps to make it sound more plausible. And, across all dimensions of education and testing, this notion of intelligence and hence "aptitude" appears prevalent.

However, can the speed of pattern finding be used as the only metric for intelligence? Certainly, in prototypical systems and societies, efficiency in finding food (energy) and fast replication are dominant. Pattern finding is likely the most important skill in this context. If so, one could argue that the status-quo definition of intelligence measures a characteristic that is most useful for maximizing a simple objective function, governed largely by food and replicability. At the very least, a thought experiment may be in order to imagine intelligence in higher order societies.

If intelligence is redefined as the differential of the speed of pattern finding - an acceleration in pattern finding - then it can incorporate higher order learning. In societies where such a metric is dominant, the speed of finding patterns from historical data, albeit important, may not qualify as intelligence. One could easily see systems with a very slow speed of pattern finding at inception, if energy is focused more on the differential, allowing such systems to gain knowledge exponentially at later stages. Sluggish and dumb, such participants would certainly be eradicated quickly in prototypical societies, before they could demonstrate the accelerating phase of knowledge creation.
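One way to formalize the distinction, purely as a sketch: let P(t) be the cumulative number of patterns a system has found by time t. Status-quo tests reward the rate, while the redefinition proposed here rewards the acceleration:

```latex
\text{status quo: } \frac{dP}{dt} \qquad \text{proposed: } \frac{d^{2}P}{dt^{2}}
```

A system with a low rate but a high acceleration at inception looks sluggish early and compounds later - exactly the participant a prototypical society would eradicate first.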

Intelligence, ill defined and ill measured, may need to be rethought if humans are to advance to a Level 1 society. It seems unlikely.

Monday, May 23, 2016

Salt water bubbles

Economists closer to salt water appear to be prone to thoughts of inefficiency and bubbles in the financial markets, something that can be cured by a single trip to the windy city. A recent study from Columbia University (1) asserts that it found over 13,000 bubbles in the stock market between 2000 and 2013. Using supercomputers, no less, and "big data," the authors appear to have "conclusively shown" that stock prices take wild and persistent excursions from their "fair values." Unfortunately, these academics, who profess to be "data scientists," are yet to encounter the phenomenon of the "random walk" - further evidence that "data scientists" should stay away from financial markets. After all, the "physicists" who descended on Wall Street have had a checkered history of "abnormal returns" wrapped in consistently negative alpha.
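To see why "bubbles" are so easy to find, consider a minimal sketch: a pure random walk, which by construction contains no bubbles, still wanders far from its starting "fair value" for long stretches. The horizon and threshold below are arbitrary illustration choices, not parameters from the study.

```python
import numpy as np

rng = np.random.default_rng(42)

# A pure random walk: by construction it carries no information and no
# "bubbles," yet it strays far from its starting value for long stretches.
steps = rng.choice([-1.0, 1.0], size=3500)   # roughly 14 years of trading days
price = 100.0 + np.cumsum(steps)

# Count "bubble-like" episodes: entries into a band more than 10 units
# from the starting "fair value." Both numbers are arbitrary choices.
far = np.abs(price - 100.0) > 10.0
episodes = int(np.count_nonzero(np.diff(far.astype(int)) == 1))

print(f"Bubble-like episodes found: {episodes}")
print(f"Fraction of days far from 'fair value': {far.mean():.2f}")
```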

The remark from a Harvard graduate student - "I expected to see lots of bubbles in 2009, after the crash, but there were a lot before and a lot after" - is symptomatic of the problem faced by "data scientists" seeking problems to solve in super-domains they have no clue about, where the participants who determine outcomes are themselves equipped with pattern-finding technology. They may have better luck in real markets, for prices in financial markets are determined by a large number of participants, each with her own inefficient algorithms. The most troubling aspect of the study is that its authors believe that "a bubble happens when the price of an asset, be it gold, housing or stocks, is more than what a rational person would be willing to pay based on its expected future cash flows." In a world immersed in intellectual property, where future cash flows cannot be forecasted precisely, the value of an asset cannot be determined by such simple constructs, which have been rendered invalid for decades.
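For reference, the construct being dismissed is the textbook discounted-cash-flow value,

```latex
V_0 = \sum_{t=1}^{\infty} \frac{E[CF_t]}{(1+r)^{t}}
```

where E[CF_t] is the expected cash flow in period t and r the discount rate - precisely the quantities that cannot be pinned down for IP-heavy assets.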

The lure of financial markets has been problematic for "data scientists" and "physicists." However, a cure is readily available in academic literature emanating from the sixties.

(1) http://esciencenews.com/articles/2016/05/04/stocks.overvalued.longer.and.more.often.previously.thought.says.study

Monday, May 16, 2016

Small step toward bigger hype

Recent research from the University of Liverpool (1) suggests a method by which computers could learn languages through semantic representation and similarity look-ups. Although this may be in the right direction, it is important to remember that most of the work in teaching computers language, or even fancy tricks, is not in the realm of "artificial intelligence"; rather, it belongs to the age-old and somewhat archaic notion of expert systems. Computer giants, while solving grand problems such as Chess, Jeopardy, Go and self-driving cars, seem to have forgotten that rules-based expert systems have been around from the inception of computers, long before some of these companies were founded. The fact that faster hardware can churn through larger sets of rules more quickly is not advancing intelligence, but it is certainly helping efficient computing.
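The "similarity look-up" idea reduces to something like the cosine-similarity sketch below; the word vectors are invented toy values, not the Liverpool representation.

```python
import numpy as np

# Toy semantic vectors; real systems learn these from large corpora.
# All values are invented for illustration.
VECTORS = {
    "king":  np.array([0.90, 0.80, 0.10]),
    "queen": np.array([0.85, 0.82, 0.15]),
    "apple": np.array([0.10, 0.20, 0.90]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 means identical direction, 0.0 means orthogonal.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(word):
    # Rank every other word by similarity to `word` and return the best.
    candidates = [(w, cosine(VECTORS[word], v)) for w, v in VECTORS.items() if w != word]
    return max(candidates, key=lambda pair: pair[1])

print(most_similar("king"))  # -> ('queen', 0.99...)
```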

Engineering schools appear to still teach ideas that are already obsolete. Programming languages have been frozen in time, with prescriptive syntax and rigid control flow. Today's high level languages are certainly practical and immensely capable of producing inferior applications. Even those who could have "swiftly" assembled knowledge from previous attempts seem to have concocted a compiler that borrows from the worst that has gone before it. As they proclaim "3 billion devices already run it" every time an update is pushed, or conduct conferences around the globe dotting and netting, the behemoths don't seem to understand that their technologies have inherent limitations.

Computer scientists, locked behind ivy walls, are given skills that the world does not need anymore.

(1) http://esciencenews.com/articles/2016/05/06/teaching.computers.understand.human.languages


Thursday, May 12, 2016

Nutritional genetics

Research from Indiana University (1) speculates that physical traits could be substantially impacted by food. The adage that "you are what you eat" appears to work at a deeper, genetic level. In low complexity biological systems, such as ants and bees, variation in food at the larval stage seems to explain specialization at the genetic level. If true, this has implications beyond what has been observed.

Food, a complex external chemical, has to be metabolized, utilized and purged by biological systems routinely. Although it is clear that available energy content and processing efficiency will depend on the variation and complexity in inputs, the idea that food could cause genetic specialization is fascinating. More importantly, this may lead to better design of food to favorably impact physical and mental conditions, the latter possibly holding higher promise for humans.

Ancient cultures and medicines have routinely relied on food as the primary way to remedy tactical issues. The Indiana research may provide a path to propel this idea into more systematic and planned impacts.

(1) http://esciencenews.com/articles/2016/05/12/you.are.what.you.eat.iu.biologists.map.genetic.pathways.nutrition.based.species.traits

Thursday, May 5, 2016

No safety net

Recent research from Johns Hopkins (1) suggests there are over a quarter of a million deaths in the US per year due to medical errors. It is a sobering observation that future generations will look back on with anguish and, perhaps, incredulity. At the height of technology, we are slipping, not because of a lack of know-how but rather a lack of application. One preventable death is one too many, and the fact that medical errors are the third leading cause of death in the US is immensely troubling.

Unfortunately, technology does not solve problems by itself. Bigger data and faster computers are likely irrelevant if they cannot fundamentally influence decision processes and allow information flow to enhance decision quality. It is not about precision - there is no such thing - but about the systematic use of all available information at the point of decision. Further, the human brain, with its inherent limitations, is unable to minimize downside risk in a regime of high utilization and volatility. A loss of life, a traumatic and life changing event for any healthcare provider, looms high, but the environment simply does not allow anything more than what is tactically possible. The lack of a safety net below cascading, complex and error-prone processes suggests the need for a sudden and impactful change that most technology companies are unable to help with.

It is high time that healthcare embraced practical applications of available technologies to improve patient health and welfare.

(1) http://esciencenews.com/articles/2016/05/04/study.suggests.medical.errors.now.third.leading.cause.death.us

Saturday, April 30, 2016

Predictions with a single observation

A recent study from the University of Rochester (1) claims to improve the "Drake equation" - a constant reminder that multiplying random numbers does not provide any additional insights not present in the components. Now, with exoplanets galore, the study, which appeared in Astrobiology, claims to put a "pessimistic estimate" on the probability of the non-existence of advanced civilizations elsewhere in the universe. Just as in the previous attempt, the study suffers from traditional statistics. Most agree that a single observation is not sufficient to predict anything, let alone the age and capabilities of such civilizations, or the distance from Earth at which they could reasonably exist.
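For reference, the construct in question is the standard Drake equation,

```latex
N = R_{*} \cdot f_p \cdot n_e \cdot f_l \cdot f_i \cdot f_c \cdot L
```

a product of a star-formation rate, a string of fractions (planets, habitability, life, intelligence, communication) and a civilization lifetime - most of which are guesses, so the product carries no more insight than the unknowns do individually.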

As the authors note, the space-time window afforded to humans is too narrow to prove or disprove anything. However, the tendency to be amazed by large numbers and the multiplicative effects of such constructs has led scientists to make non-scientific claims. Until at least a second data point becomes available, the effort expended on statistical analysis in this area is a waste of time. Availability of data is necessary but not sufficient to assign probabilities. Even those clinging to normality statistics, centuries old by now, know that they are not a good tool for making predictions.

More importantly, those awaiting ET's arrival have almost infinite flexibility to keep on searching. If one has a hypothesis, then an accumulation of negative findings against it, regardless of how many trials are possible, has to be given due consideration. As an example, if one claims favorable conditions for life exist on Enceladus, Saturn's famous moon - such as water, oxygen and a heat source - then investing in the exploration of the icy rock is reasonable. However, if one comes out empty, that cannot be irrelevant. Just because there are a trillion other rocks in the solar system alone that could be explored, one cannot simply ignore such an observation. At the very least, it should challenge the assumptions used by the space agency and others to justify such explorations. This "new" statistics - perhaps called the "statistics of large numbers" - in which no negative observation has any utility, is very costly, even though it is well positioned to pump out publications.

Scientists, engaged in irrelevant and invalid observations aided by large numbers, may need to challenge themselves to advance the field.

(1) http://esciencenews.com/articles/2016/04/28/are.we.alone.setting.some.limits.our.uniqueness

Tuesday, April 26, 2016

Uncertain networks

Recent research from MIT, Chicago and Harvard (1) contends that smaller shocks in the economy could be magnified significantly by network effects. If true, it may provide guidance for policy that tries to "jump start" large economic systems through targeted investments. If the transmission of such shocks across the economy is predictable, then it could impact macro-economic decisions favorably. However, looking back with a deterministic view of network evolution may have some downside.
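The amplification mechanism can be sketched in a few lines: a shock to one sector propagates through input-output links, and the total impact exceeds the initial shock. The three sectors and weights below are invented for illustration, not taken from the paper; the summation is the classic Leontief-style geometric series, not the authors' model.

```python
import numpy as np

# Toy input-output network: W[i, j] is the share of sector i's inputs
# sourced from sector j. Sectors and weights are invented for illustration.
W = np.array([
    [0.0, 0.4, 0.2],
    [0.3, 0.0, 0.3],
    [0.2, 0.3, 0.0],
])

shock = np.array([1.0, 0.0, 0.0])  # a unit shock hitting sector 0 only

# Total impact is the series shock + W@shock + W@W@shock + ..., which
# sums to the Leontief-style inverse (I - W)^-1 applied to the shock.
impact = np.linalg.solve(np.eye(3) - W, shock)

print(f"Direct shock size:        {shock.sum():.2f}")
print(f"Network-amplified impact: {impact.sum():.2f}")
```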

Economic growth is driven by the conscious harvesting of uncertainty, not by strategic investments by bureaucrats or even corporations. Further, networks are in a constant state of evolution. Measuring GDP impact, a backward-looking measure, has less meaning in an economy driven by information, innovation and intellectual property. Firms locked into the status quo, with a rigid view of demand and supply, indeed fall prey to shocks amplified by static networks. But those keenly aware of unpredictable uncertainty and the value of flexibility could certainly surpass such external noise. The question is not how the present network amplifies shocks but rather how the networks are built. If they are built by organizations with a static view of the future, then they will be brittle and consumed by minor shocks. The measurement of intellectual property by patents is symptomatic of the adherence to known metrics and a lack of awareness of where value originates.

Empirical analyses in the context of accepted theories have less value for the future - policy or not. The field of economics has to evolve with the modern economy. Lack of innovation will always have a negative effect on the economy - no further analysis is needed.

(1) http://esciencenews.com/articles/2016/04/06/how.network.effects.hurt.economies

Monday, April 25, 2016

Biological storage

Recent news (1) that University of Washington researchers have successfully stored and retrieved data using DNA molecules is exciting. As the world accumulates data - already many tens of millions of gigabytes, growing at an alarming rate - storage capacity and efficiency are becoming top priorities in computing. With material sciences still lagging and computer scientists clinging to the silicon status quo, it is important that the field takes a biological direction. After all, nature has coded and retrieved complex information forever. Using the same mechanism is likely a few orders of magnitude more efficient than what is currently available.
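The core idea - two bits per nucleotide - fits in a few lines. The naive mapping below is a sketch only, not the UW team's actual scheme, which layers addressing, redundancy and error correction on top.

```python
# Naive two-bits-per-base DNA encoding; a sketch only. Real schemes,
# including the UW work, add addressing, redundancy and error correction.
TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
TO_BITS = {base: bits for bits, base in TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"DNA")
print(strand)          # CACACATGCAAC  (four bases per byte)
print(decode(strand))  # b'DNA'
```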

DNA, the most important biological breakthrough in four billion years, has been almost incomprehensible to humans, arguably evolution's best product. Lately, however, they have been able to understand it a bit better. Although some argue that the human genome map is the end game, it is likely just a humble beginning. The multi-factorial flexibility afforded by the DNA molecule may allow newer ways to store binary data, making it akin to that other belated innovation - quantum computing. There, thus far, research has focused on mechanistic attempts to force-fit the idea into the silicon matrix. Taking a biological route, perhaps aided by the best functioning quantum computer, the human brain, may be a more profitable path.

Biology could accelerate lagging innovation in material sciences and computer science.

(1) http://esciencenews.com/articles/2016/04/07/uw.team.stores.digital.images.dna.and.retrieves.them.perfectly

Sunday, April 17, 2016

Bacteria rising

Recent research from Michigan State University (1) that demonstrates the rise of multi-drug-resistant bacteria due to the overuse of antibiotics in animals is troubling. It has long been known that flu originates in farms with multi-species interactions. As mentioned in the article, the swine farms in China are particularly problematic, as they allow easy gene transfers among bacteria. This, in conjunction with the lack of antibiotics research for decades due to declining commercial economics, could result in a perfect storm.

Bacteria have been dominant all through the history of the planet. A robust architecture, with fast evolution by sheer numbers, led them to largely supersede any other biological life form on Earth. For the past several decades, they have been put on the back foot, for the first time, by humans. All they need, however, is a sufficient number of trials to develop resistance against any anti-bacterial agent. Data show that they are well on their way, thanks to a variety of experiments afforded to them by inter-species breeding and the overuse of known agents.

The economics of this indicates that commercial organizations are unlikely to focus on it until it is too late. If R&D, commercial or publicly funded, is not focused on this developing problem, we may be heading toward a regime that makes Ebola look like a household pet. In a highly connected world of intercontinental travel, it is easy for the single-cell organisms to hitch a ride to anywhere they would like to go. Thus, local efforts are not sufficient, and a true global push is needed to compete against the abundant experience collected over four billion years.

R&D prioritization at the societal level needs to take into account the downside risk and value of investments. 


(1) http://esciencenews.com/articles/2016/04/12/antibiotic.resistance.genes.increasing

Sunday, April 10, 2016

Hole in the soul

The super-void that presented itself in the background radiation, sporting a size of close to two billion light years, has baffled scientists. A few years after its discovery, no reasonable explanation is forthcoming. The standard model, on which most still pin their research, has shown so many cracks that the scientists who adhere to it are beginning to look like economists, who have similar difficulty letting go of established theories. Hunting in the particle forest, either to propose new particles or to prove the hypothesized ones indeed exist, has been the favorite pastime of physicists. Recently, they even measured the reverberation of gravity waves, generated by an event over a billion years ago, to the tune of the diameter of a proton. Now, it is nearly impossible to disprove anything in Physics.

Biologists and chemists are in the same spot. Technology is advancing so fast that scientists are running out of hypotheses to prove. There are so many engineers and technologists, pumped out by the elite educational institutions around the world, who stand ready to prove anything in science. We are fast approaching a regime with a dearth of ideas and an oversupply of proofs. Is this what was envisioned a few decades ago by visionary scientists, who predicted a future in which there would be nothing more to prove? If so, it would be bleak, indeed.

Creating hypotheses, a skill that was left undernourished for a few decades, may need to be brought back.

Tuesday, March 29, 2016

Return to hardware

Hardware design has been losing luster for many decades in the world of computers. Software has been riding high, partly aided by hype and partly due to the younger crowd plunging deep into apps for a quick buck and associated fame. Monopolies have been created on inferior operating systems and office automation, while those opposed to them have been chasing public domain noise. Even educational institutions, following the latest ephemeral trends, with half-lives of runway fashions, have been churning out courses with little utility for the future. Some have been putting content online, and others still want students to toil under fluorescent lighting at wooden desks while picking up the skills of the future.

Computer science has gone astray. Humans, susceptible to incrementalism, have been chasing false dreams on antiquated frameworks. Just like their predecessors, modern humans always attempt to scale inferior performance by massive parallel processing. They stack circuits ever closer and network computers ever larger in an attempt to squeeze out performance. Meanwhile, software companies, hungry for speed and scope, have created clouds of silicon that appear to suck up much of the energy production capacity. Data have been accumulating in warehouses, some never to see the light of day, others creating havoc and panic in complex organizations. Economists often worry about bubbles, for some are not so sanguine about rationality, but technologists never dream of a software bubble, as they presuppose such conditions.

It's time to leave synthesized voices, fake artificial intelligence and bleak games behind and return to hardware. Without two orders of magnitude of performance improvement, there are very few apps that would move humanity, and that improvement can only come from practical quantum computing. Notwithstanding the much anticipated version X of existing operating systems and mobile phones, without innovation in hardware, humans will swim in a sea of mediocrity forever. There are glimmers of hope, however. Recent news that larger quantum circuits could be built in more direct ways (1) is encouraging.

Educational institutions have an obligation to move society toward the future, not just to follow trends that will fill up classrooms - physical or virtual.


(1) http://esciencenews.com/articles/2016/03/26/unlocking.gates.quantum.computing




Friday, March 25, 2016

Go AI??

Artificial Intelligence is in the air again. It is such a nice concept that its inventors have been suspected of nourishing the "God complex." Deep Blue triumphed in chess, Watson beat out mere humans in Jeopardy and can understand how music is made and speak about it in a synthesized human voice, and now the famous search company has conquered Go. What's left in AI to solve?

Silicon has been alluring to engineers for four decades. They could double the speed of the "chip" every 18 months, and the mere extrapolation of this idea would have instructed even the less mathematically endowed that the belated singularity is indeed near. Now that the game of Go, with its practically unbounded permutations of moves, has been conclusively solved by the electronic brain, we are likely nearing the inevitable. And that is bad news, especially for those in school toiling with such mundane subjects as computer science, programming and application development. Very soon, all of these will be delegated to machines, most of which will be artificially intelligent to a level perhaps surpassing even contemporary politicians. Some had claimed decades ago that humans were nearing a state of "perfect knowledge." In Physics, the speculation has been that no mystery will remain in a few decades. Now humanity has taken an important leap toward a future in which artificial intelligence can quickly mop up any remaining mystery in any field - physics, medicine and even economics.
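The extrapolation is simple arithmetic: doubling every 18 months over the four decades mentioned compounds to roughly a hundred-million-fold speedup,

```latex
2^{480/18} \approx 2^{26.7} \approx 1.1 \times 10^{8}
```

which is precisely why the naive extrapolation feels so convincing.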

Chess, Jeopardy, self-driving cars, neural nets seeking cat videos, the Twitter girl, Go... extrapolation certainly indicates the unstoppable triumph of artificial intelligence. The only remaining mystery is what billions of ordinary humans will do. The quantum computer they carry on their shoulders will become virtually useless in this regime of artificial intelligence dominance.

Friday, March 18, 2016

Scaling humanity

Reaching a critical mass and the minimum efficient scale are important concepts for many systems - biological, economic and business. Humans, separated by space and time for most of their history, could not reach this inevitable threshold for nearly a hundred thousand years. Supported by technology, there are encouraging signs that we are fast approaching the minimum efficient scale of knowledge creation and consumption. The planet remains heavily endowed, and it can easily support many multiples of humans as long as they are able to network their brains for the benefit of all.

What appears to be lacking is a framework. Weak prior attempts, such as religion and countries, simply could not sustain a momentum that would unify sufficient numbers to reach the necessary scale. Basic sciences, albeit attractive in many ways, could not light the passion underneath the human kiln. The strong forces that operate to separate rather than unify, aided by the clan experiences of humans, have had the upper hand thus far. However, technology is making irreversible impacts on the human psyche, propelling humans to the next level. If so, they could make the planet eminently contact-worthy for outsiders.

Humans have been here before, however. In all cases, it appears that they have come up short. Insufficient technology for networking appears to be the common culprit in previous attempts. Stitching human brains together to reach the minimum efficient scale has eluded them, compounded by hard constraints such as life span. Shrinking space and time, as well as expanding life spans, appear to be necessary conditions for sustainable development. Here, technology seems to show encouraging signs.

Space agencies and physicists lamenting the lack of "contact" may be well advised to ask why such "contact" would be made.

Friday, March 11, 2016

Mathematical music


Recent research from the University of Tokyo (1) that proposes a deeper dive into the structure of music by analyzing "the recurrence plot of the recurrence plot," in an effort to understand the emotive power of music, could be misplaced. Mathematical probing into the structure of creative work has often failed to capture the substance of the emotions that animate such phenomena. Mathematics has been an important language in the history of human development. However, humans are less perfect than the constructs math can reasonably model, and they often exhibit irrationality and creativity at random. It is the lack of "structure" that defines creativity, and the effort expended by educational institutions to define such irrational phenomena in a language that mathematicians can understand could be wasted.
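For readers unfamiliar with the tool: a recurrence plot is the standard binary matrix below, marking which pairs of moments in a signal are close; the Tokyo work applies the construction twice, treating the first plot itself as the signal.

```latex
R_{ij} = \Theta\!\left(\varepsilon - \lVert \vec{x}_i - \vec{x}_j \rVert\right), \qquad i, j = 1, \ldots, N
```

Here the x_i are points of the trajectory, epsilon is a distance threshold and Theta is the Heaviside step function.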

Human emotions have been enigmatic - they have escaped mathematical modeling thus far. Evolution seems to have been flexible enough to allow human behavior that has little value in hunting and survival. Yet such behavior perpetuated the human psyche in a world of stress and tribulation and lifted it into a realm that is mathematically undefinable. The visions of Einstein and Bach, unconstrained by mathematics, propelled humanity forward. As engineers attempt to prove that "gravity waves" exist a century after they were proposed by sheer creative thought, one has to wonder if humanity is being sterilized of such creative thought.

Mathematics, an idealistic concept, is inept at the analysis of human emotions.

(1) http://scitation.aip.org/content/aip/journal/chaos/26/2/10.1063/1.4941371

Wednesday, February 24, 2016

Lawless innovation

A recent study (1) that argues that "constituency statutes have significant effects on the quantity and quality of innovation" in companies seems to fall into the familiar trap of pitting stakeholder value against shareholder value. For many decades, the argument has been that companies and societies that focus on the value of stakeholders - employees, communities and the environment (e.g. Scandinavia) - do better than those focused on shareholder value (e.g. the US). This stems from a wrong perception that a focus on shareholder value means "short-term profits" while stakeholder value maximization is a long-term process. There is significant empirical evidence that the market and investors are not "short-term focused" and are fully capable of assessing and valuing any choices (short or long term) made by the managers of the firm. Assuming that markets are myopic, without evidence, may not be a good thing.

It is important not to assume that the first correlation found in the data is the underlying cause. Note that stakeholder value choices, unless they translate into shareholder value over some horizon, are value destroying. Further, the "quantity and quality of innovation" are difficult to measure. A few innovations are responsible for most of the GDP in the economy, and in winner-takes-all markets, the marginal benefit of innovation in aggregate is simply noise. A more interesting question concerns the structure, systems and strategies of firms (2) that encourage innovation. It is possible that innovative firms will remain so, regardless of the bureaucracies and statutes imposed on them.

Innovation emanates from the culture of the firm - not from laws created by those out of touch with the present economy.


(1) http://esciencenews.com/articles/2016/02/18/a.stake.innovation
(2) https://www.crcpress.com/Flexibility-Flexible-Companies-for-the-Uncertain-World/Eapen/9781439816325