Scientific Sense Podcast

Thursday, June 30, 2016

Cognitive monopoly

Major discontinuities in human history have often led to monopoly positions in subsequent markets, driven by winner-takes-all dynamics. In the modern economy, automobiles, airplanes and computers certainly fit this view. In the case of the internet, built with taxpayer investment, attempts by a few to monopolize the flow of electrons have been averted thus far. But "net neutrality" is not something that rent-seeking behemoths are likely to accept in the long run, even though they did not pay for the network.

The nascent wave - machine cognition - has the monopolists scrambling to get the upper hand. In this wave, capital, as measured in megaflops and terabytes, confers a significant advantage. The leaders, flush with computing power, seem to believe that nothing can challenge their positions. Their expectations of technology acceleration appear optimistic, but we nonetheless seem to be progressing along an interesting enough trajectory. Although many, including some of the world's leading scientists, are worried about runaway artificial intelligence, one could argue that there are more prosaic worries for the seven billion around the world.

Monopolies generally destroy societal value. Even those with a charitable frame acquire the disease of the "God complex" as the money begins to flow in. Humans are simple, driven by ego and an objective function biased either toward basic necessities or toward irrational attributes that are difficult to tease out. Contemporary humans can be classified by intuition, without even the simplest of algorithms - nearest neighbors - into those with access to information and those without. Politicians and policy makers have been perplexed by the fact that such a simple segmentation scheme seems to work across the world's population, from countries to counties and cities. Cognition monopolists will make it infinitely worse.
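
As an aside, a minimal sketch of that "simplest of algorithms" - a one-nearest-neighbor classifier - is below. The single feature (hours of information access per day) and the labels are hypothetical illustrations, not data.

```python
# A minimal 1-nearest-neighbor classifier; feature values and labels are made up.

def nearest_neighbor(query, examples):
    """Return the label of the training example whose feature is closest to the query."""
    return min(examples, key=lambda ex: abs(ex[0] - query))[1]

# (feature, label) pairs
training = [
    (0.5, "without access"),
    (1.0, "without access"),
    (6.0, "with access"),
    (9.0, "with access"),
]

print(nearest_neighbor(7.5, training))   # -> with access
print(nearest_neighbor(0.2, training))   # -> without access
```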

Can mathematics be monopolized? The simple answer is yes. In a regime where brute force over highly available computing power is reserved for a few, answers could be found by the blind and the dumb. Perhaps there is still hope for the rest, as we have seen this movie before.



Saturday, June 25, 2016

Clans continue

It appears clear that habits formed over a hundred thousand years cannot be changed in a mere hundred. As Homo sapiens ventured out of the African savannah, they were still tightly organized as small clans, fewer than a hundred in strength, each with its own unique language, culture, religion and morality. In Europe and Asia, within hundreds of years of arrival, they erased their close relatives with astounding efficiency. They also successfully navigated disease and climatic change that reduced them to a few thousand, emerging out of the bottleneck with even tighter clan relationships.

Technology - aircraft, computers and the internet - opened up the modern economy in the blink of an eye. Economists, excited by the possibilities, argued for the opening up of countries, continents and economies, but they did not reckon with behavior patterns integrated deeply into the human psyche. Countries cling to their languages and apparent cultural nuances, aided by politicians who, in autocratic and socialistic regimes, seem to have convinced the populace that they can implement strategic policies that will make their countries "great again." In advanced democracies, a large percentage of the population seems to have taught itself the same ideas, and in some rare cases it has found surrogates who will sing the same tune as the autocrats, even though they do not know the words to the music. A dangerous trend has emerged in clans that profess to be democratic and sophisticated. The question is whether learning from mistakes - something that made humans successful in the past - is still possible. Ironically, in the complex modern economy, outcomes are not clearly observable and often have long cycles. Getting mauled by a tiger is immediate feedback; a stagnant and deteriorating economy provides little feedback to the larger population.

The modern economy, still largely driven by the clan instincts of the seven billion who occupy the Earth, cannot be shocked out of its stupor by logic. Perhaps photographs from space that show the little blue spot in the midst of chaos may appeal to the artistic side of humans. A little closer, they will find none of the demarcations depicted on maps and globes. After all, humans have shown great capability for abstract thought, albeit thoughts not often tested by logic.


Saturday, May 28, 2016

Redefining Intelligence

Intelligence, natural or artificial, has been a difficult concept to define and understand. Methods of measuring intelligence seem to favor speed and efficiency in pattern finding. IQ tests certainly measure the ability to find patterns, and artificial intelligence aficionados have spent three decades teaching computers to get better at the same. Standardized tests, following the same template, appear to measure the same attribute but couch the results as "aptitude," perhaps to make it sound more plausible. Across all dimensions of education and testing, this notion of intelligence, and hence of "aptitude," appears prevalent.

However, can the speed of pattern finding be used as the only metric for intelligence? Certainly, in prototypical systems and societies, efficiency in finding food (energy) and fast replication are dominant, and pattern finding is likely the most important skill in that context. If so, one could argue that the status quo definition of intelligence measures a characteristic most useful for maximizing a simple objective function, governed largely by food and replicability. At the very least, a thought experiment may be in order to imagine intelligence in higher order societies.

If intelligence is redefined as the differential of the speed of pattern finding - an acceleration in pattern finding - then it can incorporate higher order learning. In societies where such a metric is dominant, the speed of finding patterns in historical data, albeit important, may not qualify as intelligence. One could easily imagine systems with a very slow speed of pattern finding at inception, if energy is focused on the differential, allowing such systems to gain knowledge exponentially at later stages. Sluggish and dumb, such participants would certainly be eradicated quickly in prototypical societies, before they could demonstrate the accelerating phase of knowledge creation.
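
A toy sketch of the distinction, using entirely made-up numbers: learner A finds patterns at a high but constant speed, while learner B starts slowly but accelerates, and eventually overtakes A in cumulative knowledge.

```python
# Cumulative patterns found by time t when speed(t) = speed0 + acceleration * t.
# All numbers are arbitrary illustrations.

def knowledge(speed0, acceleration, t):
    return speed0 * t + 0.5 * acceleration * t ** 2

for t in (1, 5, 10, 20):
    a = knowledge(speed0=10.0, acceleration=0.0, t=t)   # fast but static
    b = knowledge(speed0=1.0, acceleration=1.5, t=t)    # slow but accelerating
    print(f"t={t:2d}  A={a:6.1f}  B={b:6.1f}")
# B overtakes A around t = 12 and pulls away thereafter.
```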

Intelligence, ill defined and poorly measured, may need to be rethought if humans are to advance to a Level 1 society. That seems unlikely.

Monday, May 23, 2016

Salt water bubbles

Economists closer to salt water appear to be prone to thoughts of inefficiency and bubbles in the financial markets, something that can be cured by a single trip to the Windy City. A recent study from Columbia University (1) asserts that it found over 13,000 bubbles in the stock market between 2000 and 2013. Using supercomputers, no less, and "big data," the authors appear to have "conclusively shown" that stock prices take wild and persistent excursions from their "fair values." Unfortunately, these academics, who profess to be "data scientists," are yet to encounter the phenomenon of the random walk - further evidence that "data scientists" should stay away from financial markets. After all, the "physicists" who descended on Wall Street have had a checkered history of "abnormal returns" wrapped in consistently negative alpha.

The remark from a Harvard graduate student - "I expected to see lots of bubbles in 2009, after the crash, but there were a lot before and a lot after" - is symptomatic of the problem faced by "data scientists" seeking problems to solve in super-domains they have no clue about, where the participants who determine outcomes are themselves equipped with pattern-finding technology. They may have better luck in real markets, for prices in financial markets are determined by a large number of participants, each with her own inefficient algorithms. The most troubling aspect of the study is that the authors believe that "a bubble happens when the price of an asset, be it gold, housing or stocks, is more than what a rational person would be willing to pay based on its expected future cash flows." In a world immersed in intellectual property, where future cash flows cannot be forecasted precisely, the value of an asset cannot be determined by such simple constructs, which have been rendered invalid for decades.
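
To illustrate the random walk objection, a hedged sketch: simulate a price that follows a pure random walk around a fixed "fair value" and count how often it strays more than 20 percent away. Every parameter below (volatility, threshold, horizon) is an arbitrary assumption; the point is only that persistent excursions appear even when no bubble mechanism exists by construction.

```python
# Count "bubbles" (entries into a >20% deviation zone) in a bubble-free random walk.

import random

random.seed(1)
fair_value = 100.0
price = fair_value
excursions = 0
in_excursion = False

for day in range(252 * 14):                   # roughly 2000-2013 in trading days
    price *= 1.0 + random.gauss(0.0, 0.01)    # daily random-walk step, no bubble mechanism
    deviated = abs(price - fair_value) / fair_value > 0.20
    if deviated and not in_excursion:
        excursions += 1
    in_excursion = deviated

print(f"Excursions from 'fair value' in a bubble-free random walk: {excursions}")
```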

The lure of financial markets has been problematic for "data scientists" and "physicists." However, a cure is readily available in the academic literature emanating from the sixties.

(1) http://esciencenews.com/articles/2016/05/04/stocks.overvalued.longer.and.more.often.previously.thought.says.study

Monday, May 16, 2016

Small step toward bigger hype

Recent research from the University of Liverpool (1) suggests a method by which computers could learn languages through semantic representation and similarity look-ups. Although this may be a step in the right direction, it is important to remember that most of the work in teaching computers language, or even fancy tricks, is not in the realm of "artificial intelligence"; rather, it belongs to the age-old and somewhat archaic notion of expert systems. Computer giants, while solving grand problems such as chess, Jeopardy, Go and self-driving cars, seem to have forgotten that rules-based expert systems have been around since the inception of computers, long before some of these companies were founded. The fact that faster hardware can churn through larger sets of rules more quickly is not advancing intelligence, though it is certainly helping efficient computing.
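
A rough sketch of what a "semantic representation and similarity look-up" might amount to, under the assumption that words are mapped to vectors and similarity is measured by cosine distance; the three-dimensional vectors below are invented for illustration, not taken from the study.

```python
# Words mapped to toy vectors, with "meaning" approximated by cosine similarity.

import math

vectors = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.8, 0.9, 0.2],
    "apple": [0.1, 0.1, 0.9],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def most_similar(word):
    """Similarity look-up: the other word whose vector is closest in angle."""
    return max((w for w in vectors if w != word),
               key=lambda w: cosine(vectors[word], vectors[w]))

print(most_similar("king"))    # -> queen, with these toy vectors
```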

Engineering schools appear to still teach ideas that are already obsolete. Programming languages have been frozen in time, with prescriptive syntax and rigid control flow. Today's high-level languages are certainly practical and immensely capable of producing inferior applications. Even those who could have "swiftly" assembled knowledge from previous attempts seem to have concocted a compiler that borrows from the worst of what has gone before it. As they proclaim "3 billion devices already run it" every time an update is pushed, or conduct conferences around the globe dotting and netting, the behemoths do not seem to understand that their technologies have inherent limitations.

Computer scientists, locked behind ivy walls, are given skills that the world does not need anymore.

(1) http://esciencenews.com/articles/2016/05/06/teaching.computers.understand.human.languages


Thursday, May 12, 2016

Nutritional genetics

Research from Indiana University (1) speculates that physical traits could be substantially impacted by food. The adage "you are what you eat" appears to work at a deeper, genetic level. In low-complexity biological systems, such as ants and bees, variation in food at the larval stage seems to explain specialization at the genetic level. If true, this has implications beyond what has been observed.

Food, a complex external chemical, has to be metabolized, utilized and purged by biological systems routinely. Although it is clear that available energy content and processing efficiency will depend on the variation and complexity in inputs, the idea that food could cause genetic specialization is fascinating. More importantly, this may lead to better design of food to favorably impact physical and mental conditions, the latter possibly holding higher promise for humans.

Ancient cultures and medicines have routinely relied on food as the primary way to remedy tactical issues. The Indiana research may provide a path to propel this idea into more systematic and planned impacts.

(1) http://esciencenews.com/articles/2016/05/12/you.are.what.you.eat.iu.biologists.map.genetic.pathways.nutrition.based.species.traits

Thursday, May 5, 2016

No safety net

Recent research from Johns Hopkins (1) suggests there are over a quarter of a million deaths in the US per year due to medical errors. It is a sobering observation that future generations will look back on with anguish and, perhaps, incredulity. At the height of technology, we are slipping, not for lack of know-how but for lack of application. One preventable death is one too many, and the fact that medical errors are the third leading cause of death in the US is immensely troubling.

Unfortunately, technology does not solve problems by itself. Bigger data and faster computers are likely irrelevant if they cannot fundamentally influence decision processes and allow information flow to enhance decision quality. It is not about precision - there is no such thing - but about the systematic use of all available information at the point of decision. Further, the human brain, with its inherent limitations, is unable to minimize downside risk in a regime of high utilization and volatility. A loss of life, a traumatic and life-changing event for any healthcare provider, looms large, but the environment simply does not allow anything more than what is tactically possible. The lack of a safety net beneath cascading, complex and error-prone processes suggests the need for a sudden and impactful change that most technology companies are unable to help with.

It is high time that healthcare embraced practical applications of available technologies to improve patient health and welfare.

(1) http://esciencenews.com/articles/2016/05/04/study.suggests.medical.errors.now.third.leading.cause.death.us

Saturday, April 30, 2016

Predictions with a single observation

A recent study from the University of Rochester (1) claims to improve the Drake equation - a constant reminder that multiplying random numbers does not provide any insights that are not already present in the components. Now, with exoplanets galore, the study, which appeared in Astrobiology, claims to put a "pessimistic estimate" on the probability that advanced civilizations do not exist elsewhere in the universe. Just as in previous attempts, the study suffers from the limits of traditional statistics. Most agree that a single observation is not sufficient to predict anything, let alone the age, capabilities and distance from Earth at which such civilizations could reasonably exist.
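
A hedged sketch of that objection: if each Drake-equation factor is known only to within orders of magnitude, their product simply echoes that uncertainty. The ranges below are placeholders, not estimates from the study.

```python
# Monte Carlo over N = R* * fp * ne * fl * fi * fc * L with log-uniform factors.

import math
import random

random.seed(0)

def log_uniform(lo, hi):
    """Sample uniformly in log space between lo and hi."""
    return 10 ** random.uniform(math.log10(lo), math.log10(hi))

def sample_n_civilizations():
    return (log_uniform(1, 10)        # R*: star formation rate per year
            * log_uniform(0.1, 1)     # fp: fraction of stars with planets
            * log_uniform(0.01, 1)    # ne: habitable planets per system
            * log_uniform(1e-3, 1)    # fl: fraction developing life
            * log_uniform(1e-3, 1)    # fi: fraction developing intelligence
            * log_uniform(1e-3, 1)    # fc: fraction that communicate
            * log_uniform(1e2, 1e7))  # L:  lifetime of such civilizations, years

draws = sorted(sample_n_civilizations() for _ in range(10000))
print(f"5th percentile: {draws[500]:.1e}   95th percentile: {draws[9500]:.1e}")
# The spread in the answer is just the spread already assumed in the inputs.
```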

As the authors note, the space-time window afforded to humans is too narrow to prove or disprove anything. However, the tendency to be amazed by large numbers, and by the multiplicative effects of such constructs, has led scientists to make non-scientific claims. Until at least a second data point becomes available, the effort expended on statistical analysis in this area is a waste of time. Availability of data is necessary but not sufficient to assign probabilities. Even those clinging to normality statistics, centuries old by now, know that they are not a good tool for making predictions.

More importantly, those awaiting ET's arrival have almost infinite flexibility to keep on searching. If one has a hypothesis, then an accumulation of negative findings against it, regardless of how many trials remain possible, has to be given due consideration. As an example, if one claims that favorable conditions for life exist on Enceladus, Saturn's famous moon - water, oxygen and a heat source - then investing in the exploration of the icy rock is reasonable. However, if the search comes up empty, that result cannot be irrelevant. Just because there are a trillion other rocks in the solar system alone that could be explored, one cannot simply ignore such an observation. At the very least, it should challenge the assumptions used by the space agency and others to justify such explorations. This "new" statistics - call it the "statistics of large numbers" - in which no negative observation has any utility, is very costly, even though it is well positioned to pump out publications.
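
A small sketch of why a negative finding should still move beliefs: one Bayesian update of the probability that a given icy moon hosts life, after a search that would have detected life with some assumed probability. Both numbers are assumptions, chosen only to illustrate the direction of the update.

```python
# One Bayesian update after an empty search; prior and detection_prob are assumptions.

def update_on_negative(prior, detection_prob):
    """P(life | no detection) via Bayes' rule."""
    p_no_detection = prior * (1 - detection_prob) + (1 - prior)
    return prior * (1 - detection_prob) / p_no_detection

prior = 0.5            # assumed prior probability that the moon hosts life
detection_prob = 0.3   # assumed chance the mission detects life if it is there

posterior = update_on_negative(prior, detection_prob)
print(f"prior {prior:.2f} -> posterior {posterior:.2f} after one empty search")
# 0.50 -> 0.41: the negative result is informative, not irrelevant.
```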

Scientists engaged in irrelevant and invalid observations, aided by large numbers, may need to challenge themselves to advance the field.

(1) http://esciencenews.com/articles/2016/04/28/are.we.alone.setting.some.limits.our.uniqueness