Scientific Sense Podcast

Saturday, January 30, 2016

Data science blindspot

Recent research from MIT claiming that its "data science machine" does better than humans at building predictive models is symptomatic of the blind spots affecting data scientists - both the human and non-human variety. Automation of data analytics is not new - some have been doing it for decades. Feature selection and model building can certainly be optimized, and that is old news. The problem remains how such "analytics" ultimately add value to the enterprise. This is not a "data science problem" - it is a business and economics problem.

Investments by companies in technologies that claim to read massive amounts of data quickly in an effort to create intelligence are unlikely to produce positive returns for their owners. Information technology companies, which have a tendency to formulate problems primarily as computation problems, mostly destroy value for their clients. Sure, it is an easy way to sell hardware and databases, but it has very little impact on the ultimate decisions that affect companies. What is needed here is a combination of domain knowledge and analytics - something neither the PowerPoint gurus nor the propeller heads can deliver by themselves. Real insights sit above such theatrics, and they are not easily accessible to decision-makers in companies.

Like the previous "information technology waves" called "Enterprise Resource Planning" and "Business Intelligence," the latest craze is likely to destroy at least as much value in the economy if it is not rescued from academics seeking to write papers and technology companies trying to sell their wares. The acid test of utility for any "emerging technology" is tangible shareholder value.

Wednesday, January 13, 2016

Favorable direction for machine learning

Machine learning, a misnomer for statistical concepts used to predict outcomes from large amounts of historical data, has been a brute-force approach. The infamous experiment by the search giant to replicate the human brain with neural nets demonstrated the misconception that the organ works like a computer. Wasted efforts and investments in "Artificial Intelligence," led by famous technical schools in the East and the West, were largely based on the same misconception. All of these have definitively proven that engineers do not understand the human brain and are unlikely to do so for a long time. As a group, they are the least competent to model human intelligence.

A recent article in Science (1) seems to make incremental progress toward intelligence. The fact that machines need large amounts of data to "learn" anything should have instructed the purveyors of AI that the processes they are replicating have nothing to do with human intelligence. For a hundred thousand years, the quantum computer humans carry on their shoulders has specialized in pattern finding. Humans can find patterns from a few examples, and they can extend patterns without additional training data. They can even predict possible future patterns, ones they have never seen before. Machines can do none of these things.

Although the efforts of the NYU, MIT and University of Toronto team are admirable, they should be careful not to read too much into them. Optimization is not intelligence; it is just a more efficient way to reach a predetermined answer. Just as computer giants fall into the trap of mistaking immense computing power for intelligence, researchers should always benchmark their AI concepts against the first human they can find in the street - she is still immensely superior to neatly arranged silicon chips purported to replicate intelligence.

It is possible that humans could go extinct while seeking to replicate human intelligence in silicon. There are 7 billion unused quantum computers in the world - why not seek to connect them together?


Tuesday, January 5, 2016

The Science of Economics

Many have wondered whether economics is, in fact, science. Those who doubt it point to the lack of testability and replicability of experiments. Natural experiments in macro systems are often unique, and as they say in the biological sciences, "an n of 1" is not useful. Further, predictions based on accepted theories often miss the mark. These appear to erect an insurmountable barrier to legitimizing the field of economics.

However, it is worthwhile to explore what is considered to be science. Physics, arguably the grandest of sciences, suffers from the same issues. Sure, human-scale physics is able to make eminently testable predictions based on Newtonian mechanics. Economics can also make such trivial predictions - for example, how demand will change with price. And quantum mechanics, over the last hundred years, has propelled the field further, making fantastic and testable hypotheses. Whole industries have grown around it, but those with knowledge and the associated humility will contend that much remains unknown. In economics, there has been an analogous movement - one in which uncertainty and flexibility govern, not numbers in a spreadsheet. However, in economics, this has been relegated to something not many understand and thus not fully compatible with academic tenure. That is fair; we have seen it before, but it does not indicate that the field is unscientific.
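The trivial, testable prediction mentioned above can be made concrete. The sketch below is purely illustrative, assuming a hypothetical linear demand curve Q = a - b * P with made-up parameters; it is not drawn from any data in the post, only from the standard textbook notion of price elasticity of demand.

```python
# Illustrative only: a hypothetical linear demand curve, Q = a - b * P,
# showing the kind of testable prediction economics can make about
# how quantity demanded falls as price rises.

def demand(price, a=100.0, b=2.0):
    """Quantity demanded at a given price (hypothetical parameters a, b)."""
    return max(a - b * price, 0.0)

def elasticity(price, a=100.0, b=2.0):
    """Point price elasticity of demand: (dQ/dP) * (P / Q)."""
    q = demand(price, a, b)
    return -b * price / q if q > 0 else float("-inf")

# A price increase from 10 to 20 predicts a measurable drop in quantity.
print(demand(10.0), demand(20.0))   # 80.0 60.0
print(elasticity(10.0))             # -0.25 (inelastic at this price)
```

Under these assumed parameters, the model predicts that raising the price from 10 to 20 cuts quantity demanded from 80 to 60 - exactly the sort of falsifiable, human-scale claim the paragraph compares to Newtonian mechanics.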

In the biological sciences, experiments have been wreaking havoc. It is almost as if a hypothesis, once stated, could always be proven. In the world of empiricism, this may point to biases - confirmation and conformity - but more importantly, in commerce, it showcases a lack of understanding of sunk costs (pardon the non-scientific term). Once hundreds of millions have been plunked into "R&D," the "drug" has to work, for without that, the lives of many - if not the patients, then the employees of large companies - could be at risk. So testable hypotheses, though necessary, are not in themselves sufficient for science.

The dogma of science may be constraining development in many fields - such as economics, policy, psychology and the social sciences. Those who are dogmatic may need to look back into their own fields before passing judgment.