Tuesday, September 30, 2014

Per capita knowledge

Switzerland and Sweden, each with more than two dozen Nobel laureates, produce over 30 prize winners per 10 million population, in stark contrast to India and China, which return about 0.07 on the same metric. Some may argue that this is a denominator problem, but with China and India showing fewer than a dozen winners, it is hard to argue that it is a matter of size and scale. Furthermore, since Nobel prizes in areas such as Peace and Literature command an equal share alongside those in the hard sciences, one cannot put this down to a lack of resources and equipment.
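As a rough sketch of how the metric works (the laureate counts and circa-2014 populations below are approximate and chosen only for illustration), the per-capita normalization is simply laureates divided by population expressed in units of 10 million:

```python
# Approximate figures, circa 2014, used only to illustrate the normalization;
# neither the laureate counts nor the populations are authoritative.
laureates = {"Switzerland": 26, "Sweden": 29, "USA": 350, "India": 9, "China": 9}
population_millions = {"Switzerland": 8, "Sweden": 10, "USA": 318, "India": 1250, "China": 1360}

def prizes_per_10_million(country):
    """Nobel laureates per 10 million population."""
    return laureates[country] / (population_millions[country] / 10.0)

for country in laureates:
    print(f"{country}: {prizes_per_10_million(country):.2f} laureates per 10 million")
```

With these rough inputs, Switzerland and Sweden land around 30, the US near 11, and India and China near 0.07, which is the disparity discussed here.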

Per capita Nobel prizes is an interesting metric, as fundamental advances can be made with paper and pencil or by sheer imagination. Thus, it normalizes the technology advantages that could be attributed to advanced economies. With the US, Canada, and France at only a third of the rate of the leaders, it is clear that access to resources and technology cannot explain the disparity. One has to look to educational systems and the prevailing cultural attitude toward fundamental innovation as possible attributes that influence this outcome. Lack of freedom in political systems, whether due to autocracy or religion, could be another common factor behind underperformance.

Humans, 7 billion near-clones segmented across the world, show dramatic differences in the stock and flow of per capita knowledge. The laggards could learn a lot by observing those who do it better.

Sunday, September 28, 2014

Quantum chemistry

Recent news from Princeton University describes a technique that can closely approximate the lattice energy of molecular crystals, which may open up new avenues for pharmaceutical research. Innovation at the boundary of physics and chemistry has been slow, primarily due to the lack of flexibility in scientific disciplines that tend to prefer familiar, incremental improvements to traditional methods. The Princeton team shows how a molecule's crystalline form could be predicted using emerging ideas from quantum mechanics. Such processes could be fully incorporated into computational chemistry, and with the availability of vast computing power and software technologies, this innovation could usher in the next wave of productivity in pharmaceutical discovery.

The generation gap has been value destroying in most disciplines. Doctors use stethoscopes and engineers use calculators, even though these technologies have been obsolete for decades. Similarly, in the labs, once investments are made in a technology, companies, ignoring the concept of sunk costs, tend to use it forever. In a regime of accelerating knowledge and innovation, the inertia of past knowledge has become exponentially more costly for every discipline, company, and individual. Ironically, in the modern world, ignorance coupled with flexibility is a far more valuable trait than past knowledge coupled with resistance to change.

Incrementalism is a disease of the past. In the present, looking backward is likely the costliest option.

Thursday, September 25, 2014

Vanishing singularity

A recent hypothesis by physicists at the University of North Carolina contends that black holes simply cannot exist, mathematically. It comes a bit too late, as Nobel-seeking scientists elsewhere had all but accepted that black holes sit at the center of most galaxies. More creative ones had speculated that black holes lead to other galaxies and provide easy avenues for time travel. This is a constant reminder that theories leading to inexplicable outcomes, however well they fit some other observations, are not theories at all. They are the fancies of grown men and women, constantly seeking meaning for the universe and for their own careers.

Black holes have tickled the fancy of many, just as the concept of infinity has. The possibility of a phenomenon that apparently demonstrates division by zero in practice provided immense flexibility for researchers and science journalists alike. As they scorned the "religious" as ignorant, they hid their own massive egos under mountains of illiteracy. Competing theories disagreed, but competing scientists did not, for it was easier to prove the existence of the unseen than to reject the establishment. The possibilities were endless: black holes connecting with wormholes, bending space-time like a child playing with rubber bands. Little did they know that the child had a more complete perspective than their own, weighed down as they were by the pressures of publication and of experiments under domes of heavy steel.

As the singularity evaporates with the radiation and associated mass, perhaps we could return to a regime dominated by Occam’s razor.

Sunday, September 21, 2014

Quorum disruption

A recent paper in Chemistry & Biology describes how certain bacteria use small molecules to "quorum sense," essentially coordinating an attack on the host. The study also demonstrates how minor tweaks in the concentration of these chemicals could substantially disrupt the "attack signal" that improves the efficiency of the pathogenic bacteria. In an era of declining effectiveness of antibacterial agents, this seems like a favorable direction for research.
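As a toy illustration of the idea, and not the model in the cited paper, the sketch below treats the collective "attack signal" as a steep Hill-type response to autoinducer concentration: below the quorum threshold the response is negligible, above it the population switches almost in unison, so even a modest drop in concentration near the threshold collapses the coordinated behavior. The threshold and exponent are arbitrary choices.

```python
# Toy quorum-sensing model; the threshold K and steepness n are arbitrary
# and are not taken from the cited study.

def attack_response(concentration, K=1.0, n=8):
    """Fraction of the population expressing the coordinated 'attack' program."""
    return concentration**n / (K**n + concentration**n)

for c in [0.5, 0.9, 1.0, 1.1, 2.0]:
    print(f"autoinducer = {c:.1f} x threshold -> response = {attack_response(c):.2f}")

# Degrading the signal from 1.1x to 0.9x the threshold drops the response
# from about 0.68 to about 0.30: a small chemical tweak, a large loss of
# coordination, which is the kind of disruption described above.
```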

A prevalent lack of innovation across the life sciences industry has kept a lid on tangible improvements in human quality of life. Most of the current knowledge on how to attack pathogens has been stale for decades. In predictable fashion, the industry has turned to creating bigger "hammers," and if size does not do it, a cocktail of antibacterial agents to quell the infection. This brute-force approach has led the less intelligent micro-organisms to simply fall back on mutations to get past whatever has been put in front of them, incrementally. There is no doubt who will win this war, as bacteria have over three billion years of experience and are fully capable of upsetting incremental approaches to battling them.

Humans, arguably proud of the massive organ they carry on their shoulders, may have to get more sophisticated to stay ahead; taking yesterday's technology and making it bigger is not going to do it. Disrupting communication signals seems like a better approach.

Wednesday, September 17, 2014

Profit maximization in societal design

A recent study published in a journal of the Institute for Operations Research and the Management Sciences (INFORMS) (1) concludes what is obvious to some. It proves something that may be counterintuitive to those who do not care for economics, and to those who mix social preferences with economics and assume, without proof, that such a mixture leads to good policy and better societies.

The study looks at the automotive repair market and asks whether society would be better off if repair shops behaved ethically and acted on social preferences, as opposed to operating as purely profit-maximizing businesses in a free market. For a large swath of the population the answer seems obvious, since a profit-maximizing service provider looks like a bad actor. Yet in an environment where service providers are driven by ethical and social-preference considerations, the study shows that prices will rise, as providers tend to charge a higher but uniform price to everybody. Customers who do not face price differentiation are thus pushed toward a higher uniform price on average, and the analytical models show that society as a whole will be worse off. In a market that contains both ethical and profit-maximizing providers, the latter will quickly adopt the higher uniform price when it is convenient and refuse service to customers with higher costs. In a system with only profit-maximizing providers and unconstrained transactions, market-clearing prices will reflect the provider's marginal cost, maximizing societal welfare.
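A stylized numeric sketch, my own toy example rather than the study's model, shows the mechanism. Each repair job has a value to the customer and a cost to the provider; under cost-based pricing every job worth doing transacts, while a single uniform price (assumed here to be 110, picked so that revenue roughly covers the cost of the jobs served) prices the cheap jobs out of the market:

```python
# Stylized toy example, not the model from the INFORMS study.
# Each job has a value to the customer and a cost to the provider.
jobs = {  # name: (value_to_customer, provider_cost)
    "A": (50, 10),
    "B": (120, 60),
    "C": (200, 150),
    "D": (45, 40),
}

def welfare(prices):
    """Total surplus (value minus cost) over the jobs that actually transact."""
    return sum(v - c for name, (v, c) in jobs.items() if v >= prices[name])

# Profit-maximizing providers in a competitive market: price tracks marginal cost.
cost_based = {name: c for name, (v, c) in jobs.items()}

# "Ethical" provider: one uniform price for everybody (110 is an assumption).
uniform = {name: 110 for name in jobs}

print("welfare under cost-based pricing:", welfare(cost_based))  # 155
print("welfare under uniform pricing:   ", welfare(uniform))     # 110
```

Jobs A and D are worth doing, since their value exceeds their cost, but they never transact at the uniform price; that exclusion is where the welfare loss comes from in this toy setup.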

Apparent common sense and social preferences are not necessarily good guiding principles for policy. Free markets and profit-maximizing decision-makers generally push complex societies toward higher welfare. The study correctly warns regulators and policy-makers to study social welfare issues before enacting uniform price policies.

(1) "INFORMS study shows social welfare may fall in a more ethical market," (e) Science News, Mathematics & Economics, August 25, 2014.

Monday, September 15, 2014

Redefining AI

Artificial Intelligence (AI), an artifact of the 80s, has been directionless. This is partly due to overuse of the term and the assignment of "intelligence" to such commonplace activities as search, rules-based logic, and machine learning. Recent news that researchers at North Carolina State University have been able to use "AI" to predict the goals of a player in a video game using machine learning highlights how poorly the term is understood. It may be time to redefine it more precisely so that claims of progress in this area can be tempered.

If AI is about intelligence, human intelligence, then most contemporary attempts at replicating it have failed. If AI is about naive search of large data spaces for patterns, or the use of classification, clustering, or rules-based logic on "big data," then AI will continue to flourish with no innovation in knowledge or software. In this vein, all AI needs is raw computing power. The current leader, Watson, is a case in point. Packing silicon ever closer together and massively parallel processing of set-logic channels is not AI, even though it may be able to find the answer to any vexing question asked in trivia games. Machine learning, the latest fad discovered by business brains unaware that it has been happening for many decades, has nothing to do with AI; it is just the raw application of mathematics, afforded by cheap memory and cheaper computers. What the AI crowd seems to be missing is that none of these, the ability to create models from data, to guess answers to trivia questions, or to predict goals, is about intelligence. It is about the inevitable marriage of computing power with established mathematics.

Human intelligence, however, is not mathematical, even though every scientist and engineer would like it to be. This is why the preeminent engineering schools of the world, in the East, the West, and in between, cannot make any progress in this area. Soccer-playing robots and self-driving cars, unfortunately, are not intelligent. They are unlikely to imagine string theory or appreciate art. Considering intelligence to be mechanical and mathematical is the first problem. Lately, it has been suggested that the hardware itself, the human brain, is a quantum computer. Feeble attempts at replicating this hardware are not going to get humans any further in AI, because fundamental issues remain in understanding the operating system and the applications that run on it.

Artificial Intelligence is meant to represent the complete replication of human intelligence. It is not parroting answers in Jeopardy or predicting behavior from historical data. Humans may be giving themselves too little credit by assuming that the crude machines they build are, in fact, intelligent.

Saturday, September 13, 2014

Analytical dogma

A recent study published in the Journal of the American Medical Association by Stanford researchers showcases an amazing statistic: in the last three decades of clinical research, there have been only 37 published re-analyses of randomized clinical trial data. Of this sample, over one-third came to conclusions that differed from the original analysis. An insular culture, unaware of accelerating technologies elsewhere, has been left behind with antiquated tools and techniques. And resistance to re-analysis is only one symptom of a prevalent disease in an industry that sticks to dogma and tradition. A regulatory regime that aids such behavior is not helpful either.

It is instructive to note that in the re-analyses that deviated significantly from the original conclusions, the researchers used different statistical and analytical methods. Even changes in hypothesis formation and in the handling of missing data seem to have made a difference to the conclusions. Furthermore, the new studies discovered common errors in the original publications. The definition and measurement of risk are important determinants of eventual conclusions, and in many industries the measurement and control of risk have progressed to substantially higher levels of sophistication. The researchers point out that sharing data and bringing in people and analytical techniques from other domains may overturn many conclusions currently held sacrosanct.
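As a tiny illustration of one of those levers, the handling of missing data, the sketch below uses invented numbers to show that complete-case analysis and last-observation-carried-forward imputation can produce noticeably different treatment-effect estimates from the same raw trial data (here the outcome is a symptom score, lower is better):

```python
# Invented longitudinal data: (baseline, follow_up) symptom scores per patient;
# None marks a dropout with no follow-up measurement. Lower scores are better.
treatment = [(5.0, 2.0), (6.0, 2.5), (5.5, None), (6.5, None), (5.0, 1.5)]
control   = [(5.5, 4.0), (6.0, 4.5), (5.0, 3.5), (6.5, None), (5.5, 4.0)]

def mean(xs):
    return sum(xs) / len(xs)

def complete_case(group):
    """Use only patients with an observed follow-up score."""
    return [f for b, f in group if f is not None]

def locf(group):
    """Last observation carried forward: dropouts keep their baseline score."""
    return [b if f is None else f for b, f in group]

for label, handler in [("complete-case", complete_case), ("LOCF", locf)]:
    effect = mean(handler(treatment)) - mean(handler(control))
    print(f"{label:13s} treatment-minus-control difference: {effect:.2f}")

# Complete-case analysis gives about -2.00, LOCF about -0.90: the same data,
# two defensible methods, and a treatment effect that shrinks by more than half.
```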

Scientists who depart from the spirit of science, in which sharing data, knowledge, and techniques across experts and domains is the norm, do damage not only to themselves but also to the industry. Regulators, steeped in antiquated and irrelevant methodologies and SOPs, merely aid persistent incompetence.

Tuesday, September 9, 2014

Holographic

A recent experiment at Fermilab investigates whether the apparent 3D space that surrounds us is a hologram. The idea that uncertainty envelops not only location and speed but also space itself is mind-bending, to say the least. If the universe's ability to store information is limited, then space itself becomes quantized, with far-reaching implications.

A holographic universe, if proven, could reduce this complexity and put many current theories out of commission. It is unclear whether physicists would really like such an outcome; most are used to the expanding particle zoo, convoluted strings, and deterministic views of the evolution of space-time. If space is fundamentally uncertain, as speculated by Hogan and Meyer at the University of Chicago, then status-quo views of space-time need to be rewritten.

The direction of knowledge toward simplification is apt even though it does not encourage heavy machines, particle smashing and ignorance building.

Monday, September 8, 2014

Moving deck chairs on the Titanic

News that a sizable piece of an asteroid that made a close encounter with the Earth crashed in Nicaragua, and that a massive solar storm narrowly missed us in 2012, nearly kissing us goodbye, are constant reminders that moving deck chairs is not much use in evading a Titanic-type disaster. Environmentalists and lamenting scientists have been burning the midnight oil to turn back the clock, "to protect the environment" and to slow down global warming. They fear the ice caps will melt, water levels will rise, and enormous strife will follow for humanity at large. That may be true, but such a problem exists only if humanity is still here to witness it in a few hundred years.

NASA and other space organizations around the world have been busy preparing probes to distant planets, to study, learn, and get ready for interplanetary travel for the masses. That is indeed commendable, but a more tactical need is to protect what is close at hand, not from global warming but from global disaster. The 60-foot meteor that crashed in Russia escaped all the observers' "monitoring devices," and logic tells us it cannot be a singular event. It would be ironic if the mighty human gets wiped out by an asteroid while preparing to travel to Mars and to slow down global warming by slapping solar cells on top of automobiles.

Protecting trees is great, but one has to ensure that a forest is possible first.