Scientific Sense Podcast

Thursday, September 25, 2014

Vanishing singularity

A recent hypothesis by physicists at the University of North Carolina contends that black holes simply cannot exist, mathematically. It is a bit too late, as Nobel-seeking scientists elsewhere had all but accepted that black holes sit at the center of most galaxies. More creative ones had speculated that black holes lead to other galaxies and provide easy avenues for time travel. This is a constant reminder that theories that lead to inexplicable outcomes, however well they fit some other observations, are not theories at all. They are fancies of grown men and women, constantly seeking meaning for the universe and their own careers.

Black holes have tickled the fancy of many, just as the concept of infinity has. The possibility of a phenomenon that apparently demonstrates division by zero in practice provided immense flexibility for researchers and science journalists alike. As they scorned the "religious" as ignorant, they hid their own massive egos under mountains of illiteracy. Competing theories disagreed, but competing scientists did not, for it was easier to prove the existence of the unseen than to reject the establishment. The possibilities were endless: black holes connecting with wormholes, bending space-time like a child playing with rubber bands. But little did they know that the child had a more complete perspective than their own, weighed down as they were by the pressures of publications and experiments under domes of heavy steel.

As the singularity evaporates with the radiation and associated mass, perhaps we could return to a regime dominated by Occam’s razor.

Sunday, September 21, 2014

Quorum disruption

A recent paper in Chemistry and Biology describes how certain bacteria use small molecules to “quorum sense,” essentially coordinating an attack on the host. The study also demonstrates how minor tweaks in the concentration of these chemicals could substantially disrupt the “attack signal” that improves the efficiency of the pathogenic bacteria. In the era of declining effectiveness of antibacterial agents, this seems like a favorable direction for research.
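
A minimal toy sketch of the mechanism the study exploits, threshold-style quorum sensing, can make this concrete. Every rate, threshold and inhibition fraction below is hypothetical; this is a generic model, not the paper's chemistry:

```python
# Toy quorum sensing: bacteria secrete an autoinducer molecule, and the
# colony "attacks" only when the shared signal crosses a quorum threshold.
def autoinducer_level(n_cells, secretion_rate=1.0, degradation=0.1):
    """Steady-state signal concentration: total production / degradation."""
    return n_cells * secretion_rate / degradation

def colony_attacks(n_cells, threshold=5000.0, inhibition=0.0):
    """The attack program fires only above the threshold. `inhibition`
    models a drug that soaks up a fraction of the signal molecules."""
    signal = autoinducer_level(n_cells) * (1.0 - inhibition)
    return signal >= threshold

print(colony_attacks(600))                   # True: quorum reached
print(colony_attacks(600, inhibition=0.2))   # False: a 20% tweak silences it
```

Because the decision is a sharp threshold on concentration, a modest perturbation of the signal is enough to keep the attack program switched off.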

A prevalent lack of innovation across the life sciences industry has kept a lid on tangible improvements in the quality of human life. Most of the current knowledge on how to attack pathogens has been stale for many decades. In predictable fashion, the industry has turned to creating bigger "hammers," and if size does not do it, a cocktail of antibacterial agents to quell the infection. This brute-force approach has led the less intelligent microorganisms simply to fall back on mutation to overcome, incrementally, whatever is put in front of them. There is no doubt who will win this war: bacteria have over 3 billion years of experience, and they are fully capable of upsetting incremental approaches to battling them.

Humans, arguably proud of the massive organ they carry on their shoulders, may have to get more sophisticated to stay ahead – taking yesterday’s technology and making it bigger is not going to do it. Disrupting communication signals seems like a better approach.

Wednesday, September 17, 2014

Profit maximization in societal design

A recent study in a journal of the Institute for Operations Research and the Management Sciences (INFORMS) (1) concludes what is obvious to some. It proves something that may be counterintuitive both to those who do not care for economics and to those who mix social preferences with economics and assume, without proof, that such a mixture leads to good policy and better societies.

The study looks at the automotive repair market and asks whether society would be better off if repair shops behaved ethically and held social preferences, as opposed to acting as purely profit-maximizing businesses in a free market. To a large swath of the population the answer seems obvious, as a profit-maximizing service provider sounds like a bad person. Yet in an environment where service providers are driven by ethical and social-preference considerations, the study shows that prices will rise, as such providers tend to charge higher but uniform prices to everybody. Customers, who no longer face price differentiation, are forced to a higher uniform price on average, and the analytical models show that society as a whole will be worse off. In a market that contains both ethical and profit-maximizing providers, the latter will quickly adopt the higher uniform price when it is convenient to them and refuse service to those with higher costs. In a system with only profit-maximizing providers and unconstrained transactions, market-clearing prices will reflect each provider's marginal cost, maximizing societal welfare. A toy example below illustrates the mechanism.
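
A minimal numeric sketch of that argument, assuming a toy market (three customers with hypothetical values and job costs; this is not the paper's model): under marginal-cost pricing every worthwhile repair happens, while a uniform price set to cover the hardest job prices out the marginal customer and destroys that surplus.

```python
# Toy welfare comparison: cost-based pricing vs. a uniform "ethical" price.
# All numbers are hypothetical, chosen only to illustrate the mechanism.
customers = [
    {"value": 60,  "cost": 40},   # easy job, modest willingness to pay
    {"value": 100, "cost": 40},   # easy job, high willingness to pay
    {"value": 100, "cost": 80},   # hard job, high willingness to pay
]

def welfare_cost_based(custs):
    """Competition drives each price to the job's marginal cost, so every
    customer whose value exceeds the cost is served. Since price is just a
    transfer, welfare is the sum of (value - cost) over served customers."""
    return sum(c["value"] - c["cost"] for c in custs if c["value"] >= c["cost"])

def welfare_uniform(custs, price):
    """One uniform price for everybody, set high enough to cover the hardest
    job. Customers valuing the repair below that price walk away, and their
    potential surplus is lost."""
    served = [c for c in custs if c["value"] >= price]
    return sum(c["value"] - c["cost"] for c in served)

print(welfare_cost_based(customers))   # 100: all three repairs happen
print(welfare_uniform(customers, 80))  # 80: the first customer is priced out
```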

Apparent common sense and social preferences are not necessarily good guiding principles for policy. Free markets and profit maximizing decision-makers, generally, push complex societies to higher welfare. The study correctly warns regulators and policy-makers to study social welfare issues before enacting uniform price policies.

(1) "INFORMS study shows social welfare may fall in a more ethical market," (e) Science News, Mathematics & Economics, August 25, 2014.

Monday, September 15, 2014

Redefining AI

Artificial Intelligence (AI), an artifact of the 80s, has been directionless. This is partly due to the overuse of the term and the assignment of "intelligence" to such commonplace activities as search, rules-based logic and machine learning. Recent news that researchers at North Carolina State University have been able to use "AI" to predict the goals of a player in a video game using machine learning highlights the idea that the term AI is poorly understood. It may be time to redefine it more precisely so that claims of progress in this area can be tempered.

If AI is about intelligence, human intelligence, then most contemporary attempts at replicating it have failed. If AI is about naive search of large data spaces for patterns, or the use of classification, clustering or rules-based logic on "big data," then AI will continue to flourish with no innovation in knowledge or software. In this vein, all AI needs is raw computing power. The current leader, Watson, is a case in point. Packing silicon ever closer together and massively parallel processing of set-logic channels is not AI, even though it may be able to find the answer to any vexing question asked in trivia games. Machine learning, the latest fad discovered by business brains without the understanding that it has been happening for many decades, has nothing to do with AI: it is just the raw application of mathematics, afforded by cheap memory and cheaper computers. What the AI crowd seems to be missing is that none of these abilities, creating models from data, guessing answers to trivia questions, predicting goals, is about intelligence. It is about the inevitable marriage of computing power with established mathematics.
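
To make the point concrete, here is a minimal sketch (made-up data) of what passes for "learning": fitting a linear model is one line of textbook linear algebra, the normal equations, executed quickly because computers are cheap.

```python
# "Machine learning" as plain applied mathematics: ordinary least squares.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))              # 100 observations, 3 features
true_w = np.array([2.0, -1.0, 0.5])        # the pattern hidden in the data
y = X @ true_w + rng.normal(scale=0.1, size=100)

# The entire "learning" step is a closed-form equation: w = (X'X)^-1 X'y.
w_hat = np.linalg.solve(X.T @ X, X.T @ y)
print(w_hat)   # recovers roughly [2.0, -1.0, 0.5]; no intelligence required
```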

Human intelligence, however, is not mathematical, even though every scientist and engineer would like it to be. This is why the preeminent engineering schools of the world, in the East, the West and in between, cannot make any progress in this area. Soccer-playing robots and self-driving cars, unfortunately, are not intelligent. They are unlikely to imagine string theory or appreciate art. Considering intelligence to be mechanical and mathematical is the first problem. Lately, it has been suggested that the hardware itself, the human brain, is a quantum computer. Feeble attempts at replicating this hardware phenomenon are not going to get humans any further in AI, because fundamental issues remain in understanding the operating system and the applications that run on it.

Artificial Intelligence is meant to represent the complete replication of human intelligence. It is not parroting answers in Jeopardy or predicting behavior from historical data. Humans may be giving themselves too little credit by assuming that the crude machines they build are in fact intelligent.

Saturday, September 13, 2014

Analytical dogma

A recent study published in the Journal of the American Medical Association by Stanford researchers showcases an amazing statistic: in the last three decades of clinical research, there have been only 37 published re-analyses of randomized clinical trial data. Of this sample, over one-third came to conclusions that differed from the original analysis. An insular culture, unaware of accelerating technologies elsewhere, has been left behind with antiquated tools and techniques. And resistance to re-analysis is only one symptom of a disease prevalent in an industry that sticks to dogma and tradition. A regulatory regime that aids such behavior is not helpful either.

It is instructive to note that in the studies that deviated significantly from the original conclusions, the researchers used different statistical and analytical methods. Even changes in hypothesis formation and in the handling of missing data seem to have made a difference to the conclusions. Furthermore, the new studies discovered common errors in the original publications. The definition and measurement of risk are important determinants of eventual conclusions, and in many other industries the measurement and control of risk have progressed to substantially higher levels of sophistication. The researchers point out that sharing the data, and employing people and analytical techniques from other domains, may overturn many conclusions currently held sacrosanct.
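
How much the handling of missing data alone can matter is easy to demonstrate. A contrived sketch (simulated data, not the JAMA study): the same trial analyzed two defensible ways, complete cases versus conservative imputation, yields very different p-values.

```python
# Hypothetical trial: no true treatment effect, but the worst responders
# in the treated arm drop out. Watch the analysis choice move the p-value.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(0.0, 1.0, 50)       # outcome scores, control arm
treated = rng.normal(0.0, 1.0, 50)       # treated arm, truly no effect
observed = np.sort(treated)[15:]         # the 15 worst responders drop out

# Analysis 1: complete cases only (ignore the missing). The surviving
# sample is biased upward, so the "effect" can look significant.
print(stats.ttest_ind(observed, control, equal_var=False).pvalue)

# Analysis 2: impute the dropouts at the control mean, a conservative
# choice. The apparent effect weakens substantially.
imputed = np.concatenate([observed, np.full(15, control.mean())])
print(stats.ttest_ind(imputed, control, equal_var=False).pvalue)
```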

Scientists who depart from the spirit of science, in which sharing data, knowledge and techniques across experts and domains is the norm, do damage not only to themselves but also to the industry. Regulators, steeped in methodologies and SOPs that are antiquated and irrelevant, merely aid persistent incompetence.

Tuesday, September 9, 2014

Holographic

A recent experiment at Fermilab investigates whether the apparent 3D space that surrounds us is a hologram. The idea that uncertainty envelops not only location and speed but also space itself is mind-bending, to say the least. If the universe's ability to store information is limited, then space itself becomes quantized, with far-reaching implications.
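
For context, the bound the holographic idea rests on (the Bekenstein-Hawking/'t Hooft area law) states that the maximum entropy, and hence information, a region can hold scales with the area of its boundary rather than its volume:

```latex
% Holographic bound: the information capacity of a region scales with the
% area A of its boundary, not with its enclosed volume.
\[
  S_{\max} = \frac{k_B\, A}{4\,\ell_P^{2}},
  \qquad
  \ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6 \times 10^{-35}\,\mathrm{m}
\]
```

A finite information capacity per Planck-scale patch is what makes "quantized space" conceivable in the first place.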

A holographic universe, if proven, could reduce the complexity and put many current theories out of commission. It is unclear whether physicists would really like such an outcome; most are used to the expanding particle zoo, convoluted strings and deterministic views of the evolution of space-time. If space is fundamentally uncertain, as speculated by Hogan and Meyer at the University of Chicago, then the status-quo views of space-time need to be rewritten.

The direction of knowledge toward simplification is apt even though it does not encourage heavy machines, particle smashing and ignorance building.

Monday, September 8, 2014

Moving deck chairs on the Titanic

News that a sizable piece of an asteroid that made a close encounter with the Earth crashed in Nicaragua, and that the massive solar storm of 2012 very nearly kissed us goodbye, are constant reminders that moving deck chairs is not necessarily useful for evading a Titanic-type disaster. Environmentalists and lamenting scientists have been burning the midnight oil to turn back the clock, to "protect the environment" and to slow down global warming. They fear the ice caps will melt, water levels will rise and enormous strife will follow for humanity at large. That may be true, but such a problem exists only if humanity is still here to witness it in a few hundred years.

NASA and other space organizations around the world have been busy preparing probes to distant planets, to study, learn and get ready for interplanetary travel for the masses. That is indeed commendable, but a more tactical need is to protect what is close at hand, not from global warming but from global disaster. The 60-foot meteorite that crashed in Russia escaped all the "monitoring devices" of the observers, and logic tells us that it cannot be a singular event. It will be ironic if the mighty human gets wiped out by an asteroid while preparing to travel to Mars and slowing down global warming by slapping solar cells on top of automobiles.
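
The scale of that Russian event is worth a back-of-the-envelope check. A short sketch, using rough public estimates for a Chelyabinsk-class rock (all inputs are approximate assumptions):

```python
# Kinetic energy of a ~60 ft stony meteor at typical entry speed.
import math

diameter_m = 20.0        # ~60 feet across (assumed)
density = 3300.0         # kg/m^3, typical stony asteroid (assumed)
velocity = 19_000.0      # m/s, typical atmospheric entry speed (assumed)

mass = density * (4.0 / 3.0) * math.pi * (diameter_m / 2.0) ** 3
energy_joules = 0.5 * mass * velocity ** 2
kilotons_tnt = energy_joules / 4.184e12   # 1 kiloton of TNT = 4.184e12 J

print(f"~{kilotons_tnt:,.0f} kt of TNT")  # roughly 600 kt, dozens of
                                          # Hiroshima-scale yields
```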

Protecting trees is great, but one has to ensure that a forest is possible first.

Sunday, August 31, 2014

Fail-safe

The Indian Institutes of Technology (IIT), the original five campuses, often considered the top engineering institutions in India and in Silicon Valley, have been losing their luster. Their graduates, understandably proud of a rich tradition, overlook the fact that the IITs have never figured in the top 100 educational institutions in the world. Now that "forward thinking politicians" have decided to take spilt milk and spread it across the country, the demise of a good brand may be around the corner.

There are many reasons why the IIT brand never climbed into the top echelon of the most cherished educational brands in the world. Stanford, for example, propelled itself to the top of the pile in a few decades by combining research with entrepreneurship and creating a climate of futuristic learning. Heavy investments in technology and marketing kept MIT close, and in the Midwest, Carnegie Mellon, Northwestern and the University of Illinois show flashes of greatness in their chosen specializations. What is common among all of them is research, and the ability to innovate. Great institutions are often criticized for their focus on research at the cost of teaching, but this fear is totally misplaced, for there is no learning without research, and any institution vying to compete with the best has to produce the goods, both in the fundamental advancement of science and in innovative applications of technology. IIT has never been able to do either.

The second reason the IITs are failing is their focus on bookish knowledge at the cost of experimentation. A well-hyped and well-advertised brand has had its pick of the top 2,000 students in the country for decades, and the fact that its graduates have done reasonably well is no reflection on the ability of the institution to shape them. It may have been the opposite. It has taken excellent raw material, perhaps as good as any institution could hope for, and turned it into bricks in the wall, adept at solving known equations and commonplace problems with high efficiency. However, in a world of accelerating knowledge and information, efficiency is delegated to machines, and the only remaining premium is on intellectual property (IP). A nation unable to create IP at a sustainable rate, in a regime that allows its protection, cannot go anywhere, no matter how many efficient engineers and doctors it produces.

To make matters worse, much worse, in a country run by corrupt politicians proudly wearing socialism on their long sleeves, nothing better could have been expected. In this grand tradition, they always wanted to democratize the brand. The idea of an elite educational brand, known across the world and reserved for the benefit of a few, makes them weep at night, for their nephews could never cross the threshold and their Swiss bank accounts were not enough to secure admissions. Such passion is never futile, and the solution seems obvious: make an IIT in every state of the union, and if possible every district, village and street corner, and spread the brand like chutney on dosa for the good of all. Those who say creativity is waning in a country bursting at its seams have never studied its political intelligentsia; they have always been creative.

The IIT, now reachable for politicians on demand and fully functioning on a quota system, dividing the pie neatly among every caste, creed and religion, has to prepare for the inevitable fall from giddy heights it was never designed to reach. Perhaps a tolerable exit is in the works: opening the higher-education market to competition will instantly expose the venerable brand to it, and that may be the shock it needs to wake from its long stupor.