Scientific Sense Podcast

Sunday, October 12, 2014

Pipe dreams

A recent article in Physical Review Letters proposes a novel form of “dark matter” known as “flavor-mixed multi-component dark matter.” The authors, who ran a large number of simulations on a supercomputer, conclude that they have successfully solved many of the vexing questions in the standard model – such as, “what exactly is the roughly 80% of matter out there that shows gravity, and if we do find it, what keeps it from collapsing?” These are the questions a child would ask if faced with the status-quo theory. Adding a few flavors and components seems to show that the standard model is in fact “correct” and the “exotic dark matter” is a bit more exotic than initially thought.

The amount of time, money, effort and computer time wasted proving an incorrect framework – one that fails to explain most of the observational data – is alarming. This is a very rich area for physicists, mathematicians and engineers, as a complex and likely incorrect theory provides significant empirical flexibility to invent particles, fields and flavors. Academics, driven by the need to publish papers, are likely to simulate more garbage to confirm what has already been asserted. In the process, they move humanity away from knowledge. There is little difference between fiction and current research in high energy physics.

Common sense, which has been “quantum evaporating” for a century, has to return to this field for it to move forward. This is unlikely to be aided by faster computers or wasteful grants.

Thursday, October 9, 2014

Quantum AI

A recent article in Physical Review claims that the application of quantum mechanics driven algorithms can substantially improve the performance of robots and other automatons that use artificial intelligence. This appears to be a step in the right direction, as it has been argued for a while that the brain itself is a quantum computer. Mediocre and linear attempts at artificial intelligence by leading engineering schools have long brought ill repute to the field of AI.

The “new idea” brings into focus the need for robots to be flexible – able to learn and act descriptively, not prescriptively. Computer behemoths and university computer science departments have spent many decades battling with software and hardware constructs that are totally useless for AI. To make matters worse, they showcase stupid applications such as Siri and the Jeopardy-winning Watson as examples of AI. These ideas have kept a generation of computer scientists bottled up, chasing irrelevant and incongruent ideas in an attempt to create intelligence artificially.
Perhaps we are approaching the exit of the dark ages of computing, held hostage by incompetent companies. The idea that intelligence depends on both qualitative and quantitative information comes as a shock to the traditionalists, but much has been written about it. For any straight-thinking person, it should be clear that intelligence cannot be coded in conventional languages and run on conventional computers.

Search companies and space administrations run by the government are unlikely to advance this field. It will require the creative and uninhibited minds of the next generation, less worried about world domination.

Wednesday, October 1, 2014

Mathematical segregation

Recent research from Duke, which seems to confirm earlier studies, shows that human habitats tend to become segregated once density exceeds a certain threshold. The researchers find that cities are more likely to segregate along racial, ethnic and other dimensions when the proportion of occupied sites to total available sites exceeds 25%, and they argue that mathematical simulations demonstrate such a result. Perhaps a simpler explanation for the finding is that available space and the associated options allow inhabitants to delay the decision to segregate. As density increases, they are forced to exercise the option to segregate, as further delay reduces its value. In either case, it is instructive to note that the need to segregate is as fundamental to humans as food and sex. The timing of the segregation decision is simply value maximizing and market based.
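The Duke simulations themselves are not reproduced here, but a minimal Schelling-style sketch (grid size, tolerance and occupancy levels below are hypothetical choices, not the paper’s parameters) illustrates how an occupancy threshold can tip a mildly preferring, mixed population into visible segregation:

```python
import random

# Minimal Schelling-style sketch (not the Duke model): two agent types on a
# toroidal grid; an agent moves to a random empty cell when fewer than
# TOLERANCE of its neighbors share its type. Higher occupancy leaves fewer
# escape cells, so mild preferences harden into segregated clusters.

SIZE = 30          # grid is SIZE x SIZE
TOLERANCE = 0.3    # minimum acceptable fraction of like neighbors
STEPS = 50

def make_grid(occupancy):
    cells = [None] * (SIZE * SIZE)
    n_agents = int(occupancy * len(cells))
    for i in range(n_agents):
        cells[i] = i % 2                       # two types: 0 and 1
    random.shuffle(cells)
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def neighbors(grid, r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr or dc:
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

def unhappy(grid, r, c):
    me = grid[r][c]
    occupied = [n for n in neighbors(grid, r, c) if n is not None]
    if not occupied:
        return False
    return sum(n == me for n in occupied) / len(occupied) < TOLERANCE

def step(grid):
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is not None and unhappy(grid, r, c) and empties:
                er, ec = empties.pop(random.randrange(len(empties)))
                grid[er][ec], grid[r][c] = grid[r][c], None
                empties.append((r, c))

def similarity(grid):
    # average fraction of like-typed neighbors: a crude segregation index
    scores = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            occupied = [n for n in neighbors(grid, r, c) if n is not None]
            if occupied:
                scores.append(sum(n == grid[r][c] for n in occupied) / len(occupied))
    return sum(scores) / len(scores)

for occupancy in (0.15, 0.25, 0.50, 0.90):
    grid = make_grid(occupancy)
    for _ in range(STEPS):
        step(grid)
    print(f"occupancy {occupancy:.0%}: similarity index {similarity(grid):.2f}")
```

In a sketch like this, the similarity index tends to climb with occupancy even though individual tolerance never changes – which is consistent with the option-value reading above, where crowding simply removes the ability to defer the decision.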

It seems humans, shackled to their clan legacy, are unable to break from the hard-wired need to be close to their own kind and far from the rest. What is ironic is that for most of their history – from roughly 100,000 to 10,000 years ago – the characteristics they used to identify clan membership included know-how and family ties. Modern humans, while maintaining the desire to segregate, have found much less substantial attributes to segregate by – such as the color of skin and political affiliations. And recently, they have overlaid that with even more meaningless attributes – such as religion and location.

Those holding out for a more peaceful world should understand that humans are ill-equipped to rise above the legacy they have been handed. In attempting to rationalize it, they seem to have made it worse.

Tuesday, September 30, 2014

Per capita knowledge

Switzerland and Sweden, each with more than two dozen Nobel laureates – over 30 prize winners per 10 million population – stand in stark contrast to India and China, which return about 0.07 on the same metric. Some may argue that this is a denominator problem, but with China and India together showing less than a dozen winners, it is tough to argue that it is a matter of size and scale. Furthermore, with Nobel prizes in areas such as Peace and Literature commanding an equal share to those in the hard sciences, one cannot put this down to a lack of resources and equipment.
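The metric itself is simple arithmetic; a quick back-of-the-envelope check (laureate counts and populations below are rough circa-2014 assumptions, for illustration only) reproduces the scale of the gap:

```python
# Rough, circa-2014 figures (illustrative assumptions, not exact counts):
# country -> (Nobel laureates, population in millions)
countries = {
    "Switzerland": (25, 8.1),
    "Sweden":      (29, 9.6),
    "USA":         (350, 318.0),
    "India":       (8, 1250.0),
    "China":       (9, 1360.0),
}

for name, (laureates, pop_millions) in countries.items():
    per_10m = laureates / (pop_millions / 10.0)   # prizes per 10 million people
    print(f"{name:12s} {per_10m:6.2f} laureates per 10 million")
```

With these rough inputs the leaders come out near 30 per 10 million, the US near a third of that, and India and China below 0.1 – the orders of magnitude discussed here, regardless of the exact counts used.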

Nobel prizes per capita is an interesting metric, as fundamental advances can be made with paper and pencil or by sheer imagination. It thus normalizes the technology advantages that could be attributed to advanced economies. With the US, Canada and France at only a third of the rate of the leaders, it is clear that access to resources and technology cannot explain the disparity. One has to look to educational systems and the prevailing cultural attitude toward fundamental innovation as possible attributes that influence this outcome. Lack of freedom in political systems, due to either autocracy or religion, could be another common factor pointing to underperformance.

Humans, 7 billion near clones, segmented across the world, show dramatic differences in the stock and flow of per capita knowledge. The laggards could learn a lot from observing those who do it better.

Sunday, September 28, 2014

Quantum chemistry

Recent news from Princeton University describes a technique that can closely approximate the lattice energy of molecular crystals, which may open up new avenues for pharmaceutical research. Innovations at the boundary of physics and chemistry have been slow, primarily due to the lack of flexibility in scientific disciplines that tend to prefer parochial and incremental improvements to traditional methods. The Princeton team shows how the crystalline form could be predicted using emerging ideas from quantum mechanics. Such processes could be fully incorporated into computational chemistry. With the availability of vast computing power and software technologies, this innovation could usher in the next wave of productivity in pharmaceutical discovery.

The generation gap has been value destroying in most disciplines. Doctors use stethoscopes and engineers use calculators, even though these technologies have been obsolete for decades. Similarly, in the labs, once investments have been made in a technology, companies, unaware of the concept of sunk costs, tend to use it forever. In a regime of accelerating knowledge and innovation, the inertia of past knowledge has become exponentially more costly for every discipline, company and individual. Ironically, in the modern world, ignorance coupled with flexibility is a far more valuable trait than knowledge of the past coupled with a resistance to change.

Incrementalism is a disease of the past. For the present, looking backward is likely the most costly stance of all.

Thursday, September 25, 2014

Vanishing singularity

A recent hypothesis by physicists at the University of North Carolina contends that black holes simply cannot exist – mathematically. It comes a bit too late, as Nobel-seeking scientists elsewhere had all but accepted black holes to be at the center of most galaxies. More creative ones had speculated that black holes lead to other galaxies and provide easy avenues for time travel. This is a constant reminder that theories that lead to inexplicable outcomes, however well they fit some other observations, are not theories at all. They are fancies of grown men and women, constantly seeking meaning for the universe and their own careers.

Black holes have tickled the fancy of many, just as the concept of infinity has. The possibility of a phenomenon that apparently demonstrates division by zero in practice provided immense flexibility for researchers and scientific journalists alike. As they scorned the “religious” as ignorant, they hid their own massive egos under mountains of illiteracy. Competing theories disagreed – but competing scientists did not, for it was easier to prove the existence of the unseen than to reject the establishment. The possibilities were endless – black holes connecting with wormholes, bending spacetime like a child playing with rubber bands. But little did they know that the child had a more complete perspective than their own, weighed down as they were by the pressures of publications and experiments under domes of heavy steel.

As the singularity evaporates with the radiation and associated mass, perhaps we could return to a regime dominated by Occam’s razor.

Sunday, September 21, 2014

Quorum disruption

A recent paper in Chemistry & Biology describes how certain bacteria use small molecules to “quorum sense,” essentially coordinating an attack on the host. The study also demonstrates how minor tweaks in the concentration of these chemicals can substantially disrupt the “attack signal” that improves the efficiency of the pathogenic bacteria. In an era of declining effectiveness of antibacterial agents, this seems like a favorable direction for research.
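The paper’s chemistry is not reproduced here, but a toy threshold model (all parameters hypothetical) captures the basic intuition: signal accumulates as the colony grows, the attack fires only once the concentration crosses a quorum threshold, and a quenching agent that removes a fraction of the signal each step can hold the colony below quorum even as it keeps growing:

```python
# Toy quorum-sensing sketch (hypothetical parameters, not the published model).
# Cells grow logistically and secrete a signal molecule; an attack is
# coordinated only once the signal crosses a quorum threshold.

THRESHOLD = 60.0     # signal concentration that triggers the coordinated attack
SECRETION = 0.01     # signal secreted per cell per step
DECAY = 0.10         # natural signal decay per step
CAPACITY = 10000.0   # carrying capacity of the colony

def simulate(steps, quench_rate=0.0):
    cells, signal = 100.0, 0.0
    for t in range(steps):
        cells += 0.2 * cells * (1.0 - cells / CAPACITY)   # logistic growth
        signal += SECRETION * cells                        # secretion
        signal *= (1.0 - DECAY) * (1.0 - quench_rate)      # decay plus quenching
        if signal >= THRESHOLD:
            return f"attack triggered at step {t}"
    return "quorum never reached"

print("untreated     :", simulate(200))
print("with quencher :", simulate(200, quench_rate=0.7))
```

The point of the sketch is that the intervention never kills a single cell; it only keeps the coordination signal below threshold, which is precisely why it sidesteps the mutation arms race described below.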

A prevalent lack of innovation across the life sciences industry has kept a lid on tangible improvements in the quality of human life. Most of the current knowledge on how to attack pathogens has been stale for decades. In predictable fashion, the industry has turned to creating bigger “hammers,” and if size does not do it, a cocktail of antibacterial agents to quell the infection. This brute-force approach has led the less intelligent micro-organisms to simply fall back on mutation to get past what is put in front of them, incrementally. There is no doubt who will win this war, as bacteria have over 3 billion years of experience – and they are fully capable of upsetting incremental approaches to battling them.

Humans, arguably proud of the massive organ they carry on their shoulders, may have to get more sophisticated to stay ahead – taking yesterday’s technology and making it bigger is not going to do it. Disrupting communication signals seems like a better approach.

Wednesday, September 17, 2014

Profit maximization in societal design

A recent study published by the Institute for Operations Research and the Management Sciences (INFORMS) (1) concludes what is obvious to some. It proves something that may be counterintuitive to those who do not care for economics, and to those who mix social preferences with economics and assume, without proof, that such a mixture leads to good policy and better societies.

The study looks at the automotive repair market and analyzes whether society would be better off if repair persons behaved ethically and held social preferences, as opposed to acting as purely profit-maximizing businesses (i.e. free markets). The answer – a resounding No – will surprise the large swath of the population to whom a profit-maximizing service provider seems like a bad person. The study shows that in an environment where service providers are driven by ethical and social-preference considerations, prices will rise, as such providers tend to charge higher but uniform prices to everybody. Customers, who no longer face price differentiation, are forced to a higher uniform price on average, and the analytical models show that society as a whole will be worse off. In a market that contains both ethical and profit-maximizing providers, the latter will quickly match the higher uniform price when it is convenient to them and refuse service to those with higher costs. In a system with only profit-maximizing providers and unconstrained transactions, market-clearing prices will reflect the provider’s marginal cost, maximizing societal welfare.
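The study’s analytical models are not reproduced here, but a two-customer toy example (all numbers hypothetical) illustrates the welfare intuition: a uniform price set to cover average cost prices the low-cost job out of the market, while cost-based quotes clear both transactions:

```python
# Toy welfare comparison (hypothetical numbers, not from the INFORMS study).
# Each repair job is (customer's valuation, provider's marginal cost).
jobs = [(80, 50), (200, 150)]

def welfare(prices):
    """Total surplus: a job transacts only if the quoted price is at most the
    customer's valuation; the surplus of a transacted job is value - cost,
    since the price itself is just a transfer between the two parties."""
    total = 0
    for (value, cost), price in zip(jobs, prices):
        if price <= value:
            total += value - cost
    return total

# Profit-maximizing providers in a competitive market quote each job near
# its own marginal cost.
differentiated = [cost for _, cost in jobs]

# An "ethical" provider quotes one uniform price covering average cost.
uniform_price = sum(cost for _, cost in jobs) / len(jobs)
uniform = [uniform_price] * len(jobs)

print("welfare, cost-based pricing:", welfare(differentiated))   # 80
print("welfare, uniform pricing   :", welfare(uniform))          # 50
```

The cheap job never happens under the uniform quote, so total surplus falls even though no one behaved “unethically” – which is the counterintuitive result the study formalizes.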

Apparent common sense and social preferences are not necessarily good guiding principles for policy. Free markets and profit-maximizing decision-makers generally push complex societies to higher welfare. The study correctly warns regulators and policy-makers to study social welfare effects before enacting uniform-price policies.

(1) “INFORMS study shows social welfare may fall in a more ethical market,” e! Science News, Mathematics & Economics, August 25, 2014.