Scientific Sense Podcast

Friday, September 2, 2016

The stock and flow of IQ

A recent publication in Royal Society Open Science (1) hypothesizes an intriguing idea: the explosion in hominin intelligence a few million years ago was aided by a significant increase in blood flow to the brain, not necessarily by an increase in its size. Indeed, the increase in blood flow may itself have driven the growth in brain size. If so, a lucky change in the diameter of the arteries carrying blood to the brain may be responsible for humans dominating the blue planet.

Plumbing appears to be central to health. It has been speculated that most autoimmune diseases, and even Alzheimer's disease, could be related to the body's declining ability to eliminate waste through its plumbing system. Age and use of these systems appear negatively correlated with their efficiency in clearance. If the recent finding is true, plumbing also plays a central role in the evolution of intelligence, providing the energy-hungry organ with ample fuel and cooling, and nourishing it from relatively modest beginnings in Australopithecus to arguably better modern humans.

Plumbing appears to be central to modern medicine. In a regime with little threat from microorganisms, humans may perish by the clogging of their own aging infrastructure.

(1) http://esciencenews.com/articles/2016/08/31/blood.thirsty.brains


Wednesday, August 24, 2016

Proxima test

As the year 2020 draws near, those who emphatically professed that conclusive evidence for extra-terrestrial life would be found by then are getting a bit nervous. But there is good news - a rocky planet, Proxima-b, has been found in the "habitable zone" a mere 4 light years away. All that remains now is "contact" - or perhaps spectral evidence of oxygen, water and methane - before uncorking the champagne. We are now months away from the declaration that life on Earth is not unique, albeit by proxy evidence. The space agency, as usual, is ahead of schedule.

Boring statisticians have devised constructs such as prior and posterior probabilities - something ET enthusiasts appear to have little respect for. They have left Enceladus, Saturn's beautiful moon and the most promising candidate for life in the neighborhood, in the dust with the news that Proxima-b has been found, for conjecture is more powerful than facts and spectra, and more beautiful than actual measurements. Before the space agency devises methods to test for life on Proxima-b, it has to ask two important questions.

1. What is the a priori probability of life on Proxima-b?
2. If they do not find life there, does the posterior change in any way?

If the answer to question 1 is zero, or the answer to question 2 is no, then there is no logical reason to explore the rocky cousin. We already know there is no "advanced life" there, because if there were, we would already be at war with it.
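
A one-function sketch of the statisticians' point, with purely hypothetical numbers: under Bayes' rule, a zero prior is immune to any evidence, which is exactly why question 1 matters.

```python
# Bayes' rule applied to the two questions above. If the prior is zero,
# no observation can raise the posterior; and if the posterior cannot
# change, the search carries no information. Numbers are hypothetical.
def posterior(prior, p_data_given_life, p_data_given_no_life):
    evidence = prior * p_data_given_life + (1 - prior) * p_data_given_no_life
    return prior * p_data_given_life / evidence if evidence else prior

print(posterior(0.0, 0.9, 0.1))    # zero prior -> posterior stays 0.0
print(posterior(0.01, 0.9, 0.1))   # modest prior -> evidence matters (~0.083)
```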

Tuesday, August 16, 2016

The fifth force

Recent research from the University of California, Irvine (1) speculates on the existence of a fifth fundamental force, apart from gravitation, electromagnetism and the weak and strong nuclear forces. This is possibly a newer direction, countering the tendency of contemporary physicists to spawn yet another "fundamental particle" any time they do not understand a phenomenon. Speculating on the existence of a new force at least shows some creativity in high energy physics, a field resting on the laurels of its recent discovery of the God particle in the noise generated at Geneva.

It appears that we are in a stalemate. Physics, dominated by technicians, has been running amok, plunking billions down the tunnel to prove particles of fantasy. Few are asking whether proving fantastic particles advances the field in any way. Recently, engineers hung mirrors to measure gravitational waves to the tune of a fraction of the diameter of a proton, to prove that two black holes merged nearly a billion years ago. Could somebody prove otherwise? Physics has reached a plateau, in which physicists can prove anything with the help of engineers.

Since we already have over a hundred "fundamental particles," perhaps defining a new "fundamental force field" is a step in a better direction.

(1) http://esciencenews.com/articles/2016/08/15/uci.physicists.confirm.possible.discovery.fifth.force.nature





Thursday, August 11, 2016

Deeper learning

Recently, statistical and mathematical techniques that were tried and abandoned decades ago in the effort to make computers smarter have resurfaced under better-sounding names. With nearly zero-cost computing power in the cloud, some companies have been plunging head first into the deep abyss. Deep learning is stylish - some even try "deep mind," in an attempt to replicate the complex human. What the younger scientists may not know is that most of what they have found has been known for a long time, and one should not mistake advancements owed only to the recent availability of cheap computing power and memory for "ground-breaking" work. Disappointments may be in store for those highly optimistic of "solving" human intelligence. The convergence of neuroscience and computer science is a welcome trend, but a dose of realism may be apt medicine for those modeling the mind, downloading the brain and possibly curing physical death.
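
To make the point concrete, here is a minimal sketch of the machinery behind the fashionable names: a multilayer perceptron trained by backpropagation, the algorithm popularized in the 1980s. The network size, data and learning rate are arbitrary choices for illustration.

```python
import numpy as np

# A minimal multilayer perceptron learning XOR by backpropagation -
# decades-old machinery, in plain numpy.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):
    h = sigmoid(X @ W1 + b1)                # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)     # backpropagate squared error
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out
    b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h
    b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())                 # approaches [0, 1, 1, 0]
```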

Ever since she stood up in the African savannah, a few hundred thousand years ago, the human has been puzzled by her own mind. She searched the skies, climbed mountains and dived into the oceans, seeking the object she could not herself define. The theory of consciousness has eluded her, for what she was seeking obviously interfered with the tools she was using. It was the ultimate prize. If she could understand herself, then a whole new world could open up. The body could be separated from the mind, and the latter rendered immortal. The mind could be replicated, networked and perpetuated across space and time. She could create and capture imagination at will. She could solve problems of infinite complexity, travel into interstellar space or even to another universe. If only she could understand the mind and concoct a theory of consciousness. But alas, it is not to be. Whatever one calls it, the "neural network" has failed to show signs of consciousness. Yet another technology is substantially sub-optimized by engineers and scientists most comfortable with deterministic answers to complex questions.

Ignorance is highly scalable in the presence of infinite computing power.

Monday, August 8, 2016

Premature existence

Recent ideas from Harvard (1) speculate on the lack of maturity of life as we find it on Earth. Contemporary theory, which appears highly unlikely to be true, suggests an origin of the universe about 14 billion years ago and its ultimate demise about ten trillion years from now, making it infinitesimally closer to birth than death. Life, if it is a property of the system, has had only a short moment in this unimaginably long time horizon. So prototypical observations of life, such as what we find in this uninteresting corner of the Milky Way, a rather commonplace galaxy, are premature by any stretch of the imagination. Even after the Sun balloons into a red giant in less than five billion years and consumes the blue planet, with all its ignorance locked up in a small space, the universe will continue and create life forms of much greater capability and interest.
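
Taking the figures above at face value, the arithmetic behind "infinitesimally closer to birth than death" is straightforward:

\[
\frac{1.4 \times 10^{10}\ \text{years elapsed}}{10^{13}\ \text{years projected}} \approx 0.14\%
\]

The universe, on this account, has lived barely a tenth of one percent of its projected span.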

Life on Earth shows severe limitations and indicates prototypical experimentation: infinitesimal life spans of biological units that appear incapable of perpetuating information and knowledge across time. Such a design can only be the result of an unsuccessful trial among a large number of possible simulations. The fact that "advanced life" is still battling the microscopic kind on Earth can only imply very early and unsophisticated experiments to gather raw data. If there is any learning present in this prototypical system, it has to be about what does not work rather than the other way around. The fact that the blue planet, at the exact distance from a medium-size star, with such abundant resources and a stable climate, has been unable to produce intelligence may instruct future experiments on what not to attempt.

It is early, but the known experiments appear to be headed in the wrong direction.

(1) http://esciencenews.com/articles/2016/08/01/is.earthly.life.premature.a.cosmic.perspective

Tuesday, August 2, 2016

Biology meets computing

The ease of biological manipulation by chemical means has, for over a century, held life sciences companies back from exploring a system that is equally receptive to electromagnetic interventions. Recent news that GSK has partnered with Alphabet to advance the field of bioelectronics is encouraging. The idea is to impart electrical signals to neurons through micro implants to cure chronic diseases. Although it is a profitable path to pursue, some caution may be in order.

There is a long history of engineering experts attempting prescriptive interventions on biological systems with an expectation of deterministic outcomes. Humans are generally good at engineering and traditional computing, but biological systems do not behave in such a predictable fashion. Human engineering competence comes from selection, in which simple tools based on Newtonian physics let them survive a harsh environment for nearly a hundred thousand years. The modern game is distinctly different, and it is unlikely that they possess the skills to fast-track it with the hardware they have built up. With "Artificial Intelligence" soaking up the airwaves, it is important for technology giants to bias toward humility. With projects such as the "death cure" so slow to come to fruition as to annoy the technologists behind them, perhaps not getting too excited about curing diabetes through neuron stimulation is a good idea. After all, nature took four billion years to get here, albeit without a computer farm that soaks up a high share of the world's electricity production. Competing with nature is likely to take a bit more than that.

The convergence of biology and computing is unavoidable. However, it is unlikely to be advanced by those with stagnant notions of either field.

Sunday, July 31, 2016

The inverted U

The value individuals add to society appears to follow the shape of an inverted U. On the high end, philosophers, scientists and technologists, who seem to know everything, appear to add very little of practical value. Recently, a scientist remarked, "when we found the gravity waves, the whole world stopped," unaware that most of the world did not know or care. On the low end, ignorance is as dangerous as cyanide, as ably demonstrated by contemporary politicians. In this scheme, the highest value to society appears to come from those in the middle - not too ignorant and not too arrogant. On both ends there is a common factor: a brain that is idle, because of either stupor or conceit.

To maximize the value that individuals contribute to society, it appears that they have to be lifted from the abyss of ignorance or shown a practical path down from the mountains. The perception of knowledge is as dangerous as a lack of awareness of ignorance, for both result in a loss of value to society through a lack of contribution. The former, locked behind ivy walls, lament a world of ignorance; the latter, aided and abetted by their leaders, scorn knowledge. In the middle, humans of flesh and blood are caught in a time warp they simply cannot escape. Apathetic and lethargic, the middle withdraw from media events and elections and watch from the sidelines. This is a movie with a sad ending - dominated either by the ignorant or by the arrogant, with little to differentiate between them.

Neither abundance nor lack of knowledge adds societal value. Those who are aware of their limitations do.


Wednesday, July 27, 2016

Simplified diagnostics

Research from Columbia (1) indicates that impairment in odor identification portends cognitive decline and possibly Alzheimer's disease. Such a test appears to dominate the more prevalent PET and MRI scans for diagnosis of these ailments. It is a constant reminder that humans are far from mastering, by engineering and scientific means, the workings of the organ they carry on their shoulders, and that simpler measurements of inputs and outputs dominate such mechanics.

The human brain has been an enigma. It consumes vast amounts of energy and often produces little utility, either to the immediate host or to society at large. It effectively delegates routine affairs to the vast Central Nervous System attached to it by a thread, and maintains a small garbage collection mechanism within itself to keep track, automatically, of the estate it manages. Often it gets bored and shows signs of over-design, a quirk of evolution that endowed it with vast processing power by incorporating quantum computing in a small and efficient space. Lack of large memory has allowed it to venture into the production of heuristics and to show signs of intelligence. But acceleration in technology and society has stretched it, rendering some functions of importance irrelevant and elevating the irrelevant to importance.

Its decline is painful to close observers but not necessarily to the host herself. In a regime that disallows rolling back time, the objective function has to be minimizing pain, physical or emotional. A system that pragmatically allows such an outcome by its own deterioration can only be considered good. Diagnostics that do not require cutting open the delicate organ or injecting it with radioactive materials may be the best kind, for recognizing the onset of decline in this beautiful machine is not necessarily good for the host or for close observers.

(1) http://esciencenews.com/articles/2016/07/27/smell.test.may.predict.early.stages.alzheimers.disease

Tuesday, July 19, 2016

Shining a light on a black hole

Research from the Moscow Institute of Physics and Technology (MIPT) (1) hypothesizes a measurable and elegant difference between a black hole and a compact object. The event horizon of the black hole, defined by the Schwarzschild radius, is significant: anything slightly bigger shows fundamental differences in behavior. A beam of scattered particles shows discrete spectra in the presence of a compact object that escaped collapsing into a black hole. If it were a black hole, it would be in a constant process of collapse, with a complete stoppage of time for an external observer, resulting in continuous and smooth spectra.
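
For scale, the Schwarzschild radius itself is a one-line computation, r_s = 2GM/c^2; the one-solar-mass input below is merely an example.

```python
# Schwarzschild radius r_s = 2GM/c^2, with standard physical constants.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # one solar mass, kg

r_s = 2 * G * M_sun / c**2
print(f"{r_s / 1000:.1f} km")   # ~3.0 km for one solar mass
```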

The concept of a black hole has been an enigmatic thought experiment for physicists and amateurs alike. Contemporary theory fails at the singularity and speculates a stoppage of time inside the event horizon, something that cannot be fully envisioned by humans trained in the practical regime of Newtonian mechanics. A black hole will never stop collapsing from an external perspective, and so there cannot be any ex post question about a black hole. Theories that attempt more detailed explanations beyond the event horizon are fantasy - just as the mathematically elegant string theory cannot be tested. In spite of all the engineering progress of the last hundred years, fundamental understanding has remained akin to a black hole - in suspended animation. A handful of men and women from the turn of the last century remain responsible for most of the abstract knowledge that humans have accumulated. The reason for this is unclear, but lack of imagination appears to be the prime suspect.

Fooling around with mathematics may give contemporary scientists satisfaction but explaining the stoppage of time will require more than that.

(1) http://esciencenews.com/articles/2016/07/01/the.energy.spectrum.particles.will.help.make.out.black.holes




Thursday, July 14, 2016

You are what you learn

Recent research from MIT (1) shows that the underlying attributes of music - consonance and dissonance - are not hard-wired. By contrasting the preferences of tribes with little exposure to Western music, such as some Amazonian tribes, with those of listeners of gradually increasing exposure, culminating in accomplished American musicians, the researchers show that the preference for consonance over dissonance is learned. Music, thus, appears to be personal, with preferences largely generated by experience rather than by an innate mechanism in the brain.

In the contemporary regime of accelerating data, the brain is bombarded with an information stream it was never designed to tackle. An intricate quantum computer specialized in pattern finding but with rather limited memory, the brain has been stretched to the point of undervaluing its advantages, struggling to keep large swaths of data in its limited memory banks. The learning processor, however, has been able to efficiently encode information in heuristics and dump the underlying raw data as fast as it can. As it loses history, the stored heuristics drive function and generate preferences as if they were part of the original operating system.

The finding has implications for many areas, not least the treatment of Central Nervous System (CNS) diseases such as racism, alcoholism and ego. Fast discarding of underlying information due to a lack of storage capacity prevents back-testing of learned heuristics. A limited training set of underlying data could have irreversible and dramatic influences on end outcomes. More importantly, a brain trained on misguided heuristics cannot easily be retrained, as the neurons become rigid with incoherent cycles.

You are what you listen to, you are what you eat and more importantly, you are what you learn.

(1) http://esciencenews.com/articles/2016/07/13/why.we.music.we.do

Tuesday, July 5, 2016

The failure of finite elements

Engineers and mathematicians, with a core competence in building complex structures from elemental and standardized components, have had a tough time with domains not amenable to prescriptive and deterministic logic. These include high energy physics, biology, economics and artificial intelligence. The idea that the behavior of a system cannot be predicted from its components is foreign to most hard sciences and their applications, supported by engineering and technology.

In complex organisms such as companies, it has long been recognized that outcomes cannot be predicted by an analysis of their components, however standardized those may be. The "rules of engagement," if not defined in elegant and closed-form mathematics, appear less relevant to those seeking precision. However, there is almost nothing in today's world that can be defined so precisely, and the recognition of this is possibly the first positive step toward embracing reality.

The interplay between physicists wanting to prove century old predictions and engineers standing ready to prove anything by heavy and complex machines, has been costly to society. The interplay between biologists and chemists wanting to influence systems with precise and targeted therapy and engineers standing ready to do so, has been costly to society. The interplay between economists looking to apply statistical precision to the unknown and engineers ready to build models to whatever is needed, has been costly to society.

Complex systems cannot be broken down into finite elements, for the behavior of the system does not emanate from its components.
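
A toy illustration of the closing point, assuming nothing about any particular discipline: in Conway's Game of Life, every cell follows a trivial, fully standardized local rule, yet the system exhibits structures - gliders, oscillators - that no element-by-element analysis would predict.

```python
import numpy as np

# Conway's Game of Life: trivial components, emergent system behavior.
# Grid size, seed pattern and step count are arbitrary choices.
grid = np.zeros((20, 20), dtype=int)
grid[1, 2] = grid[2, 3] = grid[3, 1] = grid[3, 2] = grid[3, 3] = 1  # a glider

def step(g):
    # count the eight neighbors of every cell, with wraparound
    n = sum(np.roll(np.roll(g, i, 0), j, 1)
            for i in (-1, 0, 1) for j in (-1, 0, 1) if (i, j) != (0, 0))
    return ((n == 3) | ((g == 1) & (n == 2))).astype(int)

for _ in range(8):
    grid = step(grid)
print(grid.sum(), "live cells - the glider persists and moves as a unit")
```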

Thursday, June 30, 2016

Cognitive monopoly

Major discontinuities in human history have often led to monopoly positions in subsequent markets, driven by winner-takes-all characteristics. In the modern economy, automobiles, airplanes and computers certainly fit this view. In the case of the internet, invented with taxpayer investments, attempts by a few to monopolize the flow of electrons have been averted thus far. But "net neutrality" is not something that rent-seeking behemoths are likely to accept in the long run, even if they did not pay for it.

The nascent wave - machine cognition - has the monopolists scrambling to get the upper hand. In this wave, capital, as measured by megaflops and terabytes, has a significant advantage. The leaders, flush with computing power, seem to believe there is nothing that could challenge their positions. Their expectations of technology acceleration appear optimistic, but we nonetheless seem to be progressing on an interesting enough trajectory. Although many, including the world's leading scientist, are worried about runaway artificial intelligence, one could argue that there are more prosaic worries for the 7 billion around the world.

Monopolies generally destroy societal value. Even those with a charitable frame acquire the disease of the "God complex" as the money begins to flow in. Humans are simple, driven by ego and an objective function biased toward either basic necessities or irrational attributes that are difficult to tease out. Contemporary humans can be classified by intuition, without even the simplest of algorithms - nearest neighbors - into those with access to information and those without. Politicians and policy makers have been perplexed that such a simple segmentation scheme seems to work in every part of the world's population, from countries to counties and cities. Cognition monopolists will make it infinitely worse.
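
For reference, the "simplest of algorithms" named above takes only a few lines; the two features and the labels are entirely made up for illustration.

```python
import numpy as np

# 1-nearest-neighbor: classify a point by the label of its closest
# training example. Toy data: two invented "information access" features.
train_X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
train_y = np.array(["connected", "connected", "unconnected", "unconnected"])

def classify(x):
    return train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]

print(classify(np.array([0.7, 0.6])))   # -> "connected"
```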

Can mathematics be monopolized? The simple answer is yes. In a regime of brute force over computing power highly available to a few, answers can be found even by the blind and the dumb. Perhaps there is still hope for the rest, as we have seen this movie before.



Saturday, June 25, 2016

Clans continue

It appears clear that habits formed over a hundred thousand years cannot be changed in a mere hundred. As homo sapiens ventured out of the African savannah, they were still tightly organized in small clans, less than a hundred strong, each with its own unique language, culture, religion and morality. In Europe and Asia, within hundreds of years of arrival, they erased their close relatives with astounding efficiency. They also successfully navigated disease and climatic change that reduced them to a few thousand, emerging out of the bottleneck with even tighter clan relationships.

Technology - aircraft, computers and the internet - opened up the modern economy in the blink of an eye. Economists, excited by the possibilities, argued for the opening up of countries, continents and economies, but they did not reckon with behavior patterns integrated deeply into the human psyche. Countries cling to their languages and apparent cultural nuances, aided by politicians who, in autocratic and socialistic regimes, seem to have convinced the populace that they can implement strategic policies that will make their countries "great again." In advanced democracies, a large percentage of the population seems to have self-taught the same ideas, and in some rare cases they have found surrogates who will sing the same tune as the autocrats, even though they do not know the words to the music. A dangerous trend has emerged in clans that profess to be democratic and sophisticated. The question is whether learning from mistakes is still possible - something that made humans successful in the past. Ironically, in the complex modern economy the outcomes are not clearly observable and often have long cycles. Getting mauled by a tiger is immediate feedback, but a stagnant and deteriorating economy offers little feedback to the larger population.

The modern economy, still largely driven by the clan instincts of the seven billion who occupy the Earth, cannot be shocked out of its stupor by logic. Perhaps photographs from space that show the little blue spot in the midst of chaos may appeal to the artistic side of humans. A little closer, they will find none of the demarcations depicted on maps and globes. After all, humans have shown great capability for abstract thought, albeit thoughts not often tested by logic.


Saturday, May 28, 2016

Redefining Intelligence

Intelligence, natural or artificial, has been a difficult concept to define and understand. Methods of measuring intelligence seem to favor speed and efficiency in pattern finding. "IQ tests" certainly measure the ability to find patterns, and artificial intelligence aficionados have spent three decades teaching computers to get better at the same. Standardized tests, following the same template, appear to measure the same attribute but couch the results as "aptitude" - perhaps to make them sound more plausible. Across all dimensions of education and testing, this notion of intelligence, and hence "aptitude," appears prevalent.

However, can the speed of pattern finding be the only metric for intelligence? Certainly, in prototypical systems and societies, efficiency in finding food (energy) and fast replication are dominant, and pattern finding is likely the most important skill in this context. If so, one could argue that the status-quo definition of intelligence measures a characteristic most useful for maximizing a simple objective function, governed largely by food and replicability. At the very least, a thought experiment may be in order to imagine intelligence in higher order societies.

If intelligence is redefined as the differential of the speed of pattern finding - an acceleration in pattern finding - then it can incorporate higher order learning. In societies where such a metric is dominant, the speed of finding patterns from historical data, albeit important, may not qualify as intelligence. One could easily imagine systems with a very slow speed of pattern finding at inception, if energy is focused more on the differential, allowing such systems to gain knowledge exponentially at later stages. Sluggish and dumb, such participants would certainly be eradicated quickly in prototypical societies, before they could demonstrate the accelerating phase of knowledge creation.
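
In symbols - the notation is mine, a sketch of the proposal rather than anything formal - let $v(t)$ be the speed of pattern finding at time $t$. The status-quo metric scores the speed itself, while the redefinition scores its differential:

\[
I_{\text{status quo}} = v(t), \qquad I_{\text{redefined}} = \frac{dv}{dt}
\]

A system with a small $v(0)$ but a large $dv/dt$ looks sluggish at inception and compounds knowledge later - exactly the participant described above.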

Intelligence, ill defined and ill measured, may need to be rethought if humans are to advance to a Level 1 society. It seems unlikely.

Monday, May 23, 2016

Salt water bubbles

Economists closer to salt water appear prone to thoughts of inefficiency and bubbles in the financial markets, something that can be cured by a single trip to the windy city. A recent study from Columbia University (1) asserts that it found over 13,000 bubbles in the stock market between 2000 and 2013. Using supercomputers, no less, and "big data," the authors appear to have "conclusively shown" that stock prices take wild and persistent excursions from their "fair values." Unfortunately, these academics, who profess to be "data scientists," have yet to encounter the phenomenon of the "random walk" - further evidence that "data scientists" should stay away from financial markets. After all, the "physicists" who descended on Wall Street have had a checkered history of "abnormal returns" wrapped in consistent negative alpha.

The remark from a Harvard graduate student - "I expected to see lots of bubbles in 2009, after the crash, but there were a lot before and a lot after" - is symptomatic of the problem faced by "data scientists" seeking problems to solve in super-domains they have no clue about, where the participants who determine outcomes are themselves equipped with pattern-finding technology. They may have better luck in real markets, for prices in financial markets are determined by a large number of participants, each with her own inefficient algorithms. The most troubling aspect of the study is the authors' belief that "a bubble happens when the price of an asset, be it gold, housing or stocks, is more than what a rational person would be willing to pay based on its expected future cash flows." In a world immersed in intellectual property, where future cash flows cannot be forecast precisely, the value of an asset cannot be determined by such simple constructs, which were rendered invalid decades ago.
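
The random-walk point can be made with a few lines of simulation; the drift, volatility and the naive excursion rule below are invented for illustration, not taken from the study.

```python
import numpy as np

# A pure geometric random walk - a price series with no bubbles by
# construction - still triggers a naive "far from fair value" detector.
rng = np.random.default_rng(42)
days = 250 * 13                            # roughly 2000-2013, daily
returns = rng.normal(0.0002, 0.01, days)   # small drift, 1% daily vol
price = 100 * np.exp(np.cumsum(returns))

window = 250                               # trailing "fair value" proxy
fair = np.convolve(price, np.ones(window) / window, mode="valid")
excursion = np.abs(price[window - 1:] / fair - 1) > 0.20

flags = np.sum(excursion[1:] & ~excursion[:-1])   # entries into "bubble"
print(f"'bubbles' detected in a bubble-free walk: {flags}")
```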

The lure of financial markets has been problematic for "data scientists" and "physicists." However, a cure is readily available in the academic literature emanating from the sixties.

(1) http://esciencenews.com/articles/2016/05/04/stocks.overvalued.longer.and.more.often.previously.thought.says.study

Monday, May 16, 2016

Small step toward bigger hype

Recent research from the University of Liverpool (1) suggests a method by which computers could learn languages through semantic representation and similarity look-ups. Although this may be in the right direction, it is important to remember that most of the work in teaching computers language, or even fancy tricks, is not in the realm of "artificial intelligence" but rather belongs to the age-old and somewhat archaic notion of expert systems. Computer giants, while solving grand problems such as chess, Jeopardy, Go and self-driving cars, seem to have forgotten that rule-based expert systems have been around since the inception of computers, long before some of these companies were founded. The fact that faster hardware can churn through a larger set of rules more quickly is not advancing intelligence, though it certainly helps efficient computing.
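
To see why this is closer to expert systems than to intelligence, here is a sketch of that age-old construct: a handful of if-then rules applied by forward chaining until nothing new fires. The rules and facts are toy inventions, not the Liverpool method.

```python
# A rule-based expert system: (conditions, conclusion) pairs applied
# repeatedly until no rule adds a new fact. All rules are toy examples.
rules = [
    ({"has_fur", "gives_milk"}, "mammal"),
    ({"mammal", "eats_meat"}, "carnivore"),
    ({"carnivore", "has_stripes"}, "tiger"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:                              # forward chaining
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fur", "gives_milk", "eats_meat", "has_stripes"}))
# -> includes "mammal", "carnivore" and "tiger"
```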

Engineering schools appear to still teach ideas that are already obsolete. Programming languages have been frozen in time, with prescriptive syntax and rigid control flow. Today's high-level languages are certainly practical - and immensely capable of producing inferior applications. Even those who could have "swiftly" assembled knowledge from previous attempts seem to have concocted a compiler that borrows from the worst that has gone before it. As they proclaim that "3 billion devices already run it" every time an update is pushed, or conduct conferences around the globe, dotting and netting, the behemoths do not seem to understand that their technologies have inherent limitations.

Computer scientists, locked behind ivy walls, are given skills that the world does not need anymore.

(1) http://esciencenews.com/articles/2016/05/06/teaching.computers.understand.human.languages


Thursday, May 12, 2016

Nutritional genetics

Research from Indiana University (1) speculates that physical traits could be substantially impacted by food. The adage that "you are what you eat" appears to work at a deeper, genetic level. In low-complexity biological systems such as ants and bees, variation in food at the larval stage seems to explain specialization at the genetic level. If true, this has implications beyond what has been observed.

Food, a complex external chemical, has to be metabolized, utilized and purged by biological systems routinely. Although it is clear that available energy content and processing efficiency will depend on the variation and complexity of inputs, the idea that food could cause genetic specialization is fascinating. More importantly, it may lead to better design of food to favorably impact physical and mental conditions, the latter possibly holding higher promise for humans.

Ancient cultures and medicines have routinely relied on food as the primary way to remedy tactical issues. The Indiana research may provide a path to propel this idea into more systematic and planned impacts.

(1) http://esciencenews.com/articles/2016/05/12/you.are.what.you.eat.iu.biologists.map.genetic.pathways.nutrition.based.species.traits

Thursday, May 5, 2016

No safety net

Recent research from Johns Hopkins (1) suggests there are over a quarter of a million deaths in the US per year due to medical errors. It is a sobering observation, one that future generations will look back on with anguish and, perhaps, incredulity. At the height of technology, we are slipping not for lack of know-how but for lack of application. One preventable death is too many, and the fact that medical errors are the third leading cause of death in the US is immensely troubling.

Unfortunately, technology by itself does not solve problems. Bigger data and faster computers are likely irrelevant if they cannot fundamentally influence decision processes and allow information flow to enhance decision quality. It is not about precision - there is no such thing - but about systematic use of all available information at the point of decision. Further, the human brain, with its inherent limitations, is unable to minimize downside risk in a regime of high utilization and volatility. A loss of life, a traumatic and life-changing event for any healthcare provider, looms large, but the environment simply does not allow anything more than what is tactically possible. The lack of a safety net beneath cascading, complex and error-prone processes suggests the need for a sudden and impactful change that most technology companies are unable to help with.

It is high time that healthcare embraced practical applications of available technologies to improve patient health and welfare.

(1) http://esciencenews.com/articles/2016/05/04/study.suggests.medical.errors.now.third.leading.cause.death.us