Scientific Sense Podcast

Monday, August 8, 2016

Premature existence

Recent ideas from Harvard (1) speculate on the lack of maturity of life as we find it on Earth. Contemporary theory, which appears highly unlikely to be true, suggests an origin of the universe about 14 billion years ago and its ultimate demise about ten trillion years from now, making it infinitesimally closer to birth than death. Life, if it is a property of the system, has had only a short moment in this unimaginably long time horizon. So prototypical observations of life, such as what we find in this uninteresting corner of the Milky Way, a rather commonplace galaxy, are premature by any stretch of the imagination. Even after the Sun balloons into a red giant in less than five billion years and consumes the blue planet, with all its ignorance locked up in a small space, the universe will continue and create life forms of much greater capability and interest.
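
A back-of-the-envelope ratio makes the point concrete: 14 billion years into a lifespan of roughly ten trillion is 1.4 x 10^10 / 1 x 10^13, or about 0.14 percent - the universe has lived through barely more than a thousandth of its allotted time.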

Life on Earth appears to show severe limitations and to indicate prototypical experimentation, with infinitesimal life spans of biological units that appear incapable of perpetuating information and knowledge across time. Such a design can only be the result of an unsuccessful trial among a large number of possible simulations. The fact that "advanced life" is still battling the microscopic ones on Earth can only imply very early and unsophisticated experiments to gather raw data. If there is any learning present in this prototypical system, it has to be about what does not work rather than the other way around. The fact that the blue planet, at the exact distance from a medium-sized star, with such abundant resources and a stable climate, has been unable to produce intelligence may instruct future experiments on what not to attempt.

It is early, but known experiments appear to be headed in the wrong direction.

(1) http://esciencenews.com/articles/2016/08/01/is.earthly.life.premature.a.cosmic.perspective

Tuesday, August 2, 2016

Biology meets computing

The ease of biological manipulation by chemical means has held life sciences companies back for over a century from exploring a system that is equally receptive to electromagnetic interventions. Recent news that GSK has partnered with Alphabet to advance the field of bioelectronics is encouraging. The idea is to impart electrical signals to neurons via micro implants to cure chronic diseases. Although it is a profitable path to pursue, some caution may be in order.

There is a long history of engineering experts attempting prescriptive interventions on biological systems with an expectation of deterministic outcomes. Humans are generally good at engineering and traditional computing, but biological systems do not behave in such a predictable fashion. Their engineering competence comes from selection, in which simple tools based on Newtonian physics let them survive the harsh environment they occupied for nearly a hundred thousand years. The modern game is distinctly different, and it is unlikely that they possess the skills to fast-track with the hardware they have built up. With "Artificial Intelligence" soaking up the airwaves, it is important for technology giants to bias toward humility. With projects such as the "death cure" so slow to come to fruition as to annoy the technologists behind them, perhaps not getting too excited about curing diabetes through neuron stimulation is a good idea. After all, nature took four billion years to get here, albeit without a computer farm that soaks up a high share of the world's electricity production. Competing with nature is likely to take a bit more than that.

The convergence of biology and computing is unavoidable. However, it is unlikely to be advanced by those with stagnant notions of either field.

Sunday, July 31, 2016

The inverted U

The value individuals add to society appears to be in the shape of an inverted U. On the high end, philosophers, scientists and technologists, who seem to know everything, appear to add very little of practical value. Recently, a scientist remarked, "when we found the gravity waves, the whole world stopped," unaware that most of the world did not know or care. On the low end, ignorance is as dangerous as cyanide, as ably demonstrated by contemporary politicians. In this scheme, the highest value to society appears to come from those in the middle - not too ignorant and not too arrogant. On both ends, there is a common factor: a brain that is idle, because of either stupor or conceit.

To maximize the value that individuals contribute to society, it appears that they have to be lifted from the abyss of ignorance or shown a practical path down from the mountains. The perception of knowledge is as dangerous as a lack of awareness of ignorance, for both result in a loss of value to society through a lack of contribution. The former, locked behind ivy walls, lament a world of ignorance; the latter, aided and abetted by their leaders, scorn knowledge. In the middle, humans of flesh and blood are caught in a time warp they simply cannot escape. Apathetic and lethargic, the middle withdraws from media events and elections and watches from the sidelines. This is a movie with a sad ending - dominated either by the ignorant or the arrogant, with little to differentiate between them.

Neither abundance nor lack of knowledge adds societal value. Those who are aware of their limitations do.


Wednesday, July 27, 2016

Simplified diagnostics

Research from Columbia (1) indicates that impairment in odor identification portends cognitive decline and possibly Alzheimer's disease. This simple test appears to dominate the more prevalent PET and MRI scans for the diagnosis of these ailments. It is a constant reminder that humans are far from mastering, by engineering and scientific means, the workings of the organ they carry on their shoulders, and that simple measurements of inputs and outputs still dominate such mechanics.

The human brain has been an enigma. It consumes vast amounts of energy and often produces little utility - either to the immediate host or to society at large. It effectively delegates routine affairs to the vast Central Nervous System attached to it by a thread and maintains a small garbage collection mechanism within itself to keep track, automatically, of the estate it manages. Often, it gets bored and shows signs of over-design, a quirk of evolution that endowed it with vast processing power by incorporating quantum computing in a small and efficient space. The lack of large memory has pushed it to venture into the production of heuristics and to show signs of intelligence. But acceleration in technology and society has stretched it, rendering some functions of importance irrelevant and elevating the irrelevant to importance.

Its decline is painful to close observers but not necessarily so for the host herself. In a regime that disallows rolling back time, the objective function has to be minimizing pain - physical or emotional. A system that pragmatically allows such an outcome by its own deterioration can only be considered good. Diagnostics that do not require cutting open the delicate organ or injecting it with radioactive materials may be the best kind, for the recognition of the start of decline of this beautiful machine is not necessarily good for the host or for close observers.

(1) http://esciencenews.com/articles/2016/07/27/smell.test.may.predict.early.stages.alzheimers.disease

Tuesday, July 19, 2016

Shining a light on a black hole

Research from the Moscow Institute of Physics and Technology (MIPT) (1) hypothesizes a measurable and elegant difference between a black hole and a compact object. The event horizon of the black hole, defined by the Schwarzschild radius, is significant - anything slightly bigger shows fundamental differences in behavior. A beam of scattered particles shows a discrete spectrum in the presence of a compact object that escaped collapsing into a black hole. If it were a black hole, it would be in a constant process of collapse, with a complete stoppage of time for an external observer, resulting in a continuous and smooth spectrum.

The concept of a black hole has been an enigmatic thought experiment for physicists and amateurs alike. Contemporary theory fails at the singularity and speculates a stoppage of time inside the event horizon, something that cannot be fully envisioned by humans trained in the practical regime of Newtonian mechanics. A black hole will never stop collapsing from an external perspective, and so there cannot be any ex-post question about a black hole. Theories that attempt more detailed explanations beyond the event horizon are fantasy - just like the mathematically elegant string theory that cannot be tested. In spite of all the engineering progress in the last hundred years, fundamental understanding has remained akin to a black hole - in suspended animation. A handful of men and women from the turn of the last century remain responsible for most of the abstract knowledge that humans have accumulated. The reasons for this are unclear, but lack of imagination appears to be the prime suspect.

Fooling around with mathematics may give contemporary scientists satisfaction but explaining the stoppage of time will require more than that.

(1) http://esciencenews.com/articles/2016/07/01/the.energy.spectrum.particles.will.help.make.out.black.holes




Thursday, July 14, 2016

You are what you learn

Recent research from MIT (1) shows that the underlying attributes of music - consonance and dissonance - are not hard-wired. By contrasting the preferences of tribes with little exposure to Western music, such as some Amazon tribes, with those of groups with gradually increasing exposure, culminating in accomplished American musicians, the researchers show that the preference for consonance over dissonance is learned. Music, thus, appears to be personal, with preferences largely generated by experience rather than by an innate mechanism in the brain.

In the contemporary regime of accelerating data, the brain is bombarded with an information stream it was never designed to tackle. An intricate quantum computer, specialized in pattern finding but with rather limited memory, the brain has been stretched to the point of undervaluing its advantages, struggling to keep large swaths of data in its limited memory banks. The learning processor, however, has been able to efficiently encode information in heuristics and dump the underlying raw data as fast as it can. As it loses history, the stored heuristics drive function and generate preferences, as if they were part of the original operating system.

The finding has implications for many areas, not the least of which is the treatment of Central Nervous System (CNS) diseases such as racism, alcoholism and ego. Fast discarding of underlying information due to a lack of storage capacity prevents back-testing of learned heuristics. A limited training set of underlying data could have irreversible and dramatic influences on end outcomes. More importantly, a brain trained with misguided heuristics cannot easily be retrained, as the neurons become rigid with incoherent cycles.

You are what you listen to, you are what you eat and more importantly, you are what you learn.

(1) http://esciencenews.com/articles/2016/07/13/why.we.music.we.do

Tuesday, July 5, 2016

The failure of finite elements

Engineers and mathematicians, with a core competence in building complex structures from elemental and standardized components, have had a tough time with domains not amenable to prescriptive and deterministic logic. These include high energy physics, biology, economics and artificial intelligence. The idea that the behavior of a system cannot be predicted from its components is foreign to most hard sciences and to the engineering and technology applications they support.

In complex organisms such as companies, it has long been recognized that outcomes cannot be predicted by an analysis of their components, however standardized those may be. The "rules of engagement," if not defined in elegant and closed-form mathematics, appear to be less relevant for those seeking precision. However, there is almost nothing in today's world that can be defined so precisely, and the recognition of this is possibly the first positive step toward embracing reality.

The interplay between physicists wanting to prove century-old predictions and engineers standing ready to prove anything with heavy and complex machines has been costly to society. The interplay between biologists and chemists wanting to influence systems with precise and targeted therapy and engineers standing ready to do so has been costly to society. The interplay between economists looking to apply statistical precision to the unknown and engineers ready to build models for whatever is needed has been costly to society.

Complex systems cannot be broken down into finite elements, for the behavior of the system does not emanate from its components.

Thursday, June 30, 2016

Cognitive monopoly

Major discontinuities in human history have often led to monopoly positions in subsequent markets, driven by winner-takes-all characteristics. In the modern economy, automobiles, airplanes and computers certainly fit this view. In the case of the internet, invented with taxpayer investments, attempts by a few to monopolize the flow of electrons have been averted thus far. But "net neutrality" is not something that rent-seeking behemoths are likely to accept in the long run, even if they did not pay for it.

The nascent wave - machine cognition - has the monopolists scrambling to get the upper hand. In this wave, capital, as measured by megaflops and terabytes, has a significant advantage. The leaders, flush with computing power, seem to believe that there is nothing that could challenge their positions. Their expectations of technology acceleration appear optimistic, but nonetheless we appear to be progressing on an interesting enough trajectory. Although many, including the world's leading scientists, are worried about runaway artificial intelligence, one could argue that there are more prosaic worries for the 7 billion around the world.

Monopolies generally destroy societal value. Even those with a charitable frame acquire the disease of the "God complex" as the money begins to flow in. Humans are simple, driven by ego and an objective function biased either toward basic necessities or toward irrational attributes that are difficult to tease out. Contemporary humans can be classified by intuition, without even the need for the simplest of algorithms - nearest neighbors - into those with access to information and those without. Politicians and policy makers have been perplexed by the fact that such a simple segmentation scheme seems to work in every part of the world's population, from countries to counties and cities. Cognition monopolists will make it infinitely worse.
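
For what it is worth, the nearest-neighbor rule alluded to above really is among the simplest of classifiers. A minimal sketch in Python, using a single hypothetical "information access" score as the feature (the numbers are invented, purely for illustration):

# 1-nearest-neighbor: label a query with the label of its closest example.
def nearest_neighbor(train, query):
    # train: list of (score, label) pairs; query: a score
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

population = [(0.9, "has access"), (0.8, "has access"),
              (0.2, "no access"), (0.1, "no access")]
print(nearest_neighbor(population, 0.75))  # -> "has access"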

Can mathematics be monopolized? The simple answer is yes. In a regime where a few hold brute force over highly available computing power, answers could be found by the blind and the dumb. Perhaps there is still hope for the rest, as we have seen this movie before.



Saturday, June 25, 2016

Clans continue

It appears clear that habits formed over a hundred thousand years cannot be changed in a mere hundred years. As Homo sapiens ventured out of the African savannah, they were still tightly organized as small clans, fewer than a hundred in strength, each with its own unique language, culture, religion and morality. In Europe and Asia, within hundreds of years of arrival, they erased their close relatives with astounding efficiency. They also successfully navigated disease and climatic change that reduced them to a few thousand, emerging from the bottleneck with even tighter clan relationships.

Technology - aircraft, computers and the internet - opened up the modern economy in the blink of an eye. Economists, excited by the possibilities, argued for the opening up of countries, continents and economies, but they did not account for the behavior patterns integrated deeply into the human psyche. Countries cling to their languages and apparent cultural nuances, aided by politicians who, in autocratic and socialistic regimes, seem to have convinced the populace that they can implement strategic policies that will make their countries "great again." In advanced democracies, a larger percentage of the population seems to have self-taught the same ideas, and in some rare cases they have found surrogates who will sing the same tune as the autocrats, even though they do not know the words to the music. A dangerous trend has emerged in clans that profess to be democratic and sophisticated. The question is whether learning from mistakes is possible - something that made humans successful in the past. Ironically, in the complex modern economy, the outcomes are not clearly observable and often have long cycles. Getting mauled by a tiger is immediate feedback, but a stagnant and deteriorating economy provides little feedback to the larger population.

The modern economy, still largely driven by the clan instincts of the seven billion who occupy the Earth, cannot be shocked out of its stupor by logic. Perhaps photographs from space that show the little blue spot in the midst of chaos may appeal to the artistic side of humans. A little closer, they will find none of the demarcations depicted on maps and globes. After all, humans have shown great capabilities to think abstractly, albeit such thoughts are not often tested by logic.


Saturday, May 28, 2016

Redefining Intelligence

Intelligence, natural or artificial, has been a difficult concept to define and understand. Methods of measuring intelligence seem to favor speed and efficiency in pattern finding. "IQ tests" certainly measure the ability to find patterns, and artificial intelligence aficionados have spent three decades teaching computers to get better at the same. Standardized tests, following the same template, appear to measure the same attribute but couch the results as "aptitude" - perhaps to make it sound more plausible. And across all dimensions of education and testing, this notion of intelligence, and hence "aptitude," appears prevalent.

However, can the speed of pattern finding be used as the only metric for intelligence? Certainly, in prototypical systems and societies, efficiency in finding food (energy) and fast replication are dominant, and pattern finding is likely the most important skill in this context. If so, one could argue that the status quo definition of intelligence measures a characteristic that is most useful for maximizing a simple objective function, governed largely by food and replicability. At the very least, a thought experiment may be in order to imagine intelligence in higher order societies.

If intelligence is redefined as the differential of the speed of pattern finding - an acceleration in pattern finding - then it can incorporate higher order learning. In societies where such a metric is dominant, the speed of finding patterns in historical data, albeit important, may not qualify as intelligence. One could easily imagine systems with a very slow speed of pattern finding at inception, if energy is focused more on the differential, allowing such systems to gain knowledge exponentially at later stages. Sluggish and dumb, such participants would certainly be eradicated quickly in prototypical societies, before they could demonstrate the accelerating phase of knowledge creation.
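
In symbols (notation mine): if p(t) is the speed of pattern finding at time t, the status quo measures p itself, while the proposed metric is its derivative dp/dt. A system with small p(0) but large dp/dt is dumb early and formidable late - exactly the participant a prototypical society would cull before its accelerating phase.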

Intelligence, ill defined and ill measured, may need to be rethought if humans are to advance to a Level 1 society. It seems unlikely.

Monday, May 23, 2016

Salt water bubbles

Economists closer to salt water appear to be prone to thoughts of inefficiency and bubbles in the financial markets, something that can be cured by a single trip to the windy city. A recent study from Columbia University (1) asserts that they could find over 13,000 bubbles in the stock market between 2000 and 2013. Using supercomputers, no less, and "big data," they appear to have "conclusively shown" that stock prices take wild and persistent excursions from their "fair values." Unfortunately, these academics, who profess to be "data scientists," are yet to encounter the phenomenon of the "random walk" - further evidence that "data scientists" should stay away from financial markets. After all, the "physicists" who descended on Wall Street have had a checkered history of "abnormal returns" wrapped in consistently negative alpha.

The remark from a Harvard graduate student - "I expected to see lots of bubbles in 2009, after the crash, but there were a lot before and a lot after" - is symptomatic of the problem faced by "data scientists" seeking problems to solve in super-domains they have no clue about, where the participants who determine outcomes are themselves equipped with pattern-finding technology. They may have better luck in real markets, for prices in financial markets are determined by a large number of participants, each with her own inefficient algorithms. The most troubling aspect of the study is that its authors believe "a bubble happens when the price of an asset, be it gold, housing or stocks, is more than what a rational person would be willing to pay based on its expected future cash flows." In a world immersed in intellectual property, where future cash flows cannot be forecasted precisely, the value of an asset cannot be determined by such simple constructs, which have been rendered invalid for decades.
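
For reference, the construct the authors lean on is the textbook discounted-cash-flow identity: fair value = sum over future periods t of E[CF_t] / (1 + r)^t, where E[CF_t] is the expected cash flow in period t and r the discount rate. The objection here is that when E[CF_t] cannot be forecasted with any precision, the left-hand side inherits that imprecision - and a "bubble" defined against it is unmeasurable.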

The lure of financial markets has been problematic for "data scientists" and "physicists." However, a cure is readily available in the academic literature emanating from the sixties.

(1) http://esciencenews.com/articles/2016/05/04/stocks.overvalued.longer.and.more.often.previously.thought.says.study

Monday, May 16, 2016

Small step toward bigger hype

Recent research from the University of Liverpool (1) suggests a method by which computers could learn languages through semantic representation and similarity look-ups. Although this may be in the right direction, it is important to remember that most of the work in teaching computers language, or even fancy tricks, is not in the realm of "artificial intelligence" but rather belongs to the age-old and somewhat archaic notion of expert systems. Computer giants, while solving grand problems such as chess, Jeopardy, Go and self-driving cars, seem to have forgotten that rules-based expert systems have been around from the inception of computers, long before some of these companies were founded. The fact that faster hardware can churn through a larger set of rules more quickly is not advancing intelligence, though it is certainly helping efficient computing.
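
Similarity look-ups of the kind described typically reduce to comparing word vectors. A minimal sketch - the three-dimensional "semantic" vectors below are hypothetical and purely for illustration; the Liverpool work is more elaborate:

import math

def cosine_similarity(u, v):
    # Cosine of the angle between two word vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norms = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norms

# Toy "semantic" vectors - hypothetical values.
vectors = {"king": [0.9, 0.8, 0.1], "queen": [0.85, 0.9, 0.1], "car": [0.1, 0.2, 0.9]}
query = vectors["king"]
best = max((w for w in vectors if w != "king"),
           key=lambda w: cosine_similarity(query, vectors[w]))
print(best)  # -> "queen"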

Engineering schools appear to still teach ideas that are already obsolete. Programming languages have been frozen in time, with prescriptive syntax and rigid control flow. Today's high-level languages are certainly practical and immensely capable of producing inferior applications. Even those who could have "swiftly" assembled knowledge from previous attempts seem to have concocted a compiler that borrows from the worst of what has gone before it. As they proclaim "3 billion devices already run it" every hour an update is pushed, or conduct conferences around the globe dotting and netting, the behemoths don't seem to understand that their technologies have inherent limitations.

Computer scientists, locked behind ivy walls, are given skills that the world does not need anymore.

(1) http://esciencenews.com/articles/2016/05/06/teaching.computers.understand.human.languages


Thursday, May 12, 2016

Nutritional genetics

Research from Indiana University (1) speculates that physical traits could be substantially impacted by food. The adage that "you are what you eat" appears to work at a deeper, genetic level. In low-complexity biological systems, such as ants and bees, variation in food at the larval stage seems to explain specialization at the genetic level. If true, this has implications beyond what has been observed.

Food, a complex external chemical, has to be metabolized, utilized and purged by biological systems routinely. Although it is clear that available energy content and processing efficiency will depend on the variation and complexity of inputs, the idea that food could cause genetic specialization is fascinating. More importantly, this may lead to better design of food to favorably impact physical and mental conditions, the latter possibly holding higher promise for humans.

Ancient cultures and medicines have routinely relied on food as the primary way to remedy tactical issues. The Indiana research may provide a path to propel this idea into more systematic and planned impacts.

(1) http://esciencenews.com/articles/2016/05/12/you.are.what.you.eat.iu.biologists.map.genetic.pathways.nutrition.based.species.traits

Thursday, May 5, 2016

No safety net

Recent research from Johns Hopkins (1) suggests there are over a quarter of a million deaths in the US per year due to medical errors. It is a sobering observation that future generations will look back on with anguish and, perhaps, incredulity. At the height of technology, we are slipping - not because of a lack of know-how but a lack of application. One preventable death is too many, and the fact that medical errors are the third leading cause of death in the US is immensely troubling.

Unfortunately, technology does not solve problems by itself. Bigger data and faster computers are likely irrelevant if they cannot fundamentally influence decision processes and allow information flow to enhance decision quality. It is not about precision - there is no such thing - but about the systematic use of all available information at the point of decision. Further, the human brain, with its inherent limitations, is unable to minimize downside risk in a regime of high utilization and volatility. A loss of life, a traumatic and life-changing event for any healthcare provider, looms large, but the environment simply does not allow anything more than what is tactically possible. The lack of a safety net below cascading, complex and error-prone processes suggests the need for a sudden and impactful change that most technology companies are unable to help with.

It is high time that healthcare embraced practical applications of available technologies to improve patient health and welfare.

(1) http://esciencenews.com/articles/2016/05/04/study.suggests.medical.errors.now.third.leading.cause.death.us

Saturday, April 30, 2016

Predictions with a single observation

A recent study from the University of Rochester (1) claims to improve the "Drake equation" - a constant reminder that multiplying random numbers does not provide any insight not already present in the components. Now, with exoplanets galore, the study, which appeared in Astrobiology, claims to put a "pessimistic estimate" on the probability of the non-existence of advanced civilizations elsewhere in the universe. Just as in previous attempts, the study suffers from traditional statistics. Most agree that a singular observation is not sufficient to predict anything, let alone the age and capabilities of advanced civilizations and the distance from Earth at which they could reasonably exist.
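
For reference, the Drake equation is the product N = R* x fp x ne x fl x fi x fc x L, where R* is the rate of star formation and the remaining factors - the fraction of stars with planets, habitable planets per system, the fractions developing life, intelligence and detectable technology, and the lifetime of such civilizations - are, at present, little better than guesses. Multiplying seven guesses yields an eighth.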

As the authors note, the space-time window afforded to humans is too narrow to prove or disprove anything. However, the tendency to be amazed by large numbers and the multiplicative effects of such constructs has led scientists to make non-scientific claims. Until at least a second data point becomes available, the effort expended on statistical analysis in this area is a waste of time. Availability of data is necessary but not sufficient to assign probabilities. Even those clinging to normality statistics, centuries old by now, know that they are not a good tool for making predictions.

More importantly, those awaiting ET's arrival have almost infinite flexibility to keep on searching. If one has a hypothesis, then an accumulation of negative findings against it, regardless of how many trials are possible, has to be given due consideration. As an example, if one claims that favorable conditions for life - water, oxygen and a heat source - exist on Enceladus, Saturn's famous moon, then investing in the exploration of the icy rock is reasonable. However, if one comes out empty, that cannot be irrelevant. Just because there are a trillion other rocks, in the solar system alone, that could be explored, one cannot simply ignore such an observation. At the very least, it should challenge the assumptions used by the space agency and others to justify such explorations. This "new" statistics - perhaps called the "statistics of large numbers" - in which no negative observation has any utility, is very costly, even though it is well positioned to pump out publications.
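
One way to state the objection precisely (notation mine): if H is the hypothesis "this moon harbors life," with prior probability P(H), a negative search result E must update it by Bayes' rule, P(H|E) = P(E|H) P(H) / P(E), and the posterior falls below the prior whenever the search had any power to detect life, that is, whenever P(E|H) < P(E|not H). A statistics in which no negative observation ever moves the prior is not statistics.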

Scientists engaged in irrelevant and invalid observations, aided by large numbers, may need to challenge themselves to advance the field.

(1) http://esciencenews.com/articles/2016/04/28/are.we.alone.setting.some.limits.our.uniqueness

Tuesday, April 26, 2016

Uncertain networks

Recent research from MIT, Chicago and Harvard (1) contends that smaller shocks in the economy could be magnified significantly by network effects. If true, it may provide guidance for policy that tries to "jump start" large economic systems through targeted investments. If the transmission of such shocks across the economy is predictable, it could favorably impact macro-economic decisions. However, looking back with a deterministic view of network evolution may have some downside.

Economic growth is driven by the conscious harvesting of uncertainty, not by strategic investments by bureaucrats or even corporations. Further, networks are in a constant state of evolution. Measuring GDP impact, a backward-looking measure, has less meaning in an economy driven by information, innovation and intellectual property. Firms locked into the status quo, with a rigid view of demand and supply, indeed fall prey to shocks amplified by static networks. But those keenly aware of unpredictable uncertainty and the value of flexibility could certainly rise above such external noise. The question is not how the present network amplifies shocks but rather how the networks are built. If they are built by organizations with a static view of the future, they will be brittle and consumed by minor shocks. The measurement of intellectual property by patents is symptomatic of the adherence to known metrics and a lack of awareness of where value originates.
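
A toy version of the amplification mechanism - my own sketch, not the paper's model - treats the economy as a linear input-output network in which a shock vector s produces a total impact x = (I - A)^(-1) s, the sum of all direct and indirect rounds of propagation:

import numpy as np

# Hypothetical 3-sector network; A[i, j] is sector i's reliance on sector j.
A = np.array([[0.0, 0.4, 0.3],
              [0.3, 0.0, 0.4],
              [0.4, 0.3, 0.0]])
shock = np.array([0.01, 0.0, 0.0])  # a 1% shock to sector 0 alone

# Leontief-style propagation: solve (I - A) x = shock for the total impact.
total = np.linalg.solve(np.eye(3) - A, shock)
print(total.sum() / shock.sum())  # amplification factor, well above 1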

Empirical analyses in the context of accepted theories have less value for the future - policy or not. The field of economics has to evolve with the modern economy. Lack of innovation will always have a negative effect on the economy - no further analysis is needed.

(1) http://esciencenews.com/articles/2016/04/06/how.network.effects.hurt.economies

Monday, April 25, 2016

Biological storage

Recent news (1) that University of Washington researchers have successfully stored and retrieved data using DNA molecules is exciting. As the world accumulates data - already many tens of millions of gigabytes, growing at an alarming rate - storage capacity and efficiency are becoming top priorities in computing. With material sciences still lagging and computer scientists clinging to the silicon status quo, it is important that the field takes a biological direction. After all, nature has coded and retrieved complex information forever. Using the same mechanism is likely a few orders of magnitude more efficient than what is currently available.

DNA, the most important biological breakthrough of the last four billion years, has been almost incomprehensible to humans, arguably the best product of evolution. Lately, however, they have been able to understand it a bit better. Although some argue that the human genome map is the end game, it is likely just a humble beginning. The multifactorial flexibility afforded by the DNA molecule may allow newer ways to store binary data, making it akin to that other belated innovation - quantum computing. There, thus far, research has focused on mechanistic attempts to force-fit the idea into the silicon matrix. Taking a biological route, perhaps aided by the best functioning quantum computer, the human brain, may be a more profitable path.
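
The base encoding itself is easy to sketch: with four bases, each base carries two bits. A minimal illustration in Python - the actual UW scheme adds addressing and redundancy and is considerably more elaborate:

# Map each pair of bits to one of the four DNA bases (illustrative mapping).
BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
INV = {v: k for k, v in BASE.items()}

def to_dna(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def from_dna(strand: str) -> bytes:
    bits = "".join(INV[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = to_dna(b"hi")
print(strand, from_dna(strand))  # CGGACGGC b'hi'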

Biology could accelerate lagging innovation in material sciences and computer science.

(1) http://esciencenews.com/articles/2016/04/07/uw.team.stores.digital.images.dna.and.retrieves.them.perfectly

Sunday, April 17, 2016

Bacteria rising

Recent research from Michigan State University (1) demonstrating the rise of multi-drug-resistant bacteria due to the overuse of antibiotics in animals is troubling. It has long been known that flu originates in farms with multi-species interactions. As mentioned in the article, the swine farms in China are particularly problematic, as they allow easy gene transfers among bacteria. This, in conjunction with the lack of antibiotics research for decades due to declining commercial economics, could result in a perfect storm.

Bacteria have been dominant all through the history of the planet. A robust architecture with fast evolution by sheer numbers led them to largely supersede every other biological life form on Earth. For the past several decades, they have been put on the back foot, for the first time, by humans. All they need, however, is a sufficient number of trials to develop resistance against any anti-bacterial agent. Data show that they are well on their way, thanks to the variety of experiments afforded to them by inter-species breeding and the overuse of known agents.

The economics of this indicates that commercial organizations are unlikely to focus on the problem until it is too late. If R&D, commercial or publicly funded, is not focused on this developing problem, we may be heading toward a regime that makes Ebola look like a household pet. In a highly connected world of intercontinental travel, it is easy for the single-celled organism to hitch a ride to anywhere it would like to go. Thus, local efforts are not sufficient, and a true global push is needed to compete against the abundant experience collected over four billion years.

R&D prioritization at the societal level needs to take into account the downside risk and value of investments. 


(1) http://esciencenews.com/articles/2016/04/12/antibiotic.resistance.genes.increasing

Sunday, April 10, 2016

Hole in the soul

The super-void that presented itself in the background radiation, sporting a size of close to two billion light years, has baffled scientists. A few years after its discovery, no reasonable explanation is forthcoming. The standard model, on which most still pin their research, has shown so many cracks that the scientists who adhere to it are beginning to look like economists, who have similar difficulty letting go of established theories. Hunting in the particle forest, either to propose new particles or to prove that the hypothesized ones indeed exist, has been the favorite pastime of physicists. Recently, they even measured the reverberation of gravity waves, generated by an event over a billion years ago, to the tune of the diameter of a proton. Now, it is nearly impossible to disprove anything in physics.

Biologists and chemists are in the same spot. Technology is advancing so fast that scientists are running out of hypotheses to prove. There are so many engineers and technologists, pumped out by the elite educational institutions around the world, who stand ready to prove anything in science. We are fast approaching a regime with a dearth of ideas and an oversupply of proofs. Is this what was envisioned a few decades ago by visionary scientists, who predicted a future in which there would be nothing more to prove? If so, it would be bleak, indeed.

Creating hypotheses, a skill that was left undernourished for a few decades, may need to be brought back.

Tuesday, March 29, 2016

Return to hardware

Hardware design has been losing luster for many decades in the world of computers. Software has been riding high, partly aided by hype and partly due to the younger crowd plunging deep into apps to make a quick buck and gain the associated fame. Monopolies have been created on inferior operating systems and office automation, while those opposed to them have been chasing public-domain noise. Even educational institutions, following the latest ephemeral trends, with half-lives of runway fashions, have been churning out courses with little utility for the future. Some have been putting content online, and others still want students to toil under fluorescent lighting at wooden desks while picking up the skills of the future.

Computer science has gone astray. Humans, susceptible to incrementalism, have been chasing false dreams on antiquated frameworks. Just like their predecessors, modern humans always attempt to scale inferior performance by massive parallel processing. They stack circuits ever closer and network computers ever larger in an attempt to squeeze out performance. Meanwhile, software companies hungry for speed and scope have created clouds of silicon that appear to suck up an outsized share of energy production. Data have been accumulating in warehouses, some never to see the light of day, others creating havoc and panic in complex organizations. Economists often worry about bubbles, for some are not so sanguine about rationality, but technologists never dream of a software bubble, as they presuppose such conditions.

It's time to leave synthesized voices, fake artificial intelligence and bleak games behind and return to hardware. Without two orders of magnitude of performance improvement, there are very few apps that would move humanity, and that improvement can only come from practical quantum computing. Notwithstanding the much anticipated version X of existing operating systems and mobile phones, without innovation in hardware, humans will swim in a sea of mediocrity forever. There are glimmers of hope, however. Recent news that larger quantum circuits could be built in more direct ways (1) is encouraging.

Educational institutions have an obligation to move society toward the future, not just to follow trends that will fill up classrooms - physical or virtual.


(1) http://esciencenews.com/articles/2016/03/26/unlocking.gates.quantum.computing




Friday, March 25, 2016

Go AI??

Artificial Intelligence is in the air again. It is such a nice concept that its inventors have been suspected of nourishing the "God complex." Deep Blue triumphed in chess, Watson beat out mere humans in Jeopardy and can understand how music is made and speak about it in a synthesized human voice, and now the famous search company has conquered Go. What's left in AI to solve?

Silicon has been alluring to engineers for four decades. They could double the speed of the "chip" every 18 months, and mere extrapolation of this idea would have instructed even the less mathematically endowed that the belated singularity is indeed near. Now that the game of Go, which has a practically limitless number of move permutations, has been conclusively solved by the electronic brain, we are likely nearing the inevitable. And that is bad news, especially for those in school toiling with such mundane subjects as computer science, programming and application development. Very soon, all of these will be delegated to machines, most of which will be artificially intelligent to a level perhaps surpassing even contemporary politicians. Some claimed decades ago that humans are nearing a state of "perfect knowledge." In physics, the speculation has been that no mystery will remain in a few decades. Now humanity has taken an important leap toward a future in which artificial intelligence can quickly mop up any remaining mystery in any field - physics, medicine and even economics.

Chess, Jeopardy, self-driving cars, neural nets seeking cat videos, twitter girl, Go... extrapolation certainly indicates the unstoppable triumph of artificial intelligence. The only remaining mystery is what billions of ordinary humans will do. The quantum computers they carry on their shoulders will become virtually useless in this regime of artificial intelligence dominance.

Friday, March 18, 2016

Scaling humanity

Reaching a critical mass and the minimum efficient scale are important concepts for many systems - biological, economic and business. Humans, separated by space and time for most of their history, could not reach this inevitable threshold for nearly a hundred thousand years. Supported by technology, there are encouraging signs that we are fast approaching the minimum efficient scale of knowledge creation and consumption. The planet remains heavily endowed, and it can easily support many multiples of the current human population as long as they are able to network their brains for the benefit of all.

What appears to be lacking is a framework. Earlier weak attempts, such as religion and countries, simply could not sustain a momentum that would unify humans in sufficient numbers to reach the necessary scale. Basic sciences, albeit attractive in many ways, could not light the passion underneath the human kiln. The strong forces that operate to separate rather than unify, aided by the clan experiences of humans, have had the upper hand thus far. However, technology is making irreversible impacts on the human psyche, propelling humans to the next level. If so, they could make the planet eminently contact-worthy for outsiders.

Humans have been here before, however. In all cases, it appears that they have come up short. Insufficient technology for networking appears to be the common culprit in previous attempts. Stitching human brains together to reach the minimum efficient scale has eluded them, a failure compounded by hard constraints such as life span. Shrinking space and time, as well as expanding life spans, appear to be necessary conditions for sustainable development. Here, technology seems to show encouraging signs.

Space agencies and physicists lamenting about lack of "contact" may be well advised to ask why such "contact" would be made.

Friday, March 11, 2016

Mathematical music


Recent research from the University of Tokyo (1) that proposes a deeper dive into the structure of music by analyzing "the recurrence plot of the recurrence plot," in an effort to understand the emotive power of music, could be misplaced. Mathematical probing into the structure of creative work has often failed to capture the substance of the emotions that animate it. Mathematics has been an important language in the history of human development. However, humans are less perfect than the constructs math can reasonably model, and they often exhibit irrationality and creativity at random. It is the lack of "structure" that defines creativity, and the effort expended by educational institutions to define such irrational phenomena in a language mathematicians can understand could be wasted.
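
For readers unfamiliar with the construct: a recurrence plot of a series x marks the pairs of times at which the trajectory revisits itself - R[i, j] = 1 if |x_i - x_j| < eps. A minimal sketch (the series and threshold here are arbitrary; the Tokyo study iterates the construction a second time, per the paper):

import numpy as np

def recurrence_plot(x, eps):
    # R[i, j] = 1 where the series returns to within eps of itself.
    x = np.asarray(x)
    return (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_plot(np.sin(t), eps=0.1)
print(R.shape, R.trace())  # (200, 200) 200 - the diagonal is always 1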

Human emotions have been enigmatic - they have escaped mathematical modeling thus far. Evolution seems to have been flexible enough to allow human behavior that has little value in hunting and survival. Yet such behavior sustained the human psyche in a world of stress and tribulation, and lifted it into a realm that is mathematically undefinable. The visions of Einstein and Bach, unconstrained by mathematics, propelled humanity forward. As engineers attempt to prove that "gravity waves" exist, a century after they were proposed by sheer creative thought, one has to wonder if humanity is being sterilized of such notions.

Mathematics, an idealistic concept, is inept at the analysis of human emotions.

(1) http://scitation.aip.org/content/aip/journal/chaos/26/2/10.1063/1.4941371

Wednesday, February 24, 2016

Lawless innovation

A recent study (1) arguing that "constituency statutes have significant effects on the quantity and quality of innovation" in companies seems to fall into the familiar trap of pitting stakeholder value against shareholder value. For many decades, the argument has been that companies and societies that focus on the value of stakeholders - employees, communities and the environment (e.g., Scandinavia) - do better than those focused on shareholder value (e.g., the US). This is the result of a wrong perception that a focus on shareholder value means "short-term profits" while stakeholder value maximization is a long-term process. There is significant empirical evidence that the market and investors are not "short-term focused" and are fully capable of assessing and valuing any choices (short or long term) made by the managers of the firm. Assuming that markets are myopic, without evidence, may not be a good thing.

It is important not to assume that the first correlation found in the data is the underlying cause. Note that stakeholder value choices, unless they translate into shareholder value over some horizon, are value destroying. Further, the "quantity and quality of innovation" are difficult to measure. A few innovations are responsible for most of the GDP in the economy, and in winner-takes-all markets, the marginal benefit of innovation in aggregate is simply noise. A more interesting question concerns the structure, systems and strategies of firms (2) that encourage innovation. It is possible that innovative firms will remain so, regardless of the bureaucracies and statutes imposed on them.

Innovation emanates from the culture of the firm - not from the laws created by those, out of touch with the present economy.


(1) http://esciencenews.com/articles/2016/02/18/a.stake.innovation
(2) https://www.crcpress.com/Flexibility-Flexible-Companies-for-the-Uncertain-World/Eapen/9781439816325

Saturday, February 6, 2016

Innovative Life Sciences

Recent research (1) showing that graphene could be utilized to interact with neurons opens up a new avenue for research and practice to cure cognitive disabilities and possibly treat CNS diseases. More importantly, this is a profitable direction for biosciences to accelerate innovation. From the moment humans figured out they could impact the system by the ingestion of chemicals, they have focused singularly on that. The system, however, is clearly electromagnetochemical, providing plenty of opportunities for more elegant interventions without multifactorial and unpredictable long-term effects. Chemistry has plateaued, and life sciences companies with a vision of the future have to move in a direction they are uncomfortable with.

Such an innovative departure in life sciences will take new leadership and collaboration with emerging ideas and technologies. The impact will be far reaching - possibly replacing chemicals as the only non-invasive intervention. Medical education has to consider robotics, precision electronics and even high energy physics. Computer science and information science have to become integral to diagnosis and treatment. The meaning of intervention has to change, with impacts on the brain and the body simultaneously for optimum effect. In a regime of subdued bugs, unable to threaten the mighty human, it is going to be a battle against the body and the mind. Here, chemicals fail.

Innovation in life sciences will not come from incremental improvements to existing therapies; it will come from embracing hitherto unknown intervention modalities.

(1) http://esciencenews.com/articles/2016/01/29/graphene.shown.safely.interact.with.neurons.brain

Saturday, January 30, 2016

Data science blindspot

Recent research from MIT claiming that its "data science machine" does better than humans at predictive models is symptomatic of the blind spots affecting data scientists - both the human and non-human variety. Automation of data analytics is not new; some have been doing it for many decades. Feature selection and model building can certainly be optimized, and that is old news. The problem remains how such "analytics" ultimately add value to the enterprise. This is not a "data science problem" - it is a business and economics problem.
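
Automated feature selection and model fitting of the kind described is a few lines in any modern toolkit, which is rather the point. A minimal scikit-learn sketch on synthetic data (illustrative only; the MIT system is far more elaborate):

from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Synthetic data: 50 candidate features, only 5 carry signal.
X, y = make_classification(n_samples=500, n_features=50, n_informative=5,
                           random_state=0)
pipe = Pipeline([
    ("select", SelectKBest(f_classif, k=5)),      # automated feature selection
    ("model", LogisticRegression(max_iter=1000)), # automated model fitting
])
print(cross_val_score(pipe, X, y, cv=5).mean())   # predictive skill, no human in the loop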

Investments by companies in technologies that claim to read massive amounts of data quickly in an effort to create intelligence are unlikely to have positive returns for their owners. Information technology companies, with their tendency to formulate problems primarily as computation problems, mostly destroy value for companies. Sure, it is an easy way to sell hardware and databases, but it has very little impact on the ultimate decisions that affect companies. What is needed is a combination of domain knowledge and analytics - something neither the PowerPoint gurus nor the propeller heads can deliver by themselves. Real insights sit above such theatrics, and they are not easily accessible to decision-makers in companies.

Just like the previous "information technology waves" called "Enterprise Resource Planning" and "Business Intelligence," the latest craze is likely to destroy at least as much value in the economy if it is not rescued from academics seeking to write papers and technology companies trying to sell their wares. The acid test of utility for any "emerging technology" is tangible shareholder value.

Wednesday, January 13, 2016

Favorable direction for machine learning

Machine learning, a misnomer for statistical concepts utilized to predict outcomes from large amounts of historical data, has been a brute-force approach. The infamous experiment by the search giant to replicate the human brain with neural nets demonstrated the misunderstanding that the organ works like a computer. Wasted efforts and investments in "Artificial Intelligence," led by famous technical schools in the East and the West, were largely based on the same misconception. All of these have definitively proven that engineers do not understand the human brain and are unlikely to do so for a long time. As a group, they are the least competent to model human intelligence.

A recent article in Science (1) seems to make incremental progress toward intelligence. The fact that machines need large amounts of data to "learn" anything should have instructed the purveyors of AI that the processes they are replicating have nothing to do with human intelligence. For a hundred thousand years, the quantum computer humans carry on their shoulders has specialized in pattern finding. Humans can find patterns from a few examples, and they can extend patterns without additional training data. They can even predict possible future patterns - something they have never seen before. Machines can do none of these.

Although the efforts of the NYU, MIT and University of Toronto team are admirable, they should be careful not to read too much into them. Optimization is not intelligence; it is just a more efficient way to reach a predetermined answer. Just as computer giants fall into the trap of mistaking immense computing power for intelligence, researchers should always benchmark their AI concepts against the first human they can find in the street - she is still immensely superior to neatly arranged silicon chips purported to replicate intelligence.

It is possible that humans could go extinct, seeking to replicate human intelligence in silicon. There are 7 billion unused quantum computers in the world - why not seek to connect them together?

(1) http://esciencenews.com/articles/2015/12/10/scientists.teach.machines.learn.humans

Tuesday, January 5, 2016

The Science of Economics

Many have wondered whether economics is, in fact, a science. Those who doubt it point to the lack of testability and replicability of experiments. Natural experiments in macro systems are often unique and, as they say in biological sciences, an "n of 1" is not useful. Further, predictions based on accepted theories often miss the mark. These appear to erect an insurmountable barrier to legitimizing the field of economics.

However, it is worthwhile to explore what is considered to be science. Physics, arguably the grandest of sciences, suffers from the same issues. Sure, human-scale physics is able to make eminently testable predictions based on Newtonian mechanics. Economics can also make such trivial predictions - for example, on how demand will change with prices. And quantum mechanics in the last hundred years has propelled the field further, making fantastic and testable hypotheses. Whole industries have grown around it, but those with knowledge and the associated humility will contend that much remains unknown. In economics, there has been an analogous movement - one in which uncertainty and flexibility govern, not numbers in a spreadsheet. In economics, however, this has been relegated to something not many understand and thus not fully compatible with academic tenure. That is fair - we have seen it before - but it does not indicate that the field is unscientific.

In biological sciences, experiments have been creating havoc. It is almost as if a hypothesis, once stated, can always be proven. In the world of empiricism, this may point to biases - confirmation and conformity - but more importantly, in commerce, it showcases a lack of understanding of sunk costs (pardon the non-scientific term). Once hundreds of millions have been plunked into "R&D," the "drug" has to work, for without that, the lives of many - if not the patients, then the employees of large companies - could be at risk. So testable hypotheses, albeit necessary, are not sufficient for science.

The dogma of science may be constraining development in many fields - such as economics, policy, psychology and social sciences. Those who are dogmatic may need to look back into their own fields before passing judgement.