Tuesday, August 2, 2016

Biology meets computing

The ease of biological manipulation by chemical means has held life sciences companies back for over a century from exploring a system that is equally receptive to electromagnetic interventions. Recent news that GSK has partnered with Alphabet to advance the field of bioelectronics is encouraging. The idea is to impart electrical signals to neurons via micro-implants to cure chronic diseases. Although it is a profitable path to pursue, some caution may be in order.

There is a long history of engineering experts attempting prescriptive interventions on biological systems with an expectation of deterministic outcomes. Humans are generally good at engineering and traditional computing, but biological systems do not behave in such a predictable fashion. Their engineering competence comes from selection, in which simple tools based on Newtonian physics let them survive the harsh environment they lived in for nearly a hundred thousand years. The modern game is distinctly different, and it is unlikely that they possess the skills to fast-track it with the hardware they have built up. With "Artificial Intelligence" soaking up the airwaves, it is important for technology giants to bias toward humility. With projects such as the "death cure" proving so slow to come to fruition that they annoy the technologists behind them, perhaps not getting too excited about curing diabetes through neuron stimulation is a good idea. After all, nature took four billion years to get here, albeit without a computer farm that soaks up a sizable share of the world's electricity production. Competing with nature is likely to take a bit more than that.

The convergence of biology and computing is unavoidable. However, it is unlikely to be advanced by those with stagnant notions of either field.

Sunday, July 31, 2016

The inverted U

The value individuals add to society appears to be in the shape of an inverted U. On the high end - philosophers, scientists and technologists, who seem to know everything, appear to add very little of practical value. Recently, a scientist remarked, "when we found the gravity waves, the whole world stopped," unaware of the fact that most of the world did not know or care. On the low end, ignorance is as dangerous as cyanide, as ably demonstrated by contemporary politicians. In this scheme, the highest value to society appears to come from those in the middle - not too ignorant and not too arrogant. On both ends, there is a common factor - a brain that is idle, because of either stupor or conceit.

To maximize the value that individuals contribute to society, it appears that they have to be lifted from the abyss of ignorance or shown a practical path down from the mountains. The perception of knowledge is as dangerous as a lack of awareness of ignorance - for both result in a loss of value to society through a lack of contribution. The former, locked behind ivy walls, lament a world of ignorance, and the latter, aided and abetted by their leaders, scorn knowledge. In the middle, humans of flesh and blood are caught in a time warp they simply cannot escape. Apathetic and lethargic, the middle withdraw from media events and elections and watch from the sidelines. This is a movie with a sad ending - dominated either by the ignorant or the arrogant, with little to differentiate between them.

Neither abundance nor lack of knowledge adds societal value. Those who are aware of their limitations do.


Wednesday, July 27, 2016

Simplified diagnostics

Research from Columbia (1) indicates that impairment in odor identification portends cognitive decline and possibly Alzheimer's disease. This simple test appears to dominate the more prevalent PET and MRI scans for the diagnosis of these ailments. It is a constant reminder that humans are far from mastering, by engineering and scientific means, the workings of the organ they carry on their shoulders, and that simpler measurements of inputs and outputs still dominate such mechanics.

The human brain has been an enigma. It consumes vast amounts of energy and often produces little utility - either to the immediate host or to society at large. It effectively delegates routine affairs to the vast central nervous system attached to it by a thread and maintains a small garbage-collection mechanism within itself to keep track, automatically, of the estate it manages. Often, it gets bored and shows signs of over-design, a quirk of evolution that endowed it with vast processing power by incorporating quantum computing in a small and efficient space. The lack of large memory has led it to venture into the production of heuristics and to show signs of intelligence. But acceleration in technology and society has stretched it, rendering some functions of importance irrelevant and elevating the irrelevant to importance.

Its decline is painful to close observers but not necessarily so for the host herself. In a regime that disallows rolling back time, the objective function has to be minimizing pain - physical or emotional. A system that pragmatically allows such an outcome by its own deterioration can only be considered good. Diagnostics that do not require cutting open the delicate organ or injecting it with radioactive materials may be the best, for the recognition of the start of decline of this beautiful machine is not necessarily good either for the host or for close observers.

(1) http://esciencenews.com/articles/2016/07/27/smell.test.may.predict.early.stages.alzheimers.disease

Tuesday, July 19, 2016

Shining a light on a black hole

Research from the Moscow Institute of Physics and Technology (MIPT) (1) hypothesizes a measurable and elegant difference between a black hole and a compact object. The event horizon of the black hole, defined by the Schwarzschild radius, is significant - anything slightly bigger shows fundamental differences in behavior. A beam of scattered particles shows discrete spectra in the presence of a compact object that escaped collapsing into a black hole. If it were a black hole, it would be in a constant process of collapse, with a complete stoppage of time for an external observer, resulting in a continuous and smooth spectrum.
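
For reference, the event-horizon scale mentioned above is set by the standard Schwarzschild radius; a short formula block, included only for context:

```latex
% Schwarzschild radius: the event-horizon scale referenced above
r_s = \frac{2GM}{c^{2}}
% For a solar-mass object, M \approx 1.99 \times 10^{30}\,\mathrm{kg},
% giving r_s \approx 2.95\,\mathrm{km}.
```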

The concept of a black hole has been an enigmatic thought experiment for physicists and amateurs alike. Contemporary theory fails at the singularity and speculates a stoppage of time inside the event horizon, something that cannot be fully envisioned by humans trained in the practical regime of Newtonian mechanics. A black hole will never stop collapsing from an external perspective, and so there cannot be any ex-post question on a black hole. Theories that attempt more detailed explanations beyond the event horizon are fantasy - just like the mathematically elegant string theory that cannot be tested. In spite of all the engineering progress of the last hundred years, fundamental understanding has remained akin to a black hole - in suspended animation. A handful of men and women from the turn of the last century remain responsible for most of the abstract knowledge that humans have accumulated. The reasons for this are unclear, but lack of imagination appears to be the prime suspect.

Fooling around with mathematics may give contemporary scientists satisfaction but explaining the stoppage of time will require more than that.

(1) http://esciencenews.com/articles/2016/07/01/the.energy.spectrum.particles.will.help.make.out.black.holes




Thursday, July 14, 2016

You are what you learn

Recent research from MIT (1) shows that the underlying attributes of music – consonance and dissonance – are not hard-wired. Contrasting the preferences of tribes with little exposure to Western music, such as some Amazon tribes, with those of people with gradually increasing exposure, culminating in accomplished American musicians, the researchers show that the preference for consonance over dissonance is learned. Music, thus, appears to be personal, with preferences largely generated by experience rather than by an innate mechanism in the brain.

In the contemporary regime of accelerating data, the brain is bombarded with an information stream it was never designed to tackle. An intricate quantum computer, specialized in pattern finding but with rather limited memory, the brain has been stretched to the point of undervaluing its advantages, and it has been struggling to keep large swaths of data in its limited memory banks. The learning processor, however, has been able to efficiently distill and store information as heuristics and dump the underlying raw data as fast as it can. As it loses history, the stored heuristics drive function and generate preferences, as if they were part of the original operating system.

The finding has implications for many areas, not the least of which is the treatment of Central Nervous System (CNS) diseases such as racism, alcoholism and ego. Fast discarding of underlying information due to a lack of storage capacity prevents back-testing of learned heuristics. A limited training set of underlying data could have irreversible and dramatic influences on end outcomes. More importantly, a brain that is trained with misguided heuristics cannot easily be retrained, as the neurons become rigid with incoherent cycles.

You are what you listen to, you are what you eat and more importantly, you are what you learn.

(1) http://esciencenews.com/articles/2016/07/13/why.we.music.we.do

Tuesday, July 5, 2016

The failure of finite elements

Engineers and mathematicians, with a core competence in building complex structures from elemental and standardized components, have had a tough time with domains not amenable to prescriptive and deterministic logic. These include high energy physics, biology, economics and artificial intelligence. The idea that the behavior of a system cannot be predicted from its components is foreign to most hard disciplines and to the applications of such sciences, supported by engineering and technology.

In complex organisms such as companies, it has long been recognized that outcomes cannot be predicted by an analysis of their components, however standardized those may be. The "rules of engagement," if not defined in elegant and closed-form mathematics, appear to be less relevant for those seeking precision. However, there is almost nothing in today's world that can be defined so precisely, and the recognition of this fact is possibly the first positive step toward embracing reality.

The interplay between physicists wanting to prove century old predictions and engineers standing ready to prove anything by heavy and complex machines, has been costly to society. The interplay between biologists and chemists wanting to influence systems with precise and targeted therapy and engineers standing ready to do so, has been costly to society. The interplay between economists looking to apply statistical precision to the unknown and engineers ready to build models to whatever is needed, has been costly to society.

Complex systems cannot be broken down into finite elements, for the behavior of the system does not emanate from its components.

Thursday, June 30, 2016

Cognitive monopoly

Major discontinuities in human history have often led to monopoly positions in subsequent markets, driven by winner-takes-all characteristics. In the modern economy, automobiles, airplanes and computers certainly fit this view. In the case of the internet, invented with taxpayer investments, attempts by a few to monopolize the flow of electrons have been averted thus far. But "net neutrality" is not something that rent-seeking behemoths are likely to accept in the long run, even if they did not pay for it.

The nascent wave - machine cognition - has the monopolists scrambling to get the upper hand. In this wave, capital, as measured by megaflops and terabytes, has a significant advantage. The leaders, flush with computing power, seem to believe that there is nothing that may challenge their positions. Their expectations of technology acceleration appear optimistic, but nonetheless we appear to be progressing along an interesting enough trajectory. Although many, including the world's leading scientist, are worried about runaway artificial intelligence, one could argue that there are more prosaic worries for the 7 billion around the world.

Monopolies generally destroy societal value. Even those with a charitable frame acquire the disease of the "God complex" as the money begins to flow in. Humans are simple, driven by ego and an objective function biased either toward basic necessities or toward irrational attributes that are difficult to tease out. Contemporary humans can be classified easily by intuition, without even the need for the simplest of algorithms - nearest neighbors - into those with access to information and those without. Politicians and policy makers have been perplexed by the fact that such a simple segmentation scheme seems to work in every part of the world's population, from countries to counties and cities. Cognition monopolists will make it infinitely worse.
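
As an aside, the nearest-neighbor classifier mentioned above is simple enough to sketch in a few lines; the two-feature "access to information" example below is purely illustrative, with made-up data.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training points."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distance to every training point
    nearest = np.argsort(dists)[:k]               # indices of the k closest points
    votes = y_train[nearest]
    return np.bincount(votes).argmax()            # majority label

# Purely illustrative, made-up features: [internet access (0/1), years of schooling]
X = np.array([[1, 16], [1, 12], [0, 6], [0, 8], [1, 18], [0, 5]])
y = np.array([1, 1, 0, 0, 1, 0])                  # 1 = "has access to information"
print(knn_predict(X, y, np.array([1, 14])))       # -> 1
```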

Can mathematics be monopolized? The simple answer is yes. In a regime of brute force over computing power that is highly available to a few, answers could be found by the blind and the dumb. Perhaps there is still hope for the rest, as we have seen this movie before.



Saturday, June 25, 2016

Clans continue

It appears clear that habits formed over a hundred thousand years cannot be changed in a mere hundred. As homo sapiens ventured out of the African savannah, they were still tightly organized as small clans, less than a hundred in strength, each with its own unique language, culture, religion and morality. In Europe and Asia, within hundreds of years of arrival, they erased their close relatives with astounding efficiency. They also successfully navigated disease and climatic change that reduced them to a few thousand - emerging out of the bottleneck with even tighter clan relationships.

Technology - aircraft, computers and the internet - opened up the modern economy in the blink of an eye. Economists, excited by the possibilities, argued for the opening up of countries, continents and economies, but they did not account for the behavior patterns integrated deeply into the human psyche. Countries cling to their languages and apparent cultural nuances, aided by politicians who, in autocratic and socialistic regimes, seem to have convinced the populace that they can implement strategic policies that will make their countries "great again." In advanced democracies, a large percentage of the population seems to have self-taught the same ideas, and in some rare cases they have found surrogates who will sing the same tune as the autocrats, even though they do not know the words to the music. A dangerous trend has emerged in clans that profess to be democratic and sophisticated. The question is whether learning from mistakes is possible - something that made humans successful in the past. Ironically, in the complex modern economy, the outcomes are not clearly observable and often have long cycles. Getting mauled by a tiger is immediate feedback, but a stagnant and deteriorating economy offers little feedback to the larger population.

The modern economy, still largely driven by the clan instincts of the seven billion who occupy the Earth, cannot be shocked out of its stupor by logic. Perhaps photographs from space that show the little blue spot in the midst of chaos may appeal to the artistic side of humans. A little closer, they will find none of the demarcations depicted on maps and globes. After all, humans have shown great capability to think abstractly, albeit such thoughts are not often tested by logic.


Saturday, May 28, 2016

Redefining Intelligence

Intelligence, natural or artificial, has been a difficult concept to define and understand. Methods of measuring intelligence seem to favor speed and efficiency in pattern finding. "IQ tests" certainly measure the ability to find patterns, and artificial intelligence aficionados have spent three decades teaching computers to get better at the same. Standardized tests, following the same template, appear to measure the same attribute but couch the results as "aptitude" - perhaps to make them sound more plausible. And, across all dimensions of education and testing, this notion of intelligence, and hence "aptitude," appears prevalent.

However, can the speed of pattern finding be used as the only metric for intelligence? Certainly in prototypical systems and societies, efficiency in finding food (energy) and fast replication are dominant. Pattern finding is likely the most important skill in this context. If so, then, one could argue that the status-quo definition of intelligence is a measurement of a characteristic that is most useful to maximize a simple objective function, governed largely by food and replicability. At the very least, a thought experiment may be in order to imagine intelligence in higher order societies.

If intelligence is redefined as the differential of the speed of pattern finding - an acceleration in pattern finding - then it can incorporate higher-order learning. In societies where such a metric is dominant, the speed of finding patterns from historical data, albeit important, may not qualify as intelligence. One could easily see systems that have a very slow speed of pattern finding at inception if energy is focused more on the differential, allowing such systems to gain knowledge exponentially at later stages. Sluggish and dumb, such participants would certainly be eradicated quickly in prototypical societies, before they could demonstrate the accelerating phase of knowledge creation.
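
One way to phrase the proposed redefinition, as a sketch: if P(t) is the rate of pattern finding at time t, the conventional metric scores the rate itself, while the redefinition scores its growth.

```latex
% Let P(t) be the rate of pattern finding at time t.
% Conventional metric (IQ-style tests): score the rate itself
I_{\mathrm{conventional}}(t) = P(t)
% Proposed redefinition: score the acceleration of pattern finding
I_{\mathrm{proposed}}(t) = \frac{dP}{dt}
% A system with small P(0) but large dP/dt looks "sluggish and dumb" early,
% yet overtakes fast pattern finders whose P(t) stays flat.
```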

Intelligence - ill defined and poorly measured - may need to be rethought if humans are to advance to a Level 1 society. It seems unlikely.

Monday, May 23, 2016

Salt water bubbles

Economists closer to salt water appear to be prone to thoughts of inefficiency and bubbles in the financial markets, something that can be cured by a single trip to the windy city. A recent study from Columbia University (1) asserts that it found over 13,000 bubbles in the stock market between 2000 and 2013. Using supercomputers, no less, and "big data," the authors appear to have "conclusively shown" that stock prices take wild and persistent excursions from their "fair values." Unfortunately, these academics, who profess to be "data scientists," are yet to encounter the phenomenon of the "random walk," further evidence that "data scientists" should stay away from financial markets. After all, the "physicists" who descended on Wall Street have had a checkered history of "abnormal returns" wrapped in consistently negative alpha.

The remark from a Harvard graduate student - "I expected to see lots of bubbles in 2009, after the crash, but there were a lot before and a lot after" - is symptomatic of the problem faced by "data scientists" seeking problems to solve in super-domains they have no clue about, where the participants who determine outcomes are themselves equipped with pattern-finding technology. They may have better luck in real markets, for prices in financial markets are determined by a large number of participants, each with her own inefficient algorithms. The most troubling aspect of the study is that its authors believe that "a bubble happens when the price of an asset, be it gold, housing or stocks, is more than what a rational person would be willing to pay based on its expected future cash flows." In a world immersed in intellectual property, where future cash flows cannot be forecast precisely, the value of an asset cannot be determined by such simple constructs, which have been rendered invalid for decades.
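
A minimal simulation, assuming a pure random walk with no "fair value" dynamics at all, shows how easily such price paths produce long, bubble-like excursions that a naive detector would flag; the 10% threshold and 250-day window below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n_days = 3500                                        # roughly the 2000-2013 span in trading days
log_price = np.cumsum(rng.normal(0, 0.01, n_days))   # i.i.d. daily returns: a pure random walk

# Naive "bubble" flag: log-price more than 10% above its trailing 250-day mean
window = 250
flags = sum(
    log_price[t] - log_price[t - window:t].mean() > 0.10
    for t in range(window, n_days)
)
print(f"Days flagged as 'bubble-like' in a featureless random walk: {flags}")
```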

The lure of financial markets has been problematic for "data scientists" and "physicists." However, a cure is readily available in the academic literature emanating from the sixties.

(1) http://esciencenews.com/articles/2016/05/04/stocks.overvalued.longer.and.more.often.previously.thought.says.study

Monday, May 16, 2016

Small step toward bigger hype

Recent research from the University of Liverpool (1) suggests a method by which computers could learn languages through semantic representation and similarity look-ups. Although this may be a step in the right direction, it is important to remember that most of the work in teaching computers language, or even fancy tricks, is not in the realm of "artificial intelligence"; rather, it belongs to the age-old and somewhat archaic notion of expert systems. Computer giants, while solving grand problems such as chess, Jeopardy, Go and self-driving cars, seem to have forgotten that rules-based expert systems have been around from the inception of computers, long before some of these companies were founded. The fact that faster hardware can churn through a larger set of rules more quickly is not advancing intelligence, but it is certainly helping efficient computing.
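
The "similarity look-up" idea is, at its core, close to the standard cosine-similarity rule used with word embeddings; a rough sketch, with hand-made toy vectors that are not from the cited work:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy, hand-made "semantic" vectors for illustration only
embeddings = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

query = embeddings["king"]
best = max((w for w in embeddings if w != "king"),
           key=lambda w: cosine_similarity(query, embeddings[w]))
print(best)   # -> "queen": the nearest neighbor in the toy semantic space
```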

Engineering schools appear to still teach ideas that are already obsolete. Programming languages have been frozen in time, with prescriptive syntax and rigid control flow. Today's high-level languages are certainly practical and immensely capable of producing inferior applications. Even those who could have "swiftly" assembled knowledge from previous attempts seem to have concocted a compiler that borrows from the worst of what has gone before it. As they proclaim "3 billion devices already run it" every time an update is pushed, or conduct conferences around the globe dotting and netting, the behemoths don't seem to understand that their technologies have inherent limitations.

Computer scientists, locked behind ivy walls, are given skills that the world does not need anymore.

(1) http://esciencenews.com/articles/2016/05/06/teaching.computers.understand.human.languages


Thursday, May 12, 2016

Nutritional genetics

Research from Indiana University (1) speculates that physical traits could be substantially impacted by food. The adage that "you are what you eat" appears to work at a deeper, genetic level. In low-complexity biological systems, such as ants and bees, variation in food at the larval stage seems to explain specialization at the genetic level. If true, this has implications beyond what has been observed.

Food, a complex external chemical, has to be metabolized, utilized and purged by biological systems routinely. Although it is clear that available energy content and processing efficiency will depend on the variation and complexity in inputs, the idea that food could cause genetic specialization is fascinating. More importantly, this may lead to better design of food to favorably impact physical and mental conditions, the latter possibly holding higher promise for humans.

Ancient cultures and medicines have routinely relied on food as the primary way to remedy tactical issues. The Indiana research may provide a path to propel this idea into more systematic and planned impacts.

(1) http://esciencenews.com/articles/2016/05/12/you.are.what.you.eat.iu.biologists.map.genetic.pathways.nutrition.based.species.traits

Thursday, May 5, 2016

No safety net

Recent research from Johns Hopkins (1) suggests there are over a quarter of a million deaths in the US per year due to medical errors. It is a sobering observation that future generations will look back on with anguish and, perhaps, incredulity. At the height of technology, we are slipping, not because of a lack of know-how, but rather a lack of application. One preventable death is one too many, and the fact that medical errors are the third leading cause of death in the US is immensely troubling.

Unfortunately, technology by itself does not solve problems. Bigger data and faster computers are likely irrelevant if they cannot fundamentally influence decision processes and allow information flow to enhance decision quality. It is not about precision - there is no such thing - but about the systematic use of all available information at the point of decision. Further, the human brain, with its inherent limitations, is unable to minimize downside risk in a regime of high utilization and volatility. A loss of life, a traumatic and life-changing event for any healthcare provider, looms large, but the environment simply does not allow anything more than what is tactically possible. The lack of a safety net beneath cascading, complex and error-prone processes suggests the need for a sudden and impactful change that most technology companies are unable to help with.

It is high time that healthcare embraced practical applications of available technologies to improve patient health and welfare.

(1) http://esciencenews.com/articles/2016/05/04/study.suggests.medical.errors.now.third.leading.cause.death.us

Saturday, April 30, 2016

Predictions with a single observation

A recent study from the University of Rochester (1) claims to improve the "Drake equation" - a constant reminder that multiplying random numbers does not provide any insights that are not already present in the components. Now, with exoplanets galore, the study, which appeared in Astrobiology, claims to put a "pessimistic estimate" on the probability of the non-existence of advanced civilizations elsewhere in the universe. Just as in previous attempts, the study suffers from traditional statistics. Most agree that a single observation is not sufficient to predict anything, let alone the age and capabilities of such civilizations or the distance from Earth at which they could reasonably exist.
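
The objection about multiplying random numbers can be made concrete: a product of several wide, guessed distributions yields an answer whose spread spans many orders of magnitude, so the guesses, not the data, determine the result. A Monte Carlo sketch of a Drake-style product, in which every range is an arbitrary placeholder rather than an estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Drake-style product of guessed factors; every range below is an arbitrary placeholder
R_star = rng.uniform(1, 10, n)          # star formation rate
f_p    = rng.uniform(0.1, 1.0, n)       # fraction of stars with planets
n_e    = rng.uniform(0.1, 5.0, n)       # habitable planets per system
f_l    = 10 ** rng.uniform(-6, 0, n)    # fraction developing life
f_i    = 10 ** rng.uniform(-6, 0, n)    # fraction developing intelligence
f_c    = 10 ** rng.uniform(-3, 0, n)    # fraction that communicate
L      = 10 ** rng.uniform(2, 8, n)     # civilization lifetime in years

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Median N: {np.median(N):.3g}")
print(f"5th-95th percentile: {np.percentile(N, 5):.3g} to {np.percentile(N, 95):.3g}")
# The percentile span covers many orders of magnitude: the inputs' guesses,
# not observations, determine the answer.
```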

As the authors note, the space-time window afforded to humans is too narrow to prove or disprove anything. However, the tendency to be amazed by large numbers and the multiplicative effects of such constructs has led scientists to make non-scientific claims. Until at least a second data point becomes available, the effort expended on statistical analysis in this area is a waste of time. Availability of data is necessary but not sufficient to assign probabilities. Even those clinging to normality statistics, centuries old by now, know that they are not a good tool for making predictions.

More importantly, those awaiting ET's arrival have almost infinite flexibility to keep on searching. If one has a hypothesis, then an accumulation of negative findings against it, regardless of how many trials are possible, has to be given due consideration. As an example, if one claims favorable conditions exist for life on Enceladus, Saturn's famous moon - such as water, oxygen and a heat source - then investing in the exploration of the icy rock is reasonable. However, if one comes out empty, that cannot be irrelevant. Just because there are a trillion other rocks, in the solar system alone, that could be explored, one cannot simply ignore such an observation. At the very least, it should challenge the assumptions used by the space agency and others to justify such explorations. This "new" statistics - perhaps call it the "statistics of large numbers" - in which no negative observation has any utility, is very costly, even though it is well positioned to pump out publications.

Scientists engaged in irrelevant and invalid observations, aided by large numbers, may need to challenge themselves to advance the field.

(1) http://esciencenews.com/articles/2016/04/28/are.we.alone.setting.some.limits.our.uniqueness

Tuesday, April 26, 2016

Uncertain networks

Recent research from MIT, Chicago and Harvard (1) contends that smaller shocks in the economy could be magnified significantly by network effects. If true, it may provide guidance for policy that tries to "jump start" large economic systems through targeted investments. If the transmission of such shocks across the economy is predictable, then it could impact macro-economic decisions favorably. However, looking back with a deterministic view of network evolution may have some downside.

Economic growth is driven by the conscious harvesting of uncertainty and not by strategic investments by bureaucrats or even corporations. Further, networks are in a constant state of evolution. Measuring GDP impact, a backward-looking measure, has less meaning in an economy driven by information, innovation and intellectual property. Firms locked into the status quo, with a rigid view of demand and supply, indeed fall prey to shocks amplified by static networks. But those keenly aware of unpredictable uncertainty and the value of flexibility could certainly surpass such external noise. The question is not how the present network amplifies shocks but rather how the networks are built. If they are built by organizations with a static view of the future, then they will be brittle and consumed by minor shocks. The measurement of intellectual property by patents is symptomatic of the adherence to known metrics and a lack of awareness of where value originates.

Empirical analyses in the context of accepted theories have less value for the future - policy or not. The field of economics has to evolve with the modern economy. Lack of innovation will always have a negative effect on the economy - no further analysis is needed.

(1) http://esciencenews.com/articles/2016/04/06/how.network.effects.hurt.economies

Monday, April 25, 2016

Biological storage

Recent news (1) that University of Washington researchers have successfully stored and retrieved data using DNA molecules is exciting. As the world accumulates data - already many tens of millions of gigabytes, growing at an alarming rate - storage capacity and efficiency are becoming top priorities in computing. With materials science still lagging and computer scientists clinging to the silicon status quo, it is important that the field take a biological direction. After all, nature has coded and retrieved complex information forever. Using the same mechanism is likely a few orders of magnitude more efficient than what is currently available.

DNA, the most important biological breakthrough of the past four billion years, has been almost incomprehensible to humans, arguably the best product of evolution. Lately, however, they have been able to understand it a bit better. Although some argue that the human genome map is the end game, it is likely just a humble beginning. The multifactorial flexibility afforded by the DNA molecule may allow newer ways to store binary data, making it akin to that other belated innovation - quantum computing. There, thus far, research has focused on mechanistic attempts to force-fit the idea into the silicon matrix. Taking a biological route, perhaps aided by the best functioning quantum computer, the human brain, may be a more profitable path.
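
The basic encoding idea is simple to sketch: with four bases, each nucleotide can carry two bits. The mapping below is one arbitrary choice for illustration, not the scheme used by the UW team, which adds addressing and error correction.

```python
# One arbitrary 2-bits-per-base mapping; real DNA storage schemes add error
# correction, addressing, and avoid problematic sequences such as long homopolymers.
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {v: k for k, v in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map every two bits of input to one nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Reverse the mapping: four bases back to one byte."""
    bits = "".join(BASE_TO_BITS[b] for b in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand)          # -> "CGGACGGC"
print(decode(strand))  # -> b"hi"
```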

Biology could accelerate lagging innovation in material sciences and computer science.

(1) http://esciencenews.com/articles/2016/04/07/uw.team.stores.digital.images.dna.and.retrieves.them.perfectly

Sunday, April 17, 2016

Bacteria rising

Recent research from Michigan State University (1) demonstrating the rise of multi-drug-resistant bacteria due to the overuse of antibiotics in animals is troubling. It has long been known that flu originates in farms with multi-species interactions. As mentioned in the article, the swine farms in China are particularly problematic, as they allow easy gene transfers among bacteria. This, in conjunction with the lack of antibiotic research for decades due to declining commercial economics, could result in a perfect storm.

Bacteria have been dominant all through the history of the planet. A robust architecture, with fast evolution by sheer numbers, has led them to largely supersede every other biological life form on earth. For the past several decades, they have been put on the back foot, for the first time, by humans. All they need, however, is a sufficient number of trials to develop resistance against any anti-bacterial agent. Data show that they are well on their way, thanks to the variety of experiments afforded to them by inter-species breeding and the overuse of known agents.

The economics of this indicates that commercial organizations are unlikely to focus on it till it is too late. If R&D, commercial or publicly funded, is not focused on this developing problem, we may be heading toward a regime that makes Ebola look like a household pet. In a highly connected world of intercontinental travel, it is easy for the single-celled organism to hitch a ride to wherever it would like to go. Thus, local efforts are not sufficient, and a true global push is needed to compete against the abundant experience bacteria have collected over four billion years.

R&D prioritization at the societal level needs to take into account the downside risk and value of investments. 


(1) http://esciencenews.com/articles/2016/04/12/antibiotic.resistance.genes.increasing

Sunday, April 10, 2016

Hole in the soul

The super-void that presented itself in the background radiation, sporting a size of close to two billion light years, has baffled scientists. A few years after its discovery, no reasonable explanation is forthcoming. The standard model, on which most still pin their research, has shown so many cracks that the scientists who adhere to it are beginning to look like economists, who have similar difficulty letting go of established theories. Hunting in the particle forest, either to propose new particles or to prove that the hypothesized ones indeed exist, has been the favorite pastime of physicists. Recently, they even measured the reverberation of gravity waves, generated by an event over a billion years ago, to the tune of the diameter of a proton. Now, it is nearly impossible to disprove anything in physics.

Biologists and chemists are in the same spot. Technology is advancing so fast that scientists are running out of hypotheses to prove. There are so many engineers and technologists, pumped out by the elite educational institutions around the world, who stand ready to prove anything in science. We are fast approaching a regime with a dearth of ideas and an oversupply of proofs. Is this what was envisioned a few decades ago by visionary scientists, who predicted a future in which there would be nothing more to prove? If so, it would be bleak, indeed.

Creating hypotheses, a skill that was left undernourished for a few decades, may need to be brought back.

Tuesday, March 29, 2016

Return to hardware

Hardware design has been losing luster for many decades in the world of computers. Software has been riding high, partly aided by hype and partly due to the younger crowd plunging deep into apps for a quick buck and the associated fame. Monopolies have been created on inferior operating systems and office automation, while those opposed to them have been chasing public-domain noise. Even educational institutions, following the latest ephemeral trends, with half-lives of runway fashions, have been churning out courses with little utility for the future. Some have been putting content online, and others still want students to toil under fluorescent lighting at wooden desks while picking up the skills of the future.

Computer science has gone astray. Humans, susceptible to incrementalism, have been chasing false dreams on antiquated frameworks. Just like their predecessors, modern humans attempt to scale inferior performance by massive parallel processing. They stack circuits ever closer and network computers in ever larger numbers in an attempt to squeeze out performance. Meanwhile, software companies, hungry for speed and scope, have created clouds of silicon that appear to suck up a substantial share of energy production. Data have been accumulating in warehouses, some never to see the light of day and others creating havoc and panic in complex organizations. Economists often worry about bubbles, for some are not so sanguine about rationality, but technologists never dream of a software bubble, as they presuppose such conditions.

It's time to leave synthesized voices, fake artificial intelligence and bleak games behind and return to hardware. Without two orders of magnitude of performance improvement, there are very few apps that would move humanity, and that can only come from practical quantum computing. Notwithstanding the much anticipated version X of existing operating systems and mobile phones, without innovation in hardware, humans will swim in a sea of mediocrity forever. There are glimmers of hope, however. Recent news that larger quantum circuits could be built in more direct ways (1) is encouraging.

Educational institutions have an obligation to move society toward the future and not just follow trends that will fill up classrooms - physical or virtual.


(1) http://esciencenews.com/articles/2016/03/26/unlocking.gates.quantum.computing




Friday, March 25, 2016

Go AI??

Artificial Intelligence is in the air again. It is such a nice concept that its inventors have been suspected of nourishing the "God complex." Deep Blue triumphed in chess, Watson beat out mere humans in Jeopardy and can understand how music is made and speak about it in a synthesized human voice, and now the famous search company has conquered Go. What's left in AI to solve?

Silicon has been alluring to engineers for four decades. They could double the speed of the "chip" every 18 months, and the mere extrapolation of this idea would have instructed even those less mathematically endowed that the belated singularity is indeed near. Now that the game of Go, which has an astronomically large number of possible move sequences, has been conclusively solved by the electronic brain, we are likely nearing the inevitable. And that is bad news, especially for those in school toiling with such mundane subjects as computer science, programming and application development. Very soon, all of these will be delegated to machines, most of which will be artificially intelligent to a level perhaps surpassing even contemporary politicians. Some claimed decades ago that humans are nearing a state of "perfect knowledge." In physics, the speculation has been that no mystery will remain in a few decades. Now humanity has taken an important leap to a future in which artificial intelligence can quickly mop up any remaining mystery in any field - physics, medicine and even economics.

Chess, Jeopardy, self driving cars, neural nets seeking cat videos, twitter girl, Go... extrapolation certainly indicates the unstoppable triumph of artificial intelligence. The only remaining mystery is what billions of ordinary humans would do. The quantum computer they carry on their shoulders will become virtually useless in this regime of artificial intelligence dominance.

Friday, March 18, 2016

Scaling humanity

Reaching a critical mass and the minimum efficient scale are important concepts for many systems - biological, economic and business. Humans, separated by space and time for most of their history, could not reach this inevitable threshold for nearly a hundred thousand years. Supported by technology, there are encouraging signs that we are fast approaching the minimum efficient scale of knowledge creation and consumption. The planet remains heavily endowed, and it can easily support many multiples of humans as long as they are able to network their brains for the benefit of all.

What appears to be lacking is a framework. Weak earlier attempts, such as religion and countries, simply could not sustain a momentum that would unify humans in sufficient numbers to reach the necessary scale. Basic sciences, albeit attractive in many ways, could not light the passion underneath the human kiln. The strong forces operating to separate rather than unify, aided by the clan experiences of humans, have had the upper hand thus far. However, technology is making irreversible impacts on the human psyche, propelling them to the next level. If so, they could make the planet eminently contact-worthy for outsiders.

Humans have been here before, however. In all cases, it appears that they have come up short. Insufficient technology for networking appears to be the common culprit in previous attempts. Stitching human brains together to reach the minimum efficient scale has eluded them, a failure compounded by hard constraints such as life span. Shrinking space and time, as well as expanding life spans, appear to be necessary conditions for sustainable development. Here, technology seems to show encouraging signs.

Space agencies and physicists lamenting the lack of "contact" may be well advised to ask why such "contact" would be made.

Friday, March 11, 2016

Mathematical music


Recent research from the University of Tokyo (1) that proposes a deeper dive into the structure of music by analyzing "the recurrence plot of a recurrence plot," in an effort to understand the emotive power of music, could be misplaced. Mathematical probing into the structure of creative work has often failed to capture the substance of the emotions that drive such phenomena. Mathematics has been an important language in the history of human development. However, humans are less perfect than the constructs math can reasonably model, and they often exhibit irrationality and creativity at random. It is the lack of "structure" that defines creativity, and the effort expended by educational institutions to define such an irrational phenomenon in a language that mathematicians can understand could be wasted.
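
For context, a recurrence plot of a signal is straightforward to compute; the cited work iterates the construction, taking a recurrence plot of a recurrence plot. A minimal sketch of the first level, with an arbitrary threshold and a toy signal standing in for an audio feature series:

```python
import numpy as np

def recurrence_plot(x, eps=0.1):
    """Binary matrix R[i, j] = 1 where samples i and j are within eps of each other."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])   # pairwise distances between samples
    return (dist < eps).astype(int)

# Toy signal: a short sine wave in place of a real audio feature series
t = np.linspace(0, 4 * np.pi, 200)
R = recurrence_plot(np.sin(t), eps=0.1)
print(R.shape, R.sum())   # 200x200 matrix; the diagonal and periodic bands mark recurrences
```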

Human emotions have been enigmatic - they have escaped mathematical modeling thus far. Evolution seems to have been flexible enough to allow human behavior that has little value in hunting and survival. However, such work sustained the human psyche in a world of stress and tribulation and lifted it into a realm that is mathematically undefinable. The visions of Einstein and Bach, unconstrained by mathematics, propelled humanity forward. As the engineers attempt to prove that "gravity waves" exist a century after they were proposed by sheer creative thought, one has to wonder if humanity is being sterilized of such a notion.

Mathematics, an idealistic concept, is inept at the analysis of human emotions.

(1) http://scitation.aip.org/content/aip/journal/chaos/26/2/10.1063/1.4941371

Wednesday, February 24, 2016

Lawless innovation

A recent study (1) that argues that "constituency statutes have significant effects on the quantity and quality of innovation" in companies seems to fall into the familiar trap of pitting stakeholder value against shareholder value. For many decades, the argument has been that companies and societies (e.g. Scandinavia) that focus on the value of stakeholders - employees, communities and the environment - do better than those focused on shareholder value (e.g. the US). This stems from a wrong perception that a focus on shareholder value means "short term profits," while stakeholder value maximization is a long-term process. There is significant empirical evidence that the market and investors are not "short term focused" and are fully capable of assessing and valuing any choices (short or long term) made by the managers of the firm. Assuming that markets are myopic, without evidence, may not be a good thing.

It is important not to assume that the first correlation found in the data is the underlying cause. Note that stakeholder value choices, unless they translate into shareholder value over some horizon, are value-destroying. Further, the "quantity and quality of innovation" are difficult to measure. A few innovations are responsible for most of the GDP in the economy, and in winner-takes-all markets, the marginal benefit of innovation in aggregate is simply noise. A more interesting question concerns the structure, systems and strategies of firms (2) that encourage innovation. It is possible that innovative firms will remain so, regardless of the bureaucracies and statutes imposed on them.

Innovation emanates from the culture of the firm - not from laws created by those out of touch with the present economy.


(1) http://esciencenews.com/articles/2016/02/18/a.stake.innovation
(2) https://www.crcpress.com/Flexibility-Flexible-Companies-for-the-Uncertain-World/Eapen/9781439816325

Saturday, February 6, 2016

Innovative Life Sciences

Recent research (1) showing that graphene could be utilized to interact with neurons opens up a new avenue of research and practice to cure cognitive disabilities and possibly treat CNS diseases. More importantly, this is a profitable direction for biosciences to accelerate innovation. From the moment humans figured out they could impact the system by the ingestion of chemicals, they have been focused singularly on that. The system, however, is clearly electromagnetochemical, providing plenty of opportunities for more elegant interventions without multifactorial and unpredictable long-term effects. Chemistry has plateaued, and life sciences companies with a vision of the future have to move in a direction they are uncomfortable with.

Such an innovative departure in life sciences will take new leadership and a collaboration with emerging ideas and technologies. The impact will be far reaching - possibly replacing chemicals as the only non-invasive intervention. Medical education has to consider robotics, precision electronics and even high energy physics. Computer science and information science have to become integral to diagnosis and treatment. The meaning of intervention has to change - with impacts on the brain and the body simultaneously for optimum effect. In a regime of subdued bugs, unable to threaten the mighty human, it is going to be a battle against the body and the mind. Here, chemicals fail.

Innovation in life sciences will not come from incremental improvements to existing therapies; it will come from embracing hitherto unknown intervention modalities.

(1) http://esciencenews.com/articles/2016/01/29/graphene.shown.safely.interact.with.neurons.brain

Saturday, January 30, 2016

Data science blindspot

Recent research from MIT that claims their "data science machine" does better than humans at building predictive models is symptomatic of the blind spots affecting data scientists - both the human and non-human variety. Automation of data analytics is not new - some have been doing it for many decades. Feature selection and model building can certainly be optimized, and that is old news. The problem remains how such "analytics" ultimately add value to the enterprise. This is not a "data science problem" - it is a business and economics problem.
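
The kind of automation being claimed is, in essence, a search over feature selectors and model settings; the scikit-learn sketch below, on synthetic data with an arbitrary search space, shows how routine that optimization already is. It is an illustration of the general idea, not the MIT system.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

# Synthetic data standing in for an enterprise dataset
X, y = make_classification(n_samples=500, n_features=30, n_informative=5, random_state=0)

pipeline = Pipeline([
    ("select", SelectKBest(score_func=f_classif)),   # automated feature selection
    ("model", LogisticRegression(max_iter=1000)),    # simple model on the selected features
])

search = GridSearchCV(
    pipeline,
    {"select__k": [5, 10, 20], "model__C": [0.1, 1.0, 10.0]},   # arbitrary search space
    cv=5,
)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```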

Investments by companies in technologies that claim to read massive amounts of data quickly in an effort to create intelligence are unlikely to have positive returns for their owners. Information technology companies, with their tendency to formulate problems primarily as computation problems, mostly destroy value for companies. Sure, it is an easy way to sell hardware and databases, but it has very little impact on the ultimate decisions that affect companies. What is needed here is a combination of domain knowledge and analytics - something neither the PowerPoint gurus nor the propeller-heads can deliver by themselves. Real insights sit above such theatrics, and they are not easily accessible to decision-makers in companies.

Just like the previous "information technology waves" called "Enterprise Resource Planning" and "Business Intelligence," the latest craze is likely to destroy at least as much value in the economy if it is not rescued from academics seeking to write papers and technology companies trying to sell their wares. The acid test of utility for any "emerging technology" is tangible shareholder value.

Wednesday, January 13, 2016

Favorable direction for machine learning

Machine learning, a misnomer for statistical concepts utilized to predict outcomes based on large amounts of historical data, has been a brute-force approach. The infamous experiment by the search giant to replicate the human brain with neural nets demonstrated a misunderstanding that the organ works like a computer. Wasted efforts and investments in "Artificial Intelligence," led by famous technical schools in the East and the West, were largely based on the same misconception. All of these have definitively proven that engineers do not understand the human brain and are unlikely to do so for a long time. As a group, they are the least competent to model human intelligence.

A recent article in Science (1) seems to make incremental progress toward intelligence. The fact that machines need large amounts of data to "learn" anything should have instructed the purveyors of AI that the processes they are replicating have nothing to do with human intelligence. For a hundred thousand years, the quantum computer humans carry on their shoulders has specialized in pattern finding. Humans can do so with few examples, and they can extend patterns without additional training data. They can even predict possible future patterns, something they have not seen before. Machines are unable to do any of this.

Although the efforts of the NYU, MIT and University of Toronto team are admirable, they should be careful not to read too much into them. Optimization is not intelligence; it is just a more efficient way to reach a predetermined answer. Just as computer giants fall into the trap of mistaking immense computing power for intelligence, researchers should always benchmark their AI concepts against the first human they can find in the street - she is still immensely superior to neatly arranged silicon chips purported to replicate intelligence.

It is possible that humans could go extinct, seeking to replicate human intelligence in silicon. There are 7 billion unused quantum computers in the world - why not seek to connect them together?

(1) http://esciencenews.com/articles/2015/12/10/scientists.teach.machines.learn.humans

Tuesday, January 5, 2016

The Science of Economics

Many have wondered whether economics is, in fact, a science. Those who doubt it point to the lack of testability and replicability of experiments. Natural experiments in macro systems are often unique, and as they say in the biological sciences, an "n of 1" is not useful. Further, predictions based on accepted theories often miss the mark. These appear to erect an insurmountable barrier to legitimizing the field of economics.

However, it is worthwhile to explore what is considered to be science. Physics, arguably the grandest of sciences, suffers from the same issues. Sure, human-scale physics is able to make eminently testable predictions based on Newtonian mechanics. Economics can also make such trivial predictions - for example, on how demand will change with prices. And quantum mechanics in the last hundred years has propelled the field further, making fantastic and testable hypotheses. Whole industries have grown around it, but those with knowledge and the associated humility will contend that much remains unknown. In economics, there has been an analogous movement - one in which uncertainty and flexibility govern, not numbers in a spreadsheet. However, in economics, this has been relegated to something not many understand and thus not fully compatible with academic tenure. That is fair; we have seen that before, but it does not indicate that the field is not scientific.

In the biological sciences, experiments have been creating havoc. It is almost as if a hypothesis, once stated, could always be proven. In the world of empiricism, this may point to biases - confirmation and conformity - but more importantly, in commerce, it showcases a lack of understanding of sunk costs (pardon the non-scientific term). Once hundreds of millions have been plunked into "R&D," the "drug" has to work, for without that, the lives of many - if not the patients, then the employees of large companies - could be at risk. So testable hypotheses in themselves, albeit necessary, are not sufficient for science.

The dogma of science may be constraining development in many fields - such as economics, policy, psychology and social sciences. Those who are dogmatic may need to look back into their own fields before passing judgement.

Tuesday, December 29, 2015

Terraforming the Earth


As space agencies around the world race to the red planet and beyond in an attempt to satisfy ego and ignorance, they may want to focus their limited resources on real tactical problems facing the planet. As those who had a tough time with science at school rise to make policies that affect humanity, the danger of human extinction is now more real than ever. To top it off, those who were good at science appear to get real excitement from looking at pictures of the dwarf planet and designing ways to punch a one-way ticket for humans to a planet close by. Admirable, of course, but completely irrelevant.

It is time NASA had a real resource and portfolio management process. Engineers, albeit good at what they do, often fail to see the big picture. The risks of an asteroid impact or a runaway greenhouse effect are so high in close quarters that it does not make any sense to allocate resources to finding green men in the solar system or among the thousands of exoplanets that were found recently. Terraforming the Earth, although not as exciting as the projects undertaken by the space agency, has real utility for humanity. Humans, designed to burn anything they get their hands on, have cooked up a real mess that requires rectifying. It is a solvable problem if the best technologists in the world put their minds to it and perhaps forget the exploration of the universe for a little bit. Sure, this may not propel careers or pave the way to easy Nobel prizes, but limited resources have to be optimally deployed for the sake of humanity. Additionally, although not as exciting as science fiction involving a black hole from the LHC devouring us all, an asteroid impact that could substantially extinguish humans is real. Having "plans on paper" to "bomb the rock" may not be realistic. It may require real technology to nudge an Earth-bound catastrophe to safety.

Those who are responsible and accountable for deploying the limited resources to practical uses may need to refocus their priorities. Ego cannot be part of the objective function, rationality has to be.

Monday, December 28, 2015

Barren universe

Notwithstanding the efforts of space agencies and academic institutions around the world to find extra-terrestrial life, it appears that disappointment could be in store for humans. Recent excitement about an exoplanet just 14 light years away is symptomatic of a scientific quest that chases pre-determined answers. Before turning all listening devices toward the "target," one may want to ask a few questions. If we are seeking human-like life there, one has to assume that they are enjoying "Friends" and "Seinfeld" now. And, it is difficult to find fault with them for not attempting to make contact. If we are seeking something less or more, then it is unlikely they will be spewing garbage into the airwaves. In other words, the attempts of humans to make contact are unlikely to be rewarded unless an extra-terrestrial civilization of similar incompetence is close at hand.

Humans, locked into a tiny window of space-time, have been chasing an unattainable dream – intelligent life that could teach them better tricks. As they peek through that microscopic window, they are most likely to see a barren universe, devoid of life and intelligence. Expanding the window, either by new physics or by constructs deemed feasible with the status quo - such as wormholes, information travelling at speeds many magnitudes higher than light, and quantum entanglement - could provide a way out of these hard constraints.

Finding life in the searchable space-time in close proximity is as unlikely as spotting a needle in a haystack as big as the solar system. Good luck getting there by 2020.

Tuesday, December 1, 2015

The death of logic

 
In a country of blue, red and grey
In a land of every possible hue
Where policies are made on the back of a napkin
To satisfy donors and those who may become donors
 
In a country of red, white and blue
In a land of every possible opinion
Where judgments are made by pictures on TV
To satisfy friends and those who may become friends
 
In a country of East, West and Midwest
In a land of every possible culture
Where biases are made by location and accents
To satisfy those nearby and those who may move close
 
In a country of wealthy, poor and the middle-class
In a land of disappearing dreams for most
Where classes segregate by every possible means
To satisfy those who hold similar views
 
In a country of knowledge, ignorance and mediocrity
In a land of expensive and unattainable education
Where students march in the streets to be heard
To satisfy their own cults and egos
 
In a country of fake hair, fake stories and fake passion
In a land of politicians and incompetent policy-makers
Where debates are designed to expose the hatred
To satisfy the millions glued to the idiot box
 
In a country of science, religion and agnosticism
In a land of pretense and wisdom
Where they battle each other for superiority
To win prizes, acceptance and money
 
In a country of coasts, mountains and plains
In a land of inexplicable space and beauty
Where they battle for the last acre of land
To nourish their own false sense of wealth
 
In a country of finance, technology and movies
In a land of fraud, fallacy and fiction
Where the suits battle the turtle-necks
To stuff their own pockets and wallets
 
In a country of such complexity
Where logic is dead and buried
But, somehow, one can’t lose hope
For without it, the world will be in despair

Sunday, November 29, 2015

Proof of simulation

The idea that the universe is a simulation has been on the periphery of cosmology. This is not surprising – every established scientific arena, astrophysics, medicine and economics included, has not been kind to pursuits that question the status quo. This abundant bias, nourished by the ability to publish and win Nobel prizes over short horizons, has perpetuated established theories even in the absence of any evidence. Even “theories” that could never be tested have been gaining popularity behind the closed doors of academia, with even less interest in looking outside than country-club dwellers.

The thought experiment that the universe could be a simulation, however, has been around for over a decade. Some have even suggested ways to test it experimentally. Given that the established theories require 96% fantasy to work, it is not too big a leap to go a bit further. After all, thought experiments typically do not require six trillion experiments to ferret out an elusive particle, and such statistical fantasies have been hailed as one of the greatest achievements of contemporary humans.

If the universe were a simulation, what would be the properties of such a system? In a sufficiently complex simulation:

1. The participants of the simulation, albeit capable of describing the processes that make the simulation work, will never be able to explain the origin of it.

2. The participants, who could measure the constants that drive the rules of the simulation, will find them finely tuned and held constant.

3. The simulation will exhibit recurring patterns.

4. The participants will find constraints within the system that limit them to certain parts of the simulation.

5. The participants will face an overall hard constraint that does not allow them to get outside the simulated system.

6. The participants of the system will remain unaware of anything outside the boundaries of the simulation for the duration of the simulation.

7. The participants will likely reject the hypothesis that they are part of the simulation.

8. The participants may find anomalies to the rules they have discovered because of the possible flaws in the simulation itself. Such flaws may be patched up over time and the anomalies may disappear.

9. The system will exhibit no learning.

10. Any excursion away from the rules – random, planned or induced by the participants – will revert back to the rules.

Within the context of the tiny part of the universe – humans – all these properties appear to be true. Moreover, no current observation negates the hypothesis. Hence, it is likely that the universe is a simulation.
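
Property 10 in the list above reads like simple mean reversion. A minimal toy sketch, with entirely hypothetical parameters, of a state that is repeatedly pushed off its rule and pulled back:

import random

rule, state, pull = 1.0, 1.0, 0.2
for step in range(1000):
    state += random.gauss(0, 0.1)       # a random or induced excursion away from the rule
    state += pull * (rule - state)      # the system reverts the state back toward the rule
print(round(state - rule, 2))           # the deviation stays small; excursions do not persist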

Tuesday, November 24, 2015

Chicago’s moment to leap to the future

With nearly 75% of the US population expressing the view that racism is a problem for the country, it is clear that the country is in deep thought about how to get rid of the cancer affecting its soul. A modern society afflicted by the disease of racism could have a rather pessimistic prognosis. On the other hand, if most of the country sees the problem, then we can certainly solve it. As they say in medicine, early diagnosis almost always leads to better outcomes. And, in the pain and despair of a horrible incident, Chicago has a unique opportunity to lead the country, and perhaps the world, out of ignorance and a curable disease through education and the advancement of culture.

Chicago has been at the forefront of advancing emerging ideas in economics, science, arts and journalism. Its educational institutions opened the eyes of those seeking knowledge, but the city itself could not get out of its artificially imposed boundaries, allowing irrational thoughts and actions to percolate through. In the process, it suffered from violence and from segmented islands of wealth, information and ignorance. A city that led in thought and cultural advancement has, however, been trailing in practical action. Steeped in political corruption for decades, the city has been losing its just position in history. It will be a shame if one of the greatest cities in the world suboptimizes itself, not for lack of knowledge but because of self-imposed constraints.

This is Chicago’s moment to leap to the future. It has leaped many times before to open the eyes of the world. Now it is time for introspection, and out of that will come the strength to leap again.

A small leap for Man and a big jump for Math

Recent news that a University of Chicago mathematician may have brought the complexity of network comparison – the graph isomorphism problem – down from near-intractable territory to something akin to P-level (quasi-polynomial) complexity signals a jump in knowledge in an otherwise stagnant field. Consumed by big data and bigger noise, mathematicians and data scientists have been burning the midnight oil, solving everything under the Sun using century-old techniques and faster computers. In the process, most forget to step back and ask whether the challenge in front of them could be simplified before diving deep.
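
To see why a reduction of that kind matters, consider a minimal brute-force sketch of network comparison (graph isomorphism); the toy graphs are hypothetical, and the point is only that trying every relabeling costs n! steps, which is exactly the blow-up a quasi-polynomial algorithm avoids.

from itertools import permutations

def isomorphic(g1, g2):
    # Brute force: try every relabeling of g1's nodes and check whether
    # its edge set maps onto g2's. The n! loop is hopeless beyond ~10 nodes.
    nodes = sorted(g1)
    if len(nodes) != len(g2):
        return False
    edges2 = {frozenset((v, w)) for v in g2 for w in g2[v]}
    for perm in permutations(sorted(g2)):
        mapping = dict(zip(nodes, perm))
        edges1 = {frozenset((mapping[v], mapping[w])) for v in g1 for w in g1[v]}
        if edges1 == edges2:
            return True
    return False

# Hypothetical toy graphs: the same triangle labeled two different ways.
a = {1: [2], 2: [3], 3: [1]}
b = {'x': ['y'], 'y': ['z'], 'z': ['x']}
print(isomorphic(a, b))   # True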

In the age of cheap hardware and companies flush with cash, innovation appears stagnant. Making a neural net with thousands of computers in a network is not innovation; it is just a show of brawn over brains. Pumping a large number of rules through a supercomputer in an attempt to beat a human at recollecting random facts is not innovation; it is scaling ignorance. Collecting, storing and analyzing large amounts of noise in an attempt to discover complex heuristics is not innovation; it is just sticking one’s head in the cloud.

Innovation happens, but only rarely. Reducing the complexity of a whole problem class fits the bill perfectly.

Saturday, November 14, 2015

The economics of violence

Organizations – religions, countries and country clubs – operate on a simple idea. Individuals in a system hold a put option with a limited exercise horizon, and if the system can coax an individual to exercise it prematurely for the presumed benefit of the system, then the managers of the system can effectively do anything they want. Since the objective function of the system itself is fuzzy and ill-defined, managers do not need to show optimality; they can use anecdotes and unproven hypotheses to elicit premature and suboptimal exercise from a small number of individuals. The act of violence perpetrated by closed systems – autocratic and strategic, with benefits accruing to a small percentage of the members – requires those on top to convince a few to engage in irrational acts by arguing that the asset they hold is wasting away and that a premature extinguishing of their own lives is optimal, if not for themselves then for the system itself.
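
The option framing can be made concrete. A minimal sketch, with hypothetical numbers and a plain binomial tree, of why an at-the-money put with time remaining is worth far more held than exercised now – the value a coerced premature exercise destroys:

from math import exp

def american_put(S, K, r, sigma, T, steps):
    # Standard binomial-tree valuation of an American put (textbook recipe).
    dt = T / steps
    u = exp(sigma * dt ** 0.5)
    d = 1 / u
    p = (exp(r * dt) - d) / (u - d)           # risk-neutral up-move probability
    disc = exp(-r * dt)
    values = [max(K - S * u ** j * d ** (steps - j), 0.0) for j in range(steps + 1)]
    for i in range(steps - 1, -1, -1):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j]) for j in range(i + 1)]
        values = [max(v, K - S * u ** j * d ** (i - j)) for j, v in enumerate(values)]
    return values[0]

S, K = 100.0, 100.0
print(max(K - S, 0.0))                                                     # exercise now: 0
print(round(american_put(S, K, r=0.02, sigma=0.3, T=1.0, steps=200), 2))   # hold: roughly 11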

For a hundred thousand years, humans killed and mutilated their way to glory, aided and abetted by clan leaders who were fully aware of what they were doing. If the value of an individual demonstrably improves, it diminishes the ability of clan leaders to force premature exercise of the put options held by the individual. Education, the only tool that could improve asset values in closed systems, may be the last hope for humans, who are slipping into oblivion at the height of their ascendancy. Education and knowledge have been stagnating, however, with a few drinking from fire hydrants and the rest endlessly chasing an elusive mirage. While some in the valley sleep, dreaming about the singularity and the cure for death, there are seven billion elsewhere without a clue what tomorrow is going to bring.

The foundations sitting on billions, still debating whether to provide white or pink nets to cure malaria, may have to rethink their strategy. If they really want to cure the ills of the world, they have to improve the knowledge content of humans.

Thursday, November 5, 2015

Puppet show

The Unified Microbiome Initiative (UMI), as described in a recent article in Science, albeit belated, is a step in the right direction. As indicated by one of the founding members from the department of Ecology and Evolution at the University of Chicago, it has been abundantly clear that microorganisms play a critical role in the welfare of humans. Perhaps UMI should go a bit further – it is not just that microorganisms play a critical role in human health and disease in a tactical sense; they may be controlling how humans evolve, a true strategic imperative.

Intelligence has been ill defined. The ability to maximize utility for the system seems to move inversely with the complexity of the individual. Thus humans, at the top of the food chain, appear least competent to maximize societal utility. An alternative definition of intelligence is the ability of the individual to contribute to system welfare. Here, microorganisms are dominant. They have been here for 4 billion years and will likely be here for another 4 billion before the Sun throws a temper tantrum and balloons into a red giant. Meanwhile, the “intelligent” will likely perish in an asteroid impact within perhaps a few hundred thousand years.

More importantly, humans appear almost ignorant of the systemic effect imparted on them by microorganisms. The DNA provided by the parasites far surpasses the bits and pieces of human DNA admitted to the ecosystem some affectionately call the human body. In this vast universe of organs and food-digesting machinery, humans have been dancing to every whim of their visitors in the stomach and elsewhere. Diseases can be forecast, health can be measured and even death can be predicted by a simple conversation with the Microbiome.

It is ironic that humans are simple puppets to those who have been dominating the Earth for so long.

Wednesday, November 4, 2015

Language optimization

Language, a distinct advantage that separated humans from chimps, may become a liability in a regime of accelerating information and forced specialization. Early on, language provided efficient communication among clan members with relatively simple objective functions to optimize. Later, as they branched into art, philosophy and literature, language became a construct that may have touched the souls of some, but it also began to lose the communication efficiency it was originally designed for. Presently, conventional languages, with complex semantics and grammar, appear unable to distill and communicate critical technical information.

Computer languages, which stay at a lower level without flowery grammar, are certainly more efficient for programming machines. In the human sphere, millennials have been experimenting with a variety of constructs that remove the complexity of grammar and schema, but it is unclear whether any of the current methods are efficient at communicating content. In an environment of deep but not broad knowledge per individual, science and engineering may need to invent a modern language that does not constrain them to formats designed for different purposes. For example, the abstract of a scientific article, which limits the author to a certain number of words while forcing inefficient grammar, loses in multiple ways.

It is time to rethink scientific language. Nobody has the time to read through an entire paper, and the content cannot be complete in an abstract format that conforms to artificial and outdated constraints.

Thursday, October 29, 2015

Gravitational annoyance

A recent paper in the journal Science describing the failure to detect gravitational waves from any source, including the merger of two galaxies with a possible combination of two black holes, perpetuates the gravitational holdout against otherwise beautiful theories woven by physicists. Gravitation has spoiled every attempt at Grand Unification of the hypothesized fundamental fields of nature. It is ironic that a field that is obvious at human scale has been the one most difficult to understand and explain with status-quo theories. Ever since an apple fell on Newton’s head, gravity has vexed humankind, and it seems the current situation will continue for a while.

The failure of Grand Unification Theories is a subtle sign that we do not yet have theories that can be unified. Early in the 20th century, a few brilliant minds made inexplicable leaps into emerging knowledge. It is scary to even think of a world in which they did not exist; Newtonian mechanics could have ruled the mediocrity for a few more centuries. However, there were visible cracks in both the theory of relativity and quantum mechanics, and the ones who followed have simply been unable to mend them. More importantly, they have taken the hammers given to them and have been looking for nails all around the universe to put the hammers to good use. No one is able to spend a career, or even risk a tenured position, to ask whether the hammers need modification.

The jumps in the stochastic evolution of knowledge are rare, and they are driven by amazingly few of the billions of humans around the world. The next jump appears too far off in a regime mediated by paid research and manufactured education.

Monday, October 26, 2015

Machines do not learn!

Artificial Intelligence is fashionable again; aided by cheap computing and cheaper memory, computer behemoths have been diving head first into the abyss. It has happened before, albeit with far more limited computing power. Just as in the previous iteration in the 80s, the current excitement may die down in a few years. Simulated machine voice and Jeopardy are easier problems to solve, but finding a cure for cancer is a lot harder. Similarly, churning through limitless text – “search” – is a mechanistic process, but “curing death” is not. Soon, hopefully, the “leaders” will realize the naked truth – machines do not learn, and the “singularity” they envision is many centuries away.

Machines have captured the human imagination from the start. When humans trained animals to drive farm equipment many thousands of years ago, they understood how machines could be built and powered to enhance productivity. More recently, the silicon chip, a conventional and mechanistic processor, has raised their ambitions to a level that may not be realistic. In the early going, the focus was on productivity, very much as with farm equipment, but now some believe their machines can learn. If so, this is a departure for humans from sustenance to imagination, from mediocrity to advancement, from tactics to strategy. But alas, machines do not learn!

“Machine learning,” a term used too liberally by information technology and consulting firms, may need further definition. One can clearly demonstrate that machines do not learn by themselves, and if so, we are back to using machines to enhance productivity – a posture humans have been in for ten thousand years.
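
For what it is worth, a minimal sketch of what most “machine learning” actually amounts to: gradient descent nudging two numbers to fit human-chosen examples. The data and learning rate here are hypothetical; the machine optimizes, but the goal, the data and the framing all come from outside it.

# Fit y = 2x + 1 from ten noiseless examples by stochastic gradient descent.
data = [(x, 2 * x + 1) for x in range(10)]
w, b, lr = 0.0, 0.0, 0.01
for epoch in range(2000):
    for x, y in data:
        err = (w * x + b) - y
        w -= lr * err * x        # gradient of the squared error with respect to w
        b -= lr * err            # gradient of the squared error with respect to b
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0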

Saturday, October 24, 2015

Real finance

Recent remarks by a former Fed chairman – that it is puzzling why one would advocate reinstating the Glass-Steagall Act because he could not see how the 2008 crisis would have been avoided had it been in effect – are symptomatic of regulators who are good tacticians but not strategists. From Princeton to Wall Street, financial institutions and their regulators still believe intermediation is “God’s work.” Academics, slaves to economics textbooks from the last century, still believe financial intermediation is integral to markets and the economy. If the former chairman is intellectually honest, he may want to revisit the transcripts from Jackson Hole in 2005 before spewing wisdom. The one who came before him, who spewed infinite wisdom for many decades, seems to have finally realized that “something was wrong.” He is still selling wisdom at $75K per pop. Meanwhile, financial malls that do everything but intermediation – with a skewed incentive to take risks because their bonuses depend on the upside potential of their actions while they are generally not accountable for the downside risk – invest in fooling both the academics and the regulators.

Although the chairman may be an expert on the Depression era, what he may be missing is that the modern economy needs very little finance as we know it. Any financial activity that does not have a direct connection to real assets and real investments adds no value to the economy. Real assets today comprise only two things – real estate and intellectual property. Financial intermediation in the former can now be done by the internet and does not require “God’s men” behind oak desks in the penthouses. And the financing needed to develop and nourish IP is not something the mega banks have any clue about. They are dinosaurs, a ridiculous combination of disparate businesses with conflicting incentives that still poses a systemic risk to the entire population.

Glass-Steagall is a necessary condition to eliminate the greed of the ignorant, and I am sure “God” will approve.

Wednesday, October 21, 2015

Hilbert hotel

Recent news that scientists have realized a quantum Hilbert hotel using a beam of light is indicative of the power of thought experiments validated by simple experiments. Both interesting thought experiments and simple validation techniques are on the decline, not only in Physics but also in other fields. In the Hilbert thought experiment, a hotel with infinitely many rooms, fully occupied, can make room for an additional guest by shifting every occupant one room up. More generally, the same hotel can free an infinite number of rooms by moving the occupant of each room to the room with twice its number, leaving all the odd-numbered rooms empty. The human brain, which evolved through the practical needs of Sapiens, has been lured into thoughts it was never designed to have.
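
The two tricks in the thought experiment are just maps on the room numbers, sketched below on a finite prefix (no computer holds infinitely many rooms):

def free_one_room(n):        # occupant of room n moves to room n + 1, freeing room 1
    return n + 1

def free_odd_rooms(n):       # occupant of room n moves to room 2n, freeing every odd room
    return 2 * n

rooms = range(1, 11)
print([free_one_room(n) for n in rooms])    # rooms 2..11 occupied; room 1 now empty
print([free_odd_rooms(n) for n in rooms])   # only even rooms occupied; all odd rooms empty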

Thought experiments on such abstract concepts as infinity and zero (absolute nothing) are a favorable direction in which to nourish the brain and perhaps move it into the next quantum state of knowledge. They take no investment, no heavy machinery and not even high mathematics. They do, however, take imagination, something that has been draining out of humanity. Humans have been unable to internalize infinity and zero, perhaps because of the limitations of the brain, but more likely because the brain is continuously inundated with the finite. The thoughts that envelop the finite are mediocre at best and are not extensible to the abstract nature of infinity or zero.

Thought experiments are possibly the best path to fundamental shifts in human knowledge.

Tuesday, October 20, 2015

ET’s very large SUV

The recent discovery by NASA’s Kepler telescope of an object around a star some 1,500 light years away that cannot be explained by typical exoplanet properties has raised speculation that it may be an artificial structure, created by an advanced intelligence to garner energy from the star. A simpler explanation, such as a fragmented comet, could be more valid but is certainly less interesting. Astrophysics, quickly degrading into fanciful bedtime stories, may yet have a long way to sink into conventionalism before it can make a contribution.

Humans, seeking intelligence elsewhere with attributes similar to their own, seem to forget that their own experience clearly shows that intelligence is a network process. The mighty bacterium, able to dance with billions in unison, appears much more intelligent as a system than humans could ever be. By substantially increasing diversity and the probability of favorable mutations at the micro level, the system is able to evolve and adapt faster in a universe that appears hostile in every direction. Harnessing energy in such systems would be an internal process and may not require structure-building or star domination – the only phenomena humans are able to understand fully. Granted, finding such intelligence will remain outside human capabilities for the foreseeable future, if not forever.

The recent fervor in seeking extra-terrestrials is symptomatic of humans’ limited understanding of the formation, development and propagation of intelligence.