Tuesday, March 19, 2019

AI and the weakest link

The recent debacle in aircraft design is a constant reminder that software engineers and "data scientists," excited by the possibilities, could create havoc in many different industries. In transportation, as autopilot systems get smarter, they could take over virtually everything a vehicle does, terrestrial or otherwise. What the designers seem to have missed recently is that an aircraft is a conglomeration of data-transmitting mechanical sensors and sophisticated software. Traditional engineering education would have informed the designers that a system is only as good as its weakest link, but the younger ones may have skipped those courses. Here, faulty data from an old technology may have confused the brain. There are multiple issues to consider.

First, the design of systems needs to be holistic. This is easier said than done, as a complex vehicle is designed in parts and assembled. Teams who work on these components may have different skill sets, and the overall blueprint may not consider the biases in designs created by separate teams. For example, if the brain is designed with little flexibility to discard faulty data, the implicit expectation is that faulty data is unlikely. However, if the data is emerging from mechanical devices with no embedded intelligence, it is almost a certainty that faulty data will arrive at some point in the sensor's life. Two recent aircraft failures in Asia and Africa, and the one much earlier over the Atlantic, seem to have been caused by bad sensors sending bad data to a "sophisticated AI agent" with little capability to differentiate between good and bad data. So, either the sensors and other mechanical devices in the system need to be smarter, so as to recognize their own fallibilities, or the central brain has to be able to recognize when it is fed bad stuff. There is a lapse in engineering education, which has moved in the direction of high specialization without an overall understanding of systems design and risk. This is going to surface many issues across industries, from transportation and manufacturing to healthcare.
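The kind of cross-checking the paragraph calls for can be sketched in a few lines. This is only an illustrative sketch, not any real avionics logic; the function name, the disagreement threshold, and the example readings are all invented here:

```python
# Sketch: before trusting any single reading, vote among redundant sensors
# and refuse to act when they disagree. Thresholds are illustrative.
from statistics import median

def fused_reading(readings, max_disagreement=5.0):
    """Return a trusted value from redundant sensors, or None when the
    sensors disagree too much to trust (a signal to disengage)."""
    m = median(readings)
    # Discard readings that stray too far from the consensus.
    good = [r for r in readings if abs(r - m) <= max_disagreement]
    # Require a majority of sensors to agree before acting on the data.
    if len(good) <= len(readings) // 2:
        return None
    return sum(good) / len(good)

# Two healthy angle-of-attack sensors and one stuck at a bad value:
print(fused_reading([2.1, 2.3, 74.9]))  # the outlier is voted out
print(fused_reading([2.1, 74.9]))       # two irreconcilable sensors -> None
```

The point of the sketch is the `None` branch: a central brain that cannot say "I do not trust this input" has no weakest-link defense at all.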

Second, the human is still the best-known risk mitigator, with her brain fine-tuned over the millennia to sense and avert danger. In transportation, disengagement has to be a fundamental aspect of design. Although it could be tempting to sit back while an aircraft takes off and lands, or to read "Harry Potter" while behind the wheel of an autonomous terrestrial vehicle, these actions are ill-advised. The human has to expect the machine to misbehave and be ready, at the very least, to take over on complete disengagement at any point in time. Excited engineers may think otherwise, but we are nowhere close to fail-safe AI. Let's not kid ourselves - writing code is easy, but making it work all the time is not. Educational institutions will do a disservice to the next generation of engineers if they impart the idea that AI can be devoid of humans.

Transportation is just one industry. The problems witnessed span every industry today. For example, in healthcare, AI is slowly percolating, but the designers have to remember that there are weak links there too. Ironically, in the provider arena, the weak link is the human, who "inputs" data into aging databases, sometimes called Electronic Medical Records (EMR) systems. Designed a couple of decades ago by engineers with no understanding of healthcare, they are receptacles of errors that can bring emerging AI and decision systems to their knees. If one designs AI-driven decision systems in these environments, she has to be acutely aware of the uncertainty in inputs caused by humans, who are notorious for making mistakes with computer keyboards (or even voice commands), and by database containers designed with old technologies. So, designs here need to systematically consider disengagement when the AI agent is unable to decipher data.
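A minimal sketch of such systematic disengagement, assuming an invented record format, confidence threshold, and plausibility range (none of this is drawn from any real clinical system):

```python
# Sketch: act only when the input passes sanity checks and the model's
# own confidence clears a threshold; otherwise hand control to a human.
# All names, fields, and thresholds here are illustrative assumptions.

def decide(record, model_confidence, threshold=0.9):
    # Sanity-check the input before trusting it: human-entered EMR data
    # can be missing, malformed, or out of physiological range.
    hr = record.get("heart_rate")
    if hr is None or not (20 <= hr <= 250):
        return "DISENGAGE: implausible or missing input"
    if model_confidence < threshold:
        return "DISENGAGE: low confidence, defer to clinician"
    return "PROCEED: automated recommendation"

print(decide({"heart_rate": 72}, 0.97))    # clean input, confident model
print(decide({"heart_rate": 7200}, 0.97))  # a keyboard slip -> hand off
print(decide({"heart_rate": 72}, 0.55))    # uncertain model -> hand off
```

The design choice is that disengagement is a first-class output of the system, not an error condition bolted on later.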

In manufacturing, led by data collection enthusiasts from the 90s, older database technologies, sometimes elegantly called "Enterprise Resource Planning" (ERP) systems, dominate. They have been "warehousing" data for decades with little understanding of what it means. Now, "Hannah" and her cousins seem to have gotten religion, but again, there is a problem here. Cutting and dicing data to make pretty pictures for decision-makers does nothing to improve decision-making or to mitigate risk. The weak link here is the technology, designed and maintained by those who believe businesses are run by the collection, aggregation, and reporting of data. Unfortunately, successful businesses have moved on.

AI is a good thing, but not in the absence of logical thinking and systems design. Intelligence is not about the routine, but the ability to act when encountering the non-routine. As the software and hardware giants sell their wares, in the cloud and elsewhere, they have to understand the perils of bad and rushed technology. It is great to fool a restaurant with a simulated voice, it is great to demonstrate that "machine learning" on Twitter will create a racist, and it is great to tag and collate music fast, but none of these activities is going to propel humanity to the next level. Being good in search, operating system design, or hardware does not automatically make these companies "experts" in an area that is erroneously called Artificial Intelligence. There is nothing artificial about intelligence. Machines have the potential to be a lot more "intelligent" than humans. If anybody has any doubt, just take a look at the nation's capital and imagine a scenario of replacing the policy-makers with dumb machines. They would likely perform infinitely better. For the rest of us, reality is still an important concept, and there, we have to make sure the developments are in a beneficial direction.

Intelligence, ultimately, is about decision-making. Humans have been pretty good at it, barring a few narcissistic and mentally ill specimens in full view. They had to survive the dicey world they were handed when they climbed down the tree and stood upright in the African Savannah for the first time. Bad decision-making would have instantly eliminated them. They survived, albeit with fewer than 10K specimens through a harsh bottleneck. Later, single-cell organisms almost wiped them out on multiple occasions, but they survived again. Now, they encounter a technology discontinuity, something so foreign to their psyche that the dominant reaction has to be rejection. And, for the most part, it is. But their brains have morphed into a quantum computer, able to think about possibilities. This could be their Achilles heel, but then, life is not worth living without taking risks.

Educational institutions, still chasing the latest trends to make money, have the ultimate responsibility to get this right. To bring humanity to a level 1 society, we need to move past our instincts, created by tactical objective functions driven by food and sex and embrace intelligence. It is likely that machines will lead humanity out of its perils.

Saturday, March 16, 2019

Micro customization

Recent news (1) that a gastric resident delivery mechanism can deliver reliable, sustainable doses of agents for the long term is important. Innovation in chemical agents has moved ahead of mechanisms that would deliver them at the right time, in the optimum dose, by the best route, and to the most receptive site. The ability to optimally deliver the agent is likely more important than the agent itself. In the absence of such delivery mechanisms, manufacturers have stuck to the original blueprint - mass manufacturing of pills in a singular dose that shows the best therapeutic index in the population. Personalized medicine, thus, has remained elusive and, more importantly, outside the business models of manufacturers.

It may be changing. Ironically, providers have moved ahead of other participants in the healthcare value chain in the implementation of personalized medicine. Recent advancements in Artificial Intelligence and the availability of abundant data have better positioned the providers to understand, treat, and manage patients, individual by individual. If delivery mechanisms improve and become individually customizable, we can rapidly move into the next level of personalized medicine. Here, we can envision devices that can measure, decide, and disburse micro doses to assure optimum delivery and complete compliance. Intelligent devices could be just around the corner, taking advantage of IoT. With embedded intelligence on board, such devices can not only operate as initially primed but also self-learn and adjust over time. A couple of decades from now, medical professionals will likely view the current regime as completely archaic.
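The measure-decide-disburse cycle can be sketched as a simple control loop. Everything below - the proportional controller, the target, the gain, and the decay rate - is a made-up illustration of the idea, not a model of any real device or drug:

```python
# Sketch: a device that measures the current drug level, decides the next
# micro-dose, and disburses it, repeating each cycle. Numbers are invented.

def next_dose(measured_level, target_level, gain=0.5, max_dose=2.0):
    """Decide the next micro-dose from the latest measurement."""
    error = target_level - measured_level
    # Dose only when below target, and cap each micro-dose for safety.
    return min(max(gain * error, 0.0), max_dose)

# Simulate a few cycles with first-order elimination of the agent.
level, target, decay = 0.0, 10.0, 0.8
for _ in range(30):
    level = decay * level + next_dose(level, target)
print(round(level, 2))  # settles near a steady state below the target
```

A self-learning device of the kind envisioned above would go further and adjust `gain` or `decay` from the individual's own measured response, which is exactly what a fixed mass-manufactured pill cannot do.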

More generally, any business that is driven by scale and a blind adherence to singular specifications will have great difficulty surviving in the future. Technology is readily available, not just for mass customization but for individual intervention. This is a regime change that will affect every industry and every business. Getting ahead of this rapid transformation is a necessary condition for success.

(1) A gastric resident drug delivery system for prolonged gram-level dosing of tuberculosis treatment. Verma et al. http://stm.sciencemag.org/content/11/483/eaau6267

Sunday, March 10, 2019

Type 2 Debate

A recent article (1) brings into question the current diagnostic and treatment regimens for those who are deemed to be at risk of progressing into Type 2 Diabetes. The arguments are fair and the conflicts are clear. But what the article misses is the risk/return trade-off for the wretched condition. The total cost of diabetes-related care in the US alone is fast approaching $0.5 trillion. With India and China running fast to the precipice, happily feeding on Western food, we do have a significant issue to deal with.

So, there are two important questions. First, do we have a reasonable idea of the risk of incidence and progression? Age-old heuristics such as fasting blood sugar have been shown to be utterly useless. The current golden metric, A1c, is likely equally flawed. The answer to this question appears to be an emphatic no. And second, are the people in charge of prescribing the thresholds for diagnosis and prevention influenced by factors outside science? Unfortunately, the answer to this is likely yes.

Medicine has always shunned data and analytics in making decisions. This has set it so far behind that the US spends close to 20% of its GDP for worse outcomes. And aging regulators, still using century-old statistical measurements to make decisions, make this worse. The manufacturers, bloated with conventional statisticians ever ready to prove what the regulators seek, are progressing backward in time. So, it is not surprising that we have ended up in a confusing muddle regarding a disease state that affects close to 20% of the world population.

Now, what? It is easy to show statistics that claim only 2% of pre-diabetics progress into diabetes. Since we are unclear as to the "precise" demarcation between pre-diabetes and diabetes, a more relevant question is what the total cost is on the system. A few thousand years ago, humans invented agriculture - and it clearly had an impact on their health. Humans who had eaten meat for millennia moved to stuff their bodies are not designed to process. In the last hundred years, they had too much to eat, and that will certainly create problems not only for their organs, efficiently designed for very little food, but also for their infrastructure, designed for a slim body. Half the world's population now exceeds the parameters of the original designs.

Medicine has to embrace modern technology. Doctors should not simply be following rules put down blindly; regulators have to open their windows and realize that the "p-value" is a dead construct; and manufacturers need to understand that their job is not a constrained optimization based on what the regulators think.

It is time for the industry to move on.

(1) https://www.sciencemag.org/news/2019/03/war-prediabetes-could-be-boon-pharma-it-good-medicine

Wednesday, March 6, 2019


News (1) that a group of scientists is attempting to build up a health system in Madagascar from scratch, driven by data, analytics, and pure heart, is welcome. Health, a non-tradable asset, has kept modern humans bottled up. Electing dumb leaders has taken them further behind. The economics of health is a complex question, and until scientists and medical professionals take it on, we will swirl around in political ignorance and impasse.

Humans did not have to worry about health until recently. Their able bodies were eaten by cats or crushed by heavier animals much before any required intervention against single cell organisms or auto-immune diseases. It is different now. They are living way past the design horizon. To make matters worse, they have been carrying unwanted weight, putting unexpected levels of stress on their joints. Their organs have been failing because of overuse and their infrastructure is crumbling, not able to hold them up. Modern medicine has acted as a band-aid to prolong life but it has not contributed to maximizing the utility of the individual or society.

A big part of this issue has to do with the inadequacies of systems that apparently intelligent people designed. When there are no markets, incompetence rises. When incentives are misaligned, humans behave as they are expected to, maximizing short term returns. And, humans have a tendency to gang up on the weak and the weary as they learned through the eons of evolution. It is a perfect storm - incompetence, ignorance and non-market designs - that could sink humanity.

It is time to step out. Experiment, revolutionize and let information and technology guide future actions.

(1) http://science.sciencemag.org/content/363/6430/918

Wednesday, February 27, 2019

Reducing software by hardware

New transistor designs (1) may help surpass the daunting constraints faced by contemporary software as we move toward solving highly specialized problems. The dominance of software over general computing architecture may be coming to an end. Until architectures can practically incorporate quantum computing, there is an impasse. And, quantum tunneling has brought conventional transistor designs to an asymptotic and predictable lethargy.

It is time to return to customized hardware for specialized problems. This will require more engineers again and fewer programmers and mathematicians. Educational institutions, grand masters at following the latest trends while they teach the students "strategy," are incompetent in understanding what tomorrow will bring. And that will certainly dampen innovation in the required direction.

However, this could be a short-term phenomenon. Either of the two directions, squeezing more out of Silicon or finding better materials, may yield interesting designs that allow a return to software in the future. The latter has much more potential but is also riskier. That means the giants of the industry will shy away from it as they try to tie the quarterly numbers for the shareholders. And some of them may realize that the "cloud" is not a solution, just a sojourn.

The hardware regime is beginning again. We have to give electrons a more direct way to shuttle for efficient computing. Anything less will tie us down to the status quo.

(1) http://advances.sciencemag.org/content/5/2/eaau7378

Sunday, February 24, 2019

AI for the brain

Consciousness, still an undefined construct, may emerge from the brain's attempt to implement AI on its automation infrastructure. The predictive brain (1) often fails, and consciousness may provide an overarching control over manual and ad hoc decision-making, which is dangerous. As the early human stood up in the African Savannah for the first time, she had to predict well to thrive. But when the predictions failed, it was fatal. The brain's AI, consciousness, was a necessary condition for humans to tackle the hostile conditions they were in early in their progression.

For modern humans, however, it has become less relevant. The consciousness content of the brain has been declining for centuries for many reasons. First, the brain has been dumbed down with no need to predict to survive - instead, it has been delegated to an instrument that does arithmetic like a calculator. Second, the noise fed into the brain from a plethora of external devices has forced specialization in certain areas such as translation, optimization, and prioritization. These are programmatically driven, with no scope for consciousness. Finally, it is possible that modern humans have much simpler objective functions, driven largely by material wealth, and those do not require the activation of consciousness.

If so, the advancements in brain design during the early years of human history may have become a liability later. As science took hold of the human psyche, humans lost consciousness-led abstract thought. As the cave paintings in France indicate, there was a right-brain dominance in early humans as they struggled through the daily routines. Only a handful of specimens have been able to dream and predict simultaneously in the last few centuries. And now, since the left brain has been in complete control, humans have no use for consciousness. As the AI enthusiasts burn the midnight oil to make computers look and act like humans, they may want to consider that their task will become easier as time progresses. In the future, humans will automatically act like computers, without consciousness. So, there is no need for AI for computers, as humans devoid of brain AI will proxy for the computers of the future.

It is ironic. As humans struggle to teach computers about themselves, they have become less than computers.

(1) https://www.scientificamerican.com/article/the-brains-autopilot-mechanism-steers-consciousness/

Tuesday, February 12, 2019

Entanglement of left and right brain

Humans seem to survive with two opposite personalities in their brain. The left and right hemispheres of the brain accumulate and process data differently - it is almost like two different people are living inside the same context, simultaneously. In most cases, it appears one or the other dominates unilaterally and consistently, providing an unchanging outward personality. In a limited number of cases, humans seem to be able to switch back and forth, demonstrating different personalities and thought processes - perhaps one at work and the other at home.

An entanglement of the left and right brain could provide humans a higher context. The left brain, typically analytical and precise, constantly battles the right, shy and holistic. The left, a master at tactics, does not typically yield to strategy, which appears squishy and undefined. The time lag that comes with the tenuous hardware connection between the two often leads to confusion - and sometimes to disease.

But if one could get the two sides entangled, perhaps thoughts and ideas could be transmitted without a loss in time. If so, humans could have an integrated brain - that combines tactics with strategy, precision with uncertainty, optimization with satisfaction, utilitarianism with emotion, wealth with knowledge, ideas with processes, bad with good, past with future, ignorance with science, technology with ordinary decisions, insurance with health of the populace, board rooms with manufacturing plants, teachers with students, Washington with the heartland, East with the West, business with academics, politicians with those who run away from it, the guy with beautiful artificial hair and the hairless, those who spent millions to make their skin look better before heading out to the red carpet with those who do not have enough for dinner.

Entangle two sides of your brain. If so, you could propel humanity further.

Sunday, February 10, 2019

Poop for depression

It has long been known that the microbiome has a significant influence on the human brain. Now more tactical observations (1) confirm that certain missing species in the gut could lead the host to depression. This is an important finding that may move brain research's focus to lower parts of the body.

The gut has been an unavoidable and most important aspect of large biological systems, including humans. There, they converted plant and animal matter to energy, powered their ever-agreeable computer, and went to work. Most systems had very efficient designs of the brain, just capable of keeping the mechanical systems that controlled the body in tune. Then the freak, the human, showed up, with significant excess capacity in an organ that tends to get bored. However, the small ones in the gut always held the strings. It is possible that they are using the quantum computer upstairs for research, without the knowledge of the host.

Fecal transplant seems to solve many problems, and lately CNS issues, including depression. The predictive power of the microbiome for the occurrence and amelioration of human disease states is higher than what can be done with the compendium of medical knowledge based on foreign chemical agents. In fact, chemical agents that destroy the microbiome, while able to show short-term benefits, lead to long-term deleterious effects on the system. Antibiotics certainly fall into this category. More importantly, these agents seem to have accelerated the decline of beneficial bacteria while simultaneously increasing the potency of the bad ones.

Medicine may need to focus not on the host, but on what the host carries. The latter, perhaps accounting for over half of the live cells in the system, appears to control almost everything, including the host's brain.

(1) https://www.sciencemag.org/news/2019/02/evidence-mounts-gut-bacteria-can-influence-mood-prevent-depression

Friday, February 8, 2019

Mirror, Mirror... Who is the smartest?

News (1) that a tiny reef fish can recognize itself in the mirror may have implications for how intelligence is defined, how consciousness is understood, and ultimately, how the contemporary darling of humans - "artificial intelligence" - progresses out of hype and confusion. Humans, endowed with a massive quantum computer with almost infinite capacity, appear incompetent compared to life forms with limited resources. Assessment of intelligence has to consider both the output as well as the endowment - it has to be a ratio of these metrics. More tactically for humans, "IQ" has to be a function of the initial conditions afforded to the individual exhibiting intelligence. Biased standardized tests and IQ tests do not have a clue how this works.

A new measurement is sorely needed. Across the animal kingdom, if output is measured against brain endowment, humans are likely going to bring up the rear. It is apt, as they have been trying to burn the oxygen out of their greenhouse before they suffocate, erect walls and hatred, and smash a beautiful planet down at the first opportunity. These are clearly not signs of intelligence, far from it. Recognizing self is an important leap in the intelligence spectrum. If a biological entity could do that with few brain cells, it would point to major pitfalls in our understanding of IQ. Self-awareness is a necessary (but not necessarily sufficient) condition for consciousness. If we ultimately find that aggregate consciousness across species is approximately the same, it will be damning for the human ego.

If consciousness can be replicated with a few qubits, as in the case of the reef fish, it may also mean that humans are on a wrong tangent toward "artificial intelligence." It does not matter if companies have infinite computing resources; that will not be sufficient to create intelligence. As the hardware, search, and operating system giants have found out by repeated experiments, hooking up infinite processors and computers does not lead to "intelligence." It will certainly burn a hole in your clients' quarterly budgets, but nothing beyond that. And the ".ai" companies with stars in their eyes, doing everything to change the world, will do much less.

Let's learn from the reef fish - minimize resources and maximize output. That's the definition of intelligence.
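The proposed metric can be written down directly as a ratio. The scores and "brain budgets" below are invented purely to illustrate the idea, not real measurements of any species:

```python
# A toy version of the essay's proposal: score intelligence as output
# per unit of endowment rather than raw output. All numbers are made up.

def efficiency(output_score, endowment):
    """Intelligence as a ratio: what was achieved per unit of resources."""
    return output_score / endowment

# Hypothetical task scores against hypothetical relative brain budgets:
reef_fish = efficiency(output_score=1.0, endowment=1.0)
human = efficiency(output_score=50.0, endowment=10000.0)
print(reef_fish > human)  # the smaller brain wins on this metric
```

On such a measure, raw output alone, like raw compute alone, tells us nothing about intelligence; only the ratio does.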

(1) https://www.scientificamerican.com/article/a-tiny-reef-fish-can-recognize-itself-in-a-mirror/

Wednesday, February 6, 2019

Universal mathematics

Recent news (1) that honeybees can add and subtract using colored symbolic representations indicates that mathematics is a more fundamental concept, integral to any entity. For a biological system to survive and thrive, it appears that the conceptualization of symbolic representations and arithmetic are necessary conditions. It may have broader implications for those seeking life outside Earth. If life is Carbon based, it is possible that they will share a common mathematical language, regardless of the state of their evolution.

The physical discontinuity that led to mass extinctions 65 million years ago on Earth, and sent the remaining biological diversity on a tangent, may have resulted in a slower than typical progression of mathematical capabilities. If honeybees could perform arithmetic 300 million years before humans arrived, it is possible that the survivors are less competent in conceptualization and symbolic representation. If so, extraterrestrial life is likely substantially more advanced in mathematics. The beauty of math, as a language, is that it is backward compatible. The advanced species, thus, will be able to recognize the crude conceptualization and arithmetic that humans are currently specializing in.

If the current constraints on space-time hold true (which is unlikely), physical transport across coordinates of possible life will be impossible. Assuming we got this reasonably correct, an advanced entity elsewhere will try to communicate using only mathematics and not rigid constructs such as languages. Voyager is carrying recordings of languages from around the world on the premise that when ET picks it up, they will be thrilled by English or Arabic. It is unlikely. Human languages are not backward compatible, nor are they mathematically reducible. They are an accumulation of errors, an inelegant misrepresentation of beautiful math. More cynically, the human brain appears to be mathematical in operation, but not in use. An advanced entity may ultimately be able to connect directly with the hardware, but not with the carrier of it.

Is there free will?

(1) https://www.inverse.com/article/53065-can-bees-do-math-wtf

Saturday, February 2, 2019

Options for the blue planet

As the policy-makers, driven by ignorance and ego, try hard to curtail the options left for the tiny blue dot, there could still be a few left for humanity to consider. The planet appears to be sick and is under constant threat from space, both intra- and extrasolar. Any external observer of this closed system is unlikely to bet on its survival. The existential threats to humanity fall largely into two buckets - a slow environmental deterioration that will lead to massive biological extinctions with an anticipated culmination, and a catastrophic event that will wipe it out from the vast universal map.

So, what options are left? On the former, most scientists and policy-makers are focused on curtailing greenhouse emissions. It is a bit like studying how brakes work on an automobile that is falling down a mountain slope. Policies that incrementally curtail emissions will do nothing - that time has long passed. Data will show that it was never a reliable option - just an academic notion and a feel-good exercise. The only option is a technological leap - an invention that can substantially change the composition of the greenhouse we all breathe in, within a decade. This is fundamentally a materials science and energy problem. If we get the best minds focused on it, we can certainly solve it. It is a waste of time trying to influence the nincompoops in the nation's capital and elsewhere.

On the latter, the space agency has been a disappointment. Packed with a bimodal distribution of engineers, some very old and some very young, they have completely missed the boat. The older ones want to make rockets and land on nearby planets. The younger ones with stars in their eyes want to time travel. Neither is going to be helpful in any way against an impending threat of collision with a large-sized body. Here again, the only option is a technological leap. Engineers have a tendency to rely on what they know, such as exploding an approaching asteroid with a nuclear bomb. What they need to focus on are inventions that will render the status quo obsolete. Universities, money-hungry machines, have not been helpful in teaching the next generation what is really important. The professors seek tenure, the administrators seek handouts, and the students seek degrees with irrelevant rankings and stamps.

It is truly a comedy. As they run faster and faster like hamsters on a treadmill with no end goal, humans have to wonder how they can break out of the shackle.

Wednesday, January 30, 2019

The Chicago constant

Professor Hubble of Chicago proposed and measured a single number with such broad implications that a century later we are still trying to figure it out (1). The fate of the universe, no less, rests on the thoughts of a singular individual from the Southside. Candles and the CMB disagree by just enough that God should be chuckling as She lets the humans seek unattainable truth as they vanish. But the accomplishments of a single individual, Hubble, cannot be overstated.

The engineers got LIGO, and those who do not have access, simpler ideas. Could we not understand whether we expand into darkness or simply converge back into heat and fury? Could we not understand that an ailing World, a speck in the universe, does not matter? Could we not understand that the short horizons we are afforded are so ephemeral that counting money and power is useless? Could we not understand that walls and hatred do not solve problems? Could we not understand that only those hypotheses that are well supported by "grants" are proved, and the rest are not? Could we not understand that outcomes are mostly based on initial conditions and not on capabilities, competence, or assertions?

A single number, invented by an advanced human being, hangs in the balance. It is ironic.

(1) https://www.scientificamerican.com/article/the-universes-fate-rests-on-the-hubble-constant-mdash-which-has-so-far-eluded-astronomers1/#googDisableSync

Wednesday, January 23, 2019

The importance of language

A recent study (1) showing that an individual's language preferences could affect what she sees is interesting. But more importantly, one has to wonder if disparate languages drive the apparent cultural, social, and policy differences across the globe. If so, that will be unfortunate.

Complex language, apparently the only differentiating quality of the human animal, has gotten them far. They could communicate thoughts, innovate, memorize and survive in the Savannah. They have been creative, perhaps starting in clicks and music and progressing toward more complex structures. In the modern context, languages seem to share common roots in Indo-European, Sino-Tibetan, and Afro-Asiatic with a few unusual divergences in the Pacific and South India. But the larger three streams of language progression appear to be distinctly different in architecture, specialization, and use. If the general hypothesis that language shapes one's comprehension is true, we may have separated ourselves into three different worlds (not to mention smaller ones in the Pacific and South India), purely by chance.

The human brain has been trying to cope with language forever. Language is an unnatural leap for the animal, and the brain could not have coped with the idea without great effort. Reinforcement learning would have led the three cohorts of humans to a higher and higher association of language with outcomes. Thus, they would likely reject observations from outside that do not fit. Their science and religion closely follow, with few cultural variations. And their organizational philosophies, objective functions, and expectations have diverged significantly.

A technology that integrates the three distinctly different streams of language could be a necessary condition for humans to make the next leap.

(1) https://www.scientificamerican.com/article/our-language-affects-what-we-see/

Monday, January 21, 2019

Is this what we really want?

Recent news (1) that a single digital photograph could predict genetic diseases demonstrates where humans are heading. Now, not only hardware but also behavior (2) is predictable. We are fast approaching a regime in which both a newborn's lifespan and her expected contribution to society are predictable at birth. This has broad implications for policy.

It is important to revisit the human objective function. What exactly are we trying to maximize? Are we trying to maximize societal utility - the aggregate happiness of 7.4 billion people across the world? Are we trying to segregate and locally optimize? If so, are we considering time as an axis? Without such a notion, tactical optimization is likely to fail.
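The contrast between aggregate utility and optimization that considers time as an axis can be sketched loosely. Everything below - the utility numbers, the discount factor, the function names - is an illustrative assumption for the example, not anything formalized in the post:

```python
# Illustrative sketch only: the "societal objective function" discussed above
# is not formalized anywhere; the utilities and discount factor are assumed.

def aggregate_utility(utilities):
    """Utilitarian objective: sum happiness across everyone, ignoring time."""
    return sum(utilities)

def discounted_utility(utility_by_year, discount=0.97):
    """Treat time as an axis: weigh each future year's welfare by a
    discount factor, so purely tactical (year-zero) gains count for less."""
    return sum(u * discount**t for t, u in enumerate(utility_by_year))

# A plan that front-loads happiness can look best tactically,
# yet lose to a steadier plan once the time axis is considered.
tactical = [10, 1, 1, 1]   # maximize now, decline later
steady = [4, 4, 4, 4]      # sustained welfare

print(aggregate_utility(tactical))                 # 13
print(round(discounted_utility(steady), 2))        # 15.29
print(round(discounted_utility(tactical), 2))      # 12.82
```

The point of the sketch is only that the ranking of plans can flip depending on whether time enters the objective - which is why "tactical optimization is likely to fail" without it.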

In a regime where an individual's expected contributions are predictable, a utilitarian society could cull and advance at the expense of humanity. Mathematics could override emotions and ultimately, humans. Is this what we really want? More fundamentally, is a human an aggregation of her memories? If so, are those memories valuable in the context of a societal objective function?

It appears that it ultimately comes down to what humans want to maximize. A planet spinning in distress could go on for another 4 billion years but will more likely be eliminated by an asteroid collision in significantly less time. If so, should we not attempt to maximize tactical happiness? Should we not attempt to lift 7.4 billion people from distress and sorrow? Why do the few humans with immense resources not understand the construct of the universe?

It is certainly depressing, but then it has always been.


(2) https://www.sciencemag.org/news/2019/01/people-can-predict-your-tweets-even-if-you-aren-t-twitter

Saturday, January 19, 2019


Recent news (1) that the composition of the microbiome has significant predictive power for the host's age is interesting. The dance between bacteria and other biological entities has been continuing for over 4 billion years. During that time, the first occupants of the planet have substantially expanded their ability to manage and control every other entity. Now, they demonstrate species-wide optimization based on the host's state, and that is exciting and possibly scary. They have shown efficient communication between members of a society, and it is possible they are practicing a broader design.

Bacteria, the most beautiful, potent and strategic biological entity in the universe, may have arrived on Earth hitching a ride on an asteroid. They have been busy ever since. They may have aided the development of more complex biological designs for future harvesting or as enclosures for a sojourn. Such is the power of the single-cell entity that they could eliminate entire species within measurable timescales. Humans, the least robust of available substrates, arrested the advance of a superior entity by cobbling together agents from the soil under their feet. But now, we are regressing to the past as more powerful bacteria arrive with an ambition to wipe out the miserable lot.

The human enclosure has been profitable for bacteria. From the gut, they could influence the organ that sends out instructions across the substrate. And that gave them immense power to design and control large entities at will. Now, species-wide collaboration indicates an understanding of time and the impending demise of the tactical enclosure.

It is ironic that the blue planet has a singular owner.

(1) https://www.sciencemag.org/news/2019/01/bacteria-your-gut-may-reveal-your-true-age

Saturday, January 12, 2019


Voyager 1 and 2, crowning accomplishments of the space agency from a time when there were real people with passion, continue their decades-old work. Data from Voyager 1, the man-made object that left the solar system (2), seems to debunk the hypothesis that dark matter is composed of a large number of small black holes (1). Dark matter, a figment of the physicists' imagination invoked to connect status-quo theory with inexplicable observations, has been elusive. In a world governed by experimentalists, always looking to prove what has been speculated, micro black holes certainly fit the bill. But now, Voyager 1 indicates otherwise.

The human movie has been playing in slow motion. The information content of the universe, likely infinite, has been fed to humans in bite-size chunks. Satisfied with so little, Lord Kelvin declared over a century ago that "there is nothing new to be discovered now and all that remains is more and more precise measurement." The smarter ones at the turn of the century, who substantially changed the slope of human knowledge, remained relatively humble. The current crop, however, is unwilling to abandon age-old ideas, and they cook up theories with such mathematical precision that such ideas are dead on arrival. The engineering and experimental orientation of physicists has dampened the knowledge curve for modern humans.

It is always risky to propose something completely new, and it is a lot safer to prove something that has already been stated by the lords of science. A cult-like culture that seems to scorn religion has all the same characteristics. The modern dark ages, showing little fundamental leap in knowledge, are in full flow.

(1) https://www.sciencemag.org/news/2019/01/aging-voyager-1-spacecraft-undermines-idea-dark-matter-tiny-black-holes

(2) http://www.scientificsense.com/2013/03/good-bye-solar-system-good-bye.html#axzz5cRMINFR0

Tuesday, January 8, 2019

The Economist(s)

Economists are odd people. They love numbers and they pretend to make policies with a level of determinism that does not even exist in physical sciences. They will cut, dice and serve numbers onto your plate like the most skilled chef in a Benihana restaurant. And they top that off with colorful charts and interesting conclusions.

A recent article in the grand magazine appears to conclude that the "retreat of democracy" stopped in 2018 (1). This is an amazing observation. In any other professional field, this would raise many questions, but apparently not in economics. The first question would seek a definition of the phenomenon, "democracy." Sure, this throws a wrench into the neatly tied-up spreadsheets used by the authors to reach such a fascinating conclusion. The next question is whether we knew that "democracy (whatever that is)" was in decline. To stop the "retreat," one has to imagine the retreat first. Colorful charts and spreadsheets are useful, but they typically do not provide any insights. Granted, they look beautiful on the glossy paper of the magazine that travels across the world, spewing wisdom.

Let's step back for a moment and revisit the fundamental question of definition. The largest and arguably the greatest democracies of the world elect idiots and religious fanatics. The smaller and older ones that were pushed back into their corners by their "subjects," elect bureaucrats with no understanding of the world. And, across the rest of the world where democracy reigns, they elect those who want to create policies to fragment and concentrate. So, was democracy in decline before and more importantly, have we stopped the decline?

Humans were democratic before the arrival of the modern variety. The "modern humans" have objective functions specializing in killing, pillaging and domination. This is incompatible with "democracy" as evolved from the Greek philosophers. When the brain ruled, humans found interesting ways to tap into the information content of society. This was lost many decades ago when ignorance and materialism enveloped the declining species.

In spite of all the "numbers," democracy was never in decline, nor have we "stopped the decline of it." We have not had democracy for hundreds of years.

(1) https://www.economist.com/graphic-detail/2019/01/08/the-retreat-of-global-democracy-stopped-in-2018

Monday, January 7, 2019

Smart fish

The recent finding (1) that the archerfish demonstrates facial recognition, likely over and above the most hyped-up AI technology, is a constant reminder that humans are not very smart. Their silicon-infused technologies have led them astray, and their ego has made them blind and incompetent. It is not the count of brain cells nor the ratio of brain weight to body weight that matters. It is how you use it, and the human is the most unlikely candidate for efficient use of her endowment.

As the search, operating system and hardware monopolists with infinite access to capital prove that they can create racism on Twitter (as if that needed proof) and play games like no human has done before, it is important to be aware of basic notions of intelligence. It is not deep mind, however superbly that "great" mind plays StarCraft; it is not alpha go, however awesome a go player she really is; it is not deep blue, in spite of the domination of arbitrary chess boards and guessing games in Jeopardy - intelligence has nothing to do with any of these games that old men and women play. The archerfish can recognize a human face from any angle without any evolutionary specialization, and she has very few resources.

Humans appear to be held back by their own progress. It is ironic, for the contemporary crop seems to have forgotten what got them here. As the whales and parrots mimic human speech, as the great apes sit and wonder what has gone wrong with the universe, as the elephants visit the spot where a companion fell to mourn, as the pigs play and sleep together in a harmonious community, as the dolphins attempt to guide, as the dogs attempt to rationalize, as the octopus hides, as the squirrels save and as the crows make tools, we have a species that hates, kills and destroys its own habitat.

Not too smart.

(1) https://www.scientificamerican.com/article/fishy-smarts-archerfish-can-recognize-human-faces-in-3-d/

Friday, January 4, 2019

Plan S

As ignorant politicians battle each other to make the world worse, there is a movement afoot that could move humanity closer to a positive slope of advancement. It is ironic that as the undemocratic countries sign on to an idea of open science, the "advanced democracies" stay on the sidelines. There are good reasons for this, as money makes many "scientists" go blind and the rest seek fame, tenure and Nobel prizes by hiding their research, funded by the public.

It is time for Plan S (1). Knowledge should not be bottled up, even for money and fame. This is especially true if such money and fame came from publicly funded research. The racket has been widespread - authors, publishers and "peer reviewers" - an in-bred community who do not understand the world at large. They assert they are "elite" and know everything. They attempt to teach at universities with a singular performance metric that counts the number of "publications." Such manufactured research, akin to sausage making, moves us backward, not forward.

It is time for Plan S. For nearly half a million years, Homo sapiens progressed across unknown terrains by sharing information freely. They collaborated, brainstormed and advanced thinking. The academic sham has put an end to human progress as its members protect marginal increments in knowledge to extract economic rents. They pretend to teach, but only those who do not understand their "research." They "advise," but only those who do not recognize the shallowness of their knowledge. They "influence" policy as if they can see the future, and they look down on the rest.

It is time for Plan S. If you discovered something with public money, it is for the public and not for you.

(1) https://www.sciencemag.org/news/2019/01/will-world-embrace-plan-s-radical-proposal-mandate-open-access-science-papers