Tuesday, March 19, 2019

AI and the weakest link


The recent debacle in aircraft design is a constant reminder that software engineers and "data scientists," excited by the possibilities, can create havoc in many different industries. In transportation, as autopilot systems get smarter, they could take over virtually everything a vehicle does, terrestrial or otherwise. What the designers seem to have missed is that an aircraft is a conglomeration of data-transmitting mechanical sensors and sophisticated software. Traditional engineering education would have taught the designers that a system is only as good as its weakest link, but the younger ones may have skipped those courses. Here, faulty data from an old technology may have confused the brain. There are multiple issues to consider.

First, the design of systems needs to be holistic. This is easier said than done, as a complex vehicle is designed in parts and assembled. The teams who work on these components may have different skill sets, and the overall blueprint may not account for the biases each team designs in. For example, if the brain is designed with little flexibility to discard faulty data, the implicit assumption is that faulty data is unlikely. But if the data comes from mechanical devices with no embedded intelligence, it is almost a certainty that faulty data will arrive at some point in the sensor's life. Two recent aircraft failures in Asia and Africa, and one much earlier over the Atlantic, seem to have been caused by bad sensors feeding bad data to a "sophisticated AI agent" with little capability to differentiate good data from bad. So either the sensors and other mechanical devices in the system need to be smart enough to recognize their own fallibility, or the central brain has to recognize when it is fed bad stuff. There is a gap in engineering education, which has moved in the direction of high specialization without an overall understanding of systems design and risk. This is going to surface many issues across industries, from transportation and manufacturing to healthcare.
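The point about a central brain that cannot tell good data from bad can be made concrete. Below is a minimal sketch, not any actual avionics design: three hypothetical redundant sensors are cross-checked by a median vote, and the system refuses to produce a value when there is no consensus, rather than trusting a single faulty reading.

```python
# Minimal sketch of cross-checking redundant sensor readings before trusting
# them. The three-sensor setup, the 5-degree agreement limit, and the voting
# rule are illustrative assumptions, not taken from any real aircraft system.

from statistics import median

DISAGREEMENT_LIMIT = 5.0  # max plausible spread between sensors (illustrative)

def fused_reading(readings):
    """Return a trusted fused value, or None when sensors disagree too much."""
    m = median(readings)
    # Discard readings that stray too far from the median vote.
    good = [r for r in readings if abs(r - m) <= DISAGREEMENT_LIMIT]
    if len(good) < 2:          # not enough agreement: refuse to decide
        return None
    return sum(good) / len(good)

print(fused_reading([2.1, 2.3, 2.2]))   # sensors agree: fused value
print(fused_reading([2.1, 2.3, 47.0]))  # one faulty sensor: outlier dropped
print(fused_reading([2.1, 40.0, 80.0])) # no consensus: None, defer to human
```

The design choice worth noting is the `None` branch: a system that can say "I don't know" pushes the decision up to a human or a safe mode instead of acting on a bad input.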

Second, the human is still the best-known risk mitigator, with her brain fine-tuned over the millennia to sense and avert danger. In transportation, disengagement has to be a fundamental aspect of design. Although it may be tempting to sit back while an aircraft takes off and lands, or to read "Harry Potter" behind the wheel of an autonomous terrestrial vehicle, these actions are ill-advised. The human has to expect the machine to misbehave and, at the very least, be ready to take over completely at any point in time. Excited engineers may think otherwise, but we are nowhere close to fail-safe AI. Let's not kid ourselves: writing code is easy, but making it work all the time is not. Educational institutions will do the next generation of engineers a disservice if they impart the idea that AI can do without the human.

Transportation is just one industry; the problems witnessed span every industry today. In healthcare, for example, AI is slowly percolating, but the designers have to remember that there are weak links there too. Ironically, in the provider arena, the weak link is the human, who "inputs" data into aging databases, sometimes called Electronic Medical Records (EMR) systems. Designed a couple of decades ago by engineers with no understanding of healthcare, they are receptacles of errors that can bring emerging AI and decision systems to their knees. Anyone who designs AI-driven decision systems in these environments has to be acutely aware of the uncertainty in inputs caused by humans, who are notorious for making mistakes with computer keyboards (or even voice commands), and by database containers built on old technologies. So designs here need to systematically consider disengagement when the AI agent is unable to decipher the data.
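The disengagement idea above can be sketched in a few lines. This is a toy illustration under loud assumptions: the record fields, the plausibility scoring, and the threshold are all hypothetical, not drawn from any real EMR system.

```python
# Sketch of "disengage when the input is undecipherable": an automated
# decision step that abstains and routes to a human when a hypothetical
# EMR record looks implausible. All fields and thresholds are made up.

def plausibility(record):
    """Crude input-sanity score in [0, 1] for a hypothetical patient record."""
    score = 1.0
    if not (30 <= record.get("heart_rate", -1) <= 220):
        score -= 0.5                      # out-of-range vital sign
    if record.get("age", -1) < 0:
        score -= 0.5                      # impossible value, likely a typo
    return max(score, 0.0)

def decide(record, threshold=0.8):
    """Disengage instead of guessing when the input cannot be trusted."""
    if plausibility(record) < threshold:
        return "DISENGAGE: route to human review"
    return "proceed with automated recommendation"

print(decide({"heart_rate": 72, "age": 54}))    # clean input: automate
print(decide({"heart_rate": 7200, "age": 54}))  # keyboard slip: hand off
```

The point is structural, not clinical: the fallback path to a human is designed in from the start, rather than bolted on after the AI agent has already acted on garbage.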

In manufacturing, led by data collection enthusiasts from the 90s, older database technologies, sometimes elegantly called "Enterprise Resource Planning" (ERP) systems, dominate. They have been "warehousing" data for decades with little understanding of what it means. Now "Hannah" and her cousins seem to have gotten religion, but again there is a problem here. Cutting and dicing data to make pretty pictures for decision-makers does nothing to improve decision-making or to mitigate risk. The weak link here is the technology, designed and maintained by those who believe businesses are run on the collection, aggregation, and reporting of data. Unfortunately, successful businesses have moved on.

AI is a good thing, but not in the absence of logical thinking and systems design. Intelligence is not about the routine but about the ability to act when encountering the non-routine. As the software and hardware giants sell their wares, in the cloud and elsewhere, they have to understand the perils of bad and rushed technology. It is great to fool a restaurant with a simulated voice, great to demonstrate that "machine learning" on Twitter will create a racist, and great to tag and collate music fast, but none of these activities is going to propel humanity to the next level. Being good at search, operating system design, or hardware does not automatically make these companies "experts" in an area that is erroneously called Artificial Intelligence. There is nothing artificial about intelligence. Machines have the potential to be a lot more "intelligent" than humans. If anybody has any doubt, just take a look at the nation's capital and imagine replacing the policy-makers with dumb machines; they would likely perform infinitely better. For the rest of us, reality is still an important concept, and there we have to make sure the developments move in a beneficial direction.

Intelligence, ultimately, is about decision-making. Humans have been pretty good at it, barring a few narcissistic and mentally ill specimens in full view. They had to survive the dicey world they were handed when they climbed down from the trees and stood upright in the African Savannah for the first time. Bad decision-making would have instantly eliminated them. They survived, albeit squeezing through a harsh bottleneck with fewer than 10,000 specimens. Later, single-cell organisms almost wiped them out on multiple occasions, but they survived again. Now they encounter a technology discontinuity, something so foreign to their psyche that the dominant reaction has to be rejection. And, for the most part, it is. But their brains have morphed into a quantum computer, able to think about possibilities. This could be their Achilles' heel, but then, life is not worth living without taking risks.

Educational institutions, still chasing the latest trends to make money, have the ultimate responsibility to get this right. To bring humanity to a Level 1 society, we need to move past our instincts, shaped by tactical objective functions driven by food and sex, and embrace intelligence. It is likely that machines will lead humanity out of its perils.




Saturday, March 16, 2019

Micro customization

Recent news (1) that a gastric resident delivery mechanism can deliver reliable, sustained doses of agents over the long term is important. Innovation in chemical agents has moved ahead of the mechanisms that would deliver them at the right time, in the optimum dose, by the best route, and to the most receptive site. The ability to optimally deliver an agent is likely more important than the agent itself. In the absence of such delivery mechanisms, manufacturers have stuck to the original blueprint: mass manufacturing of pills in the singular dose that shows the best therapeutic index in the population. Personalized medicine has thus remained elusive and, more importantly, outside the business models of manufacturers.

It may be changing. Ironically, providers have moved ahead of the other participants in the healthcare value chain in implementing personalized medicine. Recent advancements in Artificial Intelligence and the availability of abundant data have better positioned providers to understand, treat, and manage patients, individual by individual. If delivery mechanisms improve and become individually customizable, we can rapidly move to the next level of personalized medicine. Here, we can envision devices that measure, decide, and disburse micro doses to assure optimum delivery and complete compliance. Intelligent devices could be just around the corner, taking advantage of IoT. With embedded intelligence on board, such devices could not only operate as initially primed but also self-learn and adjust over time. A couple of decades from now, medical professionals will likely view the current regime as completely archaic.
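The measure-decide-disburse loop such a device would run can be sketched as a simple feedback controller. This is purely illustrative: the biomarker, target, gain, and dose bounds are invented numbers, not clinical guidance, and a real device would need far more than a proportional rule.

```python
# Toy sketch of a closed-loop micro-dosing cycle: measure a biomarker,
# decide a dose adjustment in proportion to the error, disburse within
# hard safety bounds. All quantities are hypothetical and unitless.

def next_dose(current_dose, measured, target, gain=0.1,
              min_dose=0.0, max_dose=5.0):
    """Nudge the dose toward the target level, clamped to safe limits."""
    error = target - measured
    dose = current_dose + gain * error
    return min(max(dose, min_dose), max_dose)  # never exceed safe bounds

# Simulated cycles with a toy response model (not real pharmacodynamics).
dose, level = 1.0, 80.0
for _ in range(20):
    dose = next_dose(dose, level, target=100.0)
    level = level + 2.0 * dose - 0.05 * level
```

The clamp is the part that matters for the weakest-link argument earlier in this blog: even a self-learning device should carry dumb, hard limits it can never talk itself out of.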

More generally, any business driven by scale and a blind adherence to singular specifications will have great difficulty surviving in the future. The technology is readily available, not just for mass customization but for individual intervention. This is a regime change that will affect every industry and every business. Getting ahead of this rapid transformation is a necessary condition for success.

(1) A gastric resident drug delivery system for prolonged gram-level dosing of tuberculosis treatment. Verma et al. http://stm.sciencemag.org/content/11/483/eaau6267

Sunday, March 10, 2019

Type 2 Debate

A recent article (1) calls into question the current diagnostic and treatment regimens for those deemed at risk of progressing to Type 2 Diabetes. The arguments are fair and the conflicts are clear. But what the article misses is the risk/return trade-off for this wretched condition. The total cost of diabetes in the US alone is fast approaching $0.5 trillion. With India and China running fast toward the precipice, happily feeding on Western food, we have a significant issue to deal with.

So, there are two important questions. First, do we have a reasonable idea of the risk of incidence and progression? Age-old heuristics such as fasting blood sugar have been shown to be utterly useless. The current golden metric, A1c, is likely equally flawed. The answer to this question appears to be an emphatic no. And second, are the people in charge of prescribing the thresholds for diagnosis and prevention influenced by factors outside science? Unfortunately, the answer to this is likely yes.

Medicine has always shunned data and analytics in making decisions. This has set the field so far behind that the US spends close to 20% of its GDP for worse outcomes. Aging regulators, still using century-old statistical measurements to make decisions, make this worse. The manufacturers, bloated with conventional statisticians ever ready to prove what the regulators seek, are progressing backward in time. So it is not surprising that we have ended up in a confusing muddle over a disease state that affects close to 20% of the world's population.

Now what? It is easy to show statistics claiming that only 2% of pre-diabetics progress to diabetes. Since we are unclear about the "precise" demarcation between pre- and post-diabetes, a more relevant question is what the total cost is on the system. A few thousand years ago, humans invented agriculture, and it clearly had an impact on their health. Humans, meat eaters for millennia, moved to stuff their bodies are not designed to process. In the last hundred years, they have had too much to eat, and that will certainly create problems, not only for their organs, efficiently designed for very little food, but also for their infrastructure, designed for a slim body. Half the world's population now exceeds the parameters of the original design.

Medicine has to embrace modern technology. Doctors should not simply follow rules put down blindly, regulators have to open their windows and realize that the "p-value" is a dead construct, and manufacturers need to understand that their job is not a constrained optimization based on what the regulators think.

It is time for the industry to move on.


(1) https://www.sciencemag.org/news/2019/03/war-prediabetes-could-be-boon-pharma-it-good-medicine

Wednesday, March 6, 2019

Health

News (1) that a group of scientists is attempting to build a health system in Madagascar from scratch, driven by data, analytics, and pure heart, is welcome. Health, a non-tradable asset, has kept modern humans bottled up. Electing dumb leaders has taken them further behind. The economics of health is a complex question, and until scientists and medical professionals take it on, we will swirl around in political ignorance and impasse.

Humans did not have to worry about health until recently. Their able bodies were eaten by cats or crushed by heavier animals well before any intervention against single-cell organisms or auto-immune diseases was required. It is different now. They are living way past the design horizon. To make matters worse, they have been carrying unwanted weight, putting unexpected levels of stress on their joints. Their organs have been failing from overuse, and their infrastructure is crumbling, unable to hold them up. Modern medicine has acted as a band-aid to prolong life, but it has not contributed to maximizing the utility of the individual or society.

A big part of this issue has to do with the inadequacies of systems that apparently intelligent people designed. When there are no markets, incompetence rises. When incentives are misaligned, humans behave exactly as expected, maximizing short-term returns. And humans have a tendency to gang up on the weak and the weary, as they learned through eons of evolution. It is a perfect storm of incompetence, ignorance, and non-market designs that could sink humanity.

It is time to step out. Experiment, revolutionize and let information and technology guide future actions.

(1) http://science.sciencemag.org/content/363/6430/918

Wednesday, February 27, 2019

Reducing software by hardware

New transistor designs (1) may help surpass the daunting constraints faced by contemporary software as we move toward solving highly specialized problems. The dominance of software over general computing architectures may be coming to an end. Until architectures can practically incorporate quantum computing, there is an impasse. And quantum tunneling has brought conventional transistor designs to an asymptotic and predictable lethargy.

It is time to return to customized hardware for specialized problems. This will again require more engineers and fewer programmers and mathematicians. Educational institutions, grand masters at following the latest trends even as they teach students "strategy," are incompetent at understanding what tomorrow will bring. And that will certainly dampen innovation in the required direction.

However, this could be a short-term phenomenon. Either of two directions, squeezing more out of silicon or finding better materials, may yield interesting designs that allow a return to software in the future. The latter has much more potential but is also riskier. That means the giants of the industry will shy away from it as they try to tie up the quarterly numbers for their shareholders. And some of them may realize that the "cloud" is not a solution, just a sojourn.

The hardware regime is beginning again. We have to give electrons a more direct way to shuttle for efficient computing. Anything less will tie us down to the status quo.

(1) http://advances.sciencemag.org/content/5/2/eaau7378

Sunday, February 24, 2019

AI for the brain

Consciousness, still an undefined construct, may emerge from the brain's attempt to implement AI on its automation infrastructure. The predictive brain (1) often fails, and consciousness may provide an overarching control over manual and ad-hoc decision-making, which is dangerous. As the early human stood up in the African Savannah for the first time, she had to predict well to thrive. But when the predictions failed, it was fatal. The brain's AI, consciousness, was a necessary condition for humans to tackle the hostile conditions of their early progression.

For modern humans, however, it has become less relevant. The consciousness content of the brain has been declining for centuries, for many reasons. First, the brain has been dumbed down, with no need to predict to survive; instead, it has been relegated to an instrument that does arithmetic like a calculator. Second, the noise fed into the brain from a plethora of external devices has forced specialization in areas such as translation, optimization, and prioritization. These are programmatically driven, with no scope for consciousness. Finally, it is possible that modern humans have much simpler objective functions, driven largely by material wealth, that do not require the activation of consciousness.

If so, the advancements in brain design during the early years of human history may have become a liability later. As science took hold of the human psyche, humans lost consciousness-led abstract thought. As the cave paintings in France indicate, there was an early right-brain dominance in humans as they struggled through their daily routines. Only a handful of specimens have been able to dream and predict simultaneously in the last few centuries. And now, with the left brain in complete control, humans have no use for consciousness. As the AI enthusiasts burn the midnight oil to make computers look and act like humans, they may want to consider that their task will become easier as time progresses. In the future, humans will automatically act like computers, without consciousness. So there is no need for AI in computers, as humans devoid of brain AI will proxy for the computers of the future.

It is ironic. As humans struggle to teach computers about themselves, they have become less than computers.


(1) https://www.scientificamerican.com/article/the-brains-autopilot-mechanism-steers-consciousness/




Tuesday, February 12, 2019

Entanglement of left and right brain

Humans seem to survive with two opposite personalities in their brain. The left and right hemispheres accumulate and process data differently; it is almost as if two different people are living in the same context, simultaneously. In most cases, it appears one or the other dominates unilaterally and consistently, providing an unchanging outward personality. In a limited number of cases, humans seem able to switch back and forth, demonstrating different personalities and thought processes, perhaps one at work and the other at home.

An entanglement of the left and right brain could provide humans a higher context. The left brain, typically analytical and precise, constantly battles the right, shy and holistic. The left, a master at tactics, does not typically yield to strategy, which appears squishy and undefined. The time lag that comes with the tenuous hardware connection between the two often leads to confusion, and sometimes to disease.

But if one could get the two sides entangled, perhaps thoughts and ideas could be transmitted without a loss in time. If so, humans could have an integrated brain - that combines tactics with strategy, precision with uncertainty, optimization with satisfaction, utilitarianism with emotion, wealth with knowledge, ideas with processes, bad with good, past with future, ignorance with science, technology with ordinary decisions, insurance with health of the populace, board rooms with manufacturing plants, teachers with students, Washington with the heartland, East with the West, business with academics, politicians with those who run away from it, the guy with beautiful artificial hair and the hairless, those who spent millions to make their skin look better before heading out to the red carpet with those who do not have enough for dinner.

Entangle the two sides of your brain. If you do, you could propel humanity further.

Sunday, February 10, 2019

Poop for depression

It has long been known that the microbiome has a significant influence on the human brain. Now, more tactical observations (1) confirm that certain missing species in the gut could lead the host to depression. This is an important finding that may move brain research to focus on lower parts of the body.

The gut has been an unavoidable and most important aspect of large biological systems, including humans. There, they converted plant and animal matter to energy, powered their ever-agreeable computer, and went to work. Most systems had very efficient brain designs, just capable of keeping the mechanical systems that controlled the body in tune. Then the freak, the human, showed up, with significant excess capacity in an organ that tends to get bored. However, the small ones in the gut always held the strings. It is possible that they are using the quantum computer upstairs for research, without the knowledge of the host.

Fecal transplant seems to solve many problems, lately including CNS issues such as depression. The predictive power of the microbiome over the occurrence and amelioration of human disease states is higher than what can be achieved with the compendium of medical knowledge based on foreign chemical agents. In fact, chemical agents that destroy the microbiome, albeit able to show short-term benefits, lead to long-term deleterious effects on the system. Antibiotics certainly fall into this category. More importantly, these agents seem to have accelerated the decline of beneficial bacteria while simultaneously increasing the potency of the bad ones.

Medicine may need to focus not on the host but on what the host carries. The latter, perhaps accounting for over half of the live cells in the system, appears to control almost everything, including the host's brain.


(1) https://www.sciencemag.org/news/2019/02/evidence-mounts-gut-bacteria-can-influence-mood-prevent-depression