
Sunday, March 10, 2019

Type 2 Debate

A recent article (1) brings the current diagnostic and treatment regimens for those deemed at risk of progressing to Type 2 diabetes into question. The arguments are fair and the conflicts are clear. But what the article misses is the risk/return trade-off for the wretched condition. The total cost of diabetes-related care in the US alone is fast approaching $0.5 trillion. With India and China running fast toward the precipice, happily feeding on Western food, we have a significant issue to deal with.

So, there are two important questions. First, do we have a reasonable idea of the risk of incidence and progression? Age-old heuristics such as fasting blood sugar have been shown to be utterly useless. The current golden metric, A1c, is likely equally flawed. The answer to this question appears to be an emphatic no. And second, are the people in charge of prescribing the thresholds for diagnosis and prevention influenced by factors outside science? Unfortunately, the answer to this is likely yes.

Medicine has always shunned data and analytics in making decisions. This has set it so far behind that the US spends close to 20% of its GDP for worse outcomes. Aging regulators, still using century-old statistical measurements to make decisions, make this worse. The manufacturers, bloated with conventional statisticians ever ready to prove what the regulators seek, are progressing backward in time. So, it is not surprising that we have ended up in a confusing muddle regarding a disease state that affects close to 20% of the world population.

Now what? It is easy to show statistics that claim only 2% of pre-diabetics progress into diabetes. Since we are unclear as to the "precise" demarcation between pre- and post-diabetes, a more relevant question is what the total cost is on the system. A few thousand years ago, humans invented agriculture - and it clearly had an impact on their health. Meat eaters for millennia, humans moved to stuff their bodies are not designed to process. In the last hundred years, they have had too much to eat, and that will certainly create problems not only for their organs, efficiently designed for very little food, but also for their infrastructure, designed for a slim body. Half the world's population now exceeds the parameters of the original design.
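To see why the 2% figure muddles any threshold-based diagnosis, a back-of-the-envelope calculation helps. The sketch below takes only the 2% progression rate from above; the sensitivity and specificity of the A1c-style cutoff are assumptions for illustration, not published test characteristics.

```python
# Sketch: positive predictive value of a threshold-based pre-diabetes screen
# when progression is rare.  The 2% progression rate echoes the statistic
# quoted above; the cutoff's sensitivity and specificity are assumed here
# purely for illustration.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Bayes' rule: P(actually progresses | flagged by the test)."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

prevalence = 0.02      # assumed: 2% of those labeled pre-diabetic progress
sensitivity = 0.80     # assumed cutoff characteristics
specificity = 0.70

ppv = positive_predictive_value(prevalence, sensitivity, specificity)
print(f"Chance a flagged patient actually progresses: {ppv:.1%}")
# Roughly 1 in 20 under these assumptions, which is why the total cost imposed
# on the system by treating everyone flagged is the more relevant question.
```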

Medicine has to embrace modern technology. Doctors should not simply follow rules put down blindly, regulators have to open their windows and realize that the "p-value" is a dead construct, and manufacturers need to understand that their job is not a constrained optimization based on what the regulators think.

It is time for the industry to move on.


(1) https://www.sciencemag.org/news/2019/03/war-prediabetes-could-be-boon-pharma-it-good-medicine

Wednesday, March 6, 2019

Health

News (1) that a group of scientists is attempting to build a health system in Madagascar from scratch, driven by data, analytics, and pure heart, is welcome. Health, a non-tradable asset, has kept modern humans bottled up. Electing dumb leaders has taken them further behind. The economics of health is a complex question, and until scientists and medical professionals take it on, we will swirl around in political ignorance and impasse.

Humans did not have to worry about health until recently. Their able bodies were eaten by cats or crushed by heavier animals well before any intervention against single-celled organisms or autoimmune diseases was required. It is different now. They are living way past the design horizon. To make matters worse, they have been carrying unwanted weight, putting unexpected levels of stress on their joints. Their organs have been failing from overuse and their infrastructure is crumbling, unable to hold them up. Modern medicine has acted as a band-aid to prolong life, but it has not contributed to maximizing the utility of the individual or society.

A big part of this issue has to do with the inadequacies of systems that apparently intelligent people designed. When there are no markets, incompetence rises. When incentives are misaligned, humans behave as they are expected to, maximizing short term returns. And, humans have a tendency to gang up on the weak and the weary as they learned through the eons of evolution. It is a perfect storm - incompetence, ignorance and non-market designs - that could sink humanity.

It is time to step out. Experiment, revolutionize and let information and technology guide future actions.

(1) http://science.sciencemag.org/content/363/6430/918

Sunday, February 24, 2019

AI for the brain

Consciousness, still an undefined construct, may emerge from the brain's attempt to implement AI on its automation infrastructure. The predictive brain (1) often fails, and consciousness may provide an overarching control over manual and ad-hoc decision making, which is dangerous. As she stood up in the African savannah for the first time, she had to predict well to thrive. But when the predictions failed, it was fatal. The brain's AI, consciousness, was a necessary condition for humans to tackle the hostile conditions they faced early in their progression.

For modern humans, however, it has become less relevant. The consciousness content of the brain has been declining for centuries for many reasons. First, the brain has been dumbed down with no need to predict to survive - instead, it has been relegated to an instrument that does arithmetic like a calculator. Second, the noise fed into the brain from a plethora of external devices has forced specialization in certain areas such as translation, optimization, and prioritization. These are programmatically driven with no scope for consciousness. Finally, it is possible that modern humans have much simpler objective functions, driven largely by material wealth, that do not require the activation of consciousness.

If so, the advancements in brain design during the early years of human history may have become a liability later. As science took hold of the human psyche, humans lost out on consciousness-led abstract thought. As the cave paintings in France indicate, there was an early right-brain dominance in humans as they struggled through their daily routines. Only a handful of specimens have been able to dream and predict simultaneously in the last few centuries. And now, with the left brain in complete control, humans have no use for consciousness. As the AI enthusiasts burn the midnight oil to make computers look and act like humans, they may want to consider that their task will become easier as time progresses. In the future, humans will automatically act like computers, without consciousness. So there is no need for AI in computers, as humans devoid of brain AI will proxy for the computers of the future.

It is ironic. As humans struggle to teach computers about themselves, they have become less than computers.


(1) https://www.scientificamerican.com/article/the-brains-autopilot-mechanism-steers-consciousness/




Saturday, December 22, 2018

High-throughput Screening for Energy


The battle for innovation will be won at the intersection of materials and information. The field has been lagging for nearly a century as scientists focused on incremental improvements to established media. Now, there appears to be hope for progress at an accelerated pace (1). Well established techniques in fields such as life sciences could boost productivity in other areas.

Humans have gotten used to relying on nature for materials for half a million years. In the modern world, that substantially curtails their ability to move further. They have been given a matrix of simple molecules and the capability to combine them at will to create new properties and applications. They have been misguided forever, trying to make gold from charcoal and attempting to fuse hydrogen in a cold test tube. Industries such as pharmaceuticals that claim to have found "new agents" largely relied on tree barks and soil. It was nature that made humans tick, albeit at a very uninteresting level.

The ability to custom develop materials to fit desired properties will be an indication of human advancement. It is not the ability to code, to send mechanical toys to nearby planets, to keep the weak and the weary on life support systems, to devise theories of nothing, to postulate the growth and decline of countries, markets and cryptic currencies, to create humanoids without consciousness, to make vehicles that move at the speed of sound or to inject poison to the political swamps.

Next-generation materials will redefine energy and the future of the "tiny blue dot."


(1) https://www.sciencemag.org/news/2018/12/megalibraries-nanomaterials-could-speed-clean-energy-and-other-grand-challenge-targets

Monday, December 10, 2018

It is all in your mind


An experiment at Stanford (1) appears to demonstrate the power of the placebo effect. Most pharmaceutical research clearly points to the effects of believing and, as suggested in the study, it has implications for how information is captured and disseminated through tests. Humans are susceptible to suggestion and can completely rewire the infrastructure of their body from their brain. This should have had survival benefits early on, as the village elder may have segregated people into random groups and reinforced one side positively and the other not. Those who were lucky enough to be in the right group started believing and ultimately succeeded, proving the point.

An over-tested and over-treated contemporary population is suffering not only from ineffective treatment regimens but also from the negative effects imparted on their bodies by their brains. As medical schools get more technical in their educational stance, they have to remember that the weakest link in the chain remains the patient, who could easily fight technology. In this context, it may be time to redesign education from the bottom up with more focus on how patients internalize information. Ultimately, it is the content of revealed information that drives outcomes. As technology advances we are likely to be exposed to more information, and the effects of such exposure could completely negate any positive impact of advanced treatments.

For a variety of reasons, humanity is at a crossroads. On one hand, we have accelerating technologies that promise to make everything better; on the other, they conflict with the psyche of ordinary human beings. With a harsh timing constraint, once an individual is sliding down the slope, it is nearly impossible to reverse the trend.

Everything appears to come back to how society manages and uses information.


(1) https://www.sciencemag.org/news/2018/12/just-thinking-you-have-poor-endurance-genes-changes-your-body


Thursday, December 6, 2018

GammaGo

News (1) that AlphaZero can successfully learn chess, shogi, and Go through self-play is interesting. It is symptomatic of trends in AI largely relying on raw computing power. Typically, innovation lags when resources become infinite, and we have early signs of trouble here. Reinforcement learning through self-play is not a new concept - it has been here from the advent of computers. It is just that not many have had access to the computing resources necessary to create demonstrable prototypes.

More importantly, this approach is unlikely to culminate in cognition and consciousness, the possible end game. It is clearly the case that computers can create usable heuristics by repeated experiments, just as humans do. However, those heuristics are generated within a framework of rules that were specified ex ante. The DeepMind enthusiasts argued a few years ago that their computer found a "new way" to play an ancient game. It is quite possible that, given a large number of experiments, computers can learn from cases that are outside the norm. But to label this "creativity" is a stretch. It is more an accident than an invention. One could argue that humans have benefitted handsomely from accidents in the past, so why not computers? This is true, and so the general question is whether computing resources running amok with an infinite number of repeated experiments can provide learnings from accidents at a faster rate than humans are capable of.
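Self-play is indeed as old as the field; a toy version makes the point that the "learning" is heuristic generation inside rules fixed ex ante. The sketch below is a minimal tabular example on a one-heap Nim game, invented here for illustration - it is not DeepMind's algorithm, only the general shape of the idea.

```python
import random
from collections import defaultdict

# Sketch: tabular value learning through self-play on one-heap Nim
# (take 1-3 stones; whoever takes the last stone wins).  The rules are fixed
# ex ante; the "learning" merely fills in a value table inside them.

Q = defaultdict(float)          # Q[(stones_left, action)] -> value estimate
ALPHA, EPSILON, HEAP = 0.1, 0.1, 10

def legal(stones):
    return [a for a in (1, 2, 3) if a <= stones]

def choose(stones):
    if random.random() < EPSILON:                            # explore
        return random.choice(legal(stones))
    return max(legal(stones), key=lambda a: Q[(stones, a)])  # exploit

for _ in range(50_000):                                      # self-play episodes
    stones, history = HEAP, []
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    reward = 1.0                         # the player who moved last won
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward                 # alternate players' rewards backwards

# Greedy policy after training: from 10 stones it typically learns to take 2,
# leaving a multiple of four - the known optimal heuristic for this game.
print({s: max(legal(s), key=lambda a: Q[(s, a)]) for s in range(1, HEAP + 1)})
```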

It is tantalizing. What the AI leaders need to understand, however, is that we have been here before. A critical look at the approach may be beneficial. We have known that we could predict from historical data ever since math was invented, and we have known that repeated search of the design space could yield usable results since the advent of computers. The question is whether we have done anything new except pouring money into scaling conventional technologies. Stacking countless "computers" in the "cloud" on the promise of AI has many drawbacks.

It is time to go back to the drawing board. A field replete with engineers seems to be going in the wrong direction. As innovation lags in materials and quantum processing, they are creating mountains of silicon to show that heuristics generation is possible. The mathematicians locked up in low-productivity areas such as finance may be well advised to go back and think.

Thinking has a low premium currently and that is problematic.


(1) http://science.sciencemag.org/content/362/6419/1140

Tuesday, September 25, 2018

The fragmentation of knowledge

Philosophers of yesteryear have argued that knowledge emanates from integration and not fragmentation. Even the early scientific disciplines, such as religion, attempted to integrate and simplify information into a set of holistic heuristics. That has been a highly successful process for most of human history. But in the modern world, largely dominated by contemporary scientific disciplines, fragmentation reigns supreme. This is problematic.

Information does not unambiguously lead to knowledge. Diving deep into silos with impenetrable walls has been the defining characteristic of modern education. As one gets deeper and deeper into highly structured information, the chance of creating knowledge largely disappears. This is because conventional academic metrics favor the known over the less certain, and the lack of integration across disciplines aids tunnel vision. Here, the publishing gurus who make trivial incremental improvements to the known win, and those who seek the periphery perish. Here, the managers of businesses driven by measurable tactics win, and those with stars in their eyes lose predictably. Here, politicians who can appeal to emotions win, and those who make cogent arguments that could advance humanity lose. Here, artists who produce conventionally expected work win, and those who explore emerging areas lose. Here, musicians who can scream and babble win, and those who integrate lyrics with beauty lose. Here, those who want to make the world better lose, and those who want to dominate it win.

Knowledge is not trivial and few achieve it.




Friday, August 24, 2018

The end of "Machine Learning."


Machine learning, an obnoxious term that simply means statistical modeling, has the potential to lead many budding data scientists, and the universities clamoring to create programs that support it, down blind alleys. Machines do not learn and, apparently, those immersed in this concept do not either. In the coming decade, "machine learning" could create a significant drag on the economy as the hype is pumped up by "reputable" academic institutions, software companies and even politicians.

Regression was the "original" machine learning. The statistical modeling platforms have added all sorts of ancient mathematics in neat little packages they sometimes even call Artificial Intelligence. But calling arithmetic by better names does not improve anything, let alone intelligence. What is most disappointing is that universities have created entire programs around this "fake news," as they have seen favorable economics and the possibility of their graduates skating to the C-suite on the back of degrees. Academic integrity used to be important, and as the crop of professors who loved to advance knowledge vanishes behind time, we are approaching a regime that will devalue education. We have education-rendering casinos, with all the adornments that surpass the real thing, and the bricks in the wall they manufacture are going to be incompetent to face the future.
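To make the "regression was the original machine learning" claim concrete, here is a minimal sketch: a gradient-descent "training loop" on synthetic data that, stripped of the branding, recovers exactly what ordinary least squares has given us for two centuries. The data and parameters are invented for illustration.

```python
import numpy as np

# Sketch: a "machine learning training loop" that is, underneath, plain
# least-squares regression.  Synthetic data; the point is the equivalence,
# not the model.

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=200)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, size=200)     # true line: y = 3x + 0.5

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):                                 # "training"
    pred = w * x + b
    grad_w = 2 * np.mean((pred - y) * x)
    grad_b = 2 * np.mean(pred - y)
    w -= lr * grad_w
    b -= lr * grad_b

w_ols, b_ols = np.polyfit(x, y, 1)                    # the closed-form answer
print(f"gradient descent: w={w:.3f}, b={b:.3f}")
print(f"least squares:    w={w_ols:.3f}, b={b_ols:.3f}")   # essentially identical
```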

Hype has a negative value. Academic institutions should understand this. If they do not, the only competitive advantage we possess, graduate education, could be at risk. Perhaps it is time we shut down the .ai domain names and academic courses designed to appeal to pure hype.





Monday, August 13, 2018

Extending the brain

A recent publication (1) that describes a brain-machine interface (BMI) to control a robotic arm simultaneously with one's natural arms opens up interesting possibilities for maximizing brain utilization. By a quirk of nature, humans have been endowed with an organ that far surpasses their routine needs to live and die. With simple objective functions, humans have substantially sub-optimized this endowment. But now, there may be mechanical means to keep the organ interested.

There has been a lot in the literature about the inability of humans to multitask. However, it is possible that multitasking improves with practice just like anything else (2). The quantum computer they carry, albeit an energy hog, requires little to maintain from an infrastructure point of view. And the calorie requirement to keep it going is very small in the grand scheme of things. Hence, maximizing the use of the brain is an important consideration for every human and for humanity in general.

Brain utilization shows an upward trend as people network across the world, surpassing the constraints offered by race, religion, and ignorance. This electronic extension of the brain has been unambiguously good for humanity but it feels like there is still a lot in the tank for every individual. If she can multiply limbs by mechanical multitasking it is likely that such an activity will grow neurons upstairs with unpredictable beneficial effects in the long run.

Extending the brain - mechanically and electronically - is dominant for humans. That will allow them to get over all the tactical problems currently plaguing humanity.


(1) BMI control of the third arm for multitasking: http://robotics.sciencemag.org/content/3/20/eaat1228

(2) 



Sunday, May 27, 2018

Brainy bitter

A recent study (1) suggests a novel way to control blood sugar and, more generally, reduce the complications arising from type II diabetes, the disease responsible for over half of healthcare costs around the world. Meat-eating Homo sapiens found agriculture recently, and an overdose of carbs now threatens to create a negative slope in their expected lifespan for the first time in human history. As millions around the world take to processed food and sugar-infused drinks as their economies improve, the world is moving closer to a healthcare precipice. People lose feeling in their feet, lose sight and movement, and even face amputation of limbs, not to mention the complete loss of life from heart attacks and strokes, due to a simple condition - excess sugar and insulin in their system. The miracle of insulin has kept them going beyond expiry, but all attempts at delivering the drug in easier ways have failed.

Now, the intriguing new study shows how the brain could play an important part in the regulation of glucose (1) in the body. Dopamine release induced by deep brain electrical stimulation seems to improve insulin sensitivity, the loss of which portends the arrival of the wretched disease. Loss or lack of production of dopamine appears to reduce insulin sensitivity, likely leading to type II diabetes over time in non-diabetics. The pancreas, which is responsible for optimally producing insulin to break down the bad intake, is totally within the control of the brain, which appears to behave differently based on the amount of dopamine it produces. The experiments described in the study appear to support the idea that neuronal activity in the forebrain could improve insulin sensitivity, with beneficial effects on humans either suffering from or tending toward the well-understood condition.

An active and stimulated brain could be the least invasive intervention to most diseases.


(1) Striatal dopamine regulates systemic glucose metabolism in humans and mice
http://stm.sciencemag.org/content/10/442/eaar3752



Thursday, March 15, 2018

The good people

A recent finding (1) that uses uranium-thorium dating on cave paintings in Spain seems to show that they are at least 64,000 years old, well before the arrival of the dominant species in Europe. The gentle and shy humanoids, the Neanderthals, perhaps more artistic and humble than their modern-day counterparts, were wiped out in the blink of an eye by those who migrated from the South. There have been many debates about their brain power and capabilities, arguments likely biased toward those who are making them. But now, it seems that their ability to create art, a clear precursor to intelligence, predated modern humans by a sizable slice of time.
The misunderstood species, now persisting as less than 5% of the human genome in the world, may have been a more worthy occupant of the blue planet. Their swift elimination by those who invaded their hunting grounds indicates that they were gentle and perhaps accommodating. We have many modern-day scenarios of the same. In South India, they welcomed most varieties of humans from around the world in recent times, only to be run over later. In the Americas, the curiosity of the original inhabitants seems to have done them in. It has happened before - advanced societies seem to perish in the presence of brutal invaders - and it could happen again. This implies that advancing thoughts and culture is not necessarily dominant if you want your genome to survive.
The simple objective function that drives most biological entities today - to optimally spread their genes - has a downside. It sub-optimizes societal evolution and prefers micro-advancement without any overall objectives. Humans are in the worst position - most believe they are put on this earth by God or something similar. And they try to eliminate anybody who is different.
(1) http://science.sciencemag.org/content/359/6378/912

Saturday, March 10, 2018

Blue monkeys

As they roll out the next great technology - operating system and all - those behind the "revolution" seem to have missed some basic things. They have reinvented the "blue box" that shows up arbitrarily on your screen - and since they are all engineers, they do not want to give any options to the user. Often on my server, they show a blue box that requires you to "see" the updates, and on my desktop, they give me only a few minutes before they forcibly shut my computer down. Monopoly has costs, and if the company does not learn this, there could be trouble ahead.
Granted, they may be saving you as they fend off the attacks from the East (and perhaps even from the West), but is it worth having a blue box at the center of the screen when you are doing something important or even watching a Netflix movie? More importantly, forcibly rebooting one's computer in the middle of watching a movie may be taking monopoly power a bit too far. Even if the evil twin from the East is clamoring to get into your computer, throwing up a screen that proclaims your computer has been infected and that you should call them so they can disinfect you seems like a high price to pay when you are enjoying a movie. As is often the case, engineers do not have much respect for the population, and their programs are "most efficient." Efficiency, however, is not the only thing in life.
Makers of operating systems, autonomous cars, and search need some introspection. It is unclear if the leaders of these companies "know everything." The blue box and the failed artificial brain are ample evidence that they do not.

Sunday, December 17, 2017

The tsunami in healthcare

As the thousand people in Washington whose healthcare is covered for life figure out how many millions they would like to deny the same benefits, the industry is going through a massive transformation. The system, suffering from misaligned incentives and sophisticated gameplay, is likely the most complex. It is a lot easier to figure out autonomous cars and even "artificial intelligence." The fundamental question in healthcare is how to maintain the health of every individual in a cost-effective fashion. There is only one class of humans who come close to this objective - providers who take care of patients and clinicians in manufacturing companies who want to solve big problems.
However, providers are suffering from technophobia. In less than five years, steering wheels will disappear from automobiles and humans will be a rare sight in manufacturing and power plants. Machines, without biases, are proving to be superior to humans in many decision processes. Every aspect of medicine, even the most cherished clinical components, will be influenced by machines in a few years. Machines, like it or not, will get better at diagnosis and treatment. The role of the provider will change to explain rather than to determine, for humans constrained by slow evolutionary processes will remain prisoners of the present.
The tsunami in healthcare is on the way. In its foggy supply chain including manufacturers, providers, payers, and patients, sunlight will descend and there will be no hiding anymore. Prevention shall matter more than treatment, non-invasive intervention more than invasive procedures, primary care more than specialty care, inventions more than incremental therapies, the patient more than a singular disease state and care plans more than procedures.
Providers who embrace technology will accelerate this trend and others could get ossified.

Saturday, December 2, 2017

Robust engineering

News that NASA engineers have successfully fired the trajectory correction maneuver thrusters on Voyager 1, some 13 billion miles away, after not using them for nearly 40 years, to align its antennas toward the Earth, exemplifies the quality of engineering that used to exist. In spite of all the developments in the last 40 years, engineering has been slipping in both creativity and quality. As engineers head for the "street" and similarly useless activities, the field has been suffering, and in the "valley" they do not care for building tangible things, just vaporware. The downward trend in the field has resulted in lagging innovation in many areas, with deleterious effects on computing hardware, transportation and city planning.
Traditional engineering has been less sexy than the ideas pursued by the purveyors of "deep mind." But what educators and policymakers may be missing is that we don't yet have bots able to plan for the long term. Groundbreaking ideas such as the hyperloop are good, but they are not going to make much difference to the masses. We are bifurcating into dreamers who want to save the world by sending probes to Mars and those struggling with an inferior infrastructure to cover basic necessities. Those sitting on tens of billions of capital, wondering what to do, may be well advised to look into how they could aid engineering innovation in materials, construction, and basic transportation across the world. These may not get them a Nobel prize or bring instant accolades, but they could have a much broader beneficial effect on society.
A society degrading into classes of haves and have-nots, those who live in the valley and mountain tops, those who pretend to be in academic ivory towers and those who are trying to climb out of lagging hopes and dreams, those who commit crimes with presumed immunity and those who are peaceful and content, those who want a better tomorrow and those who would like to destroy what could be, tacticians and strategists, politicians and the religious, the educated and those who could not afford it, scientists and those who do not believe in science, we have a tragic comedy with a bad ending.
Conventional engineering, a lost art form, could be as important as anything else today.

Thursday, November 9, 2017

The coding myth

As the world turns upside down, aided by accelerating technologies and disruptive business models, forward-looking individuals and companies are making alternative plans for the future. Outdated education systems, clamoring to catch up with the present by providing classes online or designing graduate degrees in analytics and artificial intelligence, may be sending the wrong message to their customers. As the CEO of a Fortune 100 company recently remarked, "We need more coders," and this appears to be the consensus even in those companies that are reluctant to hire coders from the "other sex." But this notion may need to be challenged - does the world really need more coders? What evidence do we have for this?
Coders have always had a cult status and coding has been a coveted activity. However, it is unclear why this is the case. Coding is a mechanistic activity that we are very close to teaching machines how to do. However, designing what to code is not that easy. So it may not be coders and coding that we need but a deep understanding of what coding can do. And here, experience appears to matter. It is ironic that as millennials attempt to systematically dismantle the generation that has given them grief, what they really need is the raw experience of the ones they would like to exclude. The portraits of billionaires rising from code country may have created the wrong impression - for every one of them, there are millions who have simply perished coding.
Lately, it has been the gamers who made coding sexy. Some, after getting bored with the games they helped create, have set out to replicate the human mind. This is not technology but marketing, and it clearly seems to have worked as the search giant stitches together technologies for the future - mind, body and all. Far away from the Silicon heart, there are large assemblies of coders, ready to make anything come alive to make their masters happy. Most did not attend fancy schools and some attended none at all, but they all know how to code. But now that we can automate coding, what will the coders do?
Humans, prone to myopia, do not seem to learn from the past - but the machines they build, certainly do.

Friday, February 17, 2017

A memory lapse

Modern science has been struggling to understand human memory forever. It appears volatile and highly manipulable, yet indestructible compared to the memory of the toys humans have been able to assemble. Attempting to bridge the gap in memory between humans and computers has led many researchers astray. Understanding human memory is a necessary condition for a robust theory of consciousness. Without that, the overexcited millennials trying to reach "Artificial Intelligence" are going to come out empty.

To understand a complex phenomenon, it is better to start with the basics. It appears that the hardware afforded to a human at inception is significantly more sophisticated than anything humans have assembled thus far. The subtle differences in design, in which the CPU is integrated closely with memory, may provide guidance to those toiling to manufacture "deep mind." The human processing unit (HPU) is not a construct separate from its memory and thus functions differently from conventional computers. Treating memory as separate from processing power has led conventional computer designs away from what is optimal for intelligent computing. Recent attempts by Hewlett Packard in hardware and MIT academics in software could be in the right direction, elevating memory to be central to intelligent computing.
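One way to feel the cost of keeping memory and processing separate is to time the same arithmetic under friendly and hostile memory access patterns. The sketch below is a crude illustration on a conventional machine, not a model of the brain or of any vendor's architecture.

```python
import time
import numpy as np

# Sketch: identical arithmetic with cache-friendly (sequential) and
# cache-hostile (scattered) memory access.  The work is the same; only the
# data movement differs - the von Neumann bottleneck in miniature.

N = 10_000_000
data = np.arange(N, dtype=np.float64)
scattered_order = np.random.default_rng(0).permutation(N)   # random access pattern

t0 = time.perf_counter()
sequential_sum = data.sum()                  # streams through memory in order
t1 = time.perf_counter()
scattered_sum = data[scattered_order].sum()  # forces a gather before summing
t2 = time.perf_counter()

print(f"sequential: {t1 - t0:.3f}s, scattered: {t2 - t1:.3f}s")
# Same additions, very different wall-clock time: in the second case the
# processor mostly waits on memory.
```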

Intelligent computing, however, has never been in the scope of tactically optimizing humans. As they advance autonomous vehicles, deep-learning game boys, music and breast-cancer-deciphering big steel, they seem to be unaware of a basic idea - computer scientists and engineers have never been able to understand the human mind, not even close. As they scorn the religious fanatics across the world, following unintelligible and unprovable hypotheses, they seem to be missing that they are not too far off themselves. It is just that their egos are a bit bigger than anybody else's, and that makes them opaque to reality.

Advancing computer science even incrementally will require a massive infusion from philosophy, psychology and creativity, and a moratorium on engineers getting anywhere close to "Artificial Intelligence" technologies.

Saturday, December 31, 2016

A new spin on Artificial Intelligence


New research from Tohoku University (1) demonstrating pattern finding using low-energy solid-state devices representing synapses (spintronics) has the potential to reduce the hype of contemporary artificial intelligence and move the field forward incrementally. Computer scientists have been wasting time with conventional computers and inefficient software solutions on what they hope to be a replication of intelligence. However, it has been clear from the inception of the field that engineering processes and know-how fall significantly short of the intended goals. The problem has always been hardware design, and the fact that there are more software engineers in the world than those who focus on hardware has acted as a brake on progress.

The brain has always been a bad model for artificial intelligence. A massive energy hog that has to prop itself up on a large, fat-storing gut just to survive has always been an inefficient design for creating intelligence. Largely designed to keep track of routine systems, the brain accidentally took on a foreign role that allowed abstract thinking. The overdesign of the system meant that it could do so at relatively small incremental cost. Computer scientists' attempts to replicate the energy-inefficient organ, designed primarily for routine and repeating tasks, on the promise of intelligence have left many skeletons on the long and unsuccessful path to artificial intelligence. The fact that there is unabated noise in the universe of millennials about artificial intelligence is symptomatic of a lack of understanding of what could be possible.

Practical mathematicians and engineers are a bad combination for effecting groundbreaking innovation. In the 60s, this potent combination of technologists designed the neural nets - to simulate what they felt was happening inside the funny-looking organ. For decades, their attempts to "train" their nets met with failure, with the artificial constructs taking too long to learn anything or spontaneously becoming unstable. They continued with the brute-force method as the cost of computers and memory started to decline rapidly. Lately, they have found some shortcuts that allow faster training. However, natural language processing, clever video games and autonomous cars are not examples of artificial intelligence by any stretch of the imagination.
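The "train and hope" loop described above is easy to reproduce in miniature. The sketch below is a toy two-layer network on XOR, written only to show the classic failure mode: a careful learning rate usually converges slowly, a careless one oscillates or stalls. It makes no claim about any particular historical system.

```python
import numpy as np

# Sketch: a tiny two-layer neural net trained on XOR by plain gradient descent.
# A modest learning rate usually converges; an aggressive one typically
# saturates or oscillates - the brute-force training pitfall in miniature.

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def train(lr, steps=5000, hidden=8, seed=0):
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, hidden))
    W2 = rng.normal(0, 1, (hidden, 1))
    for _ in range(steps):
        h = np.tanh(X @ W1)                   # hidden layer
        out = 1 / (1 + np.exp(-(h @ W2)))     # sigmoid output
        delta2 = (out - y) * out * (1 - out)  # backpropagated error signals
        delta1 = (delta2 @ W2.T) * (1 - h ** 2)
        W2 -= lr * (h.T @ delta2)
        W1 -= lr * (X.T @ delta1)
    return float(np.mean((out - y) ** 2))     # final mean squared error

print("careful lr=0.5 :", train(0.5))    # usually a small final error
print("reckless lr=20 :", train(20.0))   # usually stuck or unstable
```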

To make artificial intelligence happen, technologists have to turn to fundamental innovation in hardware. And, they may be well advised to lose some ego and seek help from very different disciplines such as philosophy, economics and music. After all, the massive development of the human brain came when they started to think abstractly and not when they could create fire and stone tools at will.


(1) William A. Borders, Hisanao Akima, Shunsuke Fukami, Satoshi Moriya, Shouta Kurihara, Yoshihiko Horio, Shigeo Sato, Hideo Ohno. Analogue spin-orbit torque device for artificial-neural-network-based associative memory operation. Applied Physics Express, 2017; 10 (1): 013007. DOI: 10.7567/APEX.10.013007

Thursday, December 29, 2016

Coding errors

A recent publication in Nature Communications (1) seems to confirm that DNA damage due to ionizing radiation is a cause of cancer in humans. The coding engine in humans is fragile, prone to mistakes even in the absence of such exogenous effects. As humans attempt interplanetary travel, their biggest challenge is going to be keeping their biological machinery error free. Perhaps what humans need is an error-correction mechanism that implicitly assumes errors are going to be a way of life. Rather than attempting to avoid them, they have to correct them optimally.

Error detection and correction have been important aspects of electronic communication. Humans do have some experience with it, albeit in crude electronic systems. The human system appears to be a haphazard accumulation of mistakes made over a few million years. They have been selected for horrible and debilitating diseases, and every time they step out into the sunlight, their hardware appears to be at risk. It is an ironic outcome for Homo sapiens, who spent most of their history naked under the tropical sun. Now ionizing radiation from beyond the heavens renders them paralyzed and ephemeral.
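What "assume errors and correct them optimally" means is well understood in communications; a minimal Hamming(7,4) sketch shows the flavor. Mapping anything like this onto molecular machinery is, of course, pure speculation on the part of this illustration.

```python
# Sketch: Hamming(7,4) coding - 4 data bits protected by 3 parity bits, enough
# to detect and correct any single-bit flip.  A crude stand-in for "assume
# errors will happen and repair them" rather than trying to avoid them.

def encode(d):                      # d = [d1, d2, d3, d4]
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]   # codeword positions 1..7

def correct(c):                     # locate and repair a single flipped bit
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 * 1 + s2 * 2 + s3 * 4  # syndrome = 1-based position of the error
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]  # recovered data bits

word = [1, 0, 1, 1]
sent = encode(word)
sent[5] ^= 1                         # a single "mutation" in transit
print(correct(sent) == word)         # True: the damage was repaired
```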

Perhaps it is time we took a mechanistic and computing view of humans. The clever arrangement of $26 worth of chemicals seems to last a very short period of time, stricken down by powerful bugs or her own immune system. Now that bugs have been kept at a safe distance, it is really about whether the system can code and replicate optimally. The immediate challenge is error detection and correction at the molecular level. If some of the top minds, engaged in such pointless activities as investing, death curing and artificial intelligence, could focus on more practical matters, humans could certainly come out ahead.


(1) http://esciencenews.com/articles/2016/09/13/study.reveals.how.ionising.radiation.damages.dna.and.causes.cancer

Saturday, December 17, 2016

Does life matter?

Philosophical, ethical and religious considerations have prevented humans from defining the value of life. The short-sighted financial analysis that defines the value of life as the NPV of the future utility stream is faulty. Additionally, there is a distinct difference between personal utility and societal utility, which do not coincide. The more important deficiency in the approach is that it does not account for uncertainty in future possibilities and the flexibility held by the individual in altering future decisions. And in a regime of accelerating technologies that could substantially change the expected life horizon, the value of life is increasing every day, provided expected aggregate personal or societal utility is non-negative.

The present value of human life is an important metric for policy. It is certainly not infinite, and there is a distinct trade-off between the cost of sustenance and expected future benefits, both to the individual and to society. A natural end to life, a random and catastrophic outcome imposed by exogenous factors, is highly unlikely to be optimal. The individual has the most information to assess the trade-off between the cost of sustenance and future benefits. If one is able to ignore the excited technologists attempting to cure death with silicon, data and an abundance of ignorance, one could find that there is a subtle and gentle upward slope in the human's ability to perpetuate her badly designed infrastructure. The cost of sustenance of the human body, regardless of the expanding time-span of use, is not trivial. One complication in this trade-off decision is that the individual may perceive personal (and possibly societal) utility to be higher than what is true. Society, prevented from the forceful termination of the individual on philosophical grounds, yields the decision to the individual, who may not be adept enough to make it.
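The complaint about a static NPV is easiest to see with a toy computation: discount an uncertain utility stream with no adaptation, then again when the individual can scale back sustenance in bad states. Every number below - the utility distribution, the cost of sustenance, the adaptation rule - is invented for illustration.

```python
import numpy as np

# Sketch: why a static NPV of a "future utility stream" understates value when
# the individual retains flexibility.  All figures are invented for illustration.

rng = np.random.default_rng(1)
years, discount, n_paths = 30, 0.97, 10_000
utility = rng.normal(1.0, 0.8, size=(n_paths, years))   # uncertain annual utility
cost = 0.6                                               # annual cost of sustenance
weights = discount ** np.arange(years)                   # discount factors

# Static view: discount expected net utility, no adaptation allowed.
static_npv = np.mean((utility - cost) @ weights)

# Flexible view: in a bad year the individual scales back, halving the cost.
net_flexible = np.where(utility < cost, utility - cost / 2, utility - cost)
flexible_npv = np.mean(net_flexible @ weights)

print(f"static NPV of the utility stream: {static_npv:.2f}")
print(f"value with adaptive flexibility:  {flexible_npv:.2f}")   # strictly higher
```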

Humans are entering a tricky transition period. It is conceivable that creative manipulation of genes may allow them to sustain copies of themselves for a time-span perhaps higher by a factor of 10, in less than 100 years. However, in transition, they will struggle, trying to bridge the status quo with what is possible. This is an optimization problem that may have to expand beyond the individual, if humanity were to perpetuate itself. On the other hand, there appears to be no compelling reason to do so.

Wednesday, December 14, 2016

Milking data

Milk, a new computer language created by MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), promises a fourfold increase in the speed of analytics on big data problems. Although true big data problems are still rare - the term is freely used for anything from large Excel sheets to relational data tables - Milk is a step in the right direction. Computer chip architecture designs have been stagnant, still looking to double speed every 18 months by packing silicon ever closer, with little innovation.

Efficient use of memory has been a perennial problem for analytics, which deals with sparse and noisy data. Rigid hardware designs shuttle unwanted information around based on archaic design concepts, never asking whether the data transport is necessary or timely. With hardware and even memory costs in a precipitous decline, there has not been sufficient force behind seeking changes to age-old designs. Now that exponentially increasing data is beginning to challenge available hardware again, and the need for speed to sift through the proverbial haystack of noise to find the golden needle is in demand, we may need to innovate again. Milk paves the path for possible software solutions.
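The habit of using just enough data at the right time can be illustrated without any special language: stream and filter records lazily instead of materializing everything first. Milk itself takes a different, lower-level approach inside the compiler and runtime; the snippet below only illustrates the habit, with a made-up feed.

```python
import sys

# Sketch: two ways to find the "golden needles" in a large, mostly-noise feed.
# The eager version hauls everything through memory first; the lazy version
# touches each record once and keeps only what it needs.

def noisy_feed(n):
    for i in range(n):
        yield (i, i % 997 == 0)           # roughly 0.1% of records matter

N = 1_000_000

# Eager: build the whole list, then filter it.
everything = list(noisy_feed(N))
eager_hits = [rec for rec in everything if rec[1]]
print("eager list overhead:", sys.getsizeof(everything) // 1024, "KB (pointers alone)")

# Lazy: never hold more than one record at a time.
lazy_hits = [rec for rec in noisy_feed(N) if rec[1]]
print("lazy hits found:", len(lazy_hits))

assert eager_hits == lazy_hits            # same answer, very different footprint
```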

Using just enough data at the right time to make decisions is a good habit, not only in computing but also in every other arena. In the past two decades, computer companies and database vendors sought to sell the biggest steel to all their customers on the future promise of analytics once they collect all the garbage and store it in warehouses. Now that analytics has "arrived," reducing the garbage into usable insights has become a major problem for companies.

Extracting insights from sparse and noisy data is not easy. Perhaps academic institutions can lend a helping hand to jump start innovation at computer behemoths, as they get stuck in the status-quo.