The recent debacle in aircraft design is a constant reminder
that software engineers and “data scientists,” excited by the possibilities, could
create havoc in many different industries. In transportation, as autopilot
systems get smarter, they could take over virtually everything a vehicle does,
terrestrial or otherwise. What the designers seem to have missed is
that an aircraft is a conglomeration of data-transmitting mechanical sensors
and sophisticated software. Traditional engineering education would have
taught the designers that a system is only as good as its weakest link, but
the younger ones may have skipped those courses. Here, faulty data from an old
technology may have confused the brain. There are multiple issues to consider
here.
First, the design of systems needs to be holistic. This is
easier said than done as a complex vehicle is designed in parts and assembled. Teams
who work on these components may have different skill sets and the overall
blueprint may not consider the biases in designs created by separate teams. For
example, if the brain is designed with little flexibility to discard faulty
data, the implicit assumption is that faulty data is unlikely. However, if the data is
coming from mechanical devices with no embedded intelligence, it is almost a
certainty that faulty data will arrive at some point in a sensor’s life. Two
recent aircraft failures in Asia and Africa and the one much earlier over
the Atlantic seem to have been caused by bad sensors sending bad data to a
“sophisticated AI agent,” with little capability to differentiate between good
and bad data. So, either the sensors and other mechanical devices in the system
need to be smarter so as to recognize their own fallibilities or the central
brain has to be able to recognize when it is fed bad stuff. There is a gap in
engineering education, which has moved in the direction of high specialization
without an overall understanding of systems design and risk. This is going to
surface many issues across industries, from transportation and manufacturing to
healthcare.
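The weakest-link argument can be made concrete. The sketch below is a hypothetical illustration, not any real avionics design: it assumes three redundant sensors and uses a simple median vote, so a single failed sensor is rejected rather than passed to the brain, and the controller disengages when too few readings agree. All names and thresholds are illustrative.

```python
from statistics import median

def fused_reading(readings, max_deviation=5.0):
    """Fuse redundant sensor readings by median vote.

    Readings that deviate too far from the median are treated as
    faulty and discarded; if too few survive, return None to signal
    that the automation should disengage and hand over control.
    """
    m = median(readings)
    trusted = [r for r in readings if abs(r - m) <= max_deviation]
    if len(trusted) < 2:  # not enough agreement to act on
        return None
    return sum(trusted) / len(trusted)

# Three redundant sensors; one has failed and reads far too high.
# The faulty 74.5 is rejected and the two sane readings are averaged.
print(fused_reading([2.1, 2.3, 74.5]))
```

The design choice is the point of the paragraph above: the fusion logic does not trust any single sensor, and it prefers refusing to answer over answering from bad data.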
Second, the human is still the best-known risk mitigator,
with her brain fine-tuned over the millennia to sense and avert danger. In
transportation, disengagement has to be a fundamental aspect of design.
Although it could be tempting to sit back while an aircraft takes off and lands
or to read “Harry Potter” while behind the wheel of an autonomous terrestrial
vehicle, these actions are ill-advised. The human has to expect the machine to
misbehave and, at the very least, be ready to take complete control at
any point in time. Excited engineers may think otherwise, but we are nowhere
close to fail-safe AI. Let’s not kid ourselves – writing code is easy, but
making it work all the time is not. Educational institutions will do a
disservice to the next generation of engineers if they impart the idea that AI
can operate devoid of humans.
Transportation is just one industry. The problems witnessed
span every industry today. For example, in healthcare, AI is slowly
percolating but the designers have to remember that there are weak links there
too. Ironically, in the provider arena, the weak link is the human, who
“inputs” data into aging databases, sometimes called Electronic Medical
Records (EMR) systems. Designed a couple of decades ago by engineers with no
understanding of healthcare, they are receptacles of errors that can
bring emerging AI and decision systems to their knees. If one designs AI-driven
decision systems in these environments, she has to be acutely aware of the
uncertainty in inputs caused by humans, who are notorious for making mistakes
with computer keyboards (or even voice commands), and by database containers
designed with old technologies. So, designs here need to systematically
consider disengagement when the AI agent is unable to decipher data.
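The disengagement principle can be sketched for this setting too. Below is a hypothetical validator for a vital-signs record: when a field fails a plausibility check, the system declines to decide and routes the record to a human instead of guessing. The field names and ranges are illustrative assumptions, not any real EMR schema or clinical guidance.

```python
# Plausibility ranges are illustrative, not clinical guidance.
PLAUSIBLE = {
    "heart_rate": (20, 250),     # beats per minute
    "temp_c": (30.0, 43.0),      # body temperature, Celsius
    "systolic_bp": (50, 250),    # mmHg
}

def triage(record):
    """Return ("auto", record) if every field is plausible,
    otherwise ("human_review", problems) — i.e. disengage."""
    problems = []
    for field, (lo, hi) in PLAUSIBLE.items():
        value = record.get(field)
        if value is None or not (lo <= value <= hi):
            problems.append(field)
    if problems:
        return ("human_review", problems)
    return ("auto", record)

# A keyboard slip: 720 entered instead of 72 beats per minute.
# The record is flagged for human review rather than acted on.
print(triage({"heart_rate": 720, "temp_c": 37.0, "systolic_bp": 120}))
```

The point mirrors the paragraph above: the AI agent does not try to be clever about implausible input; it recognizes the limits of its data and hands the case back to a human.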
In manufacturing, led by data collection enthusiasts from
the 90s, older database technologies, sometimes elegantly called “Enterprise
Resource Planning” (ERP) systems, dominate. They have been “warehousing” data
for decades with little understanding of what it means. Now, “Hannah” and her
cousins seem to have gotten religion, but again, there is a problem here.
Slicing and dicing data to make pretty pictures for decision-makers does
nothing to improve decision-making or to mitigate risk. The weak link here is
the technology, designed and maintained by those who believe businesses are run
by the collection, aggregation, and reporting of data. Unfortunately, successful
businesses have moved on.
AI is a good thing, but not in the absence of logical
thinking and systems design. Intelligence is not about the routine, but the
ability to act when encountering the “non-routine.” As the software and hardware
giants sell their wares, in the cloud and elsewhere, they have to understand
the perils of bad and rushed technology. It is great to fool a restaurant with a
simulated voice, it is great to demonstrate that “machine learning” on Twitter
will create a racist, and it is great to tag and collate music fast, but none of
these activities is going to propel humanity to the next level. Being good at
search, operating-system design, or hardware does not automatically make
these companies “experts” in an area that is erroneously called Artificial
Intelligence. There is nothing artificial about intelligence. Machines have the
potential to be a lot more “intelligent” than humans. If anybody has any doubt,
just take a look at the nation’s capital and imagine a scenario of replacing the
policy-makers with dumb machines. They will likely perform infinitely better.
For the rest of us, reality is still an important concept, and there we have to
make sure the developments move in a beneficial direction.
Intelligence, ultimately, is about decision-making. Humans
have been pretty good at it, barring a few narcissistic and mentally ill specimens
in full view. They had to survive the dicey world they were handed when they
climbed down the tree and stood upright in the African Savannah for the first
time. Bad decision-making would have instantly eliminated them. They survived,
albeit with less than 10K specimens through a harsh bottleneck. Later, single-cell organisms almost wiped them out on multiple occasions but they survived
again. Now, they encounter a technology discontinuity, something that is so foreign
to their psyche, the dominant reaction has to be, rejection. And, for the most part, it is. But their brains have morphed into a quantum computer, able to think
about possibilities. This could be their Achilles heel, but then, life is not
worth living without taking risks.
Educational institutions, still chasing the latest trends to
make money, have the ultimate responsibility to get this right. To bring humanity
to a level 1 society, we need to move past our instincts, created by tactical
objective functions driven by food and sex, and embrace intelligence. It is likely
that machines will lead humanity out of its perils.