Scientific Sense Podcast

Saturday, August 18, 2018

Artificial Intelligence: Long way to go


A recent observation that sheds some light on how bots may actually be learning (1) is interesting. The Artificial General Intelligence enthusiasts, who have been making a lot of noise about training a machine from pixels on a screen, may want to take note. Training a machine to play games is a lot easier than getting silicon to "think." Computer scientists appear to be getting a bit ahead of themselves (even those with a Neuroscience degree). God does have a sense of humor, and She will lead many astray in the coming decades.

Engineering and Computer Science are easy. Medicine and Economics are less so. It is difficult to reduce complex questions to deterministic equations with binary outcomes. And AI is squarely in the latter category, for intelligence emanates from understanding uncertainty and how it affects outcomes. Equations do not work in this realm, and experiments that create "big noise" do not either. We are now creating a crop of computer scientists, locked onto the keyboard, hunting Python and drinking Java, as if there is nothing beyond them. This is problematic. If they continue in that vein, they could reduce themselves to the guy who tweets garbage every day.

Humans have an ego - and that is going to keep them from progressing. The ones who make noise are endowed with ignorance, and the ones in the know may keep quiet.


(1) http://www.sciencemag.org/news/2018/08/why-does-ai-stink-certain-video-games-researchers-made-one-play-ms-pac-man-find-out

Wednesday, August 15, 2018

Never look back

The human brain, a compendium of false and true memories formed by past interactions and events, feels comfortable creating heuristics from history to deal with the future. For millennia, this was a dominant strategy, as the ability to predict the presence and behavior of predators from historical data helped humans survive. But now it has become a huge liability. Even basic ideas in finance, such as sunk costs, have been difficult for many to internalize. Even those in the know seem to make bad decisions because it is difficult not to look back. The software giants found out recently that using historical data to model the future has drawbacks, and this has implications for decision-making and policy design at many levels.

Looking back has been costly for humans in the modern context. They may be better off rolling the dice to pick from available future states than using faulty heuristics shaped by the past. If machines can only learn from the past, they will simply perpetuate the status quo, with no new insights. This is equally true in education, where history and experience have been given undue credit, and in research, where confirmation bias has led many astray. What is most problematic is a recent experiment (1) showing that children have a tendency to conform to robots. In the current technology regime, which appears to be accelerating toward fake humanoids, we may be dumbing ourselves down by relying on history and the prompts provided by robots.
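The claim that a machine learning only from the past will perpetuate the status quo can be sketched with a toy example (hypothetical data, plain Python): a "model" fit purely to historical decisions reproduces whatever skew those decisions contained, even when future candidates are equally able.

```python
# Toy sketch, not any real system: a learner that counts historical
# outcomes and scores the future from those counts alone.
from collections import defaultdict

# Hypothetical past hiring records: (school, hired?).  The skew lives
# in the historical decisions, not in the candidates' ability.
history = (
    [("A", True)] * 9 + [("A", False)] * 1 +
    [("B", True)] * 2 + [("B", False)] * 8
)

def fit(records):
    """Estimate P(hired | school) by counting -- pure history, no insight."""
    counts = defaultdict(lambda: [0, 0])  # school -> [hired, total]
    for school, hired in records:
        counts[school][0] += hired  # True adds 1
        counts[school][1] += 1
    return {s: h / n for s, (h, n) in counts.items()}

model = fit(history)

# Two equally able future candidates get very different scores, because
# the model can only echo the past it was shown.
print(model["A"])  # 0.9
print(model["B"])  # 0.2
```

The "model" never sees ability at all; it sees only prior decisions, which is exactly why training on historical data alone can bake yesterday's bias into tomorrow's predictions.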

Looking back is costly in many ways for humans. It is value enhancing only if the cost of doing so yields future benefits, and it is tough to find use cases where it does. There is little practical value in history or in how one lived last year. If future generations can mend the ills caused by the "greatest" who went before them, they could inherit a world that is peaceful and forward-looking. In such a world, there will be no looking back, and every day will start with fresh ideas. In such a world, there will not be any recordings, only future possibilities. In such a world, they will reject past theories in favor of uncertain future hypotheses. In such a world, thought experiments will dominate over attempts at proving what was observed. In such a world, experiments will triumph over institutions and legacy.

Only look forward, for anything else will be costly.


(1) http://robotics.sciencemag.org/content/3/21/eaat7111

Monday, August 13, 2018

Extending the brain

A recent publication (1) describing a brain-machine interface (BMI) used to control a robotic arm simultaneously with a person's own arms opens up interesting possibilities for maximizing brain utilization. By a quirk of nature, humans have been endowed with an organ that far surpasses their routine needs to live and die. With simple objective functions, humans have substantially sub-optimized this endowment. But now, there may be mechanical means to keep the organ interested.

There has been a lot in the literature about the inability of humans to multitask. However, it is possible that multitasking improves with practice, just like anything else (2). The quantum computer they carry, albeit an energy hog, requires little infrastructure to maintain, and the calorie requirement to keep it going is very small in the grand scheme of things. Hence, maximizing the use of the brain is an important consideration for every human and for humanity in general.

Brain utilization shows an upward trend as people network across the world, surpassing the constraints imposed by race, religion, and ignorance. This electronic extension of the brain has been unambiguously good for humanity, but it feels like there is still a lot in the tank for every individual. If she can multiply her limbs through mechanical multitasking, it is likely that such an activity will grow neurons upstairs, with unpredictable beneficial effects in the long run.

Extending the brain - mechanically and electronically - is a dominant strategy for humans. It will allow them to get past the tactical problems currently plaguing humanity.


(1) BMI control of the third arm for multitasking: http://robotics.sciencemag.org/content/3/20/eaat1228

(2) 



Monday, July 30, 2018

Redefining Artificial Intelligence

Artificial Intelligence, the contemporary darling of technologists and investors, has so far been largely focused on trivial consumer-oriented applications and on robotics and automation. Constrained by conventional computing, AI has been bottled up in hype and confusing name-calling. What the AI enthusiasts do not seem to understand is that AI was never meant to be a technology that fakes what a human being appears to do externally; rather, it was supposed to replicate her thought processes internally. As the search giant demonstrates how its technology can fool a restaurant reservation system or play games, as the world's largest shipper of trinkets demonstrates how it can send you things faster, and as the purveyors of autonomous vehicles demonstrate how they can move people and goods without humans at the wheel, they need to understand one important thing: these technologies are not using AI; they are using smarter automation. They do not replicate human thought processes. They either fake what a human appears to do or simply automate mundane tasks. We have been doing this for over half a century, and, as everybody knows, every technology gets better over time. So, before claiming victory in the AI land, these companies may need to think deeply about whether their nascent technologies could actually do something good.

However, there is a silver lining on the horizon that could move AI toward real applications (1), including predicting and controlling the environment, designing materials for novel applications, and improving the health and happiness of humans and animals. AI has been tantalizingly "close" since the advent of computers. Imagination and the media propelled it further than it could ever deliver. As with previous technology waves, many companies attempt(ed) to reduce the problem to its apparently deterministic components. This engineering view of AI is likely misguided, as real problems are driven fundamentally by dynamically connected uncertainties. Problems in domains such as the environment, materials, and healthcare require not only computing resources beyond what is currently available but also approaches far removed from statistical and mathematical "precision."

Less sexy areas of AI, such as enhancing business decisions, have attracted less interest thus far. Feeble attempts at "transforming" a large healthcare clinic with a "pizza-sized" box of technology that had apparently solved all the world's problems already seem to have failed. Organizations chasing technology to solve problems with AI may need to spend time understanding what they are trying to tackle before diving headfirst into "data lakes" and "algorithms." Real solutions exist at the intersection of domain knowledge, technology, and mathematics. Each of these is available in the public domain, but the combination of this unique expertise is not.

Humans, always excited by triviality and technology, may need better skills to succeed in the emerging regime, driven by free and fake information and the transformation of that noise into better decisions. Those who do this first may hold the keys to redefining AI and the future of humanity. It is unlikely to be the companies you know and love, because they are focused on the status quo and next quarter's earnings.

(1) http://science.sciencemag.org/content/361/6400/342