Scientific Sense Podcast

Friday, June 21, 2019

Revisiting AI for Policy

Policy making, a complex activity that must weigh large amounts of disparate data and optimize within constraints over both the short and the long run, is likely better tackled by Artificial Intelligence. Humans, and politicians especially, are notorious for their unsubstantiated biases, conflicts of interest and poor decision-making in the presence of uncertain data. Machines appear to be significantly better in this realm. A world in which machines make policy choices is likely better than the status quo, democracy and autocracy included, for decisions made on biased subsets of data will always be less effective than those based on the entire information content, free of bias.
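As a toy illustration of what "optimizing within constraints" can mean computationally, the sketch below picks a portfolio of policy programs under a budget by exhaustive search. The program names, costs and benefit scores are invented for illustration, not drawn from any real policy model.

```python
# Hypothetical sketch: choosing a policy portfolio under a budget
# constraint by exhaustive search over all subsets. All numbers are
# invented toy values.
from itertools import combinations

programs = {                      # name: (cost, estimated benefit)
    "education": (4, 9),
    "healthcare": (5, 10),
    "infrastructure": (3, 6),
    "research": (2, 5),
}
BUDGET = 9

def best_portfolio(programs, budget):
    """Return the feasible subset of programs with the highest benefit."""
    best, best_benefit = (), 0
    names = list(programs)
    for r in range(len(names) + 1):
        for combo in combinations(names, r):
            cost = sum(programs[p][0] for p in combo)
            benefit = sum(programs[p][1] for p in combo)
            if cost <= budget and benefit > best_benefit:
                best, best_benefit = combo, benefit
    return set(best), best_benefit

chosen, benefit = best_portfolio(programs, BUDGET)
```

Exhaustive search is only viable for a handful of options; the point is that once data and constraints are explicit, the "decision" is a computation.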

More practically, nations may need to deploy AI in the policy-making realm, at least to augment decision-making. At the very least, it may reveal how inefficient human policy-makers are, how out of touch they are with emerging information and how they are destroying the world the next generation will inherit. Such is the promise of AI in decision and policy making that it is almost trivial for machines to reach optimum choices, far superior to what their masters could accomplish. More importantly, machines are able to consider interconnected decisions into the future and use optimal control to reach the best current decisions. It will be a far cry from the octogenarians on Capitol Hill, unable to read and understand the policy choices they are voting on.

Countries that embrace AI for policy could be the future powerhouses. In this regime, scale does not matter as the smallest and biggest countries in the world could access the same technology. In the limit, such an optimization process may make contemporary segmentation schemes - religions, countries and languages - irrelevant. If so, AI could manage by exception, raising red flags at the right points in time for human actions and guiding humanity to a better place. It could suggest best paths for innovation that will reduce downside risk and maximize upside potential. It could maximize the value of humanity and its fickle environment.

We are augmenting human decision-making with AI in every realm. It is time we provided the same for clueless politicians.

Saturday, June 8, 2019

Free Will is Real (1), Really?


A recent philosophical argument that seems to hypothesize that free will is real (1) because of the "existence of alternative possibilities, choice and control over actions," may be faulty. As the philosopher attempts to make a distinction between reductionism and "intentional agency," he seems to have fallen into a "reductionist trap."

Both physics and philosophy suffer from the same basic issue. Decisions, choices, observations, particles and systems do not stand independently; there are spatial and temporal connections among them, disallowing hypotheses based on singular instances. It is not that a human being is choosing among indeterminate possibilities but rather that she is forced into a choice by optimizing a sequence of interconnected decisions. Thus, the apparent flexibility and control observed at any decision point is an illusion. By dynamic programming, the decision-maker reaches an optimal choice (defined as utility-maximizing for her). That decision is determined mathematically, not by choice.
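The dynamic-programming argument above can be made concrete. The sketch below runs backward induction over a short horizon with invented states, actions, rewards and transitions; at every stage the "choice" falls out of the maximization rather than any act of will.

```python
# Illustrative sketch (toy values, not from the original post):
# backward-induction dynamic programming over a sequence of
# interconnected decisions.

def solve(horizon, states, actions, reward, transition):
    """Return the stage-0 value function and the determined policy."""
    V = {s: 0.0 for s in states}          # terminal values
    policy = []
    for t in reversed(range(horizon)):
        new_V, choice = {}, {}
        for s in states:
            # The "choice" here is fixed by the maximization itself.
            best = max(actions,
                       key=lambda a: reward(s, a) + V[transition(s, a)])
            choice[s] = best
            new_V[s] = reward(s, best) + V[transition(s, best)]
        V = new_V
        policy.insert(0, choice)
    return V, policy

# Toy example: two states, two actions.
states = ["low", "high"]
actions = ["stay", "switch"]
reward = lambda s, a: 1.0 if (s == "high" and a == "stay") else 0.0
transition = lambda s, a: s if a == "stay" else ("high" if s == "low" else "low")

V, policy = solve(3, states, actions, reward, transition)
```

Given the same data, the same "choice" emerges every time, which is the sense in which the decision is determined rather than willed.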

Physics, now fully infused with determinism and reductionism in spite of a century-old theory that shows nothing is deterministic, and philosophy, always struggling to prove what has not yet been defined, are both unproductive avenues for humans. They are certainly academically rich, but neither, in its current posture, will be able to advance thinking. To move to a different regime, we need simplification, humility and a macro understanding that humans may be hypothesizing based purely on illusion.

(1) https://blogs.scientificamerican.com/cross-check/free-will-is-real/

Thursday, June 6, 2019

The trouble with conservation

A recent article (1) articulates how well-intentioned conservation policies could have unintended effects. From inception, humans have been attempting to shape the environment, first for their own tactical benefit and then for undefined strategic goals. Humans generally deliver bad outcomes to themselves and to the plethora of life designs surrounding them. They like the control and satisfaction that come from destroying, and then attempting to mend, the greenhouse they are part of.

Conservation, the darling of millennials and those following them, could turn out to be a bad thing. Those engaged in these sentiments also tend to dislike "markets" and would like to set everything "right." What they may be missing is that there is a cost to playing with nature, and humans do not appear to be smart enough to predict the effects of singular actions on a highly non-linear and connected system.

Good intentions are necessary but not sufficient for better outcomes. More importantly, the idea of manipulating a complex non-linear system with linear policy choices is fraught with danger. The universe appears to be anchored on "markets," as illustrated by evolution. That mechanism is crude, however, and thought experiments in the direction of universal optimization may be apt. Ironically, it does not follow that such a state can be reached by incremental manipulation of the status quo.

Stuck in a trough, humans appear to have bad instincts. Most of them want to climb out of the hole, but the policy choices they make are likely suboptimal and may pull them further down.

(1) https://www.scientificamerican.com/article/when-one-protected-species-kills-another-what-are-conservationists-to-do1/?redirect=1

Sunday, May 26, 2019

36,000 days and optimization within a catastrophic constraint

Humans face a very interesting mathematical problem: they have to optimize within a harsh time constraint. Although the endowment is not known, it is increasingly predictable. Even though the range is broad, from 0 to, say, 36,000 days, the variance has been going down, thanks to modern medicine. Yet most humans sub-optimize. Materialism, ignorance, hunger for power and a variety of other value-destroying metrics have misguided nearly 100 billion samples since inception.
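The optimization problem sketched above can be illustrated with a toy model: allocating a fixed endowment of days across activities whose returns diminish (concave log utility). The activity names and weights are invented; a greedy marginal allocation happens to be optimal here only because the utilities are concave and separable.

```python
# Hypothetical sketch: spending a fixed endowment of days on activities
# with diminishing returns. Each activity a yields w_a * log(x + 1)
# utility from x days; we greedily give each day to the activity with
# the highest marginal gain. Names and weights are invented.
import math

def allocate(days, weights, step=1):
    alloc = {a: 0 for a in weights}

    def marginal(a):
        # Utility gain from `step` more days on activity a.
        return weights[a] * (math.log(alloc[a] + step + 1)
                             - math.log(alloc[a] + 1))

    for _ in range(0, days, step):
        alloc[max(weights, key=marginal)] += step
    return alloc

weights = {"learning": 3.0, "relationships": 2.0, "leisure": 1.0}
alloc = allocate(30, weights)
```

With diminishing returns, no single pursuit absorbs the whole endowment; higher-weighted activities simply get proportionally more of it.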

The time afforded to a human is limited. Irrational thinking has led most astray, some believing "God" is going to save them and others trying to create a "legacy" in the absence of such an entity. It is unclear what a random individual is trying to maximize. For half the contemporary population, it is all about maximizing the probability of higher utility in the afterlife. For the rest, it is more complex. As the seconds wind down to the inescapable event horizon, most humans chase tactical metrics without any value.

What would human 2.0 feel like? For her, contribution to society will be supreme, for anything else seems meaningless. As the event horizon is specified without any flexibility, it would be important to contribute before she crosses the inescapable boundary. It is counter-intuitive: it is not that a human has to contribute to the perpetuation of a species, which appears less interesting, but rather that her role in the larger context has meaning. It is meaning for society we are after, and that is likely too conceptual for many.

Human 2.0 - If we ever get there, it could be a fantastic world with a simpler objective function, something that maximizes happiness not wealth, something that maximizes knowledge not ego, something that maximizes society not the individual, something that maximizes ideas, not the mere description of them, and something that maximizes empathy not the portrayal of the same.