Scientific Sense Podcast


Sunday, April 18, 2021

Scientific Sense ® Podcast with Gill Eapen: Most popular podcasts in Physical Sciences.

1. Prof. Cecilia Lunardini of Arizona State University on neutrinos

2. Prof. Keith Olive of the University of Minnesota on the early evolution of the universe.

3. Dr. Masood Parvania of the University of Utah on the electric grid

4. Prof. Jenny Greene of Princeton University on black holes

5. Prof. Josh Frieman of the University of Chicago on the evolution of the universe.

6. Prof. Daniel Holz of the University of Chicago on gravitational waves

7. Professor Wendy Freedman of the University of Chicago on the Hubble constant

8. Prof. Ghassan AlRegib of the Georgia Institute of Technology on machine learning

9. Prof. David Spergel of Princeton University on the death of the universe

10. Prof. Kit Parker of Harvard University on emerging technologies

#physics #astrophysics #cosmology #scientificsense #artificialintelligence

Saturday, March 6, 2021

Scientific Sense ® Podcast with Gill Eapen: Top episodes in Feb 2021


1. Prof. Igor Shovkovy of Arizona State University on semimetals

2. Prof. Ghassan AlRegib of Georgia Institute of Technology on Deep Learning

3. Prof. Lenn Goodman of Vanderbilt University on Science & Religion

4. Prof. Anne Hart of Brown University on neuromuscular defects

5. Prof. Murat Kantarcioglu of The University of Texas at Dallas on AI security

6. Prof. Emery Brown of Harvard Medical School on anesthetics

7. Prof. Ellen Armour of Vanderbilt University on gender and sexuality

8. Prof. Andrew Newberg of Thomas Jefferson University on religious experiences

9. Prof. Jason Hafner of Rice University on plasmonic structures

10. Prof. James Sauls of Northwestern University on quantum computing

Sunday, August 2, 2020

Ignorance, the biggest threat to society

In the midst of pandemics, environmental degradation, social unrest, and other observable catastrophes, the primary threat to humans remains ignorance (1). Merriam-Webster defines ignorance as a lack of knowledge, education, or awareness. More generally, it is a state of apathy toward emerging information, coupled with the lack of a framework to evaluate such data.

It should worry every human on Earth that the greatest and largest democracies have leaders who demonstrate ignorance at such levels that their mere presence is a threat to humanity. It has been assumed that in a democratic system, fair elections will guarantee that elected officials will be, at the very least, competent. It was also an implicit assumption that democracies will keep out those with evil intentions to roll back the ideals of the system. It is clear that these assumptions do not hold, and it may be time to ask whether democratic systems, as designed, are appropriate.

The concentration of power has always been a problem in a democratic system. The world’s largest democracy, which purports to be a unitary system, has accumulated power at the center, and that has led to the uneven treatment of states throughout its short history. As the pandemic illustrated, the center behaves schizophrenically, taking credit for what works and blaming the states for the rest. In the world’s greatest democracy, which is apparently getting greater every minute, the dangerous effects of concentrating power in the executive branch are becoming clearer.

Democracy has always been a fragile system. It relies on the intelligence, foresight, and compassion of elected leaders to perpetuate it, and all it takes is one or a few individuals to turn it back. Large democracies are sitting at the precipice of a societal tsunami. How they manage through this period will have a profound impact on history.


Tuesday, November 12, 2019

No Artificial Intelligence without Consciousness

Artificial Intelligence, a nebulous area, has been around since the advent of computers. Every decade, aided by increasing computing power and cheaper memory, those just out of school start to believe they have found something new. Most often, new terms are invented to relabel what has been known forever. In the latest iteration, terms such as machine learning and deep learning have been trending. More interestingly, in the current wave, a new profession has been coined, aptly called "data science." Consulting firms, running out of ideas, strategies, and PowerPoint magic, have been jumping in to make a fast buck. The larger ones have assembled "thousands of data scientists" to make AI for their clients. The smaller ones have raised hundreds of millions of dollars to "change the world." Now that we are approaching practical quantum computing within a decade, the next wave is about to start. The behemoths, stuck with excess cloud capacity, have been providing "tools" so that they can offload the costs of their stranded investments onto users. Unfortunately, all of this could be rendered obsolete in a few years. That should be a warning sign for educational institutions scrambling to create more data scientists, online or otherwise.

Autonomous automobiles and aircraft are not AI; they are transportation modalities with a computer onboard. Robots that can put nuts and bolts together, assemble objects of use, and occasionally jump in magnificent ways are not evidence of AI, just expert logic embedded in mechanical systems. Fooling people into thinking there is a human on the other side of the telephone is not AI, just a set of rules fed into a synthesizer. Machines beating humans in prescriptive games are not AI - they are either a massive set of rules fed into high-powered computers or pattern-finding neural nets (some call it deep learning) on steroids. None of these use cases has anything to do with AI, generalized or not. They just make some feel important and generate a lot of positive economics for their proponents.

However, we cannot move an inch forward in AI without a coherent theory of consciousness. Engineers have been on a quest to define, by quantitative means, what they do not seem to understand. It is possible that consciousness is a property that is externally applied. If so, the entities with consciousness are unlikely to understand it. In the absence of a theory from within, one possible explanation is that consciousness is induced by the simulator of the game. If so, it is likely that consciousness is a democratized property, not limited to humans, let alone living things. This may explain why humans, locked in a mathematical jail, seem unable to understand it.

Those chasing AI may need to spend more time thinking about a possible theory of consciousness. Without that, it is just age-old statistics.

Monday, October 28, 2019


The future of cloud computing is getting cloudy. The round trip from mainframes to personal computers and back to centralized computing has been inefficient, to say the least. It merely allowed a plethora of mediocre companies and ideas to try and die. Massive computing power never solved any problem; it just misled a lot of technologists seeking fame and riches. And, in the process, it contributed to worsening the climate problem. Granted, it did create the world's richest person, in anticipation of a never-ending scale-up that is unlikely to materialize. As Silicon Valley burns from fires started by electric wires, it is time to refocus on computing with less power.

We are entering a regime governed by distributed computing once again. This time, it is not going to be on desktops but everywhere. It is not going to be about data but about decisions. Humans could have taken a clue from their own societies, which thrive on distributed brainpower. Those seeking efficiencies and scale have always preferred centralization (1), not only in computing but also in organizational structures. But with centralization came a variety of costs, including but not limited to a lack of redundancy and flexibility, and volatile decision-making. Aided by a few monopolistic behemoths willing to sink billions of unused cash into computer farms, the "cloud" has been growing. Their strategies are ably aided by consulting gurus, experts of the present and not the future. Not to be left behind, developing countries have been in hot pursuit, assembling centralized computing power as if there is no tomorrow.

The future will not require such stranded investments spewing heat and pollution. Instead, we will need to invent massively distributed computing that requires almost no power. The minuscule amount of power needed should be produced in situ by movement, ambient temperature, or air.


Sunday, April 21, 2019

Artificial Intelligence and the slowing of Time

Artificial Intelligence has been percolating into many domains lately. If properly applied, AI could significantly slow down time for humans and organizations. From their inception, humans have been prisoners of space and time. Even in the modern context, most appear to lack time, with "work" expanding to fill any empty void. The modus operandi for organizations has been "putting out fires," and both the creation and ultimate extinguishment of "fires" have been distinguishing features of large companies - and that takes a lot of time.

The most valuable resource for humans, time, has been inflexible forever. Crude attempts at extending it beyond available horizons have had minimal impact. But now, humans could slow time down by delegating time-consuming tasks to obedient machines. Any organization or individual squeezed for time is going to fall further behind, as that squeeze is a clear symptom of an inability to move beyond the status quo. Humans are good at some things and exceptionally bad at others. Machines are quite complementary in this respect.

Any repeated task that takes the same amount of time in the current iteration as in the previous one indicates a deteriorating process. Ironically, those attempting to apply AI rely largely on human time as presently defined. Some appear to be proud of how many data scientists they have hired and others, of how much silicon they have assembled in close proximity. Neither will allow organizations to slow down time - just the opposite. The use of conventional metrics such as the quantity of human time and computers is symptomatic of a disease that prevents the slowing down of time for organizations of all types - both the users and the providers of services and products.

Individuals and organizations have a singular metric to assess whether they are utilizing AI properly. That metric is time - if it is not slowing down for you now, that is a problem.

Tuesday, March 19, 2019

AI and the weakest link

The recent debacle in aircraft design is a constant reminder that software engineers and “data scientists,” excited by the possibilities, could create havoc in many different industries. In transportation, as autopilot systems get smarter, they could take over virtually everything a vehicle does, terrestrial or otherwise. What the designers seem to have missed recently is that an aircraft is a conglomeration of data-transmitting mechanical sensors and sophisticated software. Traditional engineering education would have informed the designers that a system is only as good as its weakest link, but the younger ones may have skipped those courses. Here, faulty data from an old technology may have confused the brain. There are multiple issues to consider.

First, the design of systems needs to be holistic. This is easier said than done, as a complex vehicle is designed in parts and assembled. The teams who work on these components may have different skill sets, and the overall blueprint may not consider the biases in designs created by separate teams. For example, if the brain is designed with little flexibility to discard faulty data, the implicit expectation is that faulty data is unlikely. However, if the data is emerging from mechanical devices with no embedded intelligence, it is almost a certainty that faulty data will arrive at some point in the sensor’s life. Two recent aircraft failures in Asia and Africa, and one much earlier over the Atlantic, seem to have been caused by bad sensors sending bad data to a “sophisticated AI agent” with little capability to differentiate between good and bad data. So, either the sensors and other mechanical devices in the system need to be smart enough to recognize their own fallibilities, or the central brain has to be able to recognize when it is being fed bad data. There is a gap in engineering education, which has moved in the direction of high specialization without an overall understanding of systems design and risk. This is going to surface many issues across industries, from transportation and manufacturing to healthcare.
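The cross-checking idea can be made concrete. Below is a minimal sketch, not any real avionics code: the channel count, plausible range, and tolerance are all invented for illustration. It shows how a central brain might discard an implausible reading from a redundant set and flag disagreement, rather than trusting a single sensor.

```python
# Illustrative sketch: fusing redundant sensor channels so a downstream
# controller can detect a faulty feed instead of trusting one sensor.
from statistics import median

PLAUSIBLE_RANGE = (-30.0, 30.0)   # hypothetical angle-of-attack limits, degrees
MAX_DISAGREEMENT = 5.0            # hypothetical cross-channel tolerance

def fuse_readings(readings):
    """Return (fused_value, healthy) from redundant channels.

    A reading outside the plausible range is discarded outright; if the
    surviving channels still disagree beyond tolerance, the fused value
    is flagged unhealthy so the controller can disengage.
    """
    valid = [r for r in readings
             if PLAUSIBLE_RANGE[0] <= r <= PLAUSIBLE_RANGE[1]]
    if len(valid) < 2:
        return None, False          # not enough trustworthy data to decide
    fused = median(valid)
    healthy = max(valid) - min(valid) <= MAX_DISAGREEMENT
    return fused, healthy

# One stuck channel (74.5) is discarded; the two sane channels agree.
print(fuse_readings([2.1, 74.5, 2.4]))   # (2.25, True)
```

The point of the sketch is the failure path: when `fuse_readings` returns an unhealthy result, the design question becomes what the brain does next - which is exactly the disengagement issue discussed below.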

Second, the human is still the best-known risk mitigator, with a brain fine-tuned over millennia to sense and avert danger. In transportation, disengagement has to be a fundamental aspect of design. Although it could be tempting to sit back while an aircraft takes off and lands, or to read “Harry Potter” while behind the wheel of an autonomous terrestrial vehicle, these actions are ill-advised. The human has to expect the machine to misbehave and be ready, at the very least, to take over on complete disengagement at any point in time. Excited engineers may think otherwise, but we are nowhere close to fail-safe AI. Let’s not kid ourselves – writing code is easy, but making it work all the time is not. Educational institutions will do a disservice to the next generation of engineers if they impart the idea that AI can do without humans.

Transportation is just one industry; the problems witnessed span every industry today. In healthcare, for example, AI is slowly percolating, but designers have to remember that there are weak links there too. Ironically, in the provider arena, the weak link is the human, who “inputs” data into aging databases, sometimes called Electronic Medical Records (EMR) systems. Designed a couple of decades ago by engineers with no understanding of healthcare, these systems are receptacles of errors that can bring emerging AI and decision systems to their knees. Anyone who designs AI-driven decision systems in these environments has to be acutely aware of the uncertainty in inputs caused by humans, who are notorious for making mistakes with computer keyboards (or even voice commands), and by database containers designed with old technologies. So, designs here need to systematically consider disengagement when the AI agent is unable to decipher the data.
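That disengagement principle can be sketched in a few lines. Everything here is hypothetical - the field names, the plausibility check, the confidence floor, and the toy model are invented for illustration - but the shape is the point: the system refuses to act automatically whenever a record fails validation or the model is not confident, and defers to a human instead.

```python
# Hypothetical sketch of "disengage on undecipherable input": a decision
# helper defers to a human when a record fails validation or when the
# model's confidence falls below a floor.

REQUIRED_FIELDS = {"age", "systolic_bp", "heart_rate"}   # illustrative schema
CONFIDENCE_FLOOR = 0.85                                   # illustrative threshold

def decide(record, model):
    """Return ('auto', label) or ('defer', reason)."""
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        return "defer", f"missing fields: {sorted(missing)}"
    if not (0 < record["age"] < 120):
        return "defer", "implausible age"      # likely a keyboard entry error
    label, confidence = model(record)
    if confidence < CONFIDENCE_FLOOR:
        return "defer", f"low confidence: {confidence:.2f}"
    return "auto", label

# A toy stand-in for a trained model, confident on this complete record.
toy_model = lambda r: ("low-risk", 0.93)
print(decide({"age": 54, "systolic_bp": 128, "heart_rate": 72}, toy_model))
```

The design choice worth noting is that the "defer" path carries a reason: when the weak link is the data entry, the human reviewer needs to know why the machine stepped aside.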

In manufacturing, led by data collection enthusiasts from the 90s, older database technologies, sometimes elegantly called “Enterprise Resource Planning” (ERP) systems, dominate. They have been “warehousing” data for decades with little understanding of what it means. Now, “Hannah” and her cousins seem to have gotten religion, but again, there is a problem here. Cutting and dicing data to make pretty pictures for decision-makers does nothing to improve decision-making or to mitigate risk. The weak link here is the technology, designed and maintained by those who believe businesses are run by the collection, aggregation, and reporting of data. Unfortunately, successful businesses have moved on.

AI is a good thing, but not in the absence of logical thinking and systems design. Intelligence is not about the routine but about the ability to act when encountering the “non-routine.” As the software and hardware giants sell their wares, in the cloud and elsewhere, they have to understand the perils of bad and rushed technology. It is great to fool a restaurant with a simulated voice, it is great to demonstrate that “machine learning” on Twitter will create a racist, and it is great to tag and collate music fast, but none of these activities is going to propel humanity to the next level. Being good at search, operating system design, or hardware does not automatically make these companies “experts” in an area that is erroneously called Artificial Intelligence. There is nothing artificial about intelligence. Machines have the potential to be a lot more “intelligent” than humans. If anybody has any doubt, just take a look at the nation’s capital and imagine replacing the policy-makers with dumb machines. They would likely perform infinitely better. For the rest of us, reality is still an important concept, and there, we have to make sure the developments are in a beneficial direction.

Intelligence, ultimately, is about decision-making. Humans have been pretty good at it, barring a few narcissistic and mentally ill specimens in full view. They had to survive the dicey world they were handed when they climbed down from the trees and stood upright in the African savannah for the first time. Bad decision-making would have instantly eliminated them. They survived, albeit with fewer than 10,000 specimens squeezing through a harsh bottleneck. Later, single-cell organisms almost wiped them out on multiple occasions, but they survived again. Now, they encounter a technological discontinuity so foreign to their psyche that the dominant reaction has to be rejection. And, for the most part, it is. But their brains have morphed into a quantum computer, able to think about possibilities. This could be their Achilles’ heel, but then, life is not worth living without taking risks.

Educational institutions, still chasing the latest trends to make money, have the ultimate responsibility to get this right. To bring humanity to a Level 1 society, we need to move past our instincts, created by tactical objective functions driven by food and sex, and embrace intelligence. It is likely that machines will lead humanity out of its perils.

Saturday, March 16, 2019

Micro customization

Recent news (1) that a gastric resident delivery mechanism can deliver reliable, sustainable doses of agents over the long term is important. Innovation in chemical agents has moved ahead of mechanisms that can deliver them at the right time, in the optimum dose, by the best route, and to the most receptive site. The ability to optimally deliver an agent is likely more important than the agent itself. In the absence of such delivery mechanisms, manufacturers have stuck to the original blueprint - mass manufacturing of pills in the single dose that shows the best therapeutic index across the population. Personalized medicine has thus remained elusive and, more importantly, outside the business models of manufacturers.

That may be changing. Ironically, providers have moved ahead of other participants in the healthcare value chain in the implementation of personalized medicine. Recent advancements in Artificial Intelligence and the availability of abundant data have better positioned providers to understand, treat, and manage patients, individual by individual. If delivery mechanisms improve and become individually customizable, we can rapidly move to the next level of personalized medicine. Here, we can envision devices that measure, decide, and disburse micro doses to assure optimum delivery and complete compliance. Intelligent devices could be just around the corner, taking advantage of the IoT. With embedded intelligence on board, such devices could not only operate as initially primed but also self-learn and adjust over time. A couple of decades from now, medical professionals will likely view the current regime as completely archaic.
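The measure-decide-disburse loop can be caricatured in a few lines. This is purely an illustrative control sketch - the proportional gain, per-step dose cap, clearance rate, and target number are all invented, not any real device's protocol:

```python
# Toy sketch of a measure-decide-disburse loop: dose in proportion to the
# shortfall from a target level, capped per step and never negative.

def micro_dose_step(measured_level, target_level, gain=0.5, max_dose=2.0):
    """Proportional controller for one dosing step (all units arbitrary)."""
    shortfall = target_level - measured_level
    return min(max(gain * shortfall, 0.0), max_dose)

# Simulate a device nudging a therapeutic level toward a target of 10 units,
# with the agent clearing 20% between readings.
level = 4.0
for _ in range(6):
    level += micro_dose_step(level, target_level=10.0)
    level *= 0.8                     # metabolic clearance between doses
print(round(level, 2))
```

A self-learning device of the kind imagined above would, in effect, tune parameters like the gain from observed responses instead of fixing them at manufacture.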

More generally, any business driven by scale and a blind adherence to singular specifications will have great difficulty surviving in the future. Technology is readily available, not just for mass customization but for individual intervention. This is a regime change that will affect every industry and every business. Getting ahead of this rapid transformation is a necessary condition for success.

(1) Verma et al., “A gastric resident drug delivery system for prolonged gram-level dosing of tuberculosis treatment.”