New Horizons for Chip Design

 

Challenges Remain for AI

In the movies, artificial intelligence (AI) is often represented as an android, a human-like robot. In reality, AI will mostly work in the background, anticipating our needs based on prior experience in areas such as autonomous driving, healthcare, and even financial services.

To work as intended, artificial intelligence requires large data sets. The data can come from many sensors in the field or be gathered from traditional sources such as e-commerce websites. Making sense of these large datasets requires pattern recognition and machine learning, in which algorithms build models from subsets of the larger dataset. Machine learning can be thought of as the learning from raw data that is necessary to empower AI systems.
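
To make this concrete, here is a minimal sketch of the idea, written in Python with scikit-learn (the article names no tools, so the library choice is an assumption): a model is fit on one subset of a dataset and then judged on examples it has never seen.

```python
# Minimal machine-learning sketch: fit a model on a training subset
# of a larger dataset, then measure how well it generalizes to the
# held-out remainder. Uses scikit-learn's bundled handwritten digits.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

digits = load_digits()  # 1,797 labeled 8x8 images of handwritten digits

# Hold back a quarter of the data; the model learns only from the rest.
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # "learning from raw data"

print(f"accuracy on unseen digits: {model.score(X_test, y_test):.2f}")
```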

One unique quality that all humans have is the innate ability to recognize patterns: patterns in nature, in numbers. Art in all its forms is a delicate balance of patterns. Humans uniquely and automatically recognize faces and facial expressions; other mammals rely on different senses or don't necessarily differentiate individuals.

AI pattern recognition is, at best, nascent but rapidly improving. Today, some AI systems do a better job than humans of identifying common traits, provided the task is mundane and repetitive, as in manufacturing. There are several areas, however, where AI falls short in pattern recognition, and therefore falls short of our romanticized view of how it should all work.

Objects, for example. When an object is presented full-on, in good light, an AI system can correctly identify it if it is contained within its database. With machine learning, an AI system can also catalog unfamiliar objects, recognizing them the next time they appear.
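
A toy sketch of that catalog-and-recognize loop follows. It compares feature vectors with cosine similarity; the vision model that would turn camera images into those vectors is assumed rather than shown, and the names are illustrative.

```python
# Toy "catalog it once, recognize it next time" loop. Objects arrive
# as feature vectors; a real system would compute these with a trained
# vision model, which is assumed here.
import numpy as np

catalog = {}  # label -> stored feature vector

def identify(features, threshold=0.9):
    """Return the catalogued label most similar to `features`, if any."""
    best_label, best_sim = None, threshold
    for label, stored in catalog.items():
        sim = np.dot(features, stored) / (
            np.linalg.norm(features) * np.linalg.norm(stored))
        if sim > best_sim:
            best_label, best_sim = label, sim
    return best_label

def see(features):
    """Identify a known object, or catalog an unfamiliar one."""
    label = identify(features)
    if label is None:
        label = f"object-{len(catalog)}"
        catalog[label] = features  # first encounter: add to the database
    return label

ball = np.array([0.9, 0.1, 0.4])
print(see(ball))         # unfamiliar: catalogued as "object-0"
print(see(ball * 1.01))  # seen again: recognized as "object-0"
```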

However, when presented with a familiar object at an oblique angle or in less-than-ideal lighting, today's AI systems will often fail. Humans naturally infer complete shapes or missing details when part of an object is hidden, based on experience and expectation. For example, a ball is a sphere, so seeing only half of one suggests the other half is merely hidden; an AI system might not grasp that subtlety.

Sounds are another area where AI sometimes fails today. With music, changes in pitch, tone, or even tempo can confuse an AI system, which is ironic given that music is built on mathematical concepts. A human might appreciate a dance remix of a popular song, but an AI system might not recognize it at all, treating it as something brand new.
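
One way to see why: a naive audio fingerprint keys on exact spectral content, so even a one-semitone pitch shift yields a different signature. The sketch below uses a deliberately toy fingerprint (the dominant FFT bin of a pure tone), not a production matching algorithm.

```python
# Why a pitch-shifted "remix" can look brand new to a naive system:
# the exact spectral fingerprint no longer matches.
import numpy as np

SR = 22_050  # sample rate in Hz; with one second of audio, bins are 1 Hz apart

def fingerprint(freq_hz):
    """Toy fingerprint: the dominant FFT bin of a one-second sine tone."""
    t = np.arange(SR) / SR
    tone = np.sin(2 * np.pi * freq_hz * t)
    return int(np.argmax(np.abs(np.fft.rfft(tone))))

original = fingerprint(440.0)               # A4
remix = fingerprint(440.0 * 2 ** (1 / 12))  # same note, one semitone higher

print(original, remix)                  # 440 vs. 466: different bins
print("same song?", original == remix)  # False; an exact match fails
```

A human hears the same melody shifted up slightly; the exact-match system sees an entirely different signature.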

If music has its roots in math, what about linguistics? Parsing language is another uniquely human skill. Humans use a variety of nuances to give the same word different meanings. Today, AI systems cannot reliably comprehend the difference in tone between an angry speaker and a happy one. Then there is sarcasm: being able to hear when someone is being insincere.
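
The sarcasm problem is easy to demonstrate. A word-counting sentiment scorer, sketched below with a tiny illustrative lexicon, has no ear for tone: a plainly sarcastic sentence scores as positive because "great" and "perfect" are on the positive list.

```python
# Toy lexicon-based sentiment scorer. Real lexicons are far larger,
# but word counting shares the same blind spot: it cannot hear tone.
POSITIVE = {"great", "wonderful", "love", "perfect"}
NEGATIVE = {"terrible", "hate", "awful", "broken"}

def sentiment(text):
    words = text.lower().replace(",", " ").replace(".", " ").split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

# A human hears sarcasm; the scorer sees two positive words and one negative.
print(sentiment("Oh great, the build is broken again. Just perfect."))  # positive
```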

One of the early pioneers in artificial intelligence, Marvin Minsky, wrote a book for his course at M.I.T. called The Society of Mind. In it, Minsky examines how humans learn through experience, citing various experiments, including one suggesting that the human brain automatically flips the inverted images formed on our retinas (each eye has only a single lens, so the image it projects is upside down). In that experiment, a newborn was intentionally blindfolded for a few days after birth. At the end of the experiment, the child's brain did not automatically flip the image of the real world, so the child would reach up for a rattle that was in fact below (the brain did correct for this after the experiment; the human brain is somewhat resilient).

From these experiments, Minsky shows how humans have millions of years of evolution on our side. Intelligence as we know it today evolved from the reptilian brain (serving our basic animal needs) to what is known as the neocortex, where high-level thinking comes into play. Survival instincts also play a vital role in the brain's development: we jolt alert at certain sounds, or even at certain behavior in others. Some of this is hard-wired into our brains and requires no direct experience to learn.

AI systems have no such built-in advantages; everything must be learned. The problem here is the potential for garbage in, garbage out. In other words, in the enormous data sets required for AI, the data itself could be poisoned with deliberately bad entries. Imagine if an autonomous vehicle were given incorrect speed limits for roadways, or incorrect geolocation coordinates that send the car where no roads exist, or into obstacles such as buildings or bodies of water.
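
One practical defense is to validate incoming data against known physical bounds before it ever reaches the model. The sketch below does this for the scenario above; the field names and limits are illustrative assumptions, not any real vehicle's API.

```python
# Reject implausible readings before they can poison downstream systems.
def plausible_speed_limit(mph):
    return 5 <= mph <= 85  # assumed bound: no legal road falls outside it

def plausible_coordinates(lat, lon):
    return -90 <= lat <= 90 and -180 <= lon <= 180

readings = [
    {"speed_limit": 65, "lat": 37.77, "lon": -122.42},   # plausible
    {"speed_limit": 250, "lat": 37.77, "lon": -122.42},  # poisoned speed limit
    {"speed_limit": 55, "lat": 137.0, "lon": -122.42},   # impossible latitude
]

clean = [r for r in readings
         if plausible_speed_limit(r["speed_limit"])
         and plausible_coordinates(r["lat"], r["lon"])]
print(f"kept {len(clean)} of {len(readings)} readings")  # kept 1 of 3
```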

All this isn’t to say that we will never have AI. We will; it’s just going to take a while. In addition to better algorithms for machine learning, there is enormous potential in new chip designs. And, perhaps not surprisingly, these new chip designs will themselves utilize machine learning and AI.