
Shalini Lal

Dr. Shalini Lal is the founder of Infinity OD, a boutique consulting firm enabling organizational transformation. Before this she was Director HR at Deutsche Bank and CHRO of Escorts Agri-Machinery. She has a PhD from UCLA, and is an IIM-A and St. Stephen’s College alumna.


Learning to Learn from Artificial Intelligence

An AI system at the University of Adelaide and UNSW ADFA recreated an experiment that won the 2001 Nobel Prize in Physics in just one hour.


I bet this sounds strange. Very strange.

Why should we learn from artificial intelligence when artificial intelligence is in fact trying hard to become like us? Isn’t our ability to think broadly and abstractly one of the things that in fact gives us an edge over machines?

After all, aren’t machines simply programmed to do one (or a few) things repeatedly? Sure, they can crunch ridiculous amounts of data, but they do so by following instructions given by us. Where is the learning in all of this?

Well, more on that in a bit.

Let’s first go back to how you and I have spent our lives learning. Think back to school and try to remember times when you were actively encouraged to find your own answers.

If you went to most mainstream schools, chances are you were given chunks of knowledge, and asked to prove your understanding of the material. Sometimes, there was just one right answer expected from you–the one in the book; and at other times you could provide one of a few answers, all within an acceptable format.

And things may not have been all that different at work either. Depending on your organization, learning may have been limited to understanding and fulfilling expectations of stakeholders. Sometimes this could mean learning to do things the way they were done in the past, at other times this could mean learning to do things incrementally better.

In fact, one could argue that our experience of learning was at least partially (some would argue mostly) constrained by the systems we were part of. Of course, there were always the few who managed to retain or even hone their abilities to keep learning. And these people are the ones in high demand today.

In the last few years, “learning agility” has entered our vocabulary as a key competency expected of leadership. In fact, several consulting firms insist that learning agility is the key differentiator for senior leaders. After all, in a rapidly changing VUCA (volatile, uncertain, complex, ambiguous) world, the future belongs to those who can quickly make sense of and respond to a changing environment.

We want people who have experience, but are not constrained by it.

And nowhere is this as true as in the entrepreneurial world, where each day the entrepreneur, sometimes alone and sometimes with a few colleagues, must figure out how best to respond to our changing world.

Yet, this leads us to the big challenge—how do we now learn to learn?

Here’s an idea. What if we look at how Artificial Intelligence programs are learning to learn?

For some time now, AI has been able to perform well within the narrow parameters of well-defined tasks. Think of Siri, Google Maps or even Amazon’s recommendations. All are examples of narrow AI.

However, a new set of programs is building AI that is more general. These perform not just by being programmed for a particular task, but by being able to learn. Called deep learning (more precisely, deep reinforcement learning: reinforcement learning combined with neural networks), these programs are designed to mimic the human potential to learn.

In the last few months, these programs have astounded us with their abilities.

Take AlphaGo, a program designed by Google DeepMind. It has taught itself how to play the incredibly complex ancient Chinese board game “Go” by learning from experience: its own and that of others. Those who understand Go explain that winning this game is not about logic alone. It requires a human-like intuition, one that cannot be achieved just by crunching data. You can watch a short video here about how AlphaGo trained. Or read more here.

In 1997, IBM’s Deep Blue defeated chess grandmaster Garry Kasparov, shocking the world with its power. But what makes AI like AlphaGo different is that it was not programmed just to play Go.

Instead, it was designed to learn. And by learning to play a game which experts say is one of the world’s most complicated, it was able to defeat the reigning champion, Lee Sedol.

An AI system at the University of Adelaide and UNSW ADFA recreated an experiment that won the 2001 Nobel Prize in Physics in just one hour. You can learn more about that here. The AI system was able to recreate the complex quantum experiment that produces an extremely cold gas trapped in a laser beam, known as a Bose-Einstein condensate.

"I didn't expect the machine could learn to do the experiment itself, from scratch, in under an hour," said co-lead researcher Paul Wigley, of the Australian National University Research School of Physics and Engineering, in a statement.

"A simple computer program would have taken longer than the age of the universe to run through all the combinations and work this out."

Since these programs were originally modelled on the human learning process, perhaps it would be an interesting case of reverse engineering if we could in fact learn the principles of learning coded into these programs.

Here’s how AlphaGo learnt to master the game from scratch.

1. Wide Exposure: It was first exposed to 100,000 different games played by skilled amateurs, and used this breadth to learn the many ways the game could be played (literally).

2. Mimicry: Having learnt the patterns of the game, it then needed to be able to play like a human, perhaps only a novice at this stage.

3. Progressive Learning: Once it learnt to play like a human, it played itself 30 million times! Using reinforcement learning, the program improved itself incrementally, learning to avoid errors and build on its wins. As it became better, it was able to beat earlier versions of itself 80-90% of the time. (A toy version of this self-play loop is sketched below.)

Just three simple rules: wide exposure to learn from the experience of others; getting into the arena and playing the game oneself; followed by progressive learning.
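To make the self-play step concrete, here is a minimal sketch in Python. This is emphatically not DeepMind’s system: the toy game of Nim (players alternately take one to three stones from a pile; whoever takes the last stone wins) stands in for Go, and a simple table of move values stands in for AlphaGo’s neural networks. All names and numbers in the sketch are illustrative assumptions, not details from the AlphaGo work.

import random
from collections import defaultdict

# Toy self-play reinforcement learning on Nim: players alternately take
# 1-3 stones from a pile; whoever takes the last stone wins.
ACTIONS = (1, 2, 3)
values = defaultdict(float)   # (stones_left, stones_taken) -> learned value
ALPHA, EPSILON = 0.1, 0.2     # learning rate, exploration rate

def choose(stones, explore=True):
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)                       # try something new
    return max(legal, key=lambda a: values[(stones, a)])  # best known move

def self_play_episode(start=15):
    stones, history, player = start, [], 0
    while stones > 0:
        move = choose(stones)
        history.append((player, stones, move))
        stones -= move
        player = 1 - player
    winner = history[-1][0]   # whoever took the last stone won
    # Credit assignment: nudge every move on the winning side up,
    # and every move on the losing side down.
    for mover, state, move in history:
        reward = 1.0 if mover == winner else -1.0
        values[(state, move)] += ALPHA * (reward - values[(state, move)])

for _ in range(30_000):       # "it played itself", many times over
    self_play_episode()

# Greedy policy after training; optimal Nim play leaves a multiple of 4.
print([choose(s, explore=False) for s in range(1, 16)])

Run long enough, the table converges on Nim’s classic winning strategy (leave your opponent a multiple of four stones), discovered purely by reinforcing moves that appeared in wins and discouraging those that appeared in losses. The same three ingredients drive it: exposure, play, and incremental improvement.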

What implications does this have for people learning to learn? There are several.

First, it reinforces the importance of broad exposure. Before a program can learn the patterns of the game, it needs to see the game played in many different ways, in many different settings. Wide exposure is critical to pattern recognition. Learning, then, begins with broad exposure to the field.

Second, it reminds us how important it is to do something in order to learn about it. In the case of AlphaGo, it was mimicry, or learning to act like a human. In our case it might be about making the leap from being a critic to a practitioner. Even a novice.

Third, we now have a way to use past experience as an asset, not a constraint. While wide exposure helped AlphaGo make sense of the many ways the game could be played, the real learning happened when it played several million rounds of the game, improving incrementally with each one.

Fourth, the principles of reinforcement learning are all about examining which moves led to wins and which led to losses. In our case, this sounds a lot like continuous action learning, or learning from reflection and feedback. In other words: reflect, absorb and repeat.

Of course, it all sounds like a lot of work, which I think it is. And we may need to decide for ourselves which learning tasks we would like to devote our energies to.

But one thing is clear—humans have all along had the potential for great learning. Sometimes it just takes a machine to remind us of that.

Disclaimer: The views expressed in the article above are those of the author and do not necessarily represent or reflect the views of this publishing house


Tags assigned to this article:
artificial intelligence
