The little-known relationship between deep learning and the innovator’s dilemma

In 1997, Harvard Business School professor Clayton Christensen's book "The Innovator's Dilemma" became a sensation among venture capitalists and entrepreneurs. The lesson most readers took away is that well-run businesses cannot afford to switch to a new approach, one that will eventually replace their existing business model, until it is too late. The same applies to research. The second wave of neural networks in the 1980s and 1990s is a good example: a new approach that underperformed for years, then began to revolutionize artificial intelligence around 2010.

Various neural networks had been studied as machine learning mechanisms since the early 1950s, but for decades they were not very good at learning anything interesting. In 1979, Kunihiko Fukushima first publicized his work on so-called shift-invariant neural networks, which allowed his self-organizing networks to learn to classify handwritten digits wherever they appeared in an image. In the 1980s, a technique called backpropagation was rediscovered; it allowed a form of supervised learning in which the network was told what the correct answer should be. In 1989, Yann LeCun combined backpropagation with Fukushima's ideas into something called a Convolutional Neural Network (CNN). LeCun, too, focused on recognizing images of handwritten digits.
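To make that idea concrete, here is a minimal sketch of supervised learning via backpropagation: a tiny two-layer network trained on the classic XOR toy problem in Python with NumPy. The architecture, data, and learning rate are illustrative assumptions, not Fukushima's or LeCun's actual networks.

```python
import numpy as np

# Toy supervised-learning task: learn XOR with a tiny two-layer network.
# Purely illustrative; not any historical model.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # correct answers

W1 = rng.normal(0, 1, (2, 4))  # input -> hidden weights
W2 = rng.normal(0, 1, (4, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass: compute the network's prediction.
    h = sigmoid(X @ W1)
    out = sigmoid(h @ W2)

    # Backward pass: the error (the gap between the prediction and the
    # correct answer) is propagated back through the network, layer by layer.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer

    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    W1 -= lr * X.T @ d_h

print(out.round(3))  # predictions should approach [0, 1, 1, 0]
```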

Over the next ten years, LeCun and his colleagues built a modified version of a handwriting database from the National Institute of Standards and Technology (NIST), containing 60,000 training digits and 10,000 test digits. This standard test set, called MNIST, let researchers precisely measure and compare the effects of different improvements to CNNs. It was a huge advance, but when applied to the arbitrary images generated by early self-driving cars and industrial robots, CNNs could not match the AI methods already entrenched in computer vision.
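As a rough illustration of the kind of model MNIST is used to benchmark, here is a minimal LeNet-style CNN sketch in PyTorch. The layer sizes and training settings are modern, illustrative assumptions, not LeCun's original architecture.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# A small LeNet-style CNN for MNIST's 28x28 grayscale digit images.
model = nn.Sequential(
    nn.Conv2d(1, 6, kernel_size=5, padding=2),  # 1x28x28 -> 6x28x28
    nn.ReLU(),
    nn.MaxPool2d(2),                            # -> 6x14x14
    nn.Conv2d(6, 16, kernel_size=5),            # -> 16x10x10
    nn.ReLU(),
    nn.MaxPool2d(2),                            # -> 16x5x5
    nn.Flatten(),
    nn.Linear(16 * 5 * 5, 120),
    nn.ReLU(),
    nn.Linear(120, 10),                         # one score per digit class
)

# MNIST ships as 60,000 training digits and 10,000 test digits.
train_set = datasets.MNIST("data", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=64, shuffle=True)

opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for images, labels in loader:           # one pass over the training digits
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()                     # backpropagation
    opt.step()
```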

But through the 2000s, more and more learning techniques and algorithmic improvements were added to CNNs, resulting in what is now called deep learning. In 2012, deep learning seemed to come out of nowhere, suddenly outperforming standard computer vision algorithms on an object-recognition benchmark called ImageNet. The poor cousin of computer vision had triumphed, and it went on to revolutionize the field of artificial intelligence. A few people had worked hard for decades and surprised everyone. Congratulations to all of them, famous and not so famous.

But be careful. The message of Christensen's book is that the disruption never stops. Those on top today will be blindsided by new approaches they have not even begun to think about. Small groups of renegades are trying all kinds of new things, some of them willing to work in obscurity for decades, regardless of success or failure. Someday, some of these people will surprise us all.

This article is reproduced from: https://www.solidot.org/story?sid=71865
Copyright belongs to the original author.
