Artificial Intelligence (AI) is in an exciting period. Advances in computing power and cloud services mean that data can be processed at massive scale, making AI tools available to organisations of all shapes and sizes.
The expectation is that AI should be part of every business; the market is booming and investment has never been higher. Given the way AI is going, surely it is here to stay and the future has arrived. Unfortunately, history tells us that is not necessarily the case.
AI in one form or another has been around for a long time. In 1950, Alan Turing explored the potential for machines to solve problems using information in the same way that humans do, and in 1956 the term “artificial intelligence” was coined.
Since then AI has had a rollercoaster existence, with several ups and downs and two monumental collapses in which confidence and investment drained out of the market seemingly overnight. Each collapse followed a period of intense excitement that, in retrospect, overhyped the technology and its capabilities at the time.
The big question is: how do we know we are not in yet another period of overhype before an impending winter? What will make this time different, and where will the AI applications come from? To answer that, we need to understand what happened in the first two AI winters.
Following the early hypotheses of the 1950s, AI research focused on machine translation and on replicating the neurons of the human brain, and after some early successes the potential of AI generated excitement and funding. Little progress followed in the subsequent decade, however, and as the world changed and machine intelligence failed to materialise in any meaningful way, excitement turned to criticism.
In 1973 the UK Parliament commissioned an investigation into the state of AI, resulting in the highly critical Lighthill Report, which described the failure of AI to achieve its objectives. A review in the U.S. reached a similar conclusion: most AI research was unlikely to produce anything truly useful in the foreseeable future. As a result, almost all funding for AI research was withdrawn across the globe.
The first AI winter began to thaw in the early 1980s as interest in the potential of AI grew again, this time with more of a focus on commercial products and services.
Central to this renaissance was the emergence and uptake of expert systems. These aimed to replicate human decision making through a series of choices – if this, then do that. The potential of expert systems to replace humans in business processes and decision making again drove a great deal of interest.
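The if-this-then-do-that pattern behind expert systems can be sketched in a few lines. Everything below, the facts, the rules and the car-diagnosis domain, is hypothetical and purely illustrative of how such a system encodes human decision making as explicit rules rather than learned behaviour.

```python
# Minimal sketch of an expert system's rule pattern (hypothetical domain).
# Real systems of the era (e.g. rule engines with forward chaining) were far
# larger, but the core idea was exactly this: conditions over known facts,
# each paired with a conclusion or action.

facts = {"engine_cranks": False, "battery_charged": False}

# Each rule: (condition over the facts, conclusion produced when it fires).
rules = [
    (lambda f: not f["battery_charged"], "check_battery"),
    (lambda f: f["battery_charged"] and not f["engine_cranks"], "check_starter_motor"),
]

def diagnose(facts):
    """Fire every rule whose condition holds and collect its conclusion."""
    return [conclusion for condition, conclusion in rules if condition(facts)]

print(diagnose(facts))  # -> ['check_battery']
```

The appeal, and the limitation, is visible even at this scale: the system only "knows" what a human expert has hand-encoded as rules, which is precisely the brittleness critics later pointed to.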
This continued for several years, with spending passing a billion dollars by 1985. Such was the excitement that in 1984 two AI researchers, Roger Schank and Marvin Minsky, warned that it was in fact hype, that it was out of control, and that it would lead to another crash. They even coined the term "AI winter".
In 1987 they were proved right. The limits of what expert systems could achieve had already begun to show in 1984, when John McCarthy criticised them for lacking common sense and any knowledge of their own limitations. The sector collapsed once more, and by the early 1990s hundreds of AI companies had failed or been acquired, signalling the start of the second AI winter.
As with the first winter, the combination of excitement, aspiration, promising technology and investment pushed expectations far beyond reality, and the result was ultimately a loss of confidence.
Since the early 2000s, interest, funding and development in AI have been growing yet again, and since 2010 the sector has seen a real resurgence.
Huge volumes of data can now be processed opening up the potential for machine learning and data mining to be used on a far wider scale. This capability has been used by the leading vendors to create AI services available on demand and at a remarkably low price point.
In parallel there has been a rise in automation software that replicates simple activities normally performed by humans, alongside many other levels of automation and intelligent automation offered to users via easy-to-use platforms.