The notion of artificial intelligence has been around since ancient Greece in mythological form, from Pygmalion to the little robot owl in Clash of the Titans. Real hardware accompanies these myths: the very real Antikythera mechanism, the earliest known analog computer. Researchers believe the salvaged archaeological find was used to calculate astronomical events such as eclipses, and even the dates of the Olympic Games.
Humans vs. Machines
Fritz Lang's 1927 silent film Metropolis, set in the year 2026, imagines an AI-powered android so advanced it can impersonate a revolutionary worker to the detriment of all. All the fears and skepticism concerning AI may just have sprung forth from this cautionary tale, stuck in the world's collective consciousness and repeated in endless science fiction media. In short, it cast AI as humanity's rival in a competition only one side can win, a trope many still cling to.
"The Imitation Game" / Netflix
In 1950, the first game-playing AI program naturally came from the father of computer science, Alan Turing. He called the chess program Turochamp, though the Ferranti Mark 1 computer he tried to run it on lacked the memory to execute it. In 1952, in lieu of an actual computer, Turing manually calculated the program's moves with paper and pencil during a match against fellow computer luminary Alick Glennie. Turing's program lost in 29 moves.
The Real Imitation Game
"Ex Machina" / Universal
Of course, he also invented the Turing test, which gauges whether an artificial intelligence can trick you into thinking it's human.
Recognizing that chess may be a tad too ambitious to start with, IBM programmer Arthur Samuel created a checkers-playing program for the IBM 701, demonstrated on television in 1956. The program leveraged a checkers guide book to discern good moves from bad, an early instance of "machine learning," a phrase Samuel also coined. It lost, but IBM's stock rose 15 points after the televised event. Six years later the program, now running on an IBM 7094, defeated a checkers champion.
"Horizons of Science: Thinking Machines"
It wasn't until 1957 that a fully operational chess program ran on a computer. Again, an IBM programmer was responsible: this time Alex Bernstein, using the 704 model, a machine that could perform one billion calculations in a single day to compute the orbits of satellites. The program took one-tenth of a second to assess its current situation, and eight minutes to plot outcomes for several possible moves. It did not learn from its mistakes, so any flaw could be exploited over and over.
AI went through a lengthy trough of disillusionment before a resurgence during the computer boom of the 1980s. Led by Feng-hsiung Hsu, a team of computer scientists at Carnegie Mellon University developed Deep Thought, a chess-playing computer that took on and defeated chess grandmaster Bent Larsen in 1988. Able to think up to 11 moves ahead, Deep Thought won several computer chess championships over the next half-decade.
New Kind of Intelligence
Deep Thought lost to Garry Kasparov in 1989. By 1996, the far more powerful Deep Blue, also developed by Hsu (now working for IBM), was ready to challenge the Russian, by then the world champion. With grandmaster Joel Benjamin as a human coach, Deep Blue shook Kasparov with an odd gambit, sacrificing a pawn for no discernible benefit.
“I had played a lot of computers but had never experienced anything like this,” Kasparov later wrote in an essay for Time. “I could feel — I could smell — a new kind of intelligence across the table.”
Kasparov lost the game, but eked out a win in the overall match.
The next year, IBM upgraded its team with several more human chess masters and doubled the number of positions the machine could evaluate per second, to 200 million. It could anticipate up to 20 moves ahead, and it defeated Kasparov, ushering in a new era of interest in AI.
It Belongs in a Museum
Deep Blue showed a machine can outmaneuver a human in logic-based strategy, which is exactly what it should be good at. Chess comes down to a cornerstone of computer programming: conditional logic. If you move here, I can move here or here, and so on. But from an operational standpoint, that has more in common with the Greek Antikythera mechanism than the human brain. Both are now enshrined in museums.
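That "if you move here, I can move here or here" logic is the heart of minimax search, the technique chess engines build on. Here is a minimal sketch in Python over a tiny hand-built game tree; the tree and its leaf scores are invented for illustration, and a real engine like Deep Blue searched hundreds of millions of positions with a far richer evaluation function.

```python
def minimax(node, maximizing):
    """Return the best achievable score from this node.

    A leaf is a number (the position's score); an inner node is a list
    of child nodes (the moves available). "If you move here, I can move
    here or here" becomes: recurse into each child and pick the best.
    """
    if isinstance(node, (int, float)):  # leaf: an evaluated position
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us; the opponent then picks the reply worst for us.
tree = [
    [3, 5],   # our move A: opponent can leave us with 3 or 5 -> picks 3
    [2, 9],   # our move B: opponent can leave us with 2 or 9 -> picks 2
]
print(minimax(tree, maximizing=True))  # -> 3 (move A is the safer choice)
```

The machine assumes a perfect opponent: it scores each move by the worst reply it allows, then picks the move with the best worst case.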
So IBM decided its next supercomputer would think more like a human, only far faster, boasting 2,880 processor cores and 16 TB of RAM. In 2011, Watson was ready to do just that.
The Game is Afoot
The name of the game was Jeopardy!, and the competition included Ken Jennings, who had won 74 games in a row, and Brad Rutter. Though named after IBM's former president Thomas Watson, Sr., not the super sleuth's sidekick, Watson was designed to turn clues into victory, in the form of a question, of course.
What Is Planned Obsolescence, Alex?
Like Kasparov, Jennings thought there was no way a machine could beat him, believing it unable to detect the abstract and sometimes tricky nuances of a trivia game show. Watson finished with $53,147 more than Jennings over the two-day tournament.
"I felt obsolete," Jennings said after. "I felt like a Detroit auto worker of the '80s seeing a robot that could do his job on the assembly line."
Working for the Man
Six years later, Watson isn't hitting the game show circuit, although it would be a great teammate for The $25,000 Pyramid. Instead, it's joined the world of enterprise and industry, using that giant cloud-based brain to improve customer service or alert workers to equipment and logistical issues. The key is connecting Watson's analytics power to all the sensors and machines in your plant and to data up and down the supply chain, bringing Industry 4.0 not just to the Fortune 500, but to virtually any business.
Ready for Anything
If Watson's older, slower brother could see 20 moves ahead on a chessboard, think how far ahead of the game you would be if every piece of your supply chain were accounted for, with its effects on processes, production shifts, supplier changes, and equipment maintenance projected weeks in advance and updated in real time. IBM calls this cognitive computing.
Benefits Include: Reduction in Downtime
Merging your IoT network with IBM's AI platform and its asset management system, Maximo, allows you to sense, communicate with, and diagnose problems in any and all connected devices and machinery in the plant. IBM says this can reduce unplanned downtime by up to 47%.
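The core idea is simple to sketch: watch each machine's sensor readings and flag anything drifting outside its normal operating range before it becomes downtime. The sensor names, readings, and thresholds below are invented for illustration (this is not the Maximo API); a real deployment would stream this data into a platform like Watson for far more sophisticated analysis.

```python
# Hypothetical acceptable operating ranges for one machine's sensors.
NORMAL_RANGES = {
    "motor_temp_c": (20, 85),
    "vibration_mm_s": (0, 4.5),
    "line_pressure_kpa": (180, 220),
}

def diagnose(readings):
    """Return (sensor, value) pairs that fall outside their normal range."""
    alerts = []
    for sensor, value in readings.items():
        low, high = NORMAL_RANGES[sensor]
        if not (low <= value <= high):
            alerts.append((sensor, value))
    return alerts

# One snapshot of readings: the motor is running hot.
sample = {"motor_temp_c": 92.3, "vibration_mm_s": 3.1, "line_pressure_kpa": 205}
print(diagnose(sample))  # -> [('motor_temp_c', 92.3)]
```

Simple threshold checks like this are the baseline; the pitch for a cognitive platform is learning what "normal" looks like from historical data instead of hard-coding it.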
… Improving Process and Product Quality
The data gathered from workflow processes, throughput, and yield can cut defect rates by up to 48%, according to IBM.
…Optimizing Product Development
Yes, all this data coming in will help in making on-the-fly decisions to meet a deadline or find the best supplier, but it also informs how you should iterate the next product you're making. It's referred to as "continuous engineering."
"When designing within the context of the IoT, engineers must plan ahead to collect the data that will support the analytics that will provide the insight that will guide engineering decisions that will improve the design," explains Steve Shoaf, Watson IoT marketing manager.
Try Watson for Yourself
And in an extremely meta moment for the AI, IBM has turned Watson into a game. In the simulation, which you can play here, you get the digital keys to a model shoe factory and are asked a series of questions to keep production humming. As in life, there are plenty of unexpected challenges, such as weather and equipment hiccups. Watson is there as backup, the phone-a-friend that actually calls you and provides the answers before you ask the question.