History and development of Artificial Intelligence (AI).
Just write 500 words about its history and development. There is no need to define Artificial Intelligence, and no handwriting, please.
The content will be checked through plagiarism-detection software. Thank you.
Early Days
During the Second World War, the noted British mathematician and computing pioneer Alan Turing worked to crack the ‘Enigma’ code used by German forces to send messages securely. Turing and his team created the Bombe machine, which was used to decipher Enigma’s messages.
The Enigma and Bombe machines laid the foundations for machine learning. According to Turing, a machine that could converse with humans without the humans realising they were talking to a machine would win the “imitation game” and could be said to be “intelligent”.
In 1956, American computer scientist John McCarthy organised the Dartmouth Conference, at which the term ‘Artificial Intelligence’ was first adopted. Research centres popped up across the United States to explore the potential of AI. Researchers Allen Newell and Herbert Simon were instrumental in promoting AI as a field of computer science that could transform the world.
Getting Serious About AI Research
In 1951, a program running on the Ferranti Mark 1 machine successfully used an algorithm to play checkers. Subsequently, Newell and Simon developed the General Problem Solver algorithm to solve mathematical problems. Also in the 1950s, John McCarthy, often known as the father of AI, developed the LISP programming language, which became important in AI research.
In the 1960s, researchers emphasised developing algorithms to solve mathematical problems and prove geometric theorems. In the late 1960s, computer scientists worked on machine vision and on bringing machine learning to robots. WABOT-1, the first ‘intelligent’ humanoid robot, was built in Japan in 1972.
AI Winters
However, despite this well-funded global effort over several decades, computer scientists found it incredibly difficult to create intelligence in machines. To be successful, AI applications (such as vision learning) required the processing of enormous amounts of data, and the computers of the time were not powerful enough to handle data at that scale. Governments and corporations began losing faith in AI.
Therefore, from the mid-1970s to the mid-1990s, computer scientists dealt with an acute shortage of funding for AI research. These years became known as the ‘AI Winters’.
New Millennium, New Opportunities
In the late 1990s, American corporations once again became interested in AI. The Japanese government had unveiled plans back in the 1980s to develop a ‘fifth generation’ computer to advance machine learning, and AI enthusiasts believed that computers would soon be able to carry on conversations, translate languages, interpret pictures, and reason like people. In 1997, IBM’s Deep Blue became the first computer to beat a reigning world chess champion when it defeated Garry Kasparov.
Some AI funding dried up when the dot-com bubble burst in the early 2000s. Yet machine learning continued its march, largely thanks to improvements in computer hardware. Corporations and governments successfully used machine learning methods in narrow domains.
Exponential gains in computer processing power and storage capacity allowed companies to store and crunch vast quantities of data for the first time. In the past 15 years, Amazon, Google, Baidu, and others have leveraged machine learning to huge commercial advantage. Beyond processing user data to understand consumer behaviour, these companies have continued to work on computer vision, natural language processing, and a whole host of other AI applications. Machine learning is now embedded in many of the online services we use. As a result, the technology sector now drives the American stock market.