ml-history

[linkstandalone]

Machine Learning: a brief history

A brief history of Machine Learning, based on my work on Autonomous Underwater Intervention using intelligent computer vision methods.

The concept of Machine Learning originated in the 1950s, when the computer science pioneer Alan Turing asked “Can machines think?” in the paper “Computing Machinery and Intelligence” (Turing, 1950) and introduced the famous “Turing test”, which describes how an intelligent machine should behave, along with other concepts that underpin modern Artificial Intelligence (AI) and Machine Learning (ML). The general idea Turing proposed is that a general-purpose computer can learn to perform a specific task on its own. In classical computer programming, the human gives the computer instructions (the program) and data to be processed, and the output is the answer to the problem. With machine learning, in contrast, humans give the computer the data and the answers, and the output is the set of rules the computer develops during training; these rules can then be applied to new data to generate novel answers.
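As a purely illustrative sketch of this contrast (my own example, assuming Python with scikit-learn installed; the temperature task and threshold are made up), the classical program encodes the rule by hand, while the learned model infers it from the data and the answers:

    # Classical programming: the human writes the rule.
    def classify_classical(temperature_c):
        return "hot" if temperature_c > 25 else "cold"

    # Machine learning: the human supplies data and answers; the rule is learned.
    from sklearn.tree import DecisionTreeClassifier

    data = [[10], [15], [20], [26], [30], [35]]               # the data
    answers = ["cold", "cold", "cold", "hot", "hot", "hot"]   # the answers
    model = DecisionTreeClassifier().fit(data, answers)       # the learned rules
    print(model.predict([[28]]))                              # apply them to new data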

In the early days, Machine Learning was closely tied to Artificial Intelligence and aimed to develop systems equipped with intelligence that could operate in complex environments and adapt the computer's behaviour accordingly (Langley, 2011). Early machine learning work therefore focused on acquiring knowledge and understanding the surrounding environment, and only later shifted towards complex mathematical and statistical models (Michalski et al., 1983).

Machine Learning is now a rapidly growing field used across many areas and disciplines, including the oil and gas and shipping industries, the financial sector, healthcare, and autonomous cars and robots. One of its most important applications is image recognition and classification, since the modern world produces an abundance of digital images. Other machine learning applications appear in day-to-day activities such as web search, smartphone speech recognition, and camera face detection. Research in Machine Learning, and in Artificial Intelligence more generally, is a strong and rapidly growing effort that has already produced self-driving car prototypes, autonomous Unmanned Aerial Vehicles (UAVs), and Autonomous Underwater Vehicles (AUVs) capable of navigating without human intervention.

The growth of machine learning over the last decade was made possible mainly by the unprecedented expansion of computing power and the abundance of available data (Big Data), together with the development of more sophisticated and efficient neural network algorithms, which allowed the explosion of machine learning applications. Artificial Neural Networks were first proposed in 1943 by McCulloch & Pitts (1943), who introduced simple artificial neurons designed as electric circuits modelled on human biological neurons.

In 1986 Sejnowski and Rosenberg introduced a network, revolutionary for its time, that converted English text to speech and could learn by itself to pronounce more than 20,000 words (Sejnowski & Rosenberg, 1986). Another significant breakthrough in neural networks came from the studies of Rumelhart and Hinton (Rumelhart et al., 1986; Hinton, 1990), which proposed a novel learning method using back-propagation in neural networks. The back-propagation method repeatedly adjusts the weights of the connections between neurons to minimise the output error. In the same period, LeCun et al. (1990) used networks with back-propagation for handwritten digit recognition. The main objective of that research was to demonstrate that neural networks can be used for image recognition problems without complex data preprocessing. The back-propagation approach to handwritten digit recognition was later extended to multilayer neural networks and to the creation of the MNIST dataset for handwritten digit recognition (LeCun et al., 1998), in which specifically designed Convolutional Neural Networks (CNNs) deal with the variability of the input images of handwritten digits.
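As a minimal sketch of that idea (my own toy example in Python/NumPy, not the authors' code; the data, learning rate and single-neuron setup are assumptions made for illustration), each gradient step nudges the weights in the direction that reduces the output error:

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.random((100, 2))                      # toy inputs
    y = (X.sum(axis=1) > 1.0).astype(float)       # desired outputs

    w, b, lr = np.zeros(2), 0.0, 0.5
    for _ in range(1000):
        out = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # forward pass through a sigmoid neuron
        error = out - y                           # output error
        grad = error * out * (1.0 - out)          # error propagated back through the sigmoid
        w -= lr * (X.T @ grad) / len(X)           # adjust the weights to reduce the error
        b -= lr * grad.mean()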

However, it was not until the mid-2000s that the first significant breakthrough in the use of deep learning networks occurred, when Hinton and Salakhutdinov (2006) proposed a new way of training a deep neural network and Deep Learning once more became the focus. The work of Hinton and Salakhutdinov (2006) showed that deep models could be trained one layer at a time, and they observed that performance was particularly good with even just three hidden layers. This deep learning model was tested on handwritten digit images and achieved much higher digit classification accuracy than any previous algorithm. The evolution of Machine Learning and Deep Learning continued over the following years with many significant contributions across the field, such as the unsupervised denoising autoencoders presented by Vincent et al. (2008), which learn to reconstruct clean inputs from corrupted versions of the data.
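A rough sketch of the denoising idea (my own simplification in Python/NumPy, not the authors' implementation; the layer sizes, noise level and tied weights are assumptions made for brevity): corrupt the input, then train a small autoencoder to reconstruct the clean version.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.random((200, 8))                         # clean inputs
    W = rng.normal(scale=0.1, size=(8, 4))           # encoder weights (decoder reuses W.T)
    lr = 0.1

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(500):
        noisy = X * (rng.random(X.shape) > 0.3)      # randomly zero out ~30% of the inputs
        hidden = sigmoid(noisy @ W)                  # encode the corrupted input
        recon = sigmoid(hidden @ W.T)                # decode back to the input space
        err = recon - X                              # compare with the *clean* input
        d_recon = err * recon * (1 - recon)          # back-propagate the reconstruction error
        d_hidden = (d_recon @ W) * hidden * (1 - hidden)
        grad_W = noisy.T @ d_hidden + (hidden.T @ d_recon).T
        W -= lr * grad_W / len(X)                    # tied-weight gradient step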

The developments in machine learning, and in deep learning in particular, continued over the following years in areas such as image recognition, face recognition, speech recognition, natural language processing and real-time translation, and even produced DeepMind's AlphaGo system, which beat professional human players at the game of Go and, in its 2017 version, learned the game without human knowledge (Silver et al., 2017). Furthermore, in the image classification competition organised around ImageNet, the ILSVRC (ImageNet Large Scale Visual Recognition Challenge), the top-5 image classification error has been decreasing since the dataset's introduction in 2009 (Deng et al., 2009) and has reached an astonishing 2.3%, well below the human error rate of around 5% (Hu et al., 2018).

The evolution of Machine Learning shows that, although it started as a branch of AI, it has grown into a distinct field of computer science, focused more on mathematical and statistical models and theories and on training machines to “learn” from data, rather than on acquiring the more abstract understanding of their environment that AI pursues. With the boom in data and the explosion in computer hardware capabilities, Machine Learning models have become more efficient at specific tasks, such as image recognition, and this in turn has revived another, long-dormant field: Deep Learning.

References