
A Timeline of Machine Learning History

Machine learning, an application of artificial intelligence (AI), has some impressive capabilities. A machine learning algorithm can make software capable of learning on its own: without being explicitly programmed, it can seemingly grow "smarter" and become more accurate at predicting outcomes as it is fed historical data.

The History and Future of Machine Learning

Machine learning was first conceived from the mathematical modeling of neural networks. A 1943 paper by logician Walter Pitts and neuroscientist Warren McCulloch attempted to mathematically map out thought processes and decision-making in human cognition.
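
The McCulloch-Pitts model treated a neuron as a simple threshold unit that "fires" when the weighted sum of its inputs reaches a cutoff. The sketch below is only an illustration of that idea in modern code, not anything from the 1943 paper itself; the function name, weights and threshold are illustrative assumptions.

```python
# Illustrative sketch of a McCulloch-Pitts-style threshold neuron.
# The 1943 paper described the model in logical terms, not as code.

def threshold_neuron(inputs, weights, threshold):
    """Return 1 (fire) if the weighted sum of binary inputs meets the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Example: with equal weights and a threshold of 2, the unit behaves like a logical AND.
print(threshold_neuron([1, 1], [1, 1], threshold=2))  # 1
print(threshold_neuron([1, 0], [1, 1], threshold=2))  # 0
```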

In 1950, Alan Turing proposed the Turing Test, which became the litmus test for whether a machine could be deemed "intelligent." To earn that status, a machine had to convince a human being that it, too, was a human being. Soon after, a summer research program at Dartmouth College became the official birthplace of AI.

From this point on, "intelligent" machine learning algorithms and computer programs started to appear, doing everything from planning travel routes for salespeople to playing board games such as checkers and tic-tac-toe against humans.

Intelligent machines went on to do everything from using speech recognition, to learning to pronounce words the way a baby would, to defeating a world chess champion at his own game. The infographic below shows the history of machine learning and how it grew from mathematical models into sophisticated technology.

[Infographic: A timeline of major events in machine learning history.]
