A restricted Boltzmann machine (RBM) is a type of artificial neural network (ANN) that learns a probability distribution over its inputs. An artificial neural network is a system of hardware and/or software patterned after the operation of neurons in the human brain.
Invented by Paul Smolensky and later popularized by Geoffrey Hinton, RBM algorithms are useful for dimensionality reduction, classification, regression, collaborative filtering, feature learning and topic modeling. Like perceptrons, they are a relatively simple type of neural network.
RBMs fall into the categories of stochastic and generative models of artificial intelligence. Stochastic refers to anything based on probabilities and generative means that it uses AI to produce (generate) a desired output. Generative models contrast with discriminative models, which classify existing data.
Like other multi-layer neural networks, RBMs have layers of artificial neurons; in their case, exactly two. The first layer is the visible (input) layer. The second is a hidden layer that accepts only what the first layer passes on. The "restricted" in RBM refers to the constraint that neurons within the same layer cannot communicate with one another; neurons can only communicate with the other layer. (In a standard Boltzmann machine, neurons in the hidden layer do intercommunicate.) Each node within a layer performs its own calculation and then makes a stochastic decision about whether to pass its signal on to the next layer.
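The two-layer structure and stochastic activations described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a full RBM implementation: the layer sizes, weights and input vector are made-up values, and only the visible-to-hidden sampling step is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 6 visible (input) units, 3 hidden units.
n_visible, n_hidden = 6, 3

# Weights connect visible units to hidden units only; there are no
# visible-visible or hidden-hidden connections (the "restriction").
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))
b_hidden = np.zeros(n_hidden)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_hidden(v):
    # Each hidden unit computes its activation probability from the
    # visible layer, then makes an independent stochastic binary
    # decision about whether to fire.
    p = sigmoid(v @ W + b_hidden)
    h = (rng.random(n_hidden) < p).astype(int)
    return h, p

v = np.array([1, 0, 1, 1, 0, 1])  # an example binary input vector
h, p = sample_hidden(v)
```

Because no hidden unit's probability depends on any other hidden unit, the whole layer can be sampled in one vectorized step, which is exactly what the restriction buys.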
Though RBMs are still sometimes used, they have largely been replaced by generative adversarial networks and variational autoencoders.