
We will represent the visible layer activation variables by $v_i$, the hidden activations by $h_j$, and the corresponding vector variables by $\mathbf{v} = (v_i)$ and $\mathbf{h} = (h_j)$, where $i = 1,\dots,N$ and $j = 1,\dots,S$ index the individual neurons in the visible and hidden layers, respectively. Restricted Boltzmann Machines are stochastic models that assume symmetric connectivity between the visible and hidden layers (see Fig. 1A) and seek to model the structure of a given dataset. They are energy-based models, where the energy of a given configuration of activations $v_i$ and $h_j$ is given by
$$E_{\mathrm{RBM}}(\mathbf{v},\mathbf{h}\mid W,\mathbf{b}_v,\mathbf{b}_h) = -\mathbf{v}^\top W \mathbf{h} - \mathbf{b}_v^\top \mathbf{v} - \mathbf{b}_h^\top \mathbf{h},$$
and the probability of a given configuration is given by
$$P(\mathbf{v},\mathbf{h}) = \exp\!\bigl(-E_{\mathrm{RBM}}(\mathbf{v},\mathbf{h}\mid W,\mathbf{b}_v,\mathbf{b}_h)\bigr)\,/\,Z(W,\mathbf{b}_v,\mathbf{b}_h),$$
where $Z(W,\mathbf{b}_v,\mathbf{b}_h)$ is the partition function.
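As a concrete illustration of the energy and unnormalized probability defined above, a minimal NumPy sketch follows; it is not taken from the original text, and the variable names (`W`, `b_v`, `b_h`) are ours:

```python
import numpy as np

def rbm_energy(v, h, W, b_v, b_h):
    """Binary RBM energy: E(v, h) = -v^T W h - b_v^T v - b_h^T h."""
    return -v @ W @ h - b_v @ v - b_h @ h

def rbm_unnormalized_prob(v, h, W, b_v, b_h):
    """exp(-E(v, h)); dividing by the partition function Z would give P(v, h)."""
    return np.exp(-rbm_energy(v, h, W, b_v, b_h))
```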

One can extend the RBM to continuous-valued visible variables by modifying the energy function, obtaining the Gaussian-binary RBM
$$E_{\mathrm{RBM}}(\mathbf{v},\mathbf{h}\mid W,\mathbf{b}_v,\mathbf{b}_h) = -\frac{\mathbf{v}^\top}{\sigma^2} W \mathbf{h} + \frac{\lVert \mathbf{b}_v - \mathbf{v}\rVert^2}{2\sigma^2} - \mathbf{b}_h^\top \mathbf{h}.$$
RBMs are usually trained through contrastive divergence, which approximately follows the gradient of the cost function
$$\mathrm{CD}_n(W,\mathbf{b}_v,\mathbf{b}_h) = \mathrm{KL}\bigl(P_0(\mathbf{v}\mid W,\mathbf{b}_v,\mathbf{b}_h)\,\|\,P(\mathbf{v}\mid W,\mathbf{b}_v,\mathbf{b}_h)\bigr) - \mathrm{KL}\bigl(P_n(\mathbf{v}\mid W,\mathbf{b}_v,\mathbf{b}_h)\,\|\,P(\mathbf{v}\mid W,\mathbf{b}_v,\mathbf{b}_h)\bigr),$$
where $P_0$ is the data distribution and $P_n$ is the distribution of the visible layer after $n$ MCMC steps (Carreira-Perpinan and Hinton, 2005). The function $\mathrm{CD}_n$ gives an approximation to maximum-likelihood (ML) estimation of the weight matrix $W$. Maximizing the marginal probability $P(\mathbf{v}_D\mid W,\mathbf{b}_v,\mathbf{b}_h)$ of the data $\mathbf{v}_D$ under the model yields an ML estimate that is hard to compute, as it involves averages over the equilibrium distribution $P(\mathbf{v}\mid W,\mathbf{b}_v,\mathbf{b}_h)$.
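For completeness, a similarly hedged sketch of the Gaussian-binary energy above, where `sigma` denotes the visible-unit standard deviation $\sigma$ and the function name is illustrative:

```python
import numpy as np

def gaussian_binary_rbm_energy(v, h, W, b_v, b_h, sigma):
    """E(v, h) = -(v^T / sigma^2) W h + ||b_v - v||^2 / (2 sigma^2) - b_h^T h."""
    return (-(v / sigma**2) @ W @ h
            + np.sum((b_v - v) ** 2) / (2.0 * sigma**2)
            - b_h @ h)
```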

The parameter update for an RBM using CD learning is then given by
$$\Delta\theta \propto \left\langle \frac{\partial E_{\mathrm{RBM}}}{\partial \theta} \right\rangle_0 - \left\langle \frac{\partial E_{\mathrm{RBM}}}{\partial \theta} \right\rangle_n,$$
where $\langle\,\cdot\,\rangle_n$ denotes an average over the distribution $P_n$ of the hidden and visible variables after $n$ MCMC steps. The weight updates then become
$$\Delta W_{i,j} \propto \frac{1}{\sigma^2}\langle v_i h_j\rangle_0 - \frac{1}{\sigma^2}\langle v_i h_j\rangle_n.$$
In general, $n=1$ already gives good results (Hinton and Salakhutdinov, 2006). Autoencoders are deterministic models with two weight matrices $W_1$ and $W_2$ representing the flow of data from the visible-to-hidden and hidden-to-visible layers, respectively (see Fig. 1B). AEs are trained to perform optimal reconstruction of the visible layer, often by minimizing the mean-squared error (MSE) in a reconstruction task. This is usually evaluated as follows: given an activation pattern $\mathbf{v}$ in the visible layer, we evaluate the activation of the hidden layer as $\mathbf{h} = \mathrm{sigm}(\mathbf{v}^\top W_1 + \mathbf{b}_h)$, where $\mathbf{b}_h$ denotes the bias of the hidden layer. These activations are then propagated back to the visible layer through $\hat{\mathbf{v}} = \mathrm{sigm}(\mathbf{h}^\top W_2 + \mathbf{b}_v)$, and the weights $W_1$ and $W_2$ are trained to minimize a distance measure between the original and reconstructed visible layers.
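A minimal sketch of a single CD-1 parameter update for the Gaussian-binary RBM, following the update rule above, is given below. It assumes sigmoid hidden units, a mean-field (noise-free) Gaussian reconstruction of the visible layer, and a learning rate `lr`; these choices and the helper names are ours, not from the text:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, b_v, b_h, sigma=1.0, lr=1e-3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    # Positive phase: hidden probabilities and samples driven by the data v0.
    h0_prob = sigmoid(v0 @ W / sigma**2 + b_h)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # One Gibbs step: mean-field reconstruction of the visible layer, then the hiddens.
    v1 = b_v + h0 @ W.T
    h1_prob = sigmoid(v1 @ W / sigma**2 + b_h)
    # CD-1 updates: <v h>_0 - <v h>_1, scaled by 1/sigma^2 as in Delta W above.
    W   = W   + lr * (np.outer(v0, h0_prob) - np.outer(v1, h1_prob)) / sigma**2
    b_v = b_v + lr * (v0 - v1) / sigma**2
    b_h = b_h + lr * (h0_prob - h1_prob)
    return W, b_v, b_h
```

Likewise, a sketch of the autoencoder reconstruction pass and MSE described above, reusing `sigmoid` and NumPy from the previous snippet, with the same caveat that the helper name is illustrative:

```python
def ae_reconstruct(v, W1, W2, b_h, b_v):
    h     = sigmoid(v @ W1 + b_h)          # visible -> hidden
    v_hat = sigmoid(h @ W2 + b_v)          # hidden -> visible reconstruction
    mse   = np.mean((v - v_hat) ** 2)      # reconstruction error to be minimized
    return v_hat, mse
```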
