
Artificial Neurons



In an attempt to understand how the biological brain works in order to design artificial intelligence (AI), Warren McCulloch and Walter Pitts published the first concept of a simplified brain cell, the McCulloch-Pitts (MCP) neuron, in 1943.

Neurons are interconnected nerve cells in the brain that are involved in the processing and transmitting of chemical and electrical signals. McCulloch and Pitts described a neuron as a simple logic gate with binary outputs — multiple signals arrive at the neuron and are integrated into the cell body. If the accumulated signal exceeds a certain threshold, an output signal is generated.

A few years later, Frank Rosenblatt published the first concept of the perceptron learning rule based on the MCP neuron model. He proposed an algorithm that would automatically learn the optimal weight coefficients that are multiplied with the input features in order to make the decision of whether a neuron fires or not. In the context of supervised learning and classification, such an algorithm could then be used to predict if a sample belongs to one class or the other.
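The learning rule described above can be sketched in a few lines of code. This is a minimal illustrative implementation, not Rosenblatt's original formulation: the toy dataset, the learning rate eta, and the epoch count are all made-up values chosen so the example converges.

```python
# Sketch of the perceptron learning rule: for each sample, nudge each
# weight by (true label - predicted label) scaled by the learning rate
# and the corresponding input feature.

def predict(w, b, x):
    """Unit step decision function: 1 if net input >= 0, else -1."""
    z = sum(wj * xj for wj, xj in zip(w, x)) + b
    return 1 if z >= 0 else -1

def train(samples, labels, eta=0.1, epochs=10):
    """Learn weights and bias with the perceptron update rule."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            update = eta * (y - predict(w, b, x))
            w = [wj + update * xj for wj, xj in zip(w, x)]
            b += update
    return w, b

# Toy linearly separable data: class is the sign of the first coordinate.
X = [[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -1.5]]
y = [1, 1, -1, -1]
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # all training samples classified correctly
```

Note that the update is zero whenever the prediction is already correct, so on linearly separable data the weights stop changing once every sample is classified correctly.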

The Formal Definition of an Artificial Neuron

More formally, we can put the idea behind artificial neurons into the context of a binary classification task where we refer to our two classes as 1 (positive class) and -1 (negative class). We can then define a decision function ϕ(z) that takes a linear combination of certain input values x = [x1, x2, ..., xm] and a corresponding weight vector w = [w1, w2, ..., wm], where z is the net input: z = w1x1 + w2x2 + ... + wmxm.
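The net input z is just the dot product of w and x. As a concrete illustration (the weight and input values here are made up):

```python
# Net input z = w1*x1 + w2*x2 + ... + wm*xm, i.e. the dot product of w and x.
def net_input(w, x):
    return sum(wj * xj for wj, xj in zip(w, x))

w = [0.5, -1.0, 2.0]   # example weight vector (illustrative values)
x = [1.0, 2.0, 0.5]    # example input vector
print(net_input(w, x))  # 0.5*1.0 - 1.0*2.0 + 2.0*0.5 = -0.5
```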

Now, if the net input of a particular sample x^(i) is greater than a defined threshold θ, we predict class 1, and class -1 otherwise. In the perceptron algorithm, the decision function ϕ(⋅) is a variant of a unit step function:

ϕ(z) = 1 if z ≥ θ, -1 otherwise
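In code, this unit step function is a one-liner (the threshold default of 0 is an illustrative choice):

```python
def phi(z, theta=0.0):
    """Unit step decision function: 1 if z >= theta, else -1."""
    return 1 if z >= theta else -1

print(phi(0.7))   # 1
print(phi(-0.3))  # -1
```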

In machine learning literature, the negative threshold, or weight, w0=−θ, is called the bias unit.
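This "bias trick" moves the threshold into the weight vector: prepending x0 = 1 to x and w0 = -θ to w makes thresholding at θ equivalent to thresholding the augmented net input at zero. A small sketch with made-up values:

```python
# Bias trick: with x0 = 1 and w0 = -theta, the condition z >= theta
# becomes z_aug >= 0 on the augmented vectors.
def net_input_with_bias(w, x, theta):
    w_aug = [-theta] + list(w)
    x_aug = [1.0] + list(x)
    return sum(wj * xj for wj, xj in zip(w_aug, x_aug))

w, x, theta = [0.5, 1.0], [2.0, -1.0], 0.3  # illustrative values
z = sum(wj * xj for wj, xj in zip(w, x))     # net input without bias
z_aug = net_input_with_bias(w, x, theta)     # net input with bias folded in
print((z >= theta) == (z_aug >= 0))  # True: both formulations agree
```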

Figure: the net input z = wᵀx is squashed into a binary output (-1 or 1) by the decision function of the perceptron (left), which can then be used to discriminate between two linearly separable classes (right).

Suhaib Bin Younis | @suhaibbinyounis

