Spaun - the world’s largest functional brain model.
- Created with Nengo, specialized simulation software for modelling neural networks.
- The simulated system comprises 3 key components: an eye, a brain, and an arm.
- Eye: provides info for the brain to process, reading from a vertical screen on the laptop-like graphic that Spaun faces.
- Brain: “thought bubbles” show how Spaun processes info from its eye; which part of the “brain” is in use is symbolized by colour (ie. while writing, the motor section shows warmer).
- Arm: writes out the answer to a test after the brain processes the input, displayed on the horizontal surface of the “laptop”. The writing is not perfect but rather has some flaws and scribbles, similar to a human’s.
- Connection to neural networks: the brain itself is a neural network, a collection of nerve cells (called neurons) that communicate with each other. Similar to a human, Spaun operates with sub-networks of neurons.
Both Spaun and the human brain:
- contain the following regions: motor cortex, striatum, and globus pallidus (C. Eliasmith, personal communication, February 15, 2017).
- have the ability to control an arm and write thoughts.
- perform recognition tasks: writing out either a single number it sees or a number from a list, including the number corresponding to a certain position in the list.
- recognize typewritten numbers from 0 to 9 with close to 100% accuracy.
- perform the same recognition tasks as above, but writing the digit as close to identical as possible (called ‘copy drawing’).
- can count and write out numbers, starting from a given number and increasing by a fixed amount.
- Spaun is missing regions of the brain, including the hippocampus, amygdala, and cerebellum (C. Eliasmith, personal communication, February 15, 2017).
- Spaun only deals with the numbers 0 to 9.
- Spaun recognizes handwritten numbers (from 0 to 9) with approximately 94% accuracy, while the human brain is about 98% accurate.
Feed Forward Network, Single/Multi Layer Perceptron, Back-Propagation
- While a single-layer perceptron can only learn linear functions, a multi-layer perceptron can also learn non-linear functions.
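The linear limitation above can be seen directly with the classic AND/XOR contrast. This is a minimal sketch in pure Python (the helper name `train_perceptron` is made up for illustration): the classic perceptron learning rule converges on AND, which is linearly separable, but can never reach zero errors on XOR, which is not.

```python
def train_perceptron(samples, epochs=100, lr=0.1):
    """Classic single-layer perceptron rule on 2-input binary samples."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        errors = 0
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            if err != 0:
                errors += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if errors == 0:          # a full pass with no mistakes: converged
            return w, b, True
    return w, b, False           # never managed to separate the data

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

print(train_perceptron(AND)[2])  # True  -- AND is linearly separable
print(train_perceptron(XOR)[2])  # False -- XOR is not
```

A multi-layer perceptron solves XOR precisely because its hidden layer lets it compose non-linear decision boundaries, which is what back-propagation trains.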
The goal of neural networks is to mimic the behaviour of the brain, not the mind: scientists are still struggling to define what “thought” is, which means the mind/brain translation problem will not be overcome until scientists come up with a clear definition of human thought, consciousness, perception and action as cerebral phenomena.
Artificial neural networks vs the human brain in terms of architecture: the human brain is a “network” of roughly 100 billion neurons, each connected to many thousands of other neurons, which means the brain contains on the order of 100 trillion connections. One of the most common neural network architectures is a simple three-layer structure of artificial neurons, the three-layer “perceptron”, called the TLP architecture.
The paper emphasizes that neural networks are either feed-forward or feedback networks: in feed-forward neural networks like the TLP, information goes in one direction, from the input layer through the hidden layer (there can be more than one) to the output layer, and there are no cycles. Meanwhile, in a feedback network (or recurrent network) there are no dedicated input or output layers: all neurons act as both input and output units.
On the other hand, the human brain works like a layered feed-forward network, but it also has many connections that lead information backward to neurons of a “preceding layer”; i.e. the brain is a feedback network in which there can be many cycles of neurons.
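The feed-forward vs feedback distinction above can be sketched in a few lines of pure Python (all names and weights here are made up for illustration): the feed-forward pass visits each layer exactly once, while the recurrent update feeds the hidden state back into itself at every time step, forming a cycle.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Feed-forward: information moves strictly input -> hidden -> output, no cycles.
def feedforward(x, w_ih, w_ho):
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x))) for row in w_ih]
    return [sigmoid(sum(wo * hi for wo, hi in zip(row, h))) for row in w_ho]

# Feedback/recurrent: the hidden state h is fed back into itself over time.
def recurrent(xs, w_in, w_rec):
    h = 0.0
    for x in xs:
        h = sigmoid(w_in * x + w_rec * h)   # h depends on its own past value
    return h

out = feedforward([1.0, 0.0], [[0.5, -0.5], [1.0, 1.0]], [[1.0, 1.0]])
```

Setting `w_rec` to 0 in `recurrent` removes the cycle and collapses it back to a feed-forward computation, which is one way to see that the recurrence is what carries state between steps.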
ANN vs DANN
- Image recognition: the input is millions of pixels; the output is only a few numbers with some meaning (ie the probability the image is of an animal or a building).
- An ANN has an input layer, 1 or 2 hidden layers, and an output layer. Trained SUPERVISED.
- A deep ANN is simply an ANN with more than one hidden layer. Trained both UNSUPERVISED and SUPERVISED. A directed graph.
- Deep ANNs are therefore now used to perform crucial tasks of computer vision; you’ll find them in self-driving cars, for example. Other uses are drug discovery in medicine, recommendation systems, natural language processing, speech recognition, and many, many others.
- Deep learning works because of the architecture of the network AND the optimization routine applied to that architecture.
- The network is a directed graph, meaning each hidden unit is connected to many hidden units in the layer below it. So each hidden layer going further into the network is a NON-LINEAR combination of the layers below it, because of all the combining and recombining of the outputs from the previous units together with their activation functions.
- When the OPTIMIZATION routine is applied to the network, each hidden layer then becomes an OPTIMALLY WEIGHTED, NON-LINEAR combination of the layer below it.
- When each sequential hidden layer has less units than the one below it, each hidden layer becomes a LOWER DIMENSIONAL PROJECTION of the layer below it as well. So the information from the layer below is nicely summarized by a NON-LINEAR, OPTIMALLY WEIGHTED, LOWER DIMENSIONAL PROJECTION in each subsequent layer of the deep network.
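The shrinking-layer idea above can be sketched with NumPy (the layer widths here are made up for illustration, and the weights are random rather than trained, so the projections are non-linear and lower-dimensional but not yet “optimally weighted” — that is what the optimization routine would add):

```python
import numpy as np

rng = np.random.default_rng(0)
widths = [1000, 128, 32, 8, 2]           # input -> shrinking hidden layers -> output

# Random (untrained) weight matrices; one per pair of adjacent layers.
weights = [rng.standard_normal((m, n)) * 0.05
           for m, n in zip(widths[:-1], widths[1:])]

def forward(x):
    for w in weights:
        x = np.tanh(x @ w)               # non-linear combination of the layer below
    return x

x = rng.standard_normal((1, widths[0]))  # one "image" of 1000 pixels
out = forward(x)
print(out.shape)                         # (1, 2): a few meaningful numbers
```

Each matrix multiply maps a layer onto fewer dimensions than the one below it, so by the output the million-ish input values have been summarized as just a couple of numbers — the image-recognition pattern described at the top of this section.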