Before you start reading this post, I want to mention that much of the material here, including the images, is taken from this book, in case you want to get a copy:
Biological neural networks:
They rely on neurobiological processes that establish highly complex relationships: neurons communicate by exchanging electrical impulses with one another.
A neuron is composed of the cell body, called the soma, and two types of branches: the axon and the dendrites. A neuron receives electrical impulses from other neurons through its dendrites and transmits signals through its axon. The dendrites and cell body receive the input signals; the cell body combines and integrates them and emits the output signals along the axon.
Artificial neural networks:
Artificial neural networks emulate part of the functioning of the human brain; the key difference is that an artificial neuron only approximates the behavior of a biological one.
They can be single-layer or multilayer. A multilayer network has an input layer, which receives information from the outside; one or more hidden layers, where the data is processed and the input is transformed into something the following units can use; and an output layer, which produces the possible results.
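As a minimal sketch of how such a multilayer network processes data, the following code propagates an input through a hypothetical 2-3-1 network (the weights, biases, and sigmoid activation are illustrative assumptions, not values from the book):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer_forward(inputs, weights, biases):
    # Each unit computes sigmoid(weighted sum of its inputs + bias).
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hypothetical network: 2 inputs, one hidden layer of 3 units, 1 output.
hidden_w = [[0.5, -0.4], [0.3, 0.8], [-0.6, 0.1]]
hidden_b = [0.0, 0.1, -0.2]
output_w = [[1.2, -0.7, 0.5]]
output_b = [0.05]

x = [0.9, 0.2]                              # information from the outside (input layer)
h = layer_forward(x, hidden_w, hidden_b)    # hidden layer transforms the input
y = layer_forward(h, output_w, output_b)    # output layer gives the possible result
print(y)
```

The input layer itself does no computation here; it simply hands the raw values to the hidden layer, which is where the transformation happens.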
Threshold logic unit:
A threshold logic unit (TLU) compares the weighted sum of its inputs with a threshold: if the sum exceeds the threshold, the output is 1; otherwise it is 0.
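The comparison described above can be sketched in a few lines (the weights and threshold below are made-up values for illustration):

```python
def tlu(inputs, weights, threshold):
    # Output 1 if the weighted sum of the inputs exceeds the threshold, else 0.
    weighted_sum = sum(w * x for w, x in zip(weights, inputs))
    return 1 if weighted_sum > threshold else 0

# Hypothetical example: two inputs with weights 0.6 and 0.4, threshold 0.5.
print(tlu([1, 1], [0.6, 0.4], 0.5))  # 1.0 > 0.5 -> 1
print(tlu([1, 0], [0.6, 0.4], 0.5))  # 0.6 > 0.5 -> 1
print(tlu([0, 1], [0.6, 0.4], 0.5))  # 0.4 <= 0.5 -> 0
```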
A single TLU is known as a perceptron. Training consists of adjusting the weights of the inputs until the desired outputs are achieved, which amounts to minimizing an error function; one of the most widely used is the squared error. The inputs must be numerical: an output of 1 means the dot product exceeded the threshold, and otherwise the output is 0.
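The training process described above can be sketched with the classic perceptron learning rule; this is a minimal illustration, assuming the AND function as the training set and folding the threshold into a bias weight (a common trick, not something stated in the text):

```python
def train_perceptron(samples, lr=0.1, epochs=50):
    # Start with zero weights; the threshold is represented as a bias weight
    # on a constant extra input of -1.
    n = len(samples[0][0])
    weights = [0.0] * (n + 1)
    data = [(x + [-1.0], t) for x, t in samples]   # append the bias input
    for _ in range(epochs):
        for x, target in data:
            out = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
            error = target - out
            # Adjust each weight in proportion to the error and its input.
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
    return weights

# Hypothetical training set: the AND function (linearly separable, so learnable).
samples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w = train_perceptron(samples)
for x, t in samples:
    out = 1 if sum(wi * xi for wi, xi in zip(w, x + [-1.0])) > 0 else 0
    print(x, out)
```

Each weight update nudges the dot product toward the desired side of the threshold, which is exactly the "adjust until the desired outputs are achieved" idea.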
A single TLU can only represent binary functions whose classes are linearly separable.
EXAMPLE OF A NEURAL NETWORK:
Three types of layers: Input layers, hidden layers and output layers.
In the input layer we feed in a patient's medical measurements, and in the output layer we see whether the patient has a high or a low risk of suffering from a disease.
Feedforward (forward propagation) networks:
Their graphs contain no loops; they are static and produce a single set of output values for a given input.
Recurrent networks, in contrast, do have loops because of their feedback connections.
Backpropagation networks:
An input pattern is applied to the first layer of the network and propagated through all the higher layers until it generates an output; the error is then calculated for each output neuron.
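The forward pass and per-neuron error calculation described above can be sketched as follows (the network weights, input pattern, and target are illustrative assumptions; the weight-update step of backpropagation is only mentioned in a comment to keep the sketch short):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(inputs, layers):
    # Propagate the input pattern through every layer in order.
    activation = inputs
    for weights, biases in layers:
        activation = [sigmoid(sum(w * a for w, a in zip(ws, activation)) + b)
                      for ws, b in zip(weights, biases)]
    return activation

# Hypothetical 2-2-2 network and a target pattern.
layers = [
    ([[0.4, -0.3], [0.2, 0.6]], [0.0, 0.1]),     # hidden layer
    ([[0.7, -0.5], [-0.2, 0.9]], [0.05, -0.1]),  # output layer
]
pattern = [1.0, 0.5]
target = [1.0, 0.0]

output = forward(pattern, layers)
# Squared error for each output neuron; backpropagation would use these
# errors to adjust the weights layer by layer, from the output backwards.
errors = [(t - o) ** 2 for t, o in zip(target, output)]
print(output, errors)
```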
Applications:
Medicine: analysis of breast cancer cells
Voice: voice assistants (Siri, Amazon Alexa)
Security: fingerprint recognition
Banking: reading checks
Automobiles: autopilot systems