Shalise S. Ayromloo, PhD

Demystifying Neural Networks: Part 1

Ever had that moment when you’re about to submit an online form, only to be met with a CAPTCHA? That little twinge of annoyance might just turn into a cheeky grin when you realize that CAPTCHAs are designed to distinguish you, a fabulous human, from those pesky automated bots. After all, CAPTCHA stands for Completely Automated Public Turing test to tell Computers and Humans apart! Let’s celebrate this tiny triumph and tackle a text-based CAPTCHA together, shall we?

Go ahead and replace the numbers and symbols in the text below with the correct letters to reveal a famous song lyric.

CAPTCHA: W0rkin’ 9 t0 5, Wh@t @ w@y t0 m@k3 @ l1v1n’

Did you get it?

Corrected Phrase: Workin’ 9 to 5, What a way to make a livin’

Nailed it! We humans excel at pattern recognition, context comprehension, and deciphering distorted information. (And if you didn’t recognize that iconic lyric from Dolly Parton’s 9 to 5, what have you been doing with your life?) Alright, let’s cool it with the bragging — point made. Give yourself a well-deserved pat on the back, and carry on!

We owe our ability to think, learn, remember, and perceive, as well as physical functions like movement, breathing, and maintaining a stable internal environment, to our nervous system. Simply put, our biological neural network is the machine that makes it all happen.

As scientists aim to emulate that biological neural network with Artificial Intelligence (AI), they look to the human nervous system for inspiration to create machines or algorithms capable of performing tasks that would typically require human intelligence.

The building blocks of our nervous system's neural network are neurons: specialized cells designed to transmit and process information in the form of electrical and chemical signals. Within AI, the building blocks of neural networks are artificial neurons, also known as perceptrons.

Every artificial neuron has three components, just like your favourite trilogy (Lord of the Rings, anyone?).

1. Inputs: Data points fed into the neuron are called inputs. Think of these like a delicious feast in the Hobbits’ Shire but for the brain. Each input is associated with a weight, which represents the strength or importance of that input in determining the output of the neuron. To stick with the feast analogy, you could consider the weights as the nutritional values of different dishes.

2. Summation function: The inputs are multiplied by their respective weights and then summed together to compute a weighted sum of inputs. Imagine this as what a nutritionist would do when consulted to recommend a dietary plan.

3. Activation function: The output of the summation function is then passed through an activation function, which determines the output of the neuron. Just like dietary plans vary depending on an individual’s health goals, such as weight loss, sugar control, or muscle gain, the choice of activation function depends on the task at hand, e.g., binary classification, multi-class classification, or regression.

Figure 1. Author’s drawing for a visual representation of an artificial neuron. Arrows pointing towards the neuron represent inputs. The triangles at the tail of each arrow represent the weights associated with each input. Inside the neuron, the output of the summation function is passed to the activation function denoted by tilde.
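
To make those three components concrete, here is a minimal sketch of a single artificial neuron in Python with NumPy. The sigmoid activation and the specific input and weight values are illustrative assumptions (sigmoid is a common choice when the output should look like a probability), not something prescribed above.

```python
import numpy as np

def sigmoid(z):
    """Activation function: squashes the weighted sum into the (0, 1) range."""
    return 1.0 / (1.0 + np.exp(-z))

def artificial_neuron(inputs, weights, bias=0.0):
    """A single artificial neuron (perceptron-style unit).

    1. Inputs arrive as a vector, each paired with a weight.
    2. The summation function computes the weighted sum (plus a bias).
    3. The activation function turns that sum into the neuron's output.
    """
    weighted_sum = np.dot(inputs, weights) + bias  # summation function
    return sigmoid(weighted_sum)                   # activation function

# Example with made-up numbers: three inputs and their weights
x = np.array([0.5, 0.2, 0.9])
w = np.array([0.4, -0.6, 0.1])
print(artificial_neuron(x, w))  # a value between 0 and 1
```

In a real network, the weights would not be set by hand like this; they are learned during training.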

Neuron outputs become the inputs for other neurons, and those outputs become inputs to others, until the web of interconnected layers forms an artificial neural network, much like the layers of a scrumptious cake. This layered organization allows for sequential information extraction, with each layer building on the output of the previous one. It’s a fantastic way to process high-dimensional and large datasets efficiently. A neural network has three main kinds of layers: the input layer, one or more hidden layers, and the output layer. Imagine an irresistibly indulgent, rich, and moist neural cake with layers you’ve been craving without realizing it!

1. Input layer: The first layer of a neural network is the input layer, which consists of neurons that receive data. The number of neurons in the input layer depends on the number of variables (features) in the dataset, not the number of observations. Every neuron in the input layer corresponds to a specific variable of the input data and is responsible for processing that particular aspect of the data.

2. Hidden layers: There can be one or more hidden layers between the input and output layers. They are called “hidden” because they don’t interact with the external data directly, like data analysts in a statistical agency who might not interact directly with external stakeholders but whose work ensures the release of quality data to the public. The neurons in the hidden layers process the information received from the input layer or the preceding hidden layer and pass it along to the next layer. The number of hidden layers and the number of neurons in each hidden layer depend on the complexity of the problem, the amount of training data, and the desired performance. The more hidden layers a network has, the greater its capacity for solving complex problems. But (you knew there was a but coming, didn’t you?) additional hidden layers can lead to overfitting without sufficiently large training data.

3. Output layer: The final layer of the neural network is the output layer, and, as its name suggests, it consists of neurons that produce the network’s output. The number of neurons in the output layer depends on the task. For example, in a binary classification task, the output layer would have a single output neuron representing the probability of the input belonging to a particular class. In a multi-class classification task, the output layer could consist of multiple output neurons, each representing the probability of the input belonging to a specific class. In regression tasks, the output layer typically has a single neuron representing the predicted continuous value.

Figure 2. Author’s drawing for a visual illustration of a simple neural network. The arrows represent the flow of information between the neurons, and each has a weight associated with it.
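
To tie the three layers together, here is a minimal sketch of one forward pass through a tiny network with a single hidden layer. The layer sizes, the random weights, and the single sigmoid output neuron (suited to binary classification) are illustrative assumptions; a multi-class network would typically use several output neurons with a softmax, and a regression network a single linear output.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, W1, b1, W2, b2):
    """One forward pass through a network with a single hidden layer.

    Input layer : one neuron per feature (the size of x).
    Hidden layer: transforms the features; its size is a design choice.
    Output layer: one neuron here, e.g. the probability of the positive class.
    """
    hidden = sigmoid(x @ W1 + b1)       # input layer  -> hidden layer
    output = sigmoid(hidden @ W2 + b2)  # hidden layer -> output layer
    return output

n_features, n_hidden, n_outputs = 4, 3, 1      # illustrative sizes
W1 = rng.normal(size=(n_features, n_hidden))   # weights: input -> hidden
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_outputs))    # weights: hidden -> output
b2 = np.zeros(n_outputs)

x = rng.normal(size=n_features)                # one observation with 4 features
print(forward(x, W1, b1, W2, b2))              # e.g. probability of the positive class
```

The weights here are random placeholders; in a trained network they would be learned from data.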

Neural networks are excellent at learning patterns and making predictions. With their interconnected layers and neurons, these amazing networks can tackle problems that require human-like intelligence or even ones that are tough or time-consuming for humans. Remember the CAPTCHA challenge at the beginning? Well, neural networks can also be trained to solve CAPTCHAs! But, as CAPTCHAs evolve and become more complex, so must the neural networks designed to solve them.

Understanding the basics of neural networks is essential for appreciating the jaw-dropping advances in AI technology and how it’s transforming our world. But what’s even cooler is that learning about neural networks empowers people to join discussions about AI’s ethical implications for data privacy and algorithmic bias. I hope this introduction has tickled your curiosity and left you hungry for more! If you want to dive deeper and start part 2, just click here: https://readmedium.com/demystifying-neural-networks-part-2-c316f72efb7e
