Introduction to Feedforward Neural Networks
Picture yourself during rush hour in Chicago, needing to board or exit a packed train. You can’t politely wait for a clear path; instead, you shoulder through the crowd, carving out your own space. That assertive spirit is exactly what you need when diving into the study of machine learning and artificial intelligence. Yes, discussions about the use, regulation, and ethical implications of these rapidly advancing technologies are already crowded. But don’t wait for an invitation to join in. Demand space, pull up your own chair, and insert yourself into the conversation. Still, a little preparation won’t hurt. In this post, I’ll guide you through an intuitive understanding of Feedforward Neural Networks. So, remember the rush hour hustle, channel that determination, and let’s journey together.

If you need a refresher on artificial neural networks, please review my two-part introductory posts on neural networks here: Part 1, and Part 2. Otherwise, if you’re ready to dive in, let’s explore Feedforward Neural Networks together.
Feedforward Neural Networks (FNNs):
In these networks, information always moves forward from the input layer through the hidden layers, and finally to the output layer. There are no cycles or loops in the network. Each neuron only receives input from the previous layer and sends output to the next layer.

To help you engage with this concept in a fun way, I’ll use baseball as a metaphor. Please keep in mind, this is a simplified comparison and does not capture the complexities of either baseball or neural networks.

[Metaphor]
Welcome, ladies and gents, to an imaginary day in Wrigley Field. The Chicago Cubs are going head-to-head with their long-standing rivals, the St. Louis Cardinals. The Cubs’ die-hard fans, in their white and blue Cubs jerseys and caps, have crammed into the stands, already waving their foam fingers and their iconic “W” flags. The stadium is buzzing with energy and chatter.
The air is perfumed with the sweetness of Garrett Popcorn, the savoury and smoky scent of all-beef hot dogs slathered in mustard and relish, and the unmistakable fragrance of craft beer. One whiff and there’s no denying it: America has an obesity problem! I’m kidding! Let’s try that again. One whiff and there’s no denying it: baseball is America’s favourite pastime — at least, if you’re in the Windy City.
But hey, Chicago is not just about hot dogs and home runs. We’re also about inclusivity here, and that means welcoming everyone — even the self-proclaimed “nerds” like yours truly. So, get ready folks, because your commentator for today’s game is about to sprinkle some neural network wisdom into this play-by-play coverage.
Imagine our game as a Feedforward Neural Network. Intriguing, no? It’s all about steps. Just like this ballgame, the process kicks off with inputs — for us, that’s the pitch. The pitcher’s aim, speed, spin, and pitch type, all make up our raw data.
[First Pitch]
And we’re off! Marcus Stroman, the Cubs’ star pitcher, dips his fingers into his rosin bag to ensure a firm grip and steps up to the mound. With his flexible shoulder and whip-like arm, he hurls a lightning-fast slider smack-dab in the middle. That pitch is our raw data. The moment that ball leaves his fingertips, we’re in play!
The input data — our pitch — gets processed through hidden layers. If we picture these hidden layers as distinct stages of a baseball game (e.g., early innings, middle innings, late innings, extra innings), then the nodes or neurons within these hidden layers are akin to the players on the field. Each neuron in a hidden layer processes the input data — much like a player fielding a ball. The neuron applies a weight — similar to a player’s skill or strategy — and adds a bias term — comparable to a player’s position in the game. Once the inputs are multiplied by their weights, summed, and the bias term is added, the result is a single number. The purpose of the activation function is to transform this number into a form that is useful for the next layer in the network. So, a neuron uses an activation function, similar to a player deciding to run or throw, to generate its output and pass it along to the next layer.
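If you’d like to see that neuron computation without the baseball gear, here is a minimal sketch in Python. All the numbers below are made up for illustration — real weights and biases are learned during training — and I’m using ReLU as the activation function, one of many possible choices.

```python
import numpy as np

def relu(z):
    # ReLU activation: pass positive values through, clip negatives to zero
    return np.maximum(0.0, z)

# Illustrative "pitch" features: speed, spin, location (made-up values)
inputs = np.array([0.9, 0.4, 0.7])

# The neuron's weight for each input (the player's "skill"),
# plus a bias term (the player's "position in the game")
weights = np.array([0.5, -0.2, 0.3])
bias = 0.1

# Multiply inputs by weights, sum, add the bias...
z = np.dot(inputs, weights) + bias

# ...then squash the result through the activation function
output = relu(z)
print(round(float(output), 2))  # prints 0.68
```

The whole neuron boils down to those two lines: a weighted sum plus a bias, then an activation function that decides what gets passed to the next layer.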
[End of Metaphor]
To summarize, a Feedforward Neural Network is one of the most basic types of artificial neural networks. As the term “feedforward” suggests, information flows in only one direction in this network. But this simplicity is its strength. These networks are easy to understand, implement, and interpret. They can also handle a large number of inputs, so long as there is no time-series component to the data.