Crazy Simple Computer Science Series/1 🚀 How Do Computers Work? Understanding the Matrix World.
Welcome to the Crazy Simple Computer Science Series!
*This series introduces readers to the basics of computer science in a way anyone can understand. It aims to make computer science and its working principles fun, conveying the core logic of the field and sharing helpful information you can use in daily life, all in plain language.*
What is Computational Thinking?
Computational thinking is, at its core, what computer science is all about. Computer science is sometimes confused with programming, but the two aren’t the same: programming is a fantastic instrument for addressing problems, while computer science is about solving problems in general.
It’s about giving you a mental model, a set of concepts, as well as practical skills that will allow you to apply solutions to problems in completely different fields.
Because I write in English and you read English, we’ve implicitly agreed to represent our words in this particular language. Computers, however, don’t use English; they have their own system, one you may have heard of even if you don’t speak it yourself, and there are other ways to represent information.
Consider the simplest of challenges, such as counting the number of people in a room. I could do this the old-fashioned way. I don’t need a computer or English; I can simply use my hands and say, “I’m starting with zero people, and now I’ll count one, two, three, four, five, and then… I’m out of fingers, but I can at least use my other hand, maybe a couple of feet, and get as high as 10, maybe even 20, with this physical system.” This is pretty much the same as keeping score the old-fashioned way, with hash marks.
Why don’t I use patterns instead of simply counting up from 0 to 1 to 2 to 3 to 4 to 5?
So I’ll still start with 0 and 1, but now let’s call it binary notation, where what matters is the pattern itself: each finger can be either up or down, and it’s the particular combination of up and down fingers that represents the number.
And if there are only two possible states, it’s easy to picture physical equivalents: your computer is plugged in or it isn’t, electricity is flowing or it isn’t, your battery has a charge or it doesn’t.
As a result, this binary world, in which something can be in one of two states (on or off, plugged in or not, 1 or 0), is ideal for representing information in computers. After all, I could simply turn a light on or off to symbolize a number. My phone has a built-in light, and that light is currently turned off; if we treat that as one of our two states, we’ll call it a 0.
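Why are two states enough? Because every extra two-state thing doubles the number of patterns you can form, so n of them give you 2^n patterns. Here is a minimal Python sketch of that arithmetic (just an illustration to show the numbers, not anything a computer actually runs to count):

```python
# Each two-state thing (a finger, a light, a switch) doubles the number of
# distinct patterns, so n of them can form 2 ** n patterns in total.
for n in (2, 5, 10):
    print(f"{n} two-state things -> {2 ** n} patterns")

# 2 two-state things -> 4 patterns
# 5 two-state things -> 32 patterns      (one hand: count 0 through 31)
# 10 two-state things -> 1024 patterns   (both hands: count 0 through 1023)
```

So while counting fingers one at a time gets you to 10, counting *patterns* of fingers gets you to 1023.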

Binary Notation and Transistors
If I switch on this flashlight, I’m representing a 1. So I can represent two different values with a simple light bulb. Computers, however, don’t rely on a slew of tiny light bulbs; instead, they rely on a device known as a transistor. A transistor is a tiny switch that can be turned on or off, allowing or blocking the flow of electricity. So when you have a computer with a motherboard, a CPU, and a bunch of other hardware inside it, one of the underlying components is these transistors.

And today’s computers contain millions, even billions, of them, each of which can be turned on or off, and this is how a computer represents information. As long as it has access to some physical supply of electricity, it can use that electricity to turn these switches on and off in distinct patterns, thereby representing 0, 1, 2, 3, 4, 5, or even millions, billions, and beyond.
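To make that concrete, here is a rough Python sketch (a model for intuition, not how the hardware is actually wired): treat three transistors as three on/off values, list every possible pattern, and read each pattern as the number it stands for.

```python
from itertools import product

# Model three "transistors" as three on/off values (0 = off, 1 = on) and list
# every possible pattern. Reading the pattern as a binary number, with the
# leftmost switch as the most significant bit, gives the value it represents.
for pattern in product((0, 1), repeat=3):
    bits = "".join(str(b) for b in pattern)
    print(bits, "->", int(bits, 2))

# 000 -> 0, 001 -> 1, 010 -> 2, 011 -> 3, 100 -> 4, 101 -> 5, 110 -> 6, 111 -> 7
```

Three switches give eight patterns; every switch you add doubles the range, so a few dozen switches are already enough to count into the billions.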
That’s why computers use binary. Because it maps so well onto the physical reality on which they’re ultimately built, computers speak binary: just 0s and 1s, off and on.
Meanwhile, we humans prefer to communicate not just in English and other spoken languages but also in decimal numbers. Decimal, whose name comes from the word for ten, has ten digits: 0, 1, 2, 3, 4, 5, 6, 7, 8, and 9, whereas binary has only two: 0 and 1.
So we’ll need a way to convert these 0s and 1s into the numbers we’re more comfortable with. What’s the best way to go about it?
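As a preview of where the next article in the series goes, here is a small hand-rolled Python sketch of both directions of the conversion. (Python’s built-ins `int(bits, 2)` and `bin(number)` already do this; the functions below just show the place-value arithmetic step by step.)

```python
def binary_to_decimal(bits):
    """Convert a string of 0s and 1s, e.g. '101', to a decimal integer."""
    value = 0
    for bit in bits:                    # each step shifts left one place, then adds the new bit
        value = value * 2 + int(bit)
    return value

def decimal_to_binary(number):
    """Convert a non-negative decimal integer to a string of 0s and 1s."""
    if number == 0:
        return "0"
    bits = ""
    while number > 0:
        bits = str(number % 2) + bits   # the remainder is the next bit, filled in right to left
        number //= 2
    return bits

print(binary_to_decimal("101"))   # 5
print(decimal_to_binary(12))      # 1100
```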
For other articles in the series;
Crazy Simple Computer Sciences 2🚀/Understanding the Matrix World/Machine Language
Crazy Simple Computer Science Series/3🚀 Machine Languages: ASCII and UNICODE