
Photos to Photons

Heya there, Mike here! I hope you remember me from the previous story of "Photons to Photos". If not, please read that story here.

Let's continue from where we left off!

So, it turns out that the owners of the DSLR wanted to revisit us. For that, they had removed our permanent place of residence, the SD card, from the DSLR and put it into their laptop. Now another entity, which claimed to be a DMA controller like the one from before, took us to another RAM, this time belonging to the laptop.

And so we are off again, this time to meet the owners of the DSLR. In the RAM, we saw a lot of pictures already present alongside us. And more of them were coming in. We were all being loaded into some application that would enable the DSLR Owners to see us and even change us!

The first contact

As soon as the application had loaded us, it sent us to a chamber called "JPEG Decompression" (remember JPEG Compression chamber from the previous story?). We were asked to hand over the card with the cryptic numbers to the evaluator in this chamber. Most of us possessed this card, and we were quickly re-split into our individual selves. But a few merged entities did not have those with them, and they were deemed "corrupt" by the system, and the owner went ahead and erased them! Now we were extra-careful to never lose those cards, lest we lose our immortality. 

The rest of us made our way through to another buffer memory called the video buffer and were soon displayed on the monitor. It took precisely 1/60th of a second for us to move into and out of the video buffer, because their laptop's monitor refreshed at 60 Hz. This was our first introduction to the humans, and I enthusiastically tried to talk to them. But apparently, they could not see or hear us individually and only saw the picture as a whole.

The timelapse

A sample timelapse of the northern lights

We were part of a series of photos taken at constant intervals to make what humans call a timelapse. It is funny for us that most humans consider time to be absolute; in reality, time varies relative to your speed versus mine, and that insight partially comes from the speed at which we photons travel through space! But I am not here to teach you advanced Physics. And anyway, you would be bored before I even began.

The owners were using professional-grade software to colour correct us and create a timelapse out of us. Unfortunately, I do not remember the name of the software, but I do remember that it fell into the category of a non-linear video editor (NLVE). The owner placed us all side by side on a timeline and told the computer to run its gobbledygook and play each of us for precisely 1/30 of a second. Let us take a slight detour to explain human videos.

So, most human videos (except games or anything interactive) are a series of photos played back quickly. Each picture is called a frame, and the number of frames displayed per second is the frame rate, measured in frames per second (FPS). Most videos run at 24, 25, 30 or 60 FPS. The owners here were making a video at 30 FPS, and hence each of us was played for 1/30 of a second.
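The frame-timing arithmetic above can be sketched in a few lines of Python. Only the frame rates come from the text; everything else here is illustrative.

```python
# Toy illustration: how long each frame stays on screen at common frame rates.
FRAME_RATES = [24, 25, 30, 60]  # frames per second

for fps in FRAME_RATES:
    frame_duration_ms = 1000 / fps  # milliseconds each frame is shown
    print(f"{fps} FPS -> each frame shown for {frame_duration_ms:.2f} ms")
```

At the owners' 30 FPS, each frame is displayed for about 33.33 ms, i.e. 1/30 of a second.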

The enhancement

The humans were not satisfied with the look and colours of the timelapse and decided to enhance it according to their taste. The NLVE they used allowed them to do this efficiently. The application divided us into multiple categories according to our brightness, colour, hue, etc. Each was presented as a separate colour wheel to the user, who could manipulate a selected colour and change it to whatever they desired. 

The owners could now select those of us who matched a particular brightness, colour, hue or saturation and change our colour to match their tastes. They enhanced me and those like me who were on the child's face, thus enhancing the glow on the face. The child now looked a lot happier and cuter, and the image itself looked eye-catching!

Becoming a video

After the owners were satisfied with how the timelapse looked, they decided to save the timelapse they had created. This again involved a lot of computer gobbledygook. This time, all of us were again compressed using the JPEG compression as before. And we were all handed the same cards with cryptic numbers. 

But this was only the first step. In the next step, we were evaluated again. Only those of us who varied in position or changed properties compared to the previous frames were stored, and each such picture was given a separate card with other cryptic numbers on it. Surprisingly, some of the frames were an exception to this and retained their full JPEG-compressed versions.*

Then we all made our way back via the RAM, the system bus, etc., and landed up on the laptop's hard disk. Here we remained as a video until the next day, when we were called up again and transferred to a phone.

* Video compression has two parts: image compression (e.g. JPEG, PNG) and inter-frame compression. Each video has some frames called key-frames, which are stored as fully compressed images. The rest of the frames are stored only as the difference between the previous frame and the current one. This is why you may sometimes have noticed a video turn grey and remain grey for a while, only showing colour where the pixels have changed: a key-frame did not load properly, so the following frames show up only as differences, and that is all you see. That lasts until the next key-frame loads, after which everything looks normal again.
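The key-frame/difference idea in the footnote above can be sketched as a toy encoder and decoder. Real codecs are far more sophisticated (motion compensation, bidirectional prediction, and so on); this sketch assumes each frame is just a flat list of pixel values, and the function names are my own.

```python
# Minimal sketch of inter-frame (delta) compression with periodic key-frames.

def encode(frames, keyframe_interval=3):
    """Store every Nth frame in full; others as (index, new_value) diffs."""
    encoded = []
    prev = None
    for i, frame in enumerate(frames):
        if i % keyframe_interval == 0:
            encoded.append(("key", list(frame)))  # full frame, like a key-frame
        else:
            # Only the pixels that changed since the previous frame.
            diff = [(j, v) for j, (p, v) in enumerate(zip(prev, frame)) if p != v]
            encoded.append(("delta", diff))
        prev = frame
    return encoded

def decode(encoded):
    """Rebuild every frame by applying diffs on top of the last key-frame."""
    frames, current = [], None
    for kind, data in encoded:
        if kind == "key":
            current = list(data)
        else:
            current = list(current)
            for j, v in data:
                current[j] = v
        frames.append(list(current))
    return frames
```

If a "key" entry is lost, `decode` can only apply the diffs to stale data, which is exactly the grey-video artifact described above.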

Back to basics

Once we were transferred to a phone, its owner forwarded us to their friends and family using social media applications. An overview of how this works can be found here. I stayed there for an entire Earth year. After a year, one of the applications reminded the user of my existence, and they decided to relive their happy memory by watching me and this timelapse again!

So, I travelled back from the phone's storage to the RAM, and later to the video buffer memory. Once I was in the video buffer, a driver manipulated the pixels on the display to replicate the image that needed to be shown. And here we take another detour to explain how I get transformed back into a photon.

A driver tells some hardware attached to the monitor to manipulate the pixels on the display. There are two major types of displays available today: the Liquid Crystal Display (LCD) and the Light Emitting Diode (LED) display.

Most of today's LCD monitors are LED-backlit, but this is not the same as an LED display. In an LCD, the backlight is a source of white light the size of the monitor itself. It could be any light source, but most of today's LCDs use a white LED panel as the backlight. A layer of liquid crystals, which can be polarised and manipulated using electrical modulation, sits in front of this backlight. These crystals block the light that does not belong to the required colour and let the rest pass through, much like the Bayer filter. Thus the user only sees the colour intended at that pixel.

On the other hand, Light Emitting Diodes are tiny light sources of either Red, Green or Blue (RGB) colour. Each group of RGB subpixels forms one pixel. The driver manipulates each of these LEDs individually, turning them on and off as required. This is why 'dark mode' on phones became popular: to show pitch black (darkness), all the driver has to do is switch off the LEDs in that region. This also saves power and hence helps conserve battery.

A close-up of an LED display

A third type of display also exists, called the E-ink display, the kind you find in e-book readers. These displays are lower in resolution and are primarily black and white. An e-ink display has tiny pixel-sized globules of ink that can be manipulated electrostatically. If the ink is in one position, it absorbs light; in another position, it reflects light. This is really helpful, as power is consumed only when you want to change the content on the display, and that does not happen often when reading a book. But each change takes a lot of time and draws a relatively large burst of power, which is why e-ink displays cannot show video well. More information on how e-ink displays work can be found here.

Back to the main story: this was a modern phone with an AMOLED display, so it worked just as the LED system described above. This way, I was split into 3 channels, one each for Red, Green and Blue. These 3 parts of me were sent out as different photons from the LEDs. But the user saw us as a single colour, unable to distinguish us individually!

And now I am back travelling at about 299792458 m/s into the unknown. Let's see where life takes me next! Meanwhile, a copy of me exists on this user's and multiple other phones, if you ever want to talk to me! 

