
CODE

CODE, by Charles Petzold

This was an extremely good book! It’s sort of like a textbook, because it teaches. While reading “CODE,” I had many “aha” moments where something just clicked in my brain. For example, I realized what RAM means and why it’s volatile. It also clicked that computers really are stupid and just a bunch of zeros and ones. It is one thing to hear people say that a computer only knows zeros and ones, but it is completely different when that fact actually clicks in your brain. This book made it click. And when things click, it’s an awesome feeling!


The author starts the book by explaining different binary code systems, such as Braille and Morse code. The point is that humans have always used code systems for communication. Then he talks about electricity and how to wire a lightbulb. The lightbulb example persists throughout the entire book, and it works really well for demonstration purposes. For the most part, the author starts stupid simple by telling us about electricity and grounds and electrons and circuits. Very fundamental stuff. Slowly, he builds upon this foundation, always using the lightbulb example, introducing more and more complex circuits until we eventually create a computer. I really like how the book was organized, and how it slowly progressed to a full computer. It made sense. It made me say, “I understand that,” and it made me feel like I understand how computers work. Whether I actually understand or not is a different story. But the author was able to make me feel like I understand, which shows his writing skill. To confirm that I actually comprehend the circuits discussed in this book, I think it would be a fun project to see if I can replicate the author’s computer, which he builds using light bulbs and switches, in Python. That would demonstrate that I actually understand the content, not just “feel” like I understand it. To be determined whether I pursue this project or not.


After the fundamental circuits, we are introduced to the base-10, base-8, and binary number systems. Lots of examples and illustrations make this content easy to understand. I am familiar with most of the topics in this book, but it is always nice to get a fresh, clear perspective on the fundamentals, such as base 10 versus base 8 versus base 2. I really enjoyed the part where he introduced UPC labels! These are the barcodes we see on grocery store items and other merchandise, the little bars we scan at the register when we pay for an item. I did not understand how these labels work, and it was interesting to learn that UPC codes have built-in protections to ensure that the scanner reads the code correctly. For example, the UPC code has a left-hand guard, a right-hand guard, and a center guard. It can be read forwards or backwards: the left-hand side digits all begin with zero, whereas the right-hand digits all begin with one. And lastly, the UPC code has even and odd parity, which can be used to verify that all digits of the code were properly scanned. All of these protections help avoid scanning errors.
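The error-catching idea behind UPC codes can be shown with a few lines of Python. This is a sketch of the standard UPC-A check-digit rule (odd positions weighted by 3, and the final digit chosen so the total is divisible by 10), not the specific parity tables from the book; the example barcode is purely illustrative.

```python
def upc_check_digit(first_eleven: str) -> int:
    """Compute the UPC-A check digit for the first 11 digits of a barcode."""
    digits = [int(c) for c in first_eleven]
    # Odd positions (1st, 3rd, ...) are weighted by 3, even positions by 1.
    total = 3 * sum(digits[0::2]) + sum(digits[1::2])
    # The check digit brings the grand total up to a multiple of 10.
    return (10 - total % 10) % 10

print(upc_check_digit("03600029145"))  # 2
```

If a single digit is misread, the weighted sum almost always stops being a multiple of 10, so the register knows the scan failed and tries again.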


In the next chapters, things get really exciting. We are introduced to logic gates and switches. We can rearrange the gates and switches in many configurations to get our lightbulbs to turn on and off in response to a set of inputs. We start with simple AND gates and simple OR gates. Then we progress to 8-bit adders, 8-bit latches, flip-flops, the more complex “edge-triggered D-type flip-flop with preset and clear,” 8-to-1 selectors, and 3-to-8 decoder circuits. These types of circuits were entirely new to me, and were fascinating to learn about. I was also introduced to the concept of a “clock.” The clock is basically an oscillator that alternates between on and off (zero and one). Computers use very fast clocks to iteratively read computer memory and consequently execute the commands stored in that memory. Clocks are an essential piece of any computer.
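Since I was musing about replicating the book's circuits in Python, here is a small taste of what that might look like. This is my own sketch, not the author's wiring: gates as tiny functions, combined into a one-bit full adder and then rippled into an 8-bit adder, the same way the book chains circuits together.

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def full_adder(a, b, carry_in):
    """One-bit full adder built only from gates."""
    partial = XOR(a, b)
    sum_bit = XOR(partial, carry_in)
    carry_out = OR(AND(a, b), AND(partial, carry_in))
    return sum_bit, carry_out

def add_8bit(x, y):
    """Ripple-carry addition of two 8-bit numbers, one bit at a time."""
    result, carry = 0, 0
    for i in range(8):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry  # the final carry is the overflow lightbulb

print(add_8bit(100, 55))   # (155, 0)
print(add_8bit(200, 100))  # (44, 1) -- 300 overflows 8 bits: 300 - 256 = 44
```

Each `full_adder` call is one column of the binary addition; the carry ripples from one column to the next, exactly like the cascaded adder boxes in the book.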

The logic gates were probably the most fascinating part of the book to me. I had never been introduced to these concepts, and after seeing them, I can now comprehend how a computer saves bits and bits of numbers. I also understand why this type of memory is referred to as “volatile” memory. When the computer loses power, all of the switches return to their default state, and all of the information stored within RAM is lost. It makes sense, since RAM requires power to hold specific sets of switches open or closed.
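The volatility point can be sketched in Python too. This toy class models only the behavior of a level-triggered D latch (output follows data while the clock is high, then holds), not the actual NAND-gate feedback loop from the book; "losing power" corresponds to constructing a fresh latch, which resets the bit to its default.

```python
class DLatch:
    """Behavioral model of a level-triggered D latch: while clock is 1 the
    output follows data; when clock drops to 0, the last value is held."""
    def __init__(self):
        self.q = 0  # default state on power-up

    def step(self, data, clock):
        if clock:
            self.q = data
        return self.q

latch = DLatch()
latch.step(data=1, clock=1)          # write a 1 while the clock is high
print(latch.step(data=0, clock=0))   # clock low: still remembers the 1

latch = DLatch()                     # "power loss": a brand-new latch
print(latch.step(data=0, clock=0))   # the stored 1 is gone -- volatile!
```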


Another new topic for me was the hexadecimal system, which is base 16. I knew that “hex” codes existed, but I didn’t understand where they originated from or what they meant. Since we only have ten numeric symbols, we use the letters A through F to represent the six additional digits. Counting in hexadecimal goes:


0 1 2 3 4 5 6 7 8 9 A B C D E F 10 11 12 13 14 15 16 17 18 19 1A 1B 1C 1D 1E 1F …
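Python has base-16 conversion built in, which makes it easy to reproduce that counting sequence and to move between decimal and hex:

```python
# Count from 0 to 31 in hexadecimal, matching the sequence above.
print(" ".join(format(n, "X") for n in range(32)))
# 0 1 2 ... F 10 11 ... 1F

print(int("1F", 16))  # 31 -- hex back to decimal
print(hex(255))       # '0xff' -- decimal to hex
```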

Perhaps the most challenging chapter was the “Automation” chapter, which is the chapter in which the author first refers to his creation as a “computer.” Prior to this chapter, the creation was simply called an “adder,” because it added binary numbers. The computer automates the process of adding and subtracting numbers. We use hex codes to reference specific addresses within the RAM, and assign specific codes for operations such as loading, storing, and adding. We also introduce “halt” and “jump” commands, which provide the final pieces for building a computer. The “Automation” chapter slowly introduces keywords such as LOD, STO, and JMP, which by the end of the chapter are combined into a simple assembly program. When the author introduced assembly code, this was a moment where many of my brain circuits were completed. So many things clicked at this point for me. I understand what assembly code means, and I see how it relates to the very low-level operations of the computer. I understand! If I had to point to the critical chapter of the book, this would be it. “Automation” ties together all the pieces of the puzzle introduced in the preceding chapters, and then forms the foundation for the subsequent chapters.
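To see why those mnemonics clicked for me, here is a toy interpreter for a few of them. The instruction representation is invented for illustration (Petzold's machine uses actual numeric opcodes in RAM); it just shows the core loop: fetch the instruction at the program counter, act on the accumulator and memory, advance or jump, and stop on halt.

```python
def run(program, memory):
    """Execute a toy program of (mnemonic, argument) pairs."""
    acc = 0   # the accumulator
    pc = 0    # program counter, stepped forward by the "clock"
    while True:
        op, arg = program[pc]
        if op == "LOD":                      # load from a RAM address
            acc = memory[arg]
        elif op == "STO":                    # store to a RAM address
            memory[arg] = acc
        elif op == "ADD":                    # 8-bit add from a RAM address
            acc = (acc + memory[arg]) & 0xFF
        elif op == "JMP":                    # jump to another instruction
            pc = arg
            continue
        elif op == "HLT":                    # halt the machine
            return memory
        pc += 1

memory = {0x10: 45, 0x11: 27, 0x12: 0}
program = [
    ("LOD", 0x10),  # acc = 45
    ("ADD", 0x11),  # acc = 45 + 27 = 72
    ("STO", 0x12),  # memory[0x12] = 72
    ("HLT", None),
]
print(run(program, memory)[0x12])  # 72
```

Strip away the Python, and this loop is exactly what the clock-driven circuitry in the “Automation” chapter does with switches and lightbulbs.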


After introducing the assembly language, there is some historical information about how computer technology has progressed. It would be cumbersome and slow to build a computer entirely out of electrical switches. The computer would be massive! It would need to contain literally millions of switches. Hence, the invention of the transistor was ground-breaking. The transistor allowed us to create small, fast circuits, and to automate the manufacturing process for tying many transistors into complex circuits. Transistors made modern computers possible. Following transistors, we were able to develop microprocessors and chips, and computer development efforts skyrocketed. Technology progressed quickly.


Oh, I learned about little endian and big endian. Multi-byte values, such as 16-bit addresses, are stored in a computer’s memory as two bytes (1 byte = 8 bits, 1 nibble = 4 bits). Big-endian and little-endian refer to whether the upper byte or the lower byte is stored in memory first. These two different approaches for storing data evolved because Intel and Motorola were developing microprocessors independently of one another. Intel uses little-endian, whereas Motorola uses big-endian.
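Python’s standard `struct` module makes the difference concrete. Packing the same 16-bit value both ways shows which byte lands first in memory:

```python
import struct

value = 0x1234  # two bytes: upper byte 0x12, lower byte 0x34

little = struct.pack("<H", value)  # little-endian (Intel style): lower byte first
big    = struct.pack(">H", value)  # big-endian (Motorola style): upper byte first

print(little.hex())  # '3412'
print(big.hex())     # '1234'
```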

Thinking about computers boggles my mind. Once we learned how to build small, fast circuits, human technology rapidly advanced. In the 80s, computers were a new technology, and what existed was bulky and slow. Today, just 40 years later, computers have vast amounts of memory, are incredibly fast and compact, and they dictate everything we do. Who knew that, at its foundation, a computer is simply switches turning on and off. Just like Braille and Morse code are binary communication systems, a computer is a binary communication tool. To get computers to perform complex jobs like they do today took millions of man-hours of development time over the past few decades. It required the effort of thousands of people building off the work completed by their predecessors. It required people to trust the work completed by previous developers, build upon that work, and then share their developments with the rest of the world. The fact that so many people have worked collaboratively to advance computers to their current state, from simple AND and OR gates, is mind-boggling. It doesn’t seem possible. And yet here I am, typing on a computer. And it works! My computer accepts the input from my keyboard, displays text on the screen, and then saves my work when I’m done. I can’t believe all of this actually works! At the end of this book, the author briefly talks about keyboard inputs and graphical displays. This content was not as interesting to me because it was very high-level.


Something else stood out to me: our progress in developing computers is largely a result of just a few key people, supported by a few key organizations. IBM and MIT were essential institutions for fostering the required technologies. Without these institutions, we would not have the technologies that make modern computers possible. So, what makes organizations like IBM, Bell Laboratories, MIT Lincoln Labs, and DARPA so unique? These organizations are basically a collection of really smart people who are given the freedom to explore ideas. Take some of the smartest people in the world, place them together, and give them the freedom and funding to explore new ideas. When this happens, we get world-changing technologies. I really like this concept, and hope that we never lose these types of institutions. Without them, we will not develop new technologies. In many ways, SwRI is similar to these organizations.

