I’ve noticed that while I’m at work, I tend to get disappointed whenever I see a PowerPoint slide without a picture. There’s something magical about being able to connect a theory, statement, or idea to a visual symbol. And my laziness about actually taking pictures of my books at memorable sections has been in stark contrast to my preferences. This is something I want to fix from now on. Hopefully, I’ll also take photos of my books before I throw their “protective coverings” away (like in the book below).
But I found out that I can skip a few steps using the WordPress app: I can take a picture on my phone and upload it straight to my website. Thank you, website!
Well, on to the actual book review: Programming the Universe.
The title states most of what you would find within the book. It’s not necessarily about actually programming the universe (surprise), but about the methodology by which one COULD simulate physics, life, and the underlying forces of nature.
That’s the easy answer, though. The book goes deep into the idea of what information is, how to define it at the quantum level, and how quantum information can be used for unique computational methods (possibly able to outperform conventional computers in the future). An actual summary of the book in two words: “Quantum Computing.” That’s the short answer that may need some explanation (hence the book review).
If you are vaguely familiar with quantum mechanics, you have probably heard a bunch of weird quotes that are theoretically true. I could walk through a wall because a small part of me exists on the other side. A bouncing ball in your hand could pop out of existence and reappear nearby. The reasoning behind these thought experiments is that, deep down, everything exists as a “wave.” The smaller the object, the larger the area over which its wave-like nature spreads. This wave, in theory, is “the probability of where that item’s existence is at any specific moment.”
Electrons are funny “particles,” if that’s an accurate way to describe them. If given multiple paths to travel, an electron will take them all at the same time [the same goes for photons, atoms, etc.]. It stays a wave until it interacts with something else. But when it does, all the energy in the wave “collapses” onto one very specific location (which is pseudo-randomly chosen based on the shape of its wave). That’s why we got pixels on old CRT TVs (remember those bulky displays?). The electron beam didn’t make a broad splash but hit the phosphor screen at one specific spot.
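That “collapse” can be sketched in a few lines of code: treat the square of the wave’s amplitude as the odds of landing at each spot, then sample. This is a toy classical simulation with a made-up Gaussian wave shape, not real quantum mechanics:

```python
import math
import random

# Toy model of wave-function collapse. A (hypothetical) Gaussian wave
# packet gives each screen position an amplitude; the landing probability
# goes as the amplitude squared (the Born rule).
positions = [x / 10 for x in range(-50, 51)]            # screen coordinates
amplitudes = [math.exp(-x * x / 2) for x in positions]  # Gaussian wave shape
weights = [a * a for a in amplitudes]                   # P ~ |psi|^2

random.seed(0)
hits = random.choices(positions, weights=weights, k=10_000)  # 10,000 electrons

# The broad wave produces narrow hits: most land near x = 0, where the
# amplitude peaks -- one "pixel" per electron.
mean = sum(hits) / len(hits)
print(f"mean landing spot: {mean:.3f}")
```

Each individual electron shows up at one pseudo-random spot, but run enough of them and the histogram of hits traces out the shape of the original wave.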
It’s also the reason why electrons exist in orbitals around the atom, instead of falling straight toward the positive nucleus like a meteor in the night sky. The wave “wraps” around the nucleus; at the minimum energy level it looks like a marble. Excited levels of the electron wave look like ripples in water; the more excited the particle is, the choppier the water appears.
The main point is that individual quantum objects (electrons and atoms included) act as waves until we “disturb” them. The neat thing about waves is that we can add multiple waves on top of each other, a phenomenon called “superposition” in quantum mechanics. Just like multiple notes played on a piano, multiple waves can exist in a single qubit (quantum bit), and they can be used for computations simultaneously.
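The piano analogy translates directly into code: superposition, in the classical sense at least, is just adding waves. A quick sketch (the frequencies are arbitrary values I picked for illustration):

```python
import math

# Two "notes" (sine waves) summed into one signal -- superposition in the
# classical-wave sense, the same idea as multiple states coexisting in a qubit.
def note(freq, t):
    """Amplitude of a pure tone of the given frequency at time t."""
    return math.sin(2 * math.pi * freq * t)

t = 0.1
chord = note(3.0, t) + note(5.0, t)  # both waves present at the same time
print(f"combined amplitude at t={t}: {chord:.3f}")
```

The combined signal carries both notes at once; an ear (or a measurement) can still pick out the individual components.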
The quantum computer, in theory, involves taking a bunch of qubits (particles or otherwise) and preparing them in an initial superposition of quantum states (using lasers or electromagnetic fields). These qubits then interact with each other and, after a brief time, “collapse” into the final answer when “measured.” …..I think……
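My (shaky) understanding of that prepare-interact-measure loop can at least be sketched for a single simulated qubit. This is a classical simulation with names I made up (`hadamard`, `measure`), not a claim about how real hardware works:

```python
import math
import random

# A qubit simulated as a pair of complex amplitudes for |0> and |1>.
def hadamard(state):
    """Spread a definite state into an equal superposition of |0> and |1>."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def measure(state, rng):
    """'Collapse' the state: return 0 or 1 with probability |amplitude|^2."""
    amp0, _ = state
    return 0 if rng.random() < abs(amp0) ** 2 else 1

rng = random.Random(42)
qubit = (1 + 0j, 0 + 0j)   # prepared in a definite |0> state
qubit = hadamard(qubit)    # now holding both answers at once
# Prepare and measure 1,000 identical qubits: about half come out 1.
results = [measure(qubit, rng) for _ in range(1000)]
print(f"fraction measured as 1: {sum(results) / 1000:.3f}")
```

The superposition holds both outcomes until measurement forces a single pseudo-random answer, weighted by the amplitudes, which matches the book’s description as far as I can tell.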
Unfortunately, the author, Seth Lloyd, doesn’t relay many of the technical details on HOW to build a quantum computer (I mean, he helped build one). There’s a decent amount of theory on the types of waves that can be used, including atomic nuclear spins, electron excitation levels, and currents in superconducting rings. There’s a little bit of talk on interactions between lasers and atoms. However, this entire section only takes up 21 pages. For a 200+ page book about quantum computing, wouldn’t you expect that to be a larger chunk of the text?
For example, there’s one page that talks about superconducting loops: Josephson junctions. I would probably have just skimmed this section, but I recently learned about them while browsing my Modern Physics textbook a few weeks ago. In superconductors, there’s a phenomenon called Cooper pairing (not mentioned), where the electrons all interact with each other in sync and result in a single “wave” of electrons. Being a wave, the electron flow can go both clockwise AND counterclockwise simultaneously (which is briefly mentioned). By applying a magnetic field, the electrons can be made to “spin” as desired within the loop (zero resistance means an indefinite flow of current without a voltage drop), and the final results can be measured using an applied voltage (also not mentioned). In a nutshell, the book basically says that it was difficult to demonstrate these devices at first, and then it became possible…… I mean, c’mon!
The author is a quantum physicist in the mechanical engineering department at MIT, and his writing (and possibly most of his work) seems more philosophical than technical. The first 100 pages (which don’t even touch the topic of quantum physics yet) are more of a pursuit of the terminology of bits, information, and the energy tied to the computation of data. There’s also an entire chapter in the back of the book on the topic of defining complexity (he currently has 30+ different measures of it)……
But seriously, I was definitely looking forward to obtaining more technical information on this potential future technology. Not on how the universe is going to die, or on how we could live forever as a supreme being as large as the universe, feeding off the scarce potential energy at its expanding edges and slowing our internal processing down to the point where a single thought would take decades to compute……….. And that’s the last chapter?
As you can tell, I’m not really thrilled about this book. I fell asleep a lot while reading it, hoping it would get more interesting. Yes, I did learn a few things, and cemented many more concepts I was already aware of. But I don’t see myself recommending this book. I bought it at a used bookstore in an airport (which I approve of!), and I was determined to finish it. The 3.7 rating on Amazon helps confirm my level of “excitement” toward this work.
I also have to say something about large numbers in a book. It’s one thing to put them in a textbook, as something you would reference or use in a computation. But every 5-6 pages, the author says something outrageous and attaches a number to it, like 10^91 or 0.5^1024. I’ve seen this done before in other books, but there it only happened maybe 2-5 times total, not >50 times (a possible exaggeration). There’s something distracting about these numbers, as if they have an importance that requires me to remember them. I have to force myself to step back and ask, “It’s just a crazy large number; what’s the real idea I should take away from this paragraph?”
And they all are. 10^91, the number of bits in the universe? No one can fathom such a number! Yeah, sure, it’s not as big as 10^122, the number of computations a cosmological computer could have performed since the big bang….. but both numbers are still psychologically indistinguishable to a normal reader. It’s kind of like my rant about random people and quotes in my book review of “American Nations”: save the gritty details for the textbooks, please?
But …….. if you’re really into monkey-based analogies, this may be the book for you: