My Heroes in Computer Science - Part I
1936 - 1968 - The Pioneers
When I think about computer science, I don’t just see blinking lights and code; I see people. People who took ideas so abstract they bordered on madness and turned them into the foundations of our digital civilization. In this multi-part series, I am going to introduce you to my heroes. From wartime cryptanalysts to garage inventors, from mathematicians to hackers, and even dungeon masters, each of these computer scientists left fingerprints on the world we now take for granted. These are the people who have inspired me in my career and continue to delight, excite, and provide lessons; and here are their stories.
Alan Turing (28 May 1936) - The Mind That Dreamed in Logic
Alan Turing wasn’t just ahead of his time; he was ahead of any time. He formalized the concept of computation itself by introducing the Turing Machine, an abstract model that still underlies every CPU instruction cycle today. During World War II, his work breaking the German naval Enigma cipher in Hut 8 at Bletchley Park shortened the war by years and saved millions of lives. I’ve been inside the recreation of Hut 8, and it is as inspiring and moving as you can imagine.
While his contributions to computing and, for that matter, mankind cannot be overstated, it is his paper, “On Computable Numbers, with an Application to the Entscheidungsproblem,” presented to the London Mathematical Society in 1936, that is quite literally the genesis of Computer Science. If you are a computer scientist or aspire to be one and have not read it: one, you are out of your mind, go read it; two, it is shockingly prescient. Alan was living on another plane of existence.
In it, he describes a Turing machine: a device that computes on an infinite tape divided into cells, each holding a symbol, which acts as the machine’s memory. A read/write head can read the symbol under it or write a new one, and a table of instructions tells the machine, based on the symbol it just read, what to write, which way to move, and what to do next. If this sounds familiar, it is because this is essentially how the device you are reading this blog entry on works.
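If you would like to see how little machinery that idea actually needs, here is a tiny sketch in Python. The rule table, the underscore-for-blank convention, and the bit-flipping example are my own toy choices, not Turing’s notation, but the tape, the head, and the instruction table are all there.

```python
# A minimal Turing machine sketch: the "tape" is a sparse dict of cells, the
# "head" reads and writes symbols, and a rule table says what to write, which
# way to move, and which state to enter next.
def run_turing_machine(rules, tape, state="start", head=0, max_steps=100):
    tape = dict(enumerate(tape))          # infinite tape, simulated sparsely
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")      # "_" stands for a blank cell
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # the read/write head updates the cell
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Example rule table: flip every bit on the tape, then halt at the first blank.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}
print(run_turing_machine(rules, "1011"))  # -> "0100_"
```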
But Alan was more than just the father of Computer Science; he was a unique and charismatic individual about whom many have shared anecdotes over the years, helping us, eighty years on, to identify with the man. He chained his tea mug to a radiator at Bletchley to keep it “safe.” I’m not sure what the risks were from the Germans in Hut 8, but he had his tea secured. He bicycled wearing a gas mask to ward off hay fever. But my personal favorite is one of his jokes that has been passed down through time: “Two atoms were walking down the street, and suddenly one says, ‘I think I lost an electron!’ The other replies, ‘Are you sure?’ And the first one says, ‘Yes, I’m positive!’”
I could write whole blog articles just on Alan and his accomplishments: his amazing efforts to help get NCR in the USA up to speed on code breaking in WW2, his co-creation of the Bombe that helped find Enigma settings, Banburismus, his pioneering work on encrypted voice communication... While his life ended tragically, his spirit and creations define our modern world, and we should always be thankful.
Tommy Flowers (8 December 1943) - The Engineer Who Embraced the Impossible
Imagine being given the task of building a computer, from scratch, in 1943. Tommy Flowers was not a product of elite universities; he was a working-class telephone engineer from the British General Post Office (GPO) who had earned his engineering education through night classes. In wartime Britain, when many people were doing the impossible, Tommy Flowers designed Colossus, the first programmable electronic computer, to help win the war.
He started at Bletchley Park in February of 1941 when his supervisor introduced him to Alan Turing, who was looking for help on Enigma. Alan was impressed by Tommy’s engineering abilities and introduced him to Max Newman, who was working on “Tunny”, the British code name for the Lorenz SZ 40/42 cipher machine.
The result, Colossus, became operational on 8 December 1943: the world’s first programmable electronic computer. It wasn’t built to compute numbers but to break the German Lorenz SZ-40/42 cipher, used for encrypted teletype messages between Hitler’s high command (OKW) and its field generals. The Lorenz cipher used twelve spinning wheels that generated a pseudo-random key stream, with billions of possible wheel settings. In 1943, the mathematics looked hopeless, unless you had a computer.
Colossus generated the wheel patterns electronically, using more than 1,600 thermionic valves (think highly complicated lightbulbs) to emulate the cipher’s logic. Paper “ciphertext tape” raced past optical sensors at over 5,000 characters per second while plugboards and switches implemented Boolean comparisons, making Colossus the first large-scale digital logic system ever built. And if it sounds like Alan Turing’s hypothetical Turing Machine, it is no coincidence. When the machine detected statistical correlations between the intercepted text and hypothesized wheel settings, it printed counts on a teletype, giving the cryptanalysts, applying Bill Tutte’s statistical methods, the clues needed to recover the Lorenz keys and thus decipher the message.
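To give a flavor of the kind of counting Colossus mechanized, here is a deliberately over-simplified sketch of my own. The real attack ran on “delta” streams of the Lorenz traffic and much subtler statistics, but the core idea, scoring each hypothesized wheel setting by how far its agreement with the intercept rises above chance, is the same.

```python
# Toy version of the counting: for each hypothesized setting, count how often
# its key stream agrees with the intercepted bits and keep the biggest bulge.
def score_setting(intercepted_bits, keystream_bits):
    return sum(c == k for c, k in zip(intercepted_bits, keystream_bits))

intercepted = [1, 0, 1, 1, 0, 0, 1, 0]
candidates = {
    "setting A": [1, 0, 1, 0, 0, 0, 1, 1],   # agrees in 6 of 8 positions
    "setting B": [0, 1, 0, 0, 1, 1, 0, 1],   # agrees in 0 of 8 positions
}
best = max(candidates, key=lambda name: score_setting(intercepted, candidates[name]))
print(best)   # -> "setting A"
```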
Colossus was an astonishing technical feat: a working programmable computer built with 1943 technology. Most experts at the time believed such a valve-laden device would never stay operational, as valves were fragile, ran hot, and failed often. Flowers countered that if you never turned them off, they would remain stable, and he was right. His machine ran for days without fault.
Tommy even paid for Colossus’s components himself when funding was denied.
Colossus cut the time to break a Lorenz message from weeks to mere hours. And those messages were invaluable in helping the Allied leadership know that no more German reinforcements were coming to Normandy in early June 1944, just before D-Day.
Sadly, after the war, he was sworn to secrecy; he couldn’t even put his creation on his résumé. When he sought a bank loan to build a commercial computer, he was turned down because no one believed such a machine would work. It took three decades for his contribution to be declassified and honored. And yet, today, Colossus is universally acknowledged as the direct ancestor of every computer we use.
You can go and visit a reproduction of Colossus at Bletchley Park; it is beyond incredible. And yes, not only is it fully functional, but it is still running and has even participated in a cipher challenge.
John von Neumann (30 June 1945) — Computers that Remember
How long did John von Neumann think it would take to build the computer he envisioned? “A year, maybe two… if we stop talking and start wiring.”
John was a Hungarian-born mathematician with a mind that spanned from quantum mechanics to economics. Von Neumann joined the Manhattan Project in the early 1940s. There, surrounded by physicists and engineers, he saw something no one else did: that the same logical principles used to compute a nuclear trajectory could be applied to any computation, provided the machine could store and modify its own instructions.
But how to do it? How to create a thinking machine that could remember? On 30 June 1945, just 17 days before the Trinity Test, John circulated his now-legendary “First Draft of a Report on the EDVAC”, a deceptively modest memo that quietly changed the world.
In it, he described an architecture in which data and program instructions would share the same memory, processed by a single control unit and arithmetic logic unit. And he divided the system into the four components we still teach today: memory, control, processing, and input/output.
The simple idea that a program could treat its own code as data, rather than being patch cables plugged into countless phone jacks as on Colossus, became the von Neumann architecture, the foundation of nearly every digital computer since.
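To feel how radical that is, here is a minimal stored-program toy in Python. The instruction names and memory layout are mine, not EDVAC’s, but notice that the program and the numbers it works on sit in the very same memory.

```python
# A toy stored-program machine: instructions and data share one memory, and a
# control loop fetches, decodes, and executes them one by one.
def run(memory):
    pc = 0                                 # program counter: where the next instruction lives
    acc = 0                                # accumulator inside the arithmetic unit
    while True:
        op, arg = memory[pc]               # fetch and decode from shared memory
        pc += 1
        if op == "LOAD":    acc = memory[arg]
        elif op == "ADD":   acc += memory[arg]
        elif op == "STORE": memory[arg] = acc
        elif op == "PRINT": print(memory[arg])   # stand-in for the output unit
        elif op == "HALT":  return

# Cells 0-4 hold the program, cells 5-7 hold the data: one shared address space.
memory = [
    ("LOAD", 5), ("ADD", 6), ("STORE", 7), ("PRINT", 7), ("HALT", 0),
    2, 3, 0,
]
run(memory)   # prints 5
```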
As a trained mathematician myself, I have always found his exchange with Enrico Fermi, the physicist behind Chicago Pile-1, the first self-sustaining nuclear chain reaction, interesting; it happened at Los Alamos during the Manhattan Project. John came by Fermi’s office, saw what was on the blackboard, and asked what he was doing. So Enrico told him, and John von Neumann said, “That’s very interesting.” John then came back about 15 minutes later and gave Enrico the answer to the problem on the board. Fermi leaned against his doorpost and said, “You know, that man makes me feel I know no mathematics at all.”
J. Presper Eckert & John Mauchly (14 February 1946) – The Builders Who Made Numbers Dance
If Alan Turing imagined how computers could think, and von Neumann showed how they could remember, J. Presper Eckert and John Mauchly built the first one that could act at scale and compute general-purpose problems.
In wartime Philadelphia, at the University of Pennsylvania’s Moore School of Electrical Engineering, the physicist John Mauchly and the engineer J. Presper Eckert led a team to design a machine that could out-calculate any human or mechanical device before it. Their creation, the Electronic Numerical Integrator and Computer (ENIAC), was officially unveiled to the public on 14 February 1946, shocking onlookers with banks of glowing vacuum tubes and whirring panels of switches.
ENIAC contained 17,468 vacuum tubes, weighed 30 tons, and could perform about 5,000 additions per second, hundreds of times faster than anything else on Earth. To put that in perspective, the amazing Colossus Mark 2 had only about 2,400 vacuum tubes. Originally commissioned by the U.S. Army to compute artillery firing tables, it quickly proved capable of far more: ballistic trajectories, weather prediction, atomic calculations, and the dawn of digital simulation itself.
Programming ENIAC, however, was an art form: a vast tangle of patch cables and switch panels that had to be rewired for each new task. It was engineering on an epic, cinematic scale and took days to do. And yet, it worked.
At its first demonstration in 1946, ENIAC was fed the problem of calculating the trajectory of a shell that would have taken human “computers” forty hours. It produced the answer in 20 seconds. Reporters gasped; the Army declared the machine a secret weapon of mathematics. And so it was.
Eckert, the meticulous electrical engineer, made ENIAC reliable. He realized that vacuum tubes didn’t fail constantly when left powered on, the same realization Tommy Flowers had made across the Atlantic, though wartime secrecy kept either from knowing of the other. Mauchly, the visionary physicist, imagined how electronic speed could transform science. Together they balanced precision with imagination, forming one of computing’s great creative duos.
After the war, they founded the Eckert-Mauchly Computer Corporation, where they built the UNIVAC I, the first commercial computer sold in the United States (1951).
Eckert and Mauchly’s genius was to turn electronic calculation from an experiment into an industry; my industry. And I am damn thankful they did.
Grace Murray Hopper (9 September 1947) – The Mother of Modern Programming
Should you visit the Smithsonian’s National Museum of American History when the logbook is on display, you might see something quite curious: an engineering logbook from 1947 with, of all things, a moth taped to the page. Under the taped-in moth is a note written by Grace Hopper: “First actual case of bug being found.”
It happened while the team was debugging the Harvard Mark II computer, and yes, this is the incident that popularized the term: a literal bug, the poor moth in question, was found trapped in one of the computer’s relays.
Born in New York City in 1906, she was a brilliant mathematician who earned her Ph.D. from Yale in 1934. When World War II broke out, Hopper joined the U.S. Navy Reserve and was assigned to the Bureau of Ordnance Computation Project at Harvard, where she worked on the Mark I, one of the earliest electromechanical computers. Incidentally, one of the first programs to run on the Mark I was initiated on 29 March 1944 by my previous hero, John von Neumann, to determine if an implosion was a viable means to detonate the plutonium-based atomic bomb.
Grace was endlessly curious about how machines could be made to understand humans. To her, programming should be closer to English than to wiring diagrams or raw binary. In 1952, she designed the A-0 Compiler, the first program to translate human-readable instructions into machine code automatically. It was the missing bridge between human intent and silicon precision, language that humans and computers could understand, the ancestor of every modern compiler. A few years later, her work shaped the creation of COBOL (Common Business Oriented Language), which brought computing into business, government, and finance.
She ended her career in the US Navy as a Rear Admiral and was known for carrying an 11.8-inch wire in her pocket to demonstrate to doubters that even abstractions have a physical meaning. “That’s a nanosecond. That’s how far light travels in a billionth of a second.”
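Her prop is easy to check with the textbook value for the speed of light:

$$ 299{,}792{,}458\ \tfrac{\text{m}}{\text{s}} \times 10^{-9}\ \text{s} \approx 0.2998\ \text{m} \approx 11.8\ \text{inches} $$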
Grace Hopper didn’t just write programs; she taught computers to have a common language with humans. And after that, things really took off.
Claude Shannon (July 1948) – Communications can be Math
Claude Shannon was a free spirit, famous for riding his unicycle through Bell Labs’ corridors while juggling or demonstrating a mechanical mouse that learned to navigate a maze. He built a mind-reading machine for fun and wired up a flaming trumpet that played itself. Like many of my heroes, he knew intuitively that fun was a big part of innovation.
Born in 1916 in Gaylord, Michigan, Shannon grew up tinkering with radios, telegraph lines, and homemade gadgets. After studying electrical engineering and mathematics at the University of Michigan, he went to MIT, where his 1937 master’s thesis became one of the most influential documents in the history of engineering. Titled A Symbolic Analysis of Relay and Switching Circuits, the thesis proved that Boolean algebra, the logic of true and false, could describe any circuit built from switches and relays. Certainly a far more impactful master’s thesis than mine! With that single insight, he turned logic into hardware, making digital computing possible.
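As a small taste of that insight, here is a one-bit half adder, a circuit of one XOR gate and one AND gate, written as two Boolean expressions in Python. The example is mine, not from the thesis, but it is the kind of switch-to-algebra translation Shannon described.

```python
# A half adder adds two one-bit inputs and produces a sum bit and a carry bit.
def half_adder(a: bool, b: bool):
    total = a != b          # XOR gate: the sum bit
    carry = a and b         # AND gate: the carry bit
    return total, carry

for a in (False, True):
    for b in (False, True):
        print(a, b, half_adder(a, b))
# (True, True) -> sum False, carry True: 1 + 1 = 10 in binary
```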
But where Claude shocked the world was in a two-part paper published in July and October 1948 titled “A Mathematical Theory of Communication.” It was the birth of information theory. Shannon proposed that messages, no matter their form, letters, sounds, images, or even video, could all be represented as sequences of binary digits, and that information itself could be measured in a unit he called the “bit” (which I do believe has managed to pass the test of time).
I have read A Mathematical Theory of Communication several times. It is a revolutionary idea: communication is no longer about words and meanings but about probability and entropy. Claude derived the limits of how much information could be transmitted reliably through any noisy channel and proved that, with the right coding, you could get arbitrarily close to perfect accuracy. Every modern technology that compresses, encrypts, or transmits data (ZIP files, Wi-Fi, fiber optics, streaming video) rests on those equations.
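Two of the paper’s central quantities, in their standard modern form (my summary, not a quotation): the entropy of a source, measured in bits per symbol, and the capacity of a band-limited channel with bandwidth B and signal-to-noise ratio S/N:

$$ H(X) = -\sum_{x} p(x)\,\log_2 p(x) \qquad\qquad C = B \log_2\!\left(1 + \frac{S}{N}\right) $$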
Claude Shannon didn’t just explain how to send a message; he revealed what information really is.
John Backus (20 September 1954) — Turning Math into Code
My first programming language was FORTRAN IV, which I learned from my mother, who was a computer scientist, teacher, cook, and mom, all tough jobs she managed to do simultaneously. She would take me to the local community college in Avon Park, Florida, where there was a brand-new IBM Series/1 computer, and we would program together for fun in the evenings. The joy I feel in coding and in creating things in computer science all started with these twilight trips to work on the IBM Series/1 and code in FORTRAN IV. And all of that is because of John Backus.
A mathematician turned reluctant programmer, somewhat like me but way smarter, Backus joined IBM in the late 1940s after earning his master’s degree in mathematics from Columbia University. He never meant to become a computer scientist; he was opposed to it. He had built a hi-fi amplifier and impressed an IBM recruiter with his knack for circuitry. Yet within a decade, he would lead the team that created FORTRAN (short for Formula Translation), the first high-level programming language, released in 1957 but running internally in IBM starting on 20 September 1954.
Before FORTRAN, programming a computer meant writing in raw machine code, long sequences of 1s and 0s, or in cryptic assembly instructions; fun for a hardcore programmer but incomprehensible to 99.999% of the human population. Debugging even simple arithmetic operations could take days. Backus hated it. “I didn’t like writing programs,” he later admitted, “and so I started work on a program to help me avoid having to write them.” His distaste for tedium became the spark for one of the greatest breakthroughs in computer history, FORTRAN.
Under Backus’s direction, IBM’s small team of mathematicians and engineers set out to create a system that could translate human-readable mathematical expressions, such as A = B + C * D, into efficient machine code automatically. It was a radical idea: letting the machine write its own machine language. Many skeptics in the scientific community, and plenty inside IBM, dismissed it outright. They didn’t believe a compiler could generate code as fast as hand-written assembly. When FORTRAN proved them wrong, running nearly as efficiently as expert-crafted code, it revolutionized programming.
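Here is a toy sketch in Python of the job FORTRAN automated; the made-up instruction names and temporaries are mine, and a real compiler does vastly more, but it shows how A = B + C * D becomes a sequence of machine-like steps that respects operator precedence.

```python
# Translate a tiny assignment into pseudo-instructions, multiplications first.
import ast

def translate(assignment: str):
    target, expr = (s.strip() for s in assignment.split("="))
    instructions, counter = [], 0

    def emit(node):
        nonlocal counter
        if isinstance(node, ast.Name):        # a plain variable reference
            return node.id
        left, right = emit(node.left), emit(node.right)
        op = "MUL" if isinstance(node.op, ast.Mult) else "ADD"
        counter += 1
        temp = f"T{counter}"                  # invented temporary register
        instructions.append(f"{op} {left}, {right} -> {temp}")
        return temp

    result = emit(ast.parse(expr, mode="eval").body)
    instructions.append(f"STORE {result} -> {target}")
    return instructions

print(*translate("A = B + C * D"), sep="\n")
# MUL C, D -> T1
# ADD B, T1 -> T2
# STORE T2 -> A
```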
It is a testament to my former employer that they allowed a wild duck like John to pursue such an effort, funded it, protected it, and invested more when they saw the results.
FORTRAN didn’t just make coding easier; it made computing scalable. For the first time, scientists and engineers could express problems symbolically rather than mechanically. It became the language of physics simulations, engineering calculations, and the emerging world of computer-aided design. Even today, nearly seven decades later, FORTRAN quietly powers weather models, nuclear research, and financial systems.
In 1959, when journalists asked Backus what he thought he had really invented, he replied, “Freedom.” But as for me, John gave me my career and some very precious moments with my Mom.
Ivan Sutherland (7 January 1963) – The Man Who Taught Computers to Draw
Imagine the beginning of 1963: John Kennedy was President of the United States, and the Beatles’ second single, Please Please Me, was about to be released in the UK. But no one had ever seen anything like what they were about to see: an interactive computer graphics program that let you draw shapes, doodles, whatever you liked, on a screen. And it all came together in a doctoral thesis on the 7th of January, 1963.
Born in 1938 in Nebraska, Ivan Sutherland grew up with a mechanical curiosity. After earning his bachelor’s degree from Carnegie Tech and a master’s from Caltech, he pursued his Ph.D. at MIT under Claude Shannon, another hero on this list. It was at MIT that Sutherland produced one of the most influential dissertations in computer history, and he called it Sketchpad.
Lest you think this was a simple drawing program, Sketchpad was a revelation. For the first time, a computer responded to human gestures, your digits could provide digital input, and you could see it all in real time. Running on MIT’s TX-2, a machine with all of 64K words of memory, Sketchpad allowed users to draw directly on a screen using a light pen. You could even define constraints, such as “this line must remain vertical”, that the program would enforce automatically.
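Here is a tiny sketch of my own of that constraint idea, nowhere near Sketchpad’s actual constraint solver, which re-satisfied whole networks of constraints, but enough to show the flavor of “declare it once and the program keeps it true”:

```python
# Keep a line vertical: whatever the user drags, snap both endpoints to a
# shared x-coordinate (the average of the two they drew).
def enforce_vertical(line):
    (x1, y1), (x2, y2) = line
    x = (x1 + x2) / 2
    return [(x, y1), (x, y2)]

print(enforce_vertical([(10, 0), (14, 50)]))   # -> [(12.0, 0), (12.0, 50)]
```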
During his doctoral defense, one examiner reportedly asked Sutherland what Sketchpad was really for, and he replied, “It’s for making drawings of anything.” That “anything” concept became everything: CAD systems, graphical user interfaces, games, computer animation, and virtual reality, all of them can trace their lineage to Sketchpad.
Sutherland didn’t stop there. With David Evans he co-founded Evans & Sutherland, which pioneered real-time 3D graphics and flight simulators, and he went on to build the first head-mounted display, the ancestor of today’s VR headsets.
Ivan Sutherland didn’t merely invent computer graphics; he invented the idea that computers could be visual partners in human imagination. I doubt even Ivan could have imagined where that would go.
Donald Knuth (1 February 1968) — The Philosopher
Alan’s immortal work, “On Computable Numbers, with an Application to the Entscheidungsproblem”, is the genesis of Computer Science. But The Art of Computer Programming (TAOCP) by Donald (Don) Knuth made Computer Science into that most unique of sciences, giving it the character it has had ever since TAOCP was published: science with a splash of art.
Of all my heroes, Don is the first on this list that I have met. And he is all that mixture of science and philosophy that you can imagine. I told him I wrote a linear sorting algorithm based on his work and showed it to him, and he commented not on the result but on the elegance of my implementation. That is Don.
Born in Milwaukee in 1938, Knuth combines a mathematician’s logic, a writer’s obsession with precision, and a philosopher’s instinct for searching out greater meaning. By the late 1950s, computers were fast enough to do remarkable things, but the programs running on them were often crude, inefficient, and undocumented. Knuth saw programming not just as engineering but as a form of art, a craft of clarity and structure.
While teaching at Stanford, Knuth began writing lecture notes to explain the principles of efficient computation. Those notes grew into what became his life’s magnum opus: The Art of Computer Programming (TAOCP), the first volume published in 1968. His goal was audacious and unprecedented: to catalog the entire field of algorithms, analyze them mathematically, and make that analysis readable and understandable - and most importantly, learnable.
In TAOCP, Don created MIX, a hypothetical computer, along with its own assembly language, MIXAL. This allowed him to be completely agnostic to device or manufacturer, letting the reader of his work focus on the code itself and how algorithms work at the machine level.
When a new edition of TAOCP came back from the phototypesetter with its mathematical equations mangled, Don, of course, didn’t complain; he built a new system to fix it. That project became TeX, a typesetting language so precise that it remains the standard for mathematical and scientific publishing worldwide. And to this day, he offers anyone who finds a typo in his books a reward check for $2.56, that’s one hexadecimal dollar (0x100 cents). Thousands have received them; most never cash them. I certainly didn’t cash mine.
John Kemeny & Thomas Kurtz (1 May 1964) – Anyone Can Code
I got my first computer on Christmas Day 1981: a brand-new Commodore VIC-20 running at a blazing 1.02 MHz with five kilobytes of RAM. I connected it up and was confronted with something I had never seen before, even though I had been programming with my Mom in FORTRAN for years: a prompt that said “CBM BASIC V2” and “READY”. READY? Ready for what? I was used to the classic business mini-computers of the era, not something that was just saying READY to me. And then I realized that BASIC was a programming language of some kind, and the new VIC-20 was READY for me to type a program.
In the early 1960s, computers were rare, expensive, and intimidating. Programming was reserved for specialists who spoke the language of punch cards and assembly code. But at Dartmouth College in Hanover, New Hampshire, two professors believed that computing should be as universal as mathematics, that students in every discipline should be able to harness the power of a computer without needing to be engineers. They were the pioneers of what would eventually be called the citizen developer movement.
Kemeny, a mathematician who had served as Albert Einstein’s research assistant at Princeton, and Kurtz, a physicist and systems thinker, set out to democratize computing. Their goal was to build a time-sharing system that allowed multiple users to work simultaneously, and to design a simple programming language for it, one that anyone could learn in an afternoon.
On the morning of 1 May 1964, at 4:00 a.m. (and as a computer scientist, I can appreciate that they were working all night; some things never change), they succeeded. In a small lab at Dartmouth, a student typed a few lines of code into a teletype terminal and entered a single command: RUN. For the first time, two programs executed simultaneously on the same mainframe using code that anyone could understand. The language the two of them created was BASIC, the Beginner’s All-purpose Symbolic Instruction Code, and it democratized programming.
That first BASIC program, running on a GE-225 computer, printed a simple arithmetic result. John watched as the teletype clattered out the correct answer, then turned to his team and said, “It worked.” That understated moment marked a revolution. Within a decade, BASIC would spread from college labs to the first generation of microcomputers, inspiring young programmers like Bill Gates (Microsoft’s first product was a BASIC for the Altair 8800), Steve Wozniak, and little me on my Commodore VIC-20.
BASIC’s power was in its simplicity. Commands like PRINT, INPUT, and IF/THEN mirrored natural language, and the syntax was forgiving enough for beginners to explore and understand. Combined with the pioneering Dartmouth Time-Sharing System (DTSS), students from any major could now write and run programs instantly: artists, sociologists, and poets right alongside engineers and physicists.
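Just to show why that syntax felt so approachable, here is a toy interpreter of my own (nowhere near Dartmouth’s implementation) that runs a BASIC-flavored snippet: numbered lines, PRINT, and IF ... THEN read almost like English.

```python
# A BASIC-flavored program as numbered lines, executed in order.
program = {
    10: 'PRINT "HELLO FROM 1964"',
    20: 'IF 2 = 2 THEN PRINT "MATH STILL WORKS"',
    30: 'IF 2 = 3 THEN PRINT "THIS LINE NEVER PRINTS"',
}

for number in sorted(program):                 # run lines in numeric order
    line = program[number]
    if line.startswith("IF "):
        condition, _, line = line[3:].partition(" THEN ")
        left, _, right = condition.partition(" = ")
        if left.strip() != right.strip():      # condition false: skip the THEN part
            continue
    if line.startswith("PRINT "):
        print(line[6:].strip('"'))
```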
Kemeny and Kurtz were building a future where computers were partners in learning and innovation. Kemeny later became president of Dartmouth and continued to advocate for computer literacy as a civic right. Kurtz led efforts in computing education, ethics, and open access, ensuring that the pair’s vision endured beyond the mainframe era, and so it did.
When we talk about the personal computer revolution, we often begin with garages in Silicon Valley, or the windswept desert of New Mexico, perhaps even in the rolling hills of Valley Forge, Pennsylvania, but it really began in a quiet New Hampshire lab at four in the morning. This will not be the last time in this series that New Hampshire comes up.
Douglas Engelbart (9 December 1968) – The Man Who Showed Us the Future
It was literally the “Mother of all Demos”: 9 December 1968, at the Fall Joint Computer Conference in San Francisco. In front of a stunned audience of over 1,000 engineers, Douglas Engelbart sat at a custom-built console connected by a 30-mile link to the Stanford Research Institute (SRI) mainframe.
On a large projection screen, he moved a small device he called a mouse, nothing more than a wooden block with two perpendicular wheels, to control a cursor. He opened and rearranged text in multiple windows, clicked hyperlinks, edited documents collaboratively with a colleague on another terminal, and even conducted an early form of video call. In 1968! When he was done, the audience was stunned into utter silence. I cannot think of any other computer science demo in history that has its own Wikipedia page and inspired a musical.
Born in 1925 in Portland, Oregon, Engelbart served as a radar technician in the Philippines during World War II before earning an engineering degree from Oregon State and a Ph.D. from Berkeley. Like most of my heroes, he was inspired by a simple idea, and Doug’s was that computers should not just compute; they should help people think better together.
Doug envisioned something radical: an interactive workspace where people could collaborate in real time, share information, and navigate knowledge as fluidly as thinking. Working at SRI in Menlo Park through the 1960s, he and his team at the Augmentation Research Center quietly built prototypes of nearly every modern computing concept, the mouse, hypertext, windows, video conferencing, real-time editing, and networked collaboration.
Doug’s ideas didn’t immediately take off. For years, many dismissed them as impractical or utopian. Yet the young researchers who saw that demo, many of whom later joined Xerox PARC, Apple, and Microsoft, and will be in future articles in my list of heroes, took his concepts into the age of personal computing. It turns out, “The Mother of all Demos” was actually the future.
Douglas Engelbart didn’t just predict the future of computing or write about some far-off computing utopia; he demoed it, live. A lesson I learned well in my career: seeing is believing.