Computers have become so ubiquitous in our daily lives that we tend not to think about their inner workings very much, except perhaps when they vex us with the occasional crash or other glitch. We never see the incredible complexity that lies far beneath our handy user interfaces and high-resolution screens. From clunky pieces of equipment that filled entire rooms and contained hundreds of vacuum tubes, 21st-century computers have evolved into machines of ever-increasing power and capacity that fit millions, even billions, of microscopic electronic components on a single sliver of silicon.
The heart of every computer, from a mainframe to a smartphone, is that bit of silicon: the microprocessor that crunches the ones and zeroes that constitute digital data. One way to increase processing speed is to have multiple operations proceed in parallel. The basic organization of any microprocessor is called its microarchitecture, and finding ways to improve and perfect that microarchitecture is both an art and a science. This is where Yale Patt has built his career, creating new designs and concepts for microprocessors.
Patt earned his B.S. in electrical engineering at Northeastern University in 1962, then completed his master's and Ph.D. work at Stanford University, earning his doctorate in 1966. After a brief period teaching at Cornell University, he served in the U.S. Army during the Vietnam War. Upon his return to civilian life, he taught at a number of universities, including North Carolina State; San Francisco State; the University of California, Berkeley; the University of Michigan; and the University of Texas at Austin, where he has been the Ernest Cockrell, Jr. Centennial Chair in Engineering and professor of electrical and computer engineering since 1999.
Throughout his career, Patt has worked on ways to improve the basic structure of a computer system, a scheme that still follows the concepts devised by mathematician John von Neumann in 1945: a central processor paired with a memory that stores both programs and data. Such a structure creates a bottleneck, because the processor must access the stored program instructions, and the data upon which those instructions operate, one after another in sequence, slowing the computer's operations. As computers are asked to perform ever more complex tasks at ever greater speeds, this limitation becomes a major issue.
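The sequential fetch-and-execute cycle described above can be sketched in a few lines. This is a minimal illustration only, with a hypothetical instruction set invented for the example; it shows how, in a von Neumann machine, instructions and data share one memory, and each step must wait its turn for access to it.

```python
# A minimal sketch of the sequential fetch-execute cycle in a
# von Neumann-style machine. The opcodes (LOAD/ADD/STORE/HALT) are
# hypothetical, chosen only to illustrate the shared-memory bottleneck:
# every instruction fetch and every data access goes through `memory`.

def run(memory):
    """Interpret a tiny program stored alongside its own data."""
    acc = 0  # accumulator register
    pc = 0   # program counter
    while True:
        op, operand = memory[pc]      # fetch: one memory access
        pc += 1
        if op == "LOAD":
            acc = memory[operand]     # a second memory access, for data
        elif op == "ADD":
            acc += memory[operand]
        elif op == "STORE":
            memory[operand] = acc
        elif op == "HALT":
            return memory

# The program at addresses 0-3 adds the data at addresses 4 and 5
# and stores the sum at address 6.
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 6), 3: ("HALT", None),
       4: 7, 5: 35, 6: 0}
print(run(mem)[6])  # → 42
```

Because the fetch of each instruction and each piece of data passes through the single `memory` in turn, nothing here can overlap; removing that serialization is exactly the problem the rest of this profile describes.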
Patt found ways around the bottleneck, inventing a new computer microarchitecture that allowed the central processor to execute program instructions in parallel and out of the order written by the programmer, while still committing the results of those instructions in the order the programmer intended. Patt took the first important step on this path in the mid-1960s, when he invented the WOS module, the first complex digital logic circuit on a single silicon chip.
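The key idea, that instructions may finish executing out of order while their results are committed strictly in program order, can be illustrated with a simple retirement queue. This is a bare sketch of the general principle, not a description of Patt's actual HPS design; the instruction names are invented for the example.

```python
# A minimal sketch of in-order retirement: instructions may *complete*
# out of order, but results are *committed* in program order, so the
# program's visible behavior is unchanged. This mimics the role of a
# reorder buffer, in radically simplified form.

def commit_in_order(program_order, completion_order):
    """Return the order in which results become visible.

    program_order: instruction ids in the order the programmer wrote them.
    completion_order: the order in which execution units happen to
        finish those instructions.
    """
    done = set()
    committed = []
    head = 0  # oldest not-yet-committed instruction
    for finished in completion_order:
        done.add(finished)
        # Retire from the head as long as the oldest instruction is done.
        while head < len(program_order) and program_order[head] in done:
            committed.append(program_order[head])
            head += 1
    return committed

# i2 finishes before i1, yet the committed order matches program order.
print(commit_in_order(["i0", "i1", "i2", "i3"],
                      ["i0", "i2", "i1", "i3"]))
# → ['i0', 'i1', 'i2', 'i3']
```

Note how `i2`'s result is held back until `i1` has completed: the machine exploits parallelism internally while presenting the programmer with exactly the sequential behavior they specified.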
Microelectronics technology was still catching up to Patt's ideas in the 1980s, when he developed his theories of "out of order" processing and conditional branch handling. He brought these concepts together in a new computer microarchitecture called HPS, or "High Performance Substrate," in which processing instructions in parallel and out of order could increase computer efficiency. Finally, by the 1990s, microprocessor chips had advanced to the point where millions of transistors could fit on a single chip, and designers could begin to fully implement Patt's ideas. Key among these is branch prediction, in which the processor uses the past behavior of a program's branches to anticipate which paths will be taken, so that it can better decide which instructions it can perform in parallel.
Apart from his contributions in computer engineering, Patt is one of the most influential educators in the field, at both the undergraduate and graduate levels. Along with Sanjay Patel, one of his former doctoral students, he authored the freshman textbook Introduction to Computing Systems: From Bits and Gates to C and Beyond, which implements his revolutionary motivated bottom-up approach to introducing computing to new students. Since its first edition in 2001, it has been adopted by more than 100 universities worldwide, and that number continues to grow every year.
Patt's work in both research and education has been recognized with a broad range of awards and professional honors. These include the highest honor in computer architecture, the Eckert-Mauchly Award, given jointly by the Association for Computing Machinery and the IEEE, which he received in 1996. In 2011, he was the inaugural recipient of the IEEE B. Ramakrishna Rau Award. He has also received the IEEE's Emanuel R. Piore Award, Wallace W. McDowell Award, and Harry H. Goode Award. In 2014, he was elected to the National Academy of Engineering. Added to these are numerous teaching awards, including the Association for Computing Machinery's Karl V. Karlstrom Outstanding Educator Award, induction into the University of Texas Academy of Distinguished Teachers, and an honorary doctorate from the University of Belgrade.
Working in a relatively new discipline that never stands still but is ever expanding in its scope, importance, and complexity, Yale Patt is suitably restless, always looking and building toward the long-term future development of computers instead of concentrating on short-term, potentially commercial goals. It is a philosophy he follows both in his engineering work and in his teaching, which he views as mentoring not only the next generation of computer designers and engineers but also those who will follow. The large number of his students who are already recognized as industry leaders and noted educators ensures that Yale Patt's contributions to computer science will be recognized as long as human beings continue to build and rely on computers.
Information as of March 2016