Born in the Far Rockaway section of New York City, on May 11, 1918, Feynman was the descendant of Russian and Polish Jews who had emigrated to the United States late in the 19th century. He studied physics at the Massachusetts Institute of Technology, where his undergraduate thesis (1939) proposed an original and enduring approach to calculating forces in molecules. Feynman received his Ph.D. at Princeton University in 1942. At Princeton, with his adviser, John Archibald Wheeler, he developed an approach to quantum mechanics governed by the principle of least action. This approach replaced the wave-oriented electromagnetic picture developed by James Clerk Maxwell with one based entirely on particle interactions mapped in space and time. In effect, Feynman's method calculated the probabilities of all the possible paths a particle could take in going from one point to another.
During World War II Feynman was recruited to serve as a staff member of the U.S. atomic bomb project at Princeton University (1941-42). During this time Feynman married Arline Greenbaum, the girl of his dreams. In 1943 they set out for Los Alamos, N.M., where he was to begin work at a new secret laboratory (1943-45). Arline, who was dying of tuberculosis, entered a hospital in Albuquerque.
At Los Alamos he became the youngest group leader in the theoretical division of the Manhattan Project. With the head of that division, Hans Bethe, he devised the formula for predicting the energy yield of a nuclear explosive. Feynman also took charge of the project's primitive computing effort, using a hybrid of new calculating machines and human workers to try to process the vast amounts of numerical computation required by the project. He quite literally observed the first detonation of an atomic bomb, on July 16, 1945, at Alamogordo, N.M. Because the radios issued to warn everyone not to look didn't work properly, he looked up just as the bomb went off and saw the incredible flash and the formation of the now-familiar mushroom cloud. Although his initial reaction was euphoric, he later felt anxiety about the force he and his colleagues had helped unleash on the world.
While Feynman was working at Los Alamos, it became clear that he was on a level with the intellectual giants of his day. It was here, too, that he became the holder of patents for an atomic submarine and an atomic airplane, although at the time he was only suggesting ideas for applications of the laboratory's work to a patent officer; only some time later did he discover that the patents had actually been attributed to him.
In 1945, Arline died in the hospital in Albuquerque. Feynman was deeply distraught at her death.
At war's end Feynman became an associate professor of theoretical physics at Cornell University (1945-50) and returned to studying the fundamental issues of quantum electrodynamics (QED), the quantum mechanical description of the interaction between light and matter. In the years that followed, his vision of particle interaction kept returning to the forefront of physics as scientists explored esoteric new domains at the subatomic level. In 1950 he became professor of theoretical physics at the California Institute of Technology (Caltech), where he remained the rest of his career.
In 1952, Feynman married Mary Louise Bell. She was a university instructor in the history of decorative art. However, they were divorced in 1956. In 1960, he married for the final time to Gweneth Howarth. Between 1962 and 1968, they had a son, Carl, and adopted a daughter, Michelle.
In the early 1950s Feynman provided a quantum-mechanical explanation for the Soviet physicist Lev D. Landau's theory of superfluidity--i.e., the strange, frictionless behaviour of liquid helium at temperatures near absolute zero. In 1958 he and the American physicist Murray Gell-Mann devised a theory that accounted for most of the phenomena associated with the weak force, which is the force at work in radioactive decay. Their theory, which turns on the asymmetrical "handedness" of particle spin, proved particularly fruitful in modern particle physics. And finally, in 1968, while working with experimenters at the Stanford Linear Accelerator on the scattering of high-energy electrons by protons, Feynman invented a theory of "partons," or hypothetical hard particles inside the nucleus of the atom, that helped lead to the modern understanding of quarks.
Feynman remade quantum electrodynamics and thus altered the way science understands the nature of waves and particles. He was co-awarded the Nobel Prize for Physics in 1965 for this work, which tied together in an experimentally perfect package all the varied phenomena at work in light, radio, electricity, and magnetism. The other cowinners of the Nobel Prize, Julian S. Schwinger of the United States and Tomonaga Shin'ichiro of Japan, had independently created equivalent theories, but it was Feynman's that proved the most original and far-reaching. The problem-solving tools that he invented--including pictorial representations of particle interactions known as Feynman diagrams--permeated many areas of theoretical physics in the second half of the 20th century.
Five particular achievements of Feynman stand out as crucial to the development of modern physics. First, and most important, is his work in correcting the inaccuracies of earlier formulations of quantum electrodynamics, the theory that explains the interactions between electromagnetic radiation (photons) and charged subatomic particles such as electrons and positrons (antielectrons). By 1948 Feynman completed this reconstruction of a large part of quantum mechanics and electrodynamics and resolved the meaningless results that the old quantum electrodynamic theory sometimes produced. He introduced simple diagrams, now called Feynman diagrams, that are easily visualized graphic analogues of the complicated mathematical expressions needed to describe the behaviour of systems of interacting particles. This work greatly simplified some of the calculations used to describe and predict such interactions. (See also Feynman diagram; quantum electrodynamics.)
Feynman's stature among physicists transcended the sum of even his sizable contributions to the field. His bold and colourful personality, unencumbered by false dignity or notions of excessive self-importance, seemed to announce: "Here is an unconventional mind." He was a master calculator who could create a dramatic impression in a group of scientists by slashing through a difficult numerical problem. His purely intellectual reputation became a part of the scenery of modern science. Feynman diagrams, Feynman integrals, and Feynman rules joined Feynman stories in the everyday conversation of physicists. They would say of a promising young colleague, "He's no Feynman, but . . ." His fellow physicists envied his flashes of inspiration and admired him for other qualities as well: a faith in nature's simple truths, a skepticism about official wisdom, and an impatience with mediocrity.
Feynman's lectures at Caltech evolved into the books Quantum Electrodynamics (1961) and The Theory of Fundamental Processes (1961). In 1961 he began reorganizing and teaching the introductory physics course at Caltech; the result, published as The Feynman Lectures on Physics, 3 vol. (1963-65), became a classic textbook. Feynman's views on quantum mechanics, scientific method, the relations between science and religion, and the role of beauty and uncertainty in scientific knowledge are expressed in two models of science writing, again distilled from lectures: The Character of Physical Law (1965) and QED: The Strange Theory of Light and Matter (1985). ( J.Gl.)
In the 1980s, Feynman became a great public figure. This was the last decade of his life. In 1985, a friend of his, Ralph Leighton, wrote "Surely You're Joking, Mr. Feynman!" which became a surprise bestseller. Three years later, the book was followed by a second volume entitled, "What Do You Care What Other People Think?" also by Ralph Leighton.
On January 28, 1986, the space shuttle Challenger accident occurred. NASA asked Feynman, along with others, to help investigate the accident. Feynman figured out what had gone wrong and announced it during a nationally televised hearing of the commission: the O-ring gasket material lost its resiliency at freezing temperatures.
Feynman's last lecture took place on Friday, December 4, 1987. The lecture was on curved spacetime. Richard P. Feynman died two months later, on February 15, 1988.
Al Seckel was a close friend of Richard Feynman's during the 1980s. Seckel is currently trying to determine the neuronal correlates of visual and other sensory illusions in the Koch Laboratory, Division of Computational and Neuronal Sciences, California Institute of Technology. He is also writing a book on this subject for The MIT Press. He has also amassed the world's largest collection of illusions on the internet at Illusionworks.
Feynman on Hawking
Surely You're Joking, Mr. Gell-Mann!
A Visit to Penn and Teller
A Physicist for Lunch
The Nobel Prize
A Chance Meeting with an Acquaintance
Penrose and Feynman
Murray and Richard
The Amazing Randi Meets the Chief
The Supernatural Clock
Because I am Richard Feynman
A Kooky Phone Call
A Bump on the Head
The Johnny Carson Show
It's All a Blur
Feynman on Hawking
Several conversations that Feynman and I had involved the remarkable abilities of other physicists. In one of these conversations, I remarked to Feynman that I was impressed by Stephen Hawking's ability to do path integration in his head. Ahh, that's not so great, Feynman replied. It's much more interesting to come up with the technique like I did, rather than to be able to do the mechanics in your head. Feynman wasn't being immodest; he was quite right. The true secret of genius is in creativity, not in technical mechanics.
Murray and Richard
Once I made the mistake of inviting both Feynman and Murray Gell-Mann over for dinner with a couple of other guests. Almost the entire evening was spent with the two of them sparring back and forth: You don't have to say that, I already know that. Then the other one would say something, and again, You don't have to say that, I already know that. Back and forth it went. My wife Laura turned to Gweneth Feynman and said, Why do they even bother? Gweneth responded, We try to keep them apart as much as possible. Later that evening, we were all sitting around the table talking when someone said something and Murray Gell-Mann remarked, Oh, that's a pleonasm. Everyone went, What? It's a sentence with a triple redundancy, Gell-Mann stated. Gell-Mann is well known among his associates for his pedantic knowledge of language and facts. Feynman and I sneaked into my library, where we looked it up in the dictionary. Gell-Mann was right. Feynman hit his fist on the table and exclaimed, DAMN IT! He's always GODDAMNED right, always! Let's see if we can catch him tonight, I replied. Later in the evening, the subject of antiquarian books on witchcraft came up and Gell-Mann said, Do you know the Malleus Maleficarum, written by James I in 1623? No, Murray, the Malleus Maleficarum was written by Sprenger and Kramer in 1486; James I wrote the Demonology in 1597, I said authoritatively. Gell-Mann, looking very astonished, turned and said, WHAT? At that moment there were the beginnings of a great smile on Dick's face, but the point hadn't been proved yet. Gell-Mann again said, WHAT? I pulled out my encyclopedia of witchcraft and verified the titles, authors, and dates. Feynman slid under the table laughing, roaring, Let the trumpets roar and the angels sing! I'll never let you forget this, Murray! I knew it was an act all along!
Surely You're Joking, Mr. Gell-Mann!
This took place just after the publication of Surely You're Joking, Mr. Feynman. We were all sitting together at lunch talking about the success of the book, when one of the other graduate students remarked that he had not seen Murray Gell-Mann lately. I thought he had gone and started writing his own book of anecdotes. The other student remarked, Yeah, and I know what he is going to call it too: 'Damn It, Murray, You're Right Again!' At this remark, Feynman lost it and slid under the table laughing. There was another lunch conversation involving Gell-Mann that took place right after the publication of Surely You're Joking. Gell-Mann, with whom I was also quite friendly, confided to me that he was very upset with Feynman's written account of their joint discovery of the theory of beta decay. He felt that Feynman had not reported the account accurately and was giving himself undue credit. At lunch the mood was jovial, and I took the occasion to pass along Gell-Mann's feelings about the controversial passage. Feynman's smile immediately disappeared. He looked rather sad and hurt. This was the first time he had heard Gell-Mann's reaction to the book. You know, I tried extra hard, very hard in fact, in the passages I wrote about Murray. I was especially careful. Apparently, Gell-Mann was indeed upset, and there are published accounts of various explosions on the fourth floor of the physics building, where they had offices close together. Feynman did indeed change the passage to suit Gell-Mann's wishes.
The Amazing Randi Meets The Chief
Back in 1984 Feynman attended a lecture at Caltech given by James "The Amazing" Randi, a well-known magician and debunker of psychics. At this lecture, Randi performed a very good mental trick involving a newspaper and a prediction contained in an envelope pasted to the blackboard. The next evening, Randi and Feynman were at my house for dinner. It was a delightful and fun evening with lots of jokes and laughter all around. At about 1:30 a.m., with Feynman and Randi still going strong, Feynman decided to figure out how Randi did his mental trick. Oh, no. You can't solve that trick. You don't have enough information! Randi exclaimed. What do you mean? Physicists never have enough information, Feynman responded. Feynman began to stare off into space, with Randi muttering all the while that he would not be able to solve it. Step by step, Feynman went through the process out loud and told Randi how the trick must have been done. Randi literally fell backwards over his chair and exclaimed, You didn't fall off no apple cart! You didn't get that Swedish Prize for nothing! Feynman roared with laughter. Later, on another visit to Caltech, Randi once again joined us for lunch. He did another trick for Feynman, this time a card trick. I DELIBERATELY misled you this time! Randi stated. Feynman paid him no attention. In less than three minutes, Feynman solved the trick. I'm never going to show you another trick again! declared a frustrated Randi.
Almost without fail, whenever Feynman's name came up in private conversation, Murray Gell-Mann would inevitably remark, He's always concerned with generating anecdotes about himself. In fact, there was some truth to Gell-Mann's remarks. On one occasion Feynman and I attended a physics lecture by a visiting professor. We got there early and took the front row seats. Feynman noticed that the lecturer had left his notes on the seat beside him. Feynman proceeded to look through the notes, and I could see that he was registering what he was reading. He put the notes back down and the professor came back in. During the course of the lecture, the professor stated, I have spent a considerable time working out the derivation of this particular formula... Feynman stated, Ahh, the solution is obvious! It's..... The professor, and the rest of the audience for that matter, was dumbfounded as Feynman, who appeared to be giving an answer off the cuff, gave the solution. As we left the lecture, I turned to Feynman and gave him a knowing look. He smiled back.
The Supernatural Clock
Once we were talking about the supernatural, and the following anecdote involving his first wife, Arline, came up. Arline had tuberculosis and was confined to a hospital while Feynman was at Los Alamos. Next to her bed was an old clock. Arline told Feynman that the clock was a symbol of the time that they had together and that he should always remember that. Always look at the clock to remember the time we have together, she said. The day that Arline died in the hospital, Feynman was given a note from the nurse that indicated the time of death. Feynman noted that the clock had stopped at exactly that time. It was as if the clock, which had been a symbol of their time together, had stopped at the moment of her death. Did you make a connection? I asked. NO! NOT FOR A SECOND! I immediately began to think how this could have happened. And I realized that the clock was old and was always breaking. The clock had probably stopped some time before, and the nurse, coming into the room to record the time of death, would have looked at the clock and jotted down the time from that. I never made any supernatural connection, not even for a second. I just wanted to figure out how it happened.
One evening Feynman and I arrived early at a lecture at Caltech. We were sitting in the seats, gossiping about things of no consequence whatever, when we heard some students come in and whisper, Hey, look, it's Feynman! I bet they are discussing something really important!
Because I Am Richard Feynman
Feynman and I would sometimes go camping together. On these occasions he would drive his van, which had Feynman diagrams painted all over it and a license plate that said QUANTUM. (Murray Gell-Mann had a license plate that said QUARKS.) I asked Feynman if anyone ever recognized the diagrams. Yes. Once we were driving in the Midwest and we pulled into a McDonald's. Someone came up to me and asked me why I had Feynman diagrams all over my van. I replied, 'Because I AM Feynman!' The young man went, Ahhhhh....
A Visit to Penn and Teller
Penn and Teller are well known as the Bad Boys of Magic and are among today's most popular magicians, appearing on many late-night TV shows, in acts in Las Vegas, and so forth. There was a time, however, when they were still unknown and had a little-known stage show in Los Angeles. I thought they were really funny and clever. They had a hard time getting people to come to their show and, since I was friends with them, they would call me up and tell me to bring my friends. I convinced Feynman, Al Hibbs, and Tom Van Sant (Feynman's artist friend) to accompany me to the show. The show was excellent and we all had a good time. I tried to get Penn to understand who Feynman was and that he should pay him some attention, but Penn didn't realize at the time who I had brought, so there was not much interaction. This was too bad, as Feynman really enjoyed the show. After the show, we went out to a nearby cafe and all of us tried to figure out the tricks. There was one that involved the cutting of a rose's shadow that had everyone going for some time, but this time the solution was not arrived at.
A Kooky Phone Call
On another occasion I was with both Gell-Mann and Feynman when the subject of kooky letters and phone calls came up. Feynman started relating the story of how one crazy woman called the office about some ridiculous theory of magnetic fields. He just could not get her off the phone. Gell-Mann responded, Oh, I remember that woman. I got her off the phone in less than a minute. How'd you do that? Feynman asked. I told her to call you, and that you were the resident expert on the topic!
A Physicist for Lunch
One thing that Feynman did not suffer gladly was fools, especially smart fools. He was very tolerant of those who could not understand, but extremely intolerant of those who refused to understand. One day a physicist friend of mine, Ron Unz, asked if he could be introduced to his hero, Richard Feynman. Ron had an impressive list of credentials behind him: winner of the prestigious Westinghouse Science Award, degrees from Harvard and Cambridge, and a former graduate student of Stephen Hawking. (A little aside: Ron was later to briefly gain some fame after he became a multimillionaire and ran against Pete Wilson in the Republican primary for governor of California.) In addition to Ron's impressive credentials, he had developed a rather controversial theory that charge was not conserved. He had published a paper about it in Physical Review, and he wanted to discuss his idea with Feynman. I agreed to invite him to one of our private lunch sessions. On the day in question, Ron made a terrible mistake. First of all, he showed up in a suit, which was certain to give a bad impression to Feynman. Then I made a mistake: I spilled the beans to Feynman just before lunch about Ron's ideas. Feynman roared and declared that he would refuse to eat with anyone that stupid. Feynman turned and walked away. I went back to Ron and told him what had happened. Ron was terribly disappointed, but I told him that I would persist. I went back to Feynman and convinced him to still have lunch with us. Feynman said, OK, as long as we don't talk physics. I don't want to hear anything about it. So I got Ron to join us. Not five minutes into the conversation, Feynman turned to Ron and said, OK, what's this dopey idea you have in physics? Ron, who is an extremely confident guy, turned and started to explain his theory. Feynman, booming loudly, declared, Did you think about this...? Did you think about that...? The response was almost inevitably, No. On it went.
I must say, I have never seen such a quick and merciless massacre of another individual in my life. It was sad.
A Bump on the Head
In the beginning part of 1984, Feynman was teaching a course on computing at Caltech. The course was co-taught by Gerald Sussman from MIT. On one occasion Feynman was lecturing at the blackboard, but this time Sussman kept coming up and correcting him. Later that week Feynman was supposed to come over for dinner. On the night in question, Feynman's wife Gweneth called to say that Feynman was in the hospital and that they would not be able to come over. She told me not to tell anyone, as she didn't want the word to get around. Apparently Feynman, in his excitement to purchase a new computer, had tripped on a sidewalk curb and hit his head. This caused some internal complications and bleeding. In a week or so, Feynman was back on his feet and returned to class. At lunch Feynman related what had happened. After bumping his head, he paid little or no attention to it. He was bleeding when he entered the computer store. What was interesting is that he gradually began to lose his sense of what was happening around him without internally realizing it. First, he couldn't locate his car. Then he had a very strange session with one of his artistic models. And on another day, he told his secretary Helen Tuck that he was going home, and proceeded to undress and lie down in his office. He forgot that he was to give a lecture at Hughes Aircraft, and so on... Everything was just rationalized away. But you know, he said, NO ONE told me I was going crazy. Now why not? I said, Come on. You are always doing weird stuff. Besides, there's such a fine line between genius and madness that it's sometimes difficult to tell! Listen, ape, the next time I go crazy around here, you be sure to tell me!
The Nobel Prize
Feynman was quite publicly critical of the Nobel Prize. He made many public pronouncements about his dislike of having received it, claiming that people liked him and attended his lectures only because of it. This is an argument that I just didn't buy, and I would argue with him. Hey, look, there are many people who have won the Nobel Prize and don't have the following that you do. Roger Sperry at Caltech does not get huge audiences every time he lectures. Your Messenger Lectures, which were delivered before you had such fame, had great audiences, so that argument doesn't hold. And so the argument went. I thought his popularity was due to other things, like his lecture notes, his colorful lectures, his personality, and so forth. I thought that the Nobel Prize had very little to do with his fame. Nevertheless, I respected his views. One weekend we were to go camping up in the mountains for some stargazing. I had arranged for a number of my friends to come along who did not know physics or who Feynman was. (This was before the publication of his two anecdote books made him famous outside the world of physics.) I specifically did not tell my friends who Feynman was or anything about him. A group of us arrived early at the campground and we decided to take a walk. Feynman had not yet arrived. When we returned, Feynman was sitting talking with my friends. I found out a little later that they had not been with him more than a few minutes before Feynman relayed the fact that he had won a Nobel Prize and so forth. I wasn't privy to the conversation, so I really don't know what was said, but my friends all of a sudden knew. I was irritated. I went through all this trouble to shield this fact, and Feynman himself let it out. I thought this was gross hypocrisy. (A number of other close Feynman acquaintances also felt this way.) Later, when we were alone with Manny Delbruck (her husband also won a Nobel Prize), the subject came up.
I tried to argue with him and express my irritation at how often the Prize comes up in conversation. That if he really didn't like the thing, then he should just be quiet about it. Boy were there explosions from his end. I don't think I ever said anything that irritated him as much as that.
The Johnny Carson Show
After Surely You're Joking became a best-seller, Feynman was invited to make an appearance on the well-known Johnny Carson show. A number of us were sitting around at dinner when the topic of the invitation came up. Feynman stated that he was unfamiliar with the show and was debating whether or not he should go on. Everyone there started putting on the hard sell. Al Hibbs discussed the excellence of the show and said that he had appeared on it several times discussing various space exploration missions by the Jet Propulsion Laboratory. Carson is a science buff, he exclaimed. Others joined in the chorus of approval. I turned to Feynman and said, Watch it first. Watch it before you make any sort of commitment. He turned to me and said, That's the first wise thing I have heard on this topic. A few days later Feynman spotted me walking across the campus and demanded I come over. What? YOU WERE RIGHT! I watched that show and it was the most idiotic program I have ever seen. I would have walked off in the middle of it.
A Chance Meeting With An Acquaintance
A while back I was invited to a strange but nevertheless interesting party. At this party there were all sorts of people from various professions. During the course of the evening, one very buxom woman came up to me and introduced herself. It turned out that she was a well-known stripper and actress in adult movies by the name of Candi Samples. When she found out that I studied physics, she asked whether I knew a guy by the name of Dick Feynman. Yes, I replied. I must admit I was rather astonished to hear his name in this connection. He is one of my biggest fans... she said. A few days later I was in Feynman's office and we were talking when I said to him, Hey, I ran into an interesting acquaintance of yours at a party the other night. Her name is Candi Samples. Feynman immediately smiled and said, Hey, Al, look at this! He went over to his file cabinet, which I thought contained all of his most important and intellectual works. It didn't take him long to pull out a black-and-white autographed nude shot of Candi Samples, inscribed, To Big Dick, Love from Candi!
It's All A Blur
Once we were out driving in his van in downtown Pasadena when he diverted his attention to a beautiful girl walking down the sidewalk. He instantly slowed the van and narrowly missed another car, which gave out an angry honk. Geeze, I said, didn't you see that guy? No, I only see the women; the rest is all a blur.
Penrose and Feynman
Not long ago I gave a lecture at Oxford University. While I was there I had the good fortune to have a long lunch with physicist/mathematician Roger Penrose, who is responsible for much of our understanding of black holes. The topic of Feynman came up, and Penrose related the following story: A while back he was visiting Caltech with Stephen Hawking. Hawking asked Penrose if there was anyone at Caltech he wanted to meet. The choice obviously came down to either Feynman or Gell-Mann. Penrose decided they should try to get hold of Feynman. Hawking called the office, but Feynman wasn't in; he was on vacation. It turned out, however, that he was vacationing at his home. Hawking called Feynman at home, and Feynman reluctantly agreed to come over the next day. The subject of quantum gravity came up, and Penrose and Feynman got into a heated argument. Penrose said, Feynman was so quick, he was usually about five steps ahead of me at any given point. Sometimes he didn't listen to what I was saying. The whole thing was mentally exhausting. I was completely drained at the end of the session. I have never encountered anyone so quick before. What Penrose and many other physicists didn't realize was the reason for Feynman's quickness on many matters in physics: Feynman had thought about some of these areas in great depth and for long periods of time. A topic like quantum gravity was one that Feynman had spent countless hours thinking about. It wasn't all off the cuff.
The Mysterious 137
If you have ever read Cargo Cult Science by Richard Feynman, you know that he believed that there were still many things that experts, or in this case physicists, did not know. One of these 'unknowns' that he pointed out often to all of his colleagues was the mysterious number 137. This number is the inverse of the fine-structure constant (the constant's actual value is about 1/137.036), which is defined as the square of the charge of the electron (e) divided by the product of the reduced Planck constant (ħ) and the speed of light (c). This number is related to the probability that an electron will emit or absorb a photon. It has further significance in that it ties together three very important domains of physics: electromagnetism in the form of the charge of the electron, relativity in the form of the speed of light, and quantum mechanics in the form of Planck's constant. Since the early 1900s, physicists have thought that this number might lie at the heart of a GUT, or Grand Unified Theory, which could relate the theories of electromagnetism, quantum mechanics, and, most especially, gravity. However, physicists have yet to find any link between the number 137 and any other physical law in the universe. It was expected that such an important equation would generate an important number, like one or pi, but this was not the case. In fact, about the only thing the number relates to at all is the room in which the great physicist Wolfgang Pauli died: room 137. So whenever you think that science has finally discovered everything it possibly can, remember Richard Feynman and the number 137.
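For the curious, the value described above is easy to reproduce from published constants. A minimal sketch in Python: in SI units the Gaussian-units expression e²/(ħc) becomes e²/(4πε₀ħc); the numerical values below are CODATA figures (e, h, and c are exact by definition in the present SI).

```python
import math

# Fine-structure constant in SI units: alpha = e^2 / (4*pi*eps0*hbar*c)
e = 1.602176634e-19      # elementary charge, C (exact)
h = 6.62607015e-34       # Planck constant, J*s (exact)
hbar = h / (2 * math.pi) # reduced Planck constant
c = 299792458.0          # speed of light, m/s (exact)
eps0 = 8.8541878128e-12  # vacuum permittivity, F/m

alpha = e**2 / (4 * math.pi * eps0 * hbar * c)
print(f"1/alpha = {1 / alpha:.3f}")  # -> 1/alpha = 137.036
```

Note that 1/α is not exactly 137 but roughly 137.036, which is part of why attempts to derive it from pure numerology have failed.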
Dr. Bill Riemers writes: classical physics tells us that electrons captured by element #137 (as yet undiscovered and unnamed) of the periodic table would move at the speed of light. The idea is quite simple, if you don't use math to explain it. The odds that an electron will absorb a single photon are 1 in 137. Protons and electrons are bound by interactions with photons. So when you get 137 protons, you get 137 photons, and you get a 100% chance of absorption. An electron in the ground state would orbit at the speed of light. This is the electromagnetic equivalent of a black hole. For a gravitational black hole, general relativity comes to the rescue to prevent planets from orbiting at the speed of light and beyond. For an electromagnetic black hole, general relativity likewise comes to the rescue and saves element 137 from having electrons moving faster than the speed of light. However, even with general relativity, element 139 would still have electrons moving faster than light. According to Einstein, this is an impossibility, thus proving that we still don't understand 137.
It appears that there are enormous differences of opinion as to the probability of a failure with loss of vehicle and of human life. The estimates range from roughly 1 in 100 to 1 in 100,000. The higher figures come from the working engineers, and the very low figures from management. What are the causes and consequences of this lack of agreement? Since 1 part in 100,000 would imply that one could put a Shuttle up each day for 300 years expecting to lose only one, we could properly ask "What is the cause of management's fantastic faith in the machinery?"
We have also found that certification criteria used in Flight Readiness Reviews often develop a gradually decreasing strictness. The argument that the same risk was flown before without failure is often accepted as an argument for the safety of accepting it again. Because of this, obvious weaknesses are accepted again and again, sometimes without a sufficiently serious attempt to remedy them, or to delay a flight because of their continued presence.
There are several sources of information. There are published criteria for certification, including a history of modifications in the form of waivers and deviations. In addition, the records of the Flight Readiness Reviews for each flight document the arguments used to accept the risks of the flight. Information was obtained from the direct testimony and the reports of the range safety officer, Louis J. Ullian, with respect to the history of success of solid fuel rockets. There was a further study by him (as chairman of the launch abort safety panel (LASP)) in an attempt to determine the risks involved in possible accidents leading to radioactive contamination from attempting to fly a plutonium power supply (RTG) for future planetary missions. The NASA study of the same question is also available. For the History of the Space Shuttle Main Engines, interviews with management and engineers at Marshall, and informal interviews with engineers at Rocketdyne, were made. An independent (Cal Tech) mechanical engineer who consulted for NASA about engines was also interviewed informally. A visit to Johnson was made to gather information on the reliability of the avionics (computers, sensors, and effectors). Finally there is a report "A Review of Certification Practices, Potentially Applicable to Man-rated Reusable Rocket Engines," prepared at the Jet Propulsion Laboratory by N. Moore, et al., in February, 1986, for NASA Headquarters, Office of Space Flight. It deals with the methods used by the FAA and the military to certify their gas turbine and rocket engines. These authors were also interviewed informally.
An estimate of the reliability of solid rockets was made by the range safety officer by studying the experience of all previous rocket flights. Out of a total of nearly 2,900 flights, 121 failed (1 in 25). This includes, however, what may be called early errors: rockets flown for the first few times, in which design errors are discovered and fixed. A more reasonable figure for the mature rockets might be 1 in 50. With special care in the selection of parts and in inspection, a figure below 1 in 100 might be achieved, but 1 in 1,000 is probably not attainable with today's technology. (Since there are two rockets on the Shuttle, these rocket failure rates must be doubled to get Shuttle failure rates from Solid Rocket Booster failure.)
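The doubling in the parenthetical remark is just the probability that at least one of two independent boosters fails, which for small per-booster rates is approximately twice the single-booster rate. A minimal sketch, using the figures quoted above:

```python
def shuttle_failure_from_srb(p_single):
    """Probability that at least one of two independent boosters fails,
    given the per-booster failure probability p_single."""
    return 1 - (1 - p_single) ** 2

# Per-booster rates discussed in the text: 1/25, 1/50, 1/100
for p in (1 / 25, 1 / 50, 1 / 100):
    print(f"per-booster {p:.4f} -> per-flight {shuttle_failure_from_srb(p):.4f}")
# For small p this is ~2p, which is why the text simply doubles the rates.
```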
NASA officials argue that the figure is much lower. They point out that these figures are for unmanned rockets, but since the Shuttle is a manned vehicle "the probability of mission success is necessarily very close to 1.0." It is not very clear what this phrase means. Does it mean it is close to 1 or that it ought to be close to 1? They go on to explain "Historically this extremely high degree of mission success has given rise to a difference in philosophy between manned space flight programs and unmanned programs; i.e., numerical probability usage versus engineering judgment." (These quotations are from "Space Shuttle Data for Planetary Mission RTG Safety Analysis," pages 3-1, 3-2, February 15, 1985, NASA, JSC.) It is true that if the probability of failure were as low as 1 in 100,000 it would take an inordinate number of tests to determine it (you would get nothing but a string of perfect flights, from which no precise figure could be drawn, other than that the probability is likely less than the reciprocal of the number of such flights in the string so far). But if the real probability is not so small, flights would show troubles, near failures, and possibly actual failures within a reasonable number of trials, and standard statistical methods could give a reasonable estimate. In fact, previous NASA experience had shown, on occasion, just such difficulties, near accidents, and accidents, all giving warning that the probability of flight failure was not so very small. The inconsistency of the argument not to determine reliability through historical experience, as the range safety officer did, is that NASA also appeals to history, beginning "Historically this high degree of mission success..."
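The point about a string of perfect flights can be made quantitative with a standard binomial bound (the so-called "rule of three"), which is my addition here, not part of the original argument: n straight successes only bound the per-flight failure probability at roughly 3/n with 95% confidence.

```python
def failure_prob_upper_bound(n_successes, confidence=0.95):
    """Upper bound on per-flight failure probability p after n straight
    successes: the largest p for which n failure-free flights would still
    occur with probability at least 1 - confidence, i.e. solve
    (1 - p)**n = 1 - confidence for p."""
    return 1 - (1 - confidence) ** (1 / n_successes)

# Even long strings of perfect flights cannot distinguish 1/1,000
# from 1/100,000:
for n in (25, 100, 1000):
    print(f"{n:5d} successes -> p < {failure_prob_upper_bound(n):.4f} at 95% confidence")
```

For 95% confidence this reduces to the familiar p ≲ 3/n, so even 1,000 perfect flights would only establish p < 1 in 300 or so, nowhere near 1 in 100,000.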
Finally, if we are to replace standard numerical probability usage with engineering judgment, why do we find such an enormous disparity between the management estimate and the judgment of the engineers? It would appear that, for whatever purpose, be it for internal or external consumption, the management of NASA exaggerates the reliability of its product, to the point of fantasy.
The history of the certification and Flight Readiness Reviews will not be repeated here. (See other parts of the Commission report.) The phenomenon of accepting for flight seals that had shown erosion and blow-by in previous flights is very clear. The Challenger flight is an excellent example. There are several references to flights that had gone before. The acceptance and success of these flights is taken as evidence of safety. But erosion and blow-by are not what the design expected. They are warnings that something is wrong. The equipment is not operating as expected, and therefore there is a danger that it can operate with even wider deviations in this unexpected and not thoroughly understood way. The fact that this danger did not lead to a catastrophe before is no guarantee that it will not the next time, unless it is completely understood. When playing Russian roulette, the fact that the first shot got off safely is little comfort for the next. The origin and consequences of the erosion and blow-by were not understood. They did not occur equally on all flights and all joints; sometimes more, and sometimes less. Why not sometime, when whatever conditions determined it were right, still more, leading to catastrophe?
In spite of these variations from case to case, officials behaved as if they understood it, giving apparently logical arguments to each other, often depending on the "success" of previous flights. For example, in determining if flight 51-L was safe to fly in the face of ring erosion in flight 51-C, it was noted that the erosion depth was only one-third of the radius. It had been noted in an experiment cutting the ring that cutting it as deep as one radius was necessary before the ring failed. Instead of being very concerned that variations of poorly understood conditions might reasonably create a deeper erosion this time, it was asserted that there was "a safety factor of three." This is a strange use of the engineer's term, "safety factor." If a bridge is built to withstand a certain load without the beams permanently deforming, cracking, or breaking, it may be designed for the materials used to actually stand up under three times the load. This "safety factor" is to allow for uncertain excesses of load, or unknown extra loads, or weaknesses in the material that might have unexpected flaws, etc. If now the expected load comes onto the new bridge and a crack appears in a beam, this is a failure of the design. There was no safety factor at all, even though the bridge did not actually collapse because the crack went only one-third of the way through the beam. The O-rings of the Solid Rocket Boosters were not designed to erode. Erosion was a clue that something was wrong. Erosion was not something from which safety can be inferred.
There was no way, without full understanding, that one could have confidence that conditions the next time might not produce erosion three times more severe than the time before. Nevertheless, officials fooled themselves into thinking they had such understanding and confidence, in spite of the peculiar variations from case to case. A mathematical model was made to calculate erosion. This was a model based not on physical understanding but on empirical curve fitting. To be more detailed, it was supposed a stream of hot gas impinged on the O-ring material, and the heat was determined at the point of stagnation (so far, with reasonable physical, thermodynamic laws). But to determine how much rubber eroded, it was assumed this depended only on this heat, by a formula suggested by data on a similar material. A logarithmic plot suggested a straight line, so it was supposed that the erosion varied as the .58 power of the heat, the .58 being determined by a nearest fit. At any rate, adjusting some other numbers, it was determined that the model agreed with the erosion (to a depth of one-third the radius of the ring). There is nothing much so wrong with this as believing the answer! Uncertainties appear everywhere. How strong the gas stream might be was unpredictable; it depended on holes formed in the putty. Blow-by showed that the ring might fail even though it was not, or was only partially, eroded through. The empirical formula was known to be uncertain, for it did not go directly through the very data points by which it was determined. There was a cloud of points, some twice above and some twice below the fitted curve, so erosions twice those predicted were reasonable from that cause alone. Similar uncertainties surrounded the other constants in the formula, etc., etc. When using a mathematical model, careful attention must be given to uncertainties in the model.
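The kind of fit described — a straight line on a log-log plot giving erosion ∝ heat^0.58 — can be sketched as follows. The data points here are fabricated for illustration (the report gives none), with factor-of-two scatter built in to mimic the cloud of points described:

```python
import math

# Hypothetical erosion-vs-heat data (the report gives no numbers); depths
# follow depth = heat**0.58 with deliberate factor-of-two scatter, mimicking
# the "cloud of points some twice above, and some twice below" the curve.
heats = [1.0, 2.0, 4.0, 8.0, 16.0]
scatter = [2.0, 0.5, 1.0, 0.5, 2.0]           # multiplicative errors
depths = [h**0.58 * s for h, s in zip(heats, scatter)]

# Straight-line fit in log-log space: log(depth) = b*log(heat) + log(a)
xs = [math.log(h) for h in heats]
ys = [math.log(d) for d in depths]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
    / sum((x - xbar) ** 2 for x in xs)
a = math.exp(ybar - b * xbar)

print(f"fitted exponent ~ {b:.2f}")   # recovers ~.58
# But any single prediction from the fitted curve is itself uncertain by a
# factor of about two, so erosion twice the predicted depth was plausible.
```

The fit recovers the exponent, but the residual scatter is the whole point: a curve that misses its own calibration points by a factor of two cannot certify a one-third-radius erosion as safe.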
During the flight of 51-L the three Space Shuttle Main Engines all worked perfectly, even, at the last moment, beginning to shut down the engines as the fuel supply began to fail. The question arises, however, as to whether, had it failed, and we were to investigate it in as much detail as we did the Solid Rocket Booster, we would find a similar lack of attention to faults and a deteriorating reliability. In other words, were the organizational weaknesses that contributed to the accident confined to the Solid Rocket Booster sector, or were they a more general characteristic of NASA? To that end the Space Shuttle Main Engines and the avionics were both investigated. No similar study of the Orbiter or the External Tank was made.
The engine is a much more complicated structure than the Solid Rocket Booster, and a great deal more detailed engineering goes into it. Generally, the engineering seems to be of high quality and apparently considerable attention is paid to deficiencies and faults found in operation.
The usual way that such engines are designed (for military or civilian aircraft) may be called the component system, or bottom-up design. First it is necessary to thoroughly understand the properties and limitations of the materials to be used (for turbine blades, for example), and tests are begun in experimental rigs to determine those. With this knowledge larger component parts (such as bearings) are designed and tested individually. As deficiencies and design errors are noted they are corrected and verified with further testing. Since one tests only parts at a time, these tests and modifications are not overly expensive. Finally one works up to the final design of the entire engine, to the necessary specifications. There is a good chance, by this time, that the engine will generally succeed, or that any failures are easily isolated and analyzed because the failure modes, limitations of materials, etc., are so well understood. There is a very good chance that the modifications to the engine to get around the final difficulties are not very hard to make, for most of the serious problems have already been discovered and dealt with in the earlier, less expensive, stages of the process.
The Space Shuttle Main Engine was handled in a different manner, top down, we might say. The engine was designed and put together all at once with relatively little detailed preliminary study of the material and components. Then when troubles are found in the bearings, turbine blades, coolant pipes, etc., it is more expensive and difficult to discover the causes and make changes. For example, cracks have been found in the turbine blades of the high pressure oxygen turbopump. Are they caused by flaws in the material, the effect of the oxygen atmosphere on the properties of the material, the thermal stresses of startup or shutdown, the vibration and stresses of steady running, or mainly at some resonance at certain speeds, etc.? How long can we run from crack initiation to crack failure, and how does this depend on power level? Using the completed engine as a test bed to resolve such questions is extremely expensive. One does not wish to lose an entire engine in order to find out where and how failure occurs. Yet, an accurate knowledge of this information is essential to acquire a confidence in the engine reliability in use. Without detailed understanding, confidence can not be attained.
A further disadvantage of the top-down method is that, if an understanding of a fault is obtained, a simple fix, such as a new shape for the turbine housing, may be impossible to implement without a redesign of the entire engine.
The Space Shuttle Main Engine is a very remarkable machine. It has a greater ratio of thrust to weight than any previous engine. It is built at the edge of, or outside of, previous engineering experience. Therefore, as expected, many different kinds of flaws and difficulties have turned up. Because, unfortunately, it was built in the top-down manner, they are difficult to find and fix. The design aim of a lifetime of 55 mission-equivalent firings (27,000 seconds of operation, either in a mission of 500 seconds or on a test stand) has not been obtained. The engine now requires very frequent maintenance and replacement of important parts, such as turbopumps, bearings, sheet metal housings, etc. The high-pressure fuel turbopump had to be replaced every three or four mission equivalents (although that may have been fixed, now) and the high-pressure oxygen turbopump every five or six. This is at most ten percent of the original specification. But our main concern here is the determination of reliability.
In a total of about 250,000 seconds of operation, the engines have failed seriously perhaps 16 times. Engineering pays close attention to these failings and tries to remedy them as quickly as possible. This it does by test studies on special rigs experimentally designed for the flaws in question, by careful inspection of the engine for suggestive clues (like cracks), and by considerable study and analysis. In this way, in spite of the difficulties of top-down design, through hard work, many of the problems have apparently been solved.
A list of some of the problems follows. Those followed by an asterisk (*) are probably solved:
Many of these solved problems are the early difficulties of a new design, for 13 of them occurred in the first 125,000 seconds and only three in the second 125,000 seconds. Naturally, one can never be sure that all the bugs are out, and, for some, the fix may not have addressed the true cause. Thus it is not unreasonable to guess there may be at least one surprise in the next 250,000 seconds, a probability of 1/500 per engine per mission. On a mission there are three engines, but some accidents would possibly be contained, and only affect one engine. The system can abort with only two engines. Therefore let us say that the unknown surprises do not, even of themselves, permit us to guess that the probability of mission failure due to the Space Shuttle Main Engine is less than 1/500. To this we must add the chance of failure from known, but as yet unsolved, problems (those without the asterisk in the list above). These we discuss below. (Engineers at Rocketdyne, the manufacturer, estimate the total probability as 1/10,000. Engineers at Marshall estimate it as 1/300, while NASA management, to whom these engineers report, claims it is 1/100,000. An independent engineer consulting for NASA thought 1 or 2 per 100 a reasonable estimate.)
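The 1/500 guess is simple arithmetic: roughly one surprise expected in the next 250,000 seconds of operation, with each engine running about 500 seconds per mission. A sketch of that calculation, alongside the divergent estimates quoted in the parenthesis:

```python
# One expected surprise in the next 250,000 seconds of engine operation,
# with each engine running about 500 seconds per mission:
p_surprise = (1 / 250_000) * 500
print(p_surprise)   # ~0.002, i.e. 1/500 per engine per mission

# The wildly divergent per-mission estimates quoted in the text:
estimates = {
    "Rocketdyne engineers":   1 / 10_000,
    "Marshall engineers":     1 / 300,
    "NASA management":        1 / 100_000,
    "independent consultant": 1.5 / 100,   # "1 or 2 per 100"
}
for who, p in estimates.items():
    print(f"{who}: about 1 in {round(1 / p):,}")
```

The spread between 1/100 and 1/100,000 — three orders of magnitude — is the disparity the next paragraph takes up.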
The history of the certification principles for these engines is confusing and difficult to explain. Initially the rule seems to have been that two sample engines must each have had twice the time operating without failure as the operating time of the engine to be certified (rule of 2x). At least that is the FAA practice, and NASA seems to have adopted it, originally expecting the certified time to be 10 missions (hence 20 missions for each sample). Obviously the best engines to use for comparison would be those of greatest total (flight plus test) operating time -- the so-called "fleet leaders." But what if a third sample and several others fail in a short time? Surely we will not be safe because two were unusual in lasting longer. The short time might be more representative of the real possibilities, and in the spirit of the safety factor of 2, we should only operate at half the time of the short-lived samples.
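The "rule of 2x", and the text's objection to ignoring short-lived samples, can be captured in a few lines. The sample lifetimes below are hypothetical numbers of my own, chosen only to illustrate the rule:

```python
def certifiable_time_2x(sample_times):
    """FAA-style 'rule of 2x' as described above: certify an operating time
    no greater than half the failure-free running time demonstrated by the
    sample engines.  Keying the rule to the *shortest*-lived sample reflects
    the text's point that short-lived samples may be the representative ones."""
    assert len(sample_times) >= 2, "need at least two sample engines"
    return min(sample_times) / 2

# Two long-lived "fleet leaders" (hypothetical numbers) suggest a long
# certified time ...
print(certifiable_time_2x([4000.0, 4200.0]))         # 2000.0 seconds
# ... but if a third sample fails early, the spirit of the rule says the
# certified time should shrink accordingly:
print(certifiable_time_2x([4000.0, 4200.0, 900.0]))  # 450.0 seconds
```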
The slow shift toward a decreasing safety factor can be seen in many examples. We take that of the HPFTP turbine blades. First of all, the idea of testing an entire engine was abandoned. Each engine has had many important parts (like the turbopumps themselves) replaced at frequent intervals, so that the rule must be shifted from engines to components. We accept an HPFTP for a certification time if two samples have each run successfully for twice that time (and of course, as a practical matter, no longer insisting that this time be as large as 10 missions). But what is "successfully"? The FAA calls a turbine blade crack a failure, in order, in practice, to really provide a safety factor greater than 2. There is some time that an engine can run between the time a crack originally starts and the time it has grown large enough to fracture. (The FAA is contemplating new rules that take this extra safety time into account, but only if it is very carefully analyzed through known models within a known range of experience and with materials thoroughly tested. None of these conditions applies to the Space Shuttle Main Engine.)
Cracks were found in many second-stage HPFTP turbine blades. In one case three were found after 1,900 seconds, while in another they were not found after 4,200 seconds, although usually these longer runs showed cracks. To follow this story further we shall have to realize that the stress depends a great deal on the power level. The Challenger flight was to be at, and previous flights had been at, a power level called 104% of rated power level during most of the time the engines were operating. Judging from some material data, it is supposed that at 104% of rated power level the time to crack is about twice that at 109%, or full power level (FPL). Future flights were to be at 109% because of heavier payloads, and many tests were made at this level. Therefore, dividing time at 104% by 2, we obtain units called equivalent full power level (EFPL). (Obviously, some uncertainty is introduced by that, but it has not been studied.) The earliest cracks mentioned above occurred at 1,375 seconds EFPL.
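The EFPL bookkeeping just described can be written out explicitly; the conversion below encodes the text's own assumption (time at 104% counts at half weight), and the sample numbers are chosen to reproduce the 1,375-second figure:

```python
def efpl_seconds(seconds_at_104, seconds_at_109=0.0):
    """Convert running time to 'equivalent full power level' (EFPL) seconds,
    under the text's assumption that time-to-crack at 104% of rated power
    is about twice that at 109% (full power level): time at 104% therefore
    counts at half weight."""
    return seconds_at_104 / 2 + seconds_at_109

# For example, 2,750 seconds run at 104% counts as 1,375 seconds EFPL --
# the level at which the earliest cracks described above occurred.
print(efpl_seconds(2750))   # 1375.0
```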
Now the certification rule becomes "limit all second stage blades to a maximum of 1,375 seconds EFPL." If one objects that the safety factor of 2 is lost it is pointed out that the one turbine ran for 3,800 seconds EFPL without cracks, and half of this is 1,900 so we are being more conservative. We have fooled ourselves in three ways. First we have only one sample, and it is not the fleet leader, for the other two samples of 3,800 or more seconds had 17 cracked blades between them. (There are 59 blades in the engine.) Next we have abandoned the 2x rule and substituted equal time. And finally, 1,375 is where we did see a crack. We can say that no crack had been found below 1,375, but the last time we looked and saw no cracks was 1,100 seconds EFPL. We do not know when the crack formed between these times, for example cracks may have formed at 1,150 seconds EFPL. (Approximately 2/3 of the blade sets tested in excess of 1,375 seconds EFPL had cracks. Some recent experiments have, indeed, shown cracks as early as 1,150 seconds.) It was important to keep the number high, for the Challenger was to fly an engine very close to the limit by the time the flight was over.
Finally it is claimed that the criteria are not abandoned, and the system is safe, by giving up the FAA convention that there should be no cracks, and considering only a completely fractured blade a failure. With this definition no engine has yet failed. The idea is that since there is sufficient time for a crack to grow to a fracture we can insure that all is safe by inspecting all blades for cracks. If they are found, replace them, and if none are found we have enough time for a safe mission. This makes the crack problem not a flight safety problem, but merely a maintenance problem.
This may in fact be true. But how well do we know that cracks always grow slowly enough that no fracture can occur in a mission? Three engines have run for long times with a few cracked blades (about 3,000 seconds EFPL) with no blades broken off.
But a fix for this cracking may have been found. By changing the blade shape, shot-peening the surface, and covering with insulation to exclude thermal shock, the blades have not cracked so far.
A very similar story appears in the history of certification of the HPOTP, but we shall not give the details here.
It is evident, in summary, that the Flight Readiness Reviews and certification rules show a deterioration for some of the problems of the Space Shuttle Main Engine that is closely analogous to the deterioration seen in the rules for the Solid Rocket Booster.
By "avionics" is meant the computer system on the Orbiter as well as its input sensors and output actuators. At first we will restrict ourselves to the computers proper and not be concerned with the reliability of the input information from the sensors of temperature, pressure, etc., nor with whether the computer output is faithfully followed by the actuators of rocket firings, mechanical controls, displays to astronauts, etc.
The computer system is very elaborate, having over 250,000 lines of code. It is responsible, among many other things, for the automatic control of the entire ascent to orbit, and for the descent until well into the atmosphere (below Mach 1) once one button is pushed deciding the landing site desired. It would be possible to make the entire landing automatic (except that the landing-gear lowering signal is expressly left out of computer control, and must be provided by the pilot, ostensibly for safety reasons), but such an entirely automatic landing is probably not as safe as a pilot-controlled landing. During orbital flight the system is used in the control of payloads, in displaying information to the astronauts, and in the exchange of information with the ground. It is evident that the safety of flight requires guaranteed accuracy of this elaborate system of computer hardware and software.
In brief, the hardware reliability is ensured by having four essentially independent, identical computer systems. Where possible each sensor also has multiple copies, usually four, and each copy feeds all four of the computer lines. If the inputs from the sensors disagree, depending on circumstances, certain averages or a majority selection is used as the effective input. The algorithm used by each of the four computers is exactly the same, so their inputs (since each sees all copies of the sensors) are the same. Therefore at each step the results in each computer should be identical. From time to time they are compared, but because they might operate at slightly different speeds, a system of stopping and waiting at specific times is instituted before each comparison is made. If one of the computers disagrees, or is too late in having its answer ready, the three which do agree are assumed to be correct and the errant computer is taken completely out of the system. If, now, another computer fails, as judged by the agreement of the other two, it is taken out of the system, the rest of the flight is canceled, and descent to the landing site is instituted, controlled by the two remaining computers. It is seen that this is a redundant system, since the failure of only one computer does not affect the mission. Finally, as an extra feature of safety, there is a fifth independent computer, whose memory is loaded with only the programs of ascent and descent, and which is capable of controlling the descent if there is a failure of more than two of the computers of the main-line four.
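The voting scheme just described can be sketched as a toy majority vote. This model is mine, not NASA's software; it omits the synchronization machinery and just shows a discrepant computer being dropped, and the abort condition when only two remain:

```python
def vote(outputs, active):
    """Toy sketch of the four-computer redundancy scheme: computers whose
    outputs disagree with the majority are dropped from the active set.
    `outputs` maps computer id -> result for this comparison step."""
    values = [outputs[c] for c in active]
    majority = max(set(values), key=values.count)
    failed = [c for c in active if outputs[c] != majority]
    still_active = [c for c in active if outputs[c] == majority]
    return majority, still_active, failed

active = [1, 2, 3, 4]
# Step 1: computer 3 produces a discrepant answer and is dropped:
result, active, failed = vote({1: 42, 2: 42, 3: 41, 4: 42}, active)
print(result, active, failed)   # 42 [1, 2, 4] [3]
# Step 2: a second failure leaves two computers; per the text, the mission
# is then canceled and descent flown on the remaining pair.
result, active, failed = vote({1: 42, 2: 40, 4: 42}, active)
print(result, active, failed)   # 42 [1, 4] [2]
if len(active) <= 2:
    print("abort mission: descend on remaining computers")
```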
There is not enough room in the memory of the main-line computers for all the programs of ascent, descent, and payload programs in flight, so the memory is loaded about four times from tapes by the astronauts.
Because of the enormous effort required to replace the software for such an elaborate system, and for checking a new system out, no change has been made to the hardware since the system began about fifteen years ago. The actual hardware is obsolete; for example, the memories are of the old ferrite core type. It is becoming more difficult to find manufacturers to supply such old-fashioned computers reliably and of high quality. Modern computers are very much more reliable, can run much faster, simplifying circuits, and allowing more to be done, and would not require so much loading of memory, for the memories are much larger.
The software is checked very carefully in a bottom-up fashion. First, each new line of code is checked; then sections of code or modules with special functions are verified. The scope is increased step by step until the new changes are incorporated into a complete system and checked. This complete output is considered the final product, newly released. But there is also a completely independent verification group, which takes an adversary attitude to the software development group and tests and verifies the software as if it were a customer of the delivered product. There is additional verification in using the new programs in simulators, etc. A discovery of an error during verification testing is considered very serious, and its origin is studied very carefully to avoid such mistakes in the future. Such unexpected errors have been found only about six times in all the programming and program changing (for new or altered payloads) that has been done. The principle that is followed is that all this verification is not an aspect of program safety; it is merely a test of that safety, in a non-catastrophic verification. Flight safety is to be judged solely on how well the programs do in the verification tests. A failure here generates considerable concern.
To summarize then, the computer software checking system and attitude is of the highest quality. There appears to be no process of gradually fooling oneself while degrading standards so characteristic of the Solid Rocket Booster or Space Shuttle Main Engine safety systems. To be sure, there have been recent suggestions by management to curtail such elaborate and expensive tests as being unnecessary at this late date in Shuttle history. This must be resisted for it does not appreciate the mutual subtle influences, and sources of error generated by even small changes of one part of a program on another. There are perpetual requests for changes as new payloads and new demands and modifications are suggested by the users. Changes are expensive because they require extensive testing. The proper way to save money is to curtail the number of requested changes, not the quality of testing for each.
One might add that the elaborate system could be very much improved by more modern hardware and programming techniques. Any outside competition would have all the advantages of starting over, and whether that is a good idea for NASA now should be carefully considered.
Finally, returning to the sensors and actuators of the avionics system, we find that the attitude toward system failure and reliability is not nearly as good as for the computer system. For example, a difficulty was found with certain temperature sensors sometimes failing. Yet 18 months later the same sensors were still being used, still sometimes failing, until a launch had to be scrubbed because two of them failed at the same time. Even on a succeeding flight this unreliable sensor was used again. Again, the reaction control systems, the rocket jets used for reorienting and control in flight, are still somewhat unreliable. There is considerable redundancy, but a long history of failures, none of which has yet been extensive enough to seriously affect a flight. The action of the jets is checked by sensors: if a jet fails to fire, the computers choose another jet to fire. But the jets are not designed to fail, and the problem should be solved.
If a reasonable launch schedule is to be maintained, engineering often cannot be done fast enough to keep up with the expectations of originally conservative certification criteria designed to guarantee a very safe vehicle. In these situations, subtly, and often with apparently logical arguments, the criteria are altered so that flights may still be certified in time. They therefore fly in a relatively unsafe condition, with a chance of failure of the order of a percent (it is difficult to be more accurate).
Official management, on the other hand, claims to believe the probability of failure is a thousand times less. One reason for this may be an attempt to assure the government of NASA perfection and success in order to ensure the supply of funds. The other may be that they sincerely believed it to be true, demonstrating an almost incredible lack of communication between themselves and their working engineers.
In any event this has had very unfortunate consequences, the most serious of which is to encourage ordinary citizens to fly in such a dangerous machine, as if it had attained the safety of an ordinary airliner. The astronauts, like test pilots, should know their risks, and we honor them for their courage. Who can doubt that McAuliffe was equally a person of great courage, who was closer to an awareness of the true risk than NASA management would have us believe?
Let us make recommendations to ensure that NASA officials deal in a world of reality in understanding technological weaknesses and imperfections well enough to be actively trying to eliminate them. They must live in reality in comparing the costs and utility of the Shuttle to other methods of entering space. And they must be realistic in making contracts, in estimating costs, and the difficulty of the projects. Only realistic flight schedules should be proposed, schedules that have a reasonable chance of being met. If in this way the government would not support them, then so be it. NASA owes it to the citizens from whom it asks support to be frank, honest, and informative, so that these citizens can make the wisest decisions for the use of their limited resources.
For a successful technology, reality must take precedence over public relations, for nature cannot be fooled.