In 1956, Werner Heisenberg participated in the second Lindau meeting on physics with a lecture related to his first one, given in 1953. The topic of both lectures is the quantum theory of elementary particles. But the audiences at the two meetings were very different! In 1953, the audience consisted mainly of professional physicists, while the audience in 1956 was dominated by students and young researchers. The change in the audiences reflects the influence of Count Lennart Bernadotte as chairman of the organizing committee. It was his strong belief that most participants should be students and young researchers. This was pointed out in the letters of invitation to the Nobel Laureates, so for his 1956 lecture Werner Heisenberg was given a task different from the one of 1953. Listening to the lecture, one can hear how brilliantly Heisenberg solves the problem of addressing both the younger part of the audience and the older physics colleagues. Very clearly, he first describes the problems that appear when one tries to quantize the relativistic wave equations resulting from a unification of quantum mechanics with the special theory of relativity. Some of the problems of the resulting quantum field theories appear as infinities, and Heisenberg tells the story of how these infinities were mastered through the technique of renormalization by the trio of Tomonaga, Schwinger and Feynman, the three recipients of the 1965 Nobel Prize in Physics. After having introduced the scattering matrix and some of its properties, he then turns to his own research group in Göttingen and their attempts to formulate a unified theory of elementary particles. Even though we know today that Heisenberg never reached his goal, it is interesting to listen to his description of the ongoing work in this lecture and to compare it with both his 1953 lecture and the two lectures that he gave on the subject later on, in 1959 and 1962.

Anders Bárány


Ladies and Gentlemen, over the last years a great deal of new experimental material has been collected about elementary particles. Along with the long-known building blocks of matter, the electrons, protons and neutrons, which we have already known for between a quarter and half a century, we are today familiar with a host of similar structures: around 25 new, different types of elementary particles which in many of their properties appear similar to the elementary building blocks of matter, but which are generally capable of existing only for a very short time, and which frequently decay radioactively after, shall we say, a hundred-millionth of a second. This wealth of experimental material about elementary particles presents theoretical physicists with the task of investigating, and finally formulating in mathematical terms, the natural laws that determine the structure, in other words, the existence and properties, of elementary particles. In my lecture today I would first like to deliver a critical analysis of the developments in this field during roughly the last two and a half decades, and in the second part of my lecture I will then address an attempt that our institute in Göttingen has made over recent years to clarify these phenomena relating to elementary particles. I would like to come straight to the point here and say that, in this very complicated field, we are not for now concerned with finding definitive solutions; at best we can locate the place where such solutions should be sought. And this is to be the task of the second part.
But let us first consider the historical development. Already from a very early stage, from around the time when quantum mechanics was completed towards the end of the 1920s, it was very clear that we could only arrive at an understanding of elementary particles if quantum theory could be combined with the theory of wave equations, in other words, if one could quantise so-called field equations. That this is so can actually already be derived from Einstein's work from the early period of quantum theory, which showed that the application of quantum theory, for example to Maxwell's electromagnetic waves, results in the existence of light quanta. This example thus showed that the application of quantum conditions to a field theory explains the existence of the related particles. So the path ahead seemed clear: quantum theory needed to be applied to wave equations, and specifically to the various wave equations that were known at that time, in the hope of gradually moving forward to an understanding of elementary particles. At the start this path was also pursued without, or so it seemed, major difficulties. In the first works, I cite for example those of Jordan, Klein and Wigner, then of Pauli and Jordan, and of Dirac, the application of quantum theory to field equations appeared to be a clear mathematical process that initially also seemed to lead to reasonable results. But then quite rapidly, early in the 1930s, major difficulties emerged. Weisskopf was able to show that consistently applying these quantisation rules to waves yields infinite self-energies, namely infinite electron masses; in other words, the mathematics diverges here and fails to deliver any meaningful results at all. Despite this very major difficulty, a large number of reasonable physical results were obtained from these theories.
If, for example, one structured the theory as a perturbation theory and then broke off the expansion at a sufficiently early stage, that is, if the calculation was so to speak consciously made imprecise, then reasonable physical results emerged. I am thinking of Dirac's theory of radiation, the theory of resonance lines, dispersion and so on. Especially in the application of quantum theory to electromagnetism, many good results were achieved that have stood the test of experience, while at the same time it was clear that a consistent application of this mathematics would lead to nonsense, namely to infinitely large expressions, in other words, to diverging expressions. And at the start it was naturally assumed that these divergences, these mathematical infinities, were somehow due to clumsiness on the part of the calculation. For instance, it was hoped that a better perturbation theory, or a complete move away from perturbation theory, could lead to convergent, that is, reasonable results. In fact, however, during the 25 years that have since elapsed, no real progress has been achieved on this point, and we now have every reason to assume that we are dealing here with a very fundamental difficulty that can be solved only by fundamentally new methods, in other words, by moving away from the fundamental presuppositions of physics that have applied to date. That we are dealing here with a fundamental problem is something we can perhaps clarify in the following manner. We can say that a conflict arises here between quantum mechanics, whose characteristic feature may be regarded as the uncertainty principle, and the space-time structure of the special theory of relativity, about which we heard yesterday in the lecture given by Mr. von Laue.
And we are concerned here with the following conflict: the uncertainty principle of quantum mechanics states that if one wishes to determine the location of a particle or a system very precisely, one has to accept a large uncertainty in the momentum, and if one wishes to determine the point in time precisely, one has to accept a very large uncertainty in the energy, and vice versa. In the space-time structure of relativity theory, however, the most important postulate is that all effects propagate at most at the speed of light. There exists, so to speak, a sharp barrier between those future events that one can still influence and those that one can quite certainly no longer influence, precisely because a light wave can no longer reach the event point in question. Or, vice versa, in the past: about certain events we can in principle know something, because a light wave could still reach us from that point; about others it is quite certain that we can learn nothing, because in that case a light wave could no longer reach us. In other words, the distinction drawn in this manner between future and present, or between past and present, is absolutely sharp in the theory of relativity. And this was already expressed 50 years ago, when the theory of relativity was being developed: in the theory of relativity we cannot assume any actions at a distance, we can allow only so-called locality, in other words, an effect from a point to a neighbouring point, so that effects of this type propagate at the speed of light. Now, it is precisely such an absolutely sharp distinction between the present and the future, or between the present and the past, that, according to the uncertainty principle of quantum theory, necessitates an infinite uncertainty in the momentum or the energy, resulting in infinite momenta and energies.
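The two uncertainty relations invoked here can be written compactly; in modern notation (not used explicitly in the lecture) they read:

```latex
\Delta x \,\Delta p \;\gtrsim\; \hbar ,
\qquad
\Delta t \,\Delta E \;\gtrsim\; \hbar .
```

A sharp localisation at the light cone, with both Δx → 0 and Δt → 0, therefore forces Δp and ΔE to become infinite, which is precisely the conflict between locality and the uncertainty principle described above.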
So this without any doubt correctly describes the root of all these problems. Of course, such a simple argument is insufficient to prove that these things cannot really be sorted out. But the fruitless efforts of the last 25 years have made it very likely that it is impossible to eliminate the contradiction that exists here by, so to speak, a mathematical sleight of hand. So this difficulty has simply continued to exist for the time being, although the approximation methods of perturbation theory have nevertheless allowed many good results to be derived from this quantum field theory. I want to outline just a few important steps from the subsequent development. For example, this quantum theory of waves was applied to beta decay, which gave the Fermi theory of beta decay, thereby providing a good explanation of the distribution of energy and the relationship between energy and lifetime in beta decay. This theory of beta decay then also provided a distinction that became very important in the subsequent period, between two different types of interaction. And since I wish to return to this point later, I would like to briefly characterise these two types of interaction and simply introduce a name for now: I want to refer to one interaction as being of the first type, and to the other as being of the second type. The first type of interaction is characterised by the fact that when two elementary particles collide, the interaction either decreases with increasing energy of these particles, or remains constant, or at least does not increase. This also means, at the same time, that the interaction varies with the particles' wavelength in such a way that, with decreasing wavelength of the related waves, it either remains constant or diminishes. The interaction of the second type has the opposite property: if the energy of the interacting particles increases, the interaction also grows.
And this initially has the following important consequence in terms of physics: assume that, in the case of small energies, such an interaction is also small. Within the mathematical formalism of quantum theory this means that, for example in the collision or deflection of particles, a new particle, say a new light quantum, is always generated: one particle occurs in the first approximation of perturbation theory, two in the second, three in the third. As the approximations converge well, however, almost always only one particle is emitted. And this would remain the case however high the energies of the colliding particles, because for interactions of the first type the interaction does not increase with growing energy; only individual particles could then be generated in the emission. But if the interaction increases with growing energy, then it is quite certain that at sufficiently high energy we eventually reach a point where the interaction has become so great that the perturbation calculation no longer converges, meaning that the first, the second, the third and the hundredth approximation are all approximately equally large. And this means that it is then just as likely that as many as 20 particles are generated at once rather than just one; this depends on the energetic conditions, of course. So the interaction of the second type results in multiple particle generation. Meanwhile, a number of experiments have shown that interactions of the second type do occur in nature, for example in the generation of so-called pions from the collision between nucleons, that is, nuclear particles. Now I would like to mention a further, slightly different type of result. The differentiation between the two interactions dates from before the war, whereas the development I am about to discuss came somewhat later.
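In present-day terms (a gloss not found in the lecture itself), the two types of interaction can be distinguished by the dimension of the coupling constant: a dimensionless coupling, such as the fine-structure constant of electrodynamics, gives an interaction of the first type, while a coupling carrying a dimension, such as the Fermi constant of beta decay, gives one of the second type, since its effective strength grows with energy:

```latex
\text{first type:}\quad \alpha = \frac{e^2}{\hbar c} \approx \frac{1}{137}
\quad (\text{dimensionless}),
\qquad
\text{second type:}\quad G \sim [\text{energy}]^{-2},
\quad \text{effective strength} \sim G\,E^{2} .
```

On this reading, the growth of the dimensionless combination G E² with energy is what drives the breakdown of perturbation theory and the multiple particle production described above.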
One had, as I have already said, the impression that this fundamental contradiction between relativity theory and quantum theory could be bridged only through modifications to the theoretical foundations. This naturally gave rise to the question: well, what will then remain of the previous theory at all if we modify the foundations? And then it became clear that certain mathematical quantities do exist that one will always need in order to describe the experiments, and which consequently form, so to speak, a stable component of the existing theory. Physicists refer to this mathematical structure as the S-matrix or the scattering matrix, and this quantity can be explained as follows: when elementary particles collide, each collision potentially generates new particles, the colliding particles are deflected, and so on. And it is probably very difficult to describe in detail what occurs during each collision. It is quite certain, however, that one will have to describe which particles enter, and which particles exit, the collision. In other words, the asymptotic behaviour of the waves at infinity, namely both the incoming and the outgoing waves, must be presented in a secure mathematical form, and the mathematical quantity that performs this was referred to already in the earlier quantum theory as the scattering matrix. One could therefore study the mathematical properties of this scattering matrix, and from the relationships that resulted one could learn something about the behaviour of the quantities that really occur in experiments, including, for example, the cross sections, the deflection of particles, the forces and so on. With this, one had a mathematical basis for a theory of which one as yet had no knowledge, insofar as one knew that this theory would include at least such a scattering matrix; so there must be a general mathematical formalism that allows one to derive this scattering matrix.
And vice versa, the experimental physicist who studies the collision of elementary particles will, in the final analysis, be able to analyse his experiments in such a way that he writes down as the result the so-called matrix elements of this scattering matrix. Then, in the first years after the war, the following discovery brought about important progress in this whole area. Already before the war, Kramers had for the first time proposed the idea that the mass and the charge, for instance of the electrons in quantum electrodynamics, are affected by the interaction between the matter field and the radiation field. And this means that if one characterises the interaction between matter field and light field in the equations, and if one originally inserts in these equations a mass m0 or a charge e0 for the electron, then these equations, once the entire calculation is performed, will generate another mass or another charge. In fact, it even became clear that, if one calculates correctly, this mass and charge then become infinite, in other words, actually meaningless in terms of physics. Kramers had proposed the idea that this entire mathematical formalism should be reversed, so that, so to speak, the final mass and final charge that emerge at the end be identified with the physical constants, and the quantities m0 and e0 that one starts with be left indeterminate for the time being. Now this renormalisation programme, as we refer to it, was taken up with the greatest success after the war by Bethe, and Bethe was able to show that, if you turn things around in this way, the infinities that had previously hampered any progress in the theory disappear for the time being; you get convergent results, and indeed results that were extremely interesting in terms of physics.
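Schematically, the renormalisation idea sketched here can be written as follows (a modern shorthand, assuming nothing beyond what is said above):

```latex
m \;=\; m_0 + \delta m ,
\qquad
e \;=\; e_0 + \delta e ,
```

where δm and δe are the (formally divergent) contributions of the interaction: the physical mass m and charge e are fixed to their observed values, while the bare quantities m0 and e0 are left indeterminate, formally infinite.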
Meanwhile, the American physicists Lamb and Retherford had shown that the old formulas for the fine structure of the hydrogen lines were not exactly right, and that characteristic deviations existed here between experiment and the earlier theory, in other words, between the Sommerfeld formula and the Dirac theory. Precisely these deviations could now be accounted for through Bethe's renormalisation with the utmost accuracy, in fact with quite astounding precision: to a number of decimal places, the fine structure of the spectral lines could be calculated and found to be in accordance with experiment. This was a very major success, because it showed that quantum electrodynamics was actually much better than had originally been believed, or could originally have been hoped. And from this really major success a very far-reaching hope now arose, namely the hope that this renormalisation process would rid the theory of all the infinities that had proved so disruptive to date. Roughly the following idea was proposed: one will be able to distinguish between two types of theory, those in which the infinities can be absorbed completely into the renormalised masses and charges, and those in which this is not possible. The latter, however, were from the outset not taken into consideration for the elementary particles. And it then emerged that the differentiation between these two types of theory fell directly into line with the distinction between the two types of interaction, in other words, with what I referred to previously as interactions of the first and of the second type. Theories with an interaction of the first type can be converted into convergent theories through renormalisation; theories with interactions of the second type cannot. So, for the time being, one simply shrugged off back then the fact that this interaction of the second type clearly occurs in nature.
People were, nevertheless, not so sure about this, and formulated the hope that only theories of the first type exist in nature, that a theory of the first type reflects nature correctly, so to speak, and that this would remove the infinities and the mathematical problems. As a result of this development, the mathematical structure of this quantum field theory was investigated very precisely. I recall work done by Tomonaga, Schwinger, Feynman and many others. And we now have a much better overview of the structure of these entire theories than back then. But the result was that the hopes of which I have just spoken were quite unjustified. This is particularly thanks to the work of Pauli and Källén, who proved that this renormalisation process does not get us past the fundamental mathematical problems that we have known since 1930 in quantum field theory. It became clear in the work of Pauli and Källén, namely, that if one carries out this renormalisation process, at least in a simple case where one can really survey the mathematics, the renormalisation process results in a deviation from previous quantum theory, due to the fact that the interaction that stands in the Hamilton function has actually become imaginary. This then meant that the S-matrix, in other words, the scattering matrix, is not unitary, which in unmathematical language means that one would be forced to introduce negative probabilities, and this is, so to speak, logical nonsense and cannot be done. So theories of this type cannot be interpreted in terms of physics. Another way of expressing this problem, as set out in the work of Pauli and Källén, is that this type of renormalisation results in new states that Pauli and Källén refer to as "ghost states", because they do not behave rationally in terms of physics.
And these states modify the metric in the space of these states, in the so-called Hilbert space, in such a way that the metric becomes indefinite, and this again results in negative probabilities. So in other words: although the renormalisation process removed the infinities from the theory, it introduced negative probabilities, which is absurd in terms of physics and insufficient to interpret experience. The question as to whether this is the case in all such renormalisation theories, as it is in the simple Lee model that Pauli and Källén dealt with, is still open; here it has not yet been possible to develop the mathematics far enough, but it is probably at least likely. Now the result that emerged from these mathematical investigations was actually also quite pleasing in terms of physics, insofar as more recent experiments had meanwhile shown that quantum electrodynamics proves incorrect at large energies. Processes have meanwhile been observed where, for example, many gamma quanta arise from a single collision process. But if quantum electrodynamics were as one had hoped, in other words, with interactions of the first type alone, convergent mathematics and so on, then this should not occur at all. So the fact that at times many gamma quanta, that is, many light quanta, are generated at once in collision processes already proves that quantum electrodynamics is not at all how one previously imagined it. So, to this extent, the results of the mathematical analysis and the results of experiment fit well together, but only in the negative sense that there is something wrong with quantum electrodynamics. We can now ask: is this actually a very unsatisfactory situation, or a satisfactory situation, for theoretical analysis? Now, even if we assume that this hope had been justified, could we get from there to a theory of elementary particles?
And here we must actually reply straight away with a "No". Because then we would need to say something along these lines: in order to one day understand the masses of the elementary particles, one would jot down, as the physicist says, something like a big Hamilton function, in other words, a big expression for the interactions, in which one introduces a wave function for each type of elementary particle. And then one has some complicated interactions. And then one would prove that the theory can be renormalised only for quite particular masses of these elementary particles and for certain magnitudes of these interactions, thereby generating all the masses of the elementary particles. So this is how they would be explained. Now we can see immediately that one could only reach this goal through a detour through a quite monstrous set of mathematics, because we already know about 25 different types of elementary particles. And what the mathematics would look like when 25 different wave functions are introduced, anyone who has previously had anything to do with such mathematics can imagine. So this is clearly a hopeless approach. And I would like for this reason …, yes, perhaps I should first also say the following: over the past years, particularly after these problems occurred in renormalisation, there has consequently been an increasing tendency to be interested basically only in the scattering matrix, in other words, only in that which the experimental physicist directly delivers. And a number of interesting mathematical relations for this scattering matrix have been found, partly in connection with the causality requirement as it occurs in the theory of relativity, partly in connection with certain dispersion relations. But I do not wish to go into these details here.
I would just like to say, by way of criticism of this whole way of treating physics, that it looks roughly as if the following situation had existed in 1900: let us assume that, although the principles of quantum mechanics were known in 1900, nothing was known about the Bohr atomic model, and nothing was known about the Coulomb forces between electrons, protons and nuclei. Then one would probably have proceeded similarly to now: one would have introduced a wave function for the oxygen atom, another for the carbon atom, a third for the hydrogen atom and so on, in other words, constantly new wave functions for the various types of atom. One would then have been able to study the S-matrix, the scattering matrix for the collisions, thereby working out the kinetic theory of gases. One would have been able to discuss the Ramsauer effect as a resonance phenomenon, and so on. But it is quite clear that all such efforts would not have brought us any closer to the crux of the problem at all. Because the actual problem was to understand the structure of these atoms or, as we now know, to resolve them into a statement about electrons and nucleus. So in other words: in order to break through to the crux of the problem, it is clear that something else needs to be done, which brings me to the second part of my lecture. I would first like to discuss some principles according to which, as I believe, we now need to proceed. First, it seems to me quite certain that in such a theory one cannot start by introducing wave functions as primary quantities for any of these elementary particles, so, for example, the wave function for protons or neutrinos or electrons or the like. Because in the real theory of elementary particles, the particles should emerge as solutions of a system of equations; one cannot, therefore, simply plug them in.
In other words: if a wave function can be written down at all in this theory, which is naturally the question, but if this is the case, then it can only be a wave function for matter, whereby it is entirely undecided whether this matter manifests later as a proton, meson, electron and so on. It must therefore be a wave function and a wave equation simply for matter, and not one for a particular type of elementary particle. Secondly: the mass of the elementary particles must be a consequence of the interaction; in any case, it must be closely connected with the interaction. Thus it certainly makes no sense to write down an equation of linear character, which would express no interaction at all. Because mass is a consequence of the interaction, it will be most correct to write down an equation in which there is only interaction, and where we can hope that the mass arises as a consequence of the interaction. In other words, we will have to write a nonlinear wave equation for matter that looks as simple as possible. As simple as possible: for now, one has no real physical argument for this. We can only say that it has always been the case in physics to date that, in the final analysis, the basic equations were simple. Why nature has arranged things in this way is something that perhaps no physicist has ever fathomed, but the entire science of physics is based on the hope, or conviction, that this is possible in the end. And now, to turn to a solution of this type: we have, in Göttingen, investigated a particular equation which, although it is certainly simpler than the equation that leads to the real system of elementary particles, should already contain many elements of the real theory of elementary particles; this, at least, is the hope.
Now, the wave function of matter that is used here is of the so-called spinor type, meaning it belongs to half-integer spin, and this is required because nature includes particles with half-integer spins and with integer spins. If one wishes to explain both types of particle from one equation, one consequently needs to start with half-integer spin. So it must be an equation for a spinor wave function. And perhaps the simplest equation of this type is what I have written here on the blackboard. This equation is of the familiar Dirac type, γ^μ ∂ψ/∂x^μ, this would be, so to speak, the Dirac equation, and then comes a simple nonlinear term. Now, this equation as it stands here would initially be what we would call a classical wave equation, and it only becomes a quantum-theoretical equation through the process of quantisation, in other words, through the introduction of commutation relations. And this is where the actual difficulties start: for, as I have already said, when we quantise according to the normal rules of the game, this is where the infinities arise, or ghost states, or other mathematical contradictions. So, clearly, if we wish to lend such an equation any quantum-theoretical meaning at all, we need to modify the commutation relations and do something that is not done in normal quantum theory. And here we have been guided by the following point of view: the commutation relation is written one line below, perhaps not readable for all, but I can say in words what its essential aspect is. This commutation relation, namely the commutation between the wave functions at the two points x and x', must, if the causality requirement of the special theory of relativity is to be satisfied, have the property that it vanishes for spacelike distances between the points x and x'. Because no effects should be propagated there.
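The blackboard equation is only fragmentarily recorded in the transcript. A plausible reconstruction of an equation of the type described, the Dirac operator plus a simple nonlinear self-interaction governed by a fundamental length l, would be (this particular form is an assumption; the version Heisenberg's group published in 1958 carried additional γ₅ factors):

```latex
\gamma^{\mu} \frac{\partial \psi}{\partial x^{\mu}}
\;+\; l^{2}\, \gamma^{\mu}\, \psi \left( \bar{\psi}\, \gamma_{\mu}\, \psi \right)
\;=\; 0 .
```

Note that no linear mass term mψ appears: as required above, the mass is to arise as a consequence of the self-interaction rather than being inserted by hand.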
It should differ from zero for timelike distances, and now the question arises as to what should happen on the light cone itself. And this is where we immediately encounter difficulties. If we apply normal quantum mechanics, in other words, always considering only the transitions, for instance from the vacuum to the creation of a particle and back again, then one can prove that this commutation function becomes infinite on the light cone in such a way that the integral over it remains finite. In other words, as physicists say, a Dirac delta function arises on the light cone. And in the case of this type of nonlinear theory, this consequence of quantum theory already results in contradictions and difficulties, for the following reason: this commutation function, whose general character I have already depicted, should actually behave, for reasons I cannot go into here, like a solution of the classical wave equation that starts from a singular point. For if I start at some point with a singularity, then according to the classical wave equation and according to the theory of relativity, effects arise only in the future and past cones, while the wave function is zero for spacelike distances. So the commutation function should actually behave similarly to, in other words, in correspondence with, this propagation function. We often express this in simple words: "The commutator should be identical to the propagator". Propagator refers to this propagation function of the classical wave equation, and commutator refers to the commutation function.
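The causality requirement stated in words above can be sketched as follows (modern notation; for spinor fields it is the anticommutator that vanishes, while the transcript speaks simply of the "commutation function"):

```latex
\left\{ \psi(x),\, \bar{\psi}(x') \right\} \;=\; 0
\qquad \text{for spacelike separations } (x - x')^{2} < 0 ,
```

while for a free field the commutation function develops a singularity proportional to δ((x − x')²) on the light cone, which is the Dirac delta function referred to above.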
Now, if in the case of the nonlinear equation one requires that these two should agree, and I believe that one must require this, then it is impossible to achieve this with the commutation relations of normal quantum mechanics, for then the Dirac delta function must not occur on the light cone. One is then prevented from doing anything except introducing, in addition to the states that exist in terms of physics, non-physical states, in other words, to a certain extent ghost states, which ensure that this delta function does not occur on the light cone.

# Werner Heisenberg (1956)

## Problems in the theory of elementary particles (German presentation)

