J. Hans Jensen (1965) - Change in meaning of the term 'elementary particle' (German presentation)

Ladies and Gentlemen! My colleague, Mr. Mecke, put me somewhat on the spot with his phone calls, because I was now in the difficult situation of having to prepare a lecture for an auditorium in front of which I was not accustomed to speak. I was not able on such short notice to draw up an adequate report from my research field, so I had no choice but to ask Mr. Mecke for permission to reach into my desk drawer and largely base my lecture on the manuscript of a presentation I gave a few weeks ago at the anniversary celebration of the Heidelberg Academy. I must therefore ask the physicists among you who attended the Heidelberg lecture to bear with me today should I mention too many trivialities. Back then I also thought it would be better to focus not so much on the newest facts and realizations that came to light over the last few years in the world of elementary particles, but instead to reflect on how these concepts were formed. And I find this to be a particularly impressive example of how all of our concepts in the field of physics, and not just the concepts, but also our ways of thinking, must constantly be corrected by and adapted to experience, and how little a priori knowledge has in fact remained in today's description of nature. The topic I had chosen was "The change in meaning of the term 'elementary particle'", and this change in meaning comes about because, in the development of physical terminology, it is common practice to keep outdated names and words but at the same time to give them a different, more precise definition, and to free the terms thus retained of subtle associations to the greatest extent possible. Naturally, when discussing such a topic, I had to spend a large part of my lecture on what you could call historical observations. However, there are many reasons why I have to forego explaining the origin of the terms "element" and "particle" in ancient thinking, not least because we are short on time.
Unfortunately also because I never had the fortune of receiving a classical humanistic education, which is why, in spite of all my efforts, my attempts to fully grasp the ways of thinking in ancient times remain those of an amateur. Terms such as particle or atom most likely arose from conclusions inferred by reflecting on very simple observations: when you take apart complicated structures, e.g. when you break up a piece of fruit or a sacrificial animal, you end up with individual parts that fundamentally differ from each other and also from the structure as a whole. Yet when you make a drop of mercury burst into smaller droplets, all of these droplets, apart from their size, appear to be completely identical to the original drop. That gave rise to the obvious question whether it was possible to perpetuate this division process indefinitely and keep ending up with identical particles, or whether the process ends when you reach minute, non-divisible particles, the atomoi. In the ancient world, the art of experimenting, and most likely also the ability to formulate questions, was not yet sophisticated enough to allow thinkers to actively test such questions. It was not until the second half of the previous century that scientists began to provide substantial answers and develop processes to count atoms, and, particularly in the first half of this century, to determine their diameters. It might be helpful to point out that in the ancient world the conceptualization of atoms gave rise to some very peculiar speculations. Namely, if matter is supposedly made up of atoms, then these atoms must be separated from each other by a void. This question of emptiness kept resurfacing in antiquity: how could this "void", the "pure nothingness" or the "non-existing", possibly exist, meaning how could it be of importance to us, to the world that surrounds us?
We will later see how modern-day physics has learned to give a very substantial, concrete and highly unexpected reply to this dilemma, if you wish to call it that. I have even less time to elaborate on the development that led to the modern term chemical element. It seems that in today's concept of elementary particles, the term "elementary" is once again converging in a sense with ancient ways of thinking. I will also revisit this topic later on in my lecture. In my opinion, one of the fundamental, most important steps with regard to more precisely defining the concept of atoms in the physical understanding of matter took place in the age of Isaac Newton, and please allow me to read out some passages from his book entitled "Opticks", published in the year 1704. This is one of the few works he wrote in English. In the roughly 60-page annex to this work, which he called "Queries", Newton discusses a very large number of different phenomena, which we would nowadays assign to the field of physical chemistry, and then he summarizes his conclusions in a few sentences, which I - well, should I read them out in English - I could perhaps start by reading out the abridged translation, meaning the German version. So he discusses all of these facts, and then he continues as follows: "… massy, […] impenetrable, movable particles, […] as most conduced to the end for which he formed them; and that these primitive particles […] are incomparably harder than any porous bodies compounded of them; even so very hard, as never to wear or break in pieces; no ordinary power being able to divide what God made one in the first creation." And further: "… are to be placed only in the various separations and new associations and motions of those permanent particles." Up to this point, these are still largely concepts adopted from ancient thinking. But then the great discoverer of celestial mechanics and the law of gravity has his say. I will have to read out this part in English.
"… accompanied with such passive laws of motion as naturally result from that force, but also that they are moved by certain active principles." That is what forces used to be called back then. And now comes the important sentence, which he explains by emphasizing his opposition to the conventional understanding of his time, which still used the concept of "occult qualities" as adopted and passed on by scholasticism and the Renaissance: "… and to be the unknown causes of manifest effects […] uncapable of being discovered." And then Newton continues as follows: "… which it acts and produces manifest effects, is to tell us nothing: but to derive two or three general principles of motion from phenomena, and afterwards to tell us how the properties and actions of all corporeal things follow from those manifest principles, would be a very great step in philosophy." With these sentences, Newton, fully in line with his celestial mechanics, formulated a program which researchers in the field of physics systematically pursued in the subsequent centuries. Basically, just as gravitational forces act between inert celestial bodies, Newton's "massy particles" do not possess specific, unknowable occult qualities that cause the variety of their behaviour; instead, universal, explorable forces act between them, determining their motion and their cohesion. Discovering these laws became the task of physicists researching matter. Newton himself, by studying the phenomena discussed above, had already concluded that the forces between atoms at short distances should be much stronger than the gravitational forces, but that they rapidly become weaker as the distance grows. Now, this picture of matter is based on a dualism that is also characteristic of Newtonian celestial mechanics.
We have, on the one hand, the massy atoms, and on the other, the forces between them acting in the void, which determine the movement of the atoms - a peculiar dualism, which in the previous century was described using keywords like force and matter, and was heavily discussed even in non-scientific literature. This dualism was resolved in a highly unexpected way around the turn of the century, on the one hand by Faraday's field theory, and later by the concept of a complementary description of Nature developed by Born, Heisenberg and Bohr. And even though these things have already been mentioned here today, and will also be addressed tomorrow by much more qualified experts, I do have to quickly touch upon some aspects. Not until the 19th century did researchers ascertain that the forces Newton assumed to act between the atoms can be ascribed exclusively to electromagnetic phenomena, and that we must therefore build on the concepts derived from the art of experimentation and from the genius of Faraday and Maxwell, namely Faraday's concept of force fields, which was certainly one of the most fruitful concepts in physics. It was not until the start of this century, I believe, that this concept gradually gained acceptance on the continent, where the notion of "action at a distance" in the tradition of Neumann and Gauss had long remained dominant. The basic principle, as you all know, is that an electrical charge does not act on another through a void, but instead acts via an agent, albeit not a material one in the traditional sense of the word. Namely, every charge must be understood as the source of an electrical force field that suffuses space and whose effect can be detected by another charge. And this field must suffuse the space in reality, even when its presence cannot be detected by another charge at a particular moment.
The most important finding was that if one moves the source of such a field through space, more specifically if one accelerates it, then these fields can even become detached from their sources and travel through space; by studying this motion, it was also established that changes in the electrical fields are linked to changes in magnetic fields at the same location, and vice versa. However, the statement that these fields exist in reality only made sense once it showed new, verifiable physical consequences. These consequences emerged in the finding that the fields continuously distributed in space carry both energy and momentum; properties that, up until the turn of the century, were commonly ascribed only to material particles. And, as you know, all of these consequences were confirmed by the discovery made by Heinrich Hertz and all of the scientific work that followed. That is how experience gained through experiments turned the ancient dichotomy of the filled, "to pleres", and the void, "to kenon", into a picture of no-longer-material yet real fields existing everywhere in space, with their sources connected to atomic matter. However, as you all know and as we will hear tomorrow, this picture was subject to yet another fundamental change, this time initiated by Max Planck. That is because more subtle experiments showed that the electromagnetic fields travelling through space as waves do in fact transport energy and momentum in a quantum-like manner in certain experiments, and that these fields thereby also showed traits hitherto commonly described solely in the corpuscular picture.
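The statement that the fields distributed in space carry energy and momentum can be made precise in Maxwell's theory; in modern SI notation, which the lecture does not use, it reads:

```latex
% Energy density and energy flux (Poynting vector) of the electromagnetic field:
u = \frac{\varepsilon_0}{2}\,\vec{E}^{\,2} + \frac{1}{2\mu_0}\,\vec{B}^{\,2},
\qquad \vec{S} = \frac{1}{\mu_0}\,\vec{E}\times\vec{B}
% The field also carries momentum, with density
\vec{g} = \frac{\vec{S}}{c^2} = \varepsilon_0\,\vec{E}\times\vec{B}
```

It is precisely these quantities, transported by detached fields, that Hertz's experiments made verifiable.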
Well, this conflict was long considered to be an irritating paradox, until the researchers mentioned earlier taught us to adapt our concepts better to the empirical world, in the sense that neither the concept of corpuscles nor the concept of a space-filling field continuum is, on its own, suitable for describing natural phenomena right down to the last detail. They are both necessary, both useful, in the sense that they belong together as mutually limiting but also complementary concepts in a closed description of the phenomena. For this Bohr coined the term "complementary description of Nature". As you know, since then the reverse has also been established: not only do the electromagnetic fields show corpuscular properties, but precisely what we used to define as corpuscles, such as electrons, also possesses inherent field properties. Tomorrow you will hear more on this topic, so I can be brief. What I find peculiar is that these formulations of complementary description apparently appear more familiar and plausible to the younger generation of physicists than Newtonian celestial mechanics with its action at a distance, about which one keeps getting asked. For our purpose, now that I will be talking about the actual topic of this lecture - unfortunately 20 minutes later than planned - suffice it to note that whenever the subject of elementary particles is addressed in what follows, these particles are not only sources of force fields with reciprocal interaction, but at the same time they themselves, in their function as elementary particles, also possess field properties as far as their laws of motion are concerned, and these field properties are even dominant in suitable experiments. Furthermore, it does not matter whatsoever whether we talk about elementary fields or elementary particles. They mean the same thing in Bohr's sense of the word. I would now like to revisit the subject of atoms.
Allow me to once again repeat the peculiar phrase in Newton's work: "no ordinary power being able to divide what God made one in the first creation." If by "ordinary power" he meant the technical means available in his time, then he was certainly right. It was not until the end of the past century, when researchers in the field of electrical engineering managed to harness in the laboratory electric voltages such as the one that occurs between a storm cloud and the Earth, that we were forced to realize that the atom is not the last unit either, but that it in fact has a structure. And I heard that this very development was illustrated in an impressive manner just yesterday, so that I can once again be brief. We still have the electrons and the atomic nuclei to talk about, the volume of the latter being trillions of times smaller than that of the atom as a whole. Well, but that still means that the more than 100 chemical elements known today correspond to more than 100 different atomic nuclei. And today it seems natural to ask ourselves whether these nuclei could possibly have a structure of their own and whether they are comprised of elementary building blocks. And what is interesting is that long before the atomic hypothesis was experimentally consolidated, Prout, an English natural scientist and also highly renowned doctor, put forth, albeit cautiously and under a pseudonym, in two papers published in 1815 and 1816 in the "Annals of Philosophy", the hypothesis that all chemical elements are derived from a primitive substance, which he called "prote hyle". He supported his hypothesis by pointing to the apparently integer relations between the atomic weights, which at that time were not yet precisely known. Now, as you know, this hypothesis has been fully confirmed. The first indication of a structure within the atomic nucleus was given to us by Nature itself, in the form of the natural radioactivity discovered and studied some 70 years ago by Becquerel and the Curies.
But from experiments with electromagnetically accelerated particles we then learned that the atomic nucleus consists of two building blocks. One is the nucleus of the hydrogen atom, the proton; the other an electrically neutral particle with an almost identical mass, the neutron. And now we come to the point: do we want to call the neutron an elementary particle? I still vividly recall a wide-spread debate from my university days, in the first few years following the discovery of the neutron: is the neutron perhaps in fact made up of an electron and a proton? This seemed to be supported by the fact, once all experimental discrepancies in the mass determinations were eliminated, that the neutron is heavier than the proton. It was also seemingly supported by the fact that a neutron can indeed disintegrate into an electron and a proton. But then we learned, especially by studying artificial radioactivity, that the opposite can also occur: a proton can just as well turn into a neutron and a positron. It therefore makes no sense to discuss whether a neutron is made up of an electron and a proton, or whether a proton is made up of a neutron and a positron. Instead, we need to say that in these decay processes the light particles - the electron and, as you know and as I will soon revisit, the neutrino - only really come into existence, are only really created in the act of decay. Accordingly, we can rightly consider the proton and the neutron as simply being two manifestations of one and the same elementary particle, of which one manifestation is unstable. During this decay, electrons are created in the same way as we have always assumed for light: its quanta, the photons, are surely not present in a glowing body before they are emitted in the illumination.
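In modern notation, the two mutually inverse decay processes just described read:

```latex
% Beta decay of the free neutron (lifetime of order 10^3 seconds):
n \;\to\; p + e^- + \bar{\nu}
% The inverse process, positron emission, occurs only for protons
% bound in a nucleus, since the free proton is the lighter particle:
p \;\to\; n + e^+ + \nu \qquad (\text{inside a nucleus})
```

Since each process can run in one direction or the other depending only on the energy balance, neither the neutron nor the proton can be regarded as composed of the other.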
We have thus distanced ourselves quite a bit from the original conceptualizations of elementary particles, because originally - which is why I quoted the Newtonian sentences earlier - stability was the primary feature traditionally associated with the concept of anything elementary. Later on we will see that there are particles far more short-lived than the neutron, which after all has a lifespan of about 1000 seconds, and that we nonetheless wish to consider these particles elementary. The next step leading us away from the naïve concept of particles was the realization that we must ascribe certain intrinsic properties to the elementary particles if we wish to characterize them. This wording at first sounds reminiscent of the occult qualities that Newton rebuked so strongly. We will see, however, that this is not the case; rather, these intrinsic properties can be very precisely defined and measured. Namely, apart from the electric charge that distinguishes a proton from a neutron, we also need to attribute to these particles, as an intrinsic property, a quantity that can be dynamically characterized and that is described by the term spin, as in classical mechanics or electrodynamics - similar to the spin of a billiard ball - without there being any point in saying that the particle really rotates in space. It is no longer possible to attach markers or anything similar to these particles in order to observe a rotation; but that is an external argument. The important thing is that the mathematical structure of these intrinsic properties is such that this rotation cannot be observed even in principle. Incidentally, this intrinsic spin is pretty much the only characteristic of the neutrino, which I mentioned earlier in connection with beta decay.
This uncharged, massless particle interacts so weakly with the matter of which our measuring equipment is made that its existence could not be verified empirically until about ten years ago, even though Pauli had already proposed it back in 1930 in a famous letter addressed to his colleagues attending a session in Tübingen, so as to bring order into the radioactivity-related phenomena that were largely unexplained at that time. It is only thanks to the advances made in experimental technology since then that it became possible to finally prove its real existence. Now, in the case of the neutrino, the intrinsic spin was pretty much the only characteristic of this elusive particle that Pauli was able to positively predict - apart from the fact that it transports energy. Almost all other characterizations can only be expressed as negations: it has no mass, it has no charge, no electromagnetic effect, etc. And we could thus, with regard to the neutrino, allude to a slightly altered version of a verse penned by Christian Morgenstern: "It is a spin, nothing more." The reason I have told you this is to highlight the different meanings intrinsic properties can take on. These intrinsic properties also made it subsequently possible to develop the concept of antiparticles - the positron, for instance, being the antiparticle of the electron. When joined, the two can completely annihilate each other. All of the particles' intrinsic properties disappear along with the particles themselves and are replaced by the mechanical properties - energy, momentum, angular momentum - that resurface in the radiation thereby emitted. These antiparticles are also created in pairs. I do not believe it is necessary to analyse the numerous, rather confusing accounts that have appeared in the media over the past few weeks following the discovery, or rather the detection, of the antideuteron.
So these antiparticles of protons, first predicted by Dirac, were discovered roughly ten years ago by a group of researchers in Berkeley, and in the meantime many, many other particles - almost too many - that had become hotly debated were found to have antiparticles as well. And the only really exciting thing here is that in the antideuteron two antiparticles were created simultaneously, which requires a large amount of energy and momentum, and that the transfer of energy and momentum to these two particles, the created antiproton and antineutron, was such that they even stayed together and could move through the measuring apparatus as an antideuteron, without immediately breaking apart again into an antiproton and an antineutron. Yes, then let us move on to the main problem. Up until the mid-'30s, the known building blocks of matter were the nucleon in its two manifestations, as well as the electron, which itself acts as a source of the electromagnetic field, just like the charged form of the nucleon. In addition, we knew of the quanta of electromagnetic radiation, the photons, and could finally rightly count the neutrino as an elementary particle as well. Back then, the only problem that seemingly still needed to be solved was the question regarding the nature of the forces that hold the protons and the neutrons together and form the nucleus from these building blocks. And scientists hypothesized that these forces do not act at a distance between the nucleons, but that they are transmitted through a field instead. This hypothesis and all its consequences will be presented tomorrow by Mr. Yukawa himself. Researchers knew - or rather, Yukawa concluded - that these forces must be significantly stronger than the electromagnetic force when the nucleons are within close proximity of each other.
And because of this short range, it became necessary to ascribe a finite mass to the quanta of this new force field that was now expected to complete the picture of matter - around 1/7 of the nucleon mass. Yukawa called the quanta of this field, which transmits the interaction of the nucleons, "mesons". And back then, physicists were actually convinced that by conducting experiments to study these mesons - these very quanta, which could be created in sufficient numbers using the right accelerators - it would be possible to obtain all the needed information about the nuclear forces, and that they would thereby be able to truly complete the picture of the structure of matter. Now, as you know, these mesons were initially discovered where the universe itself sends us very highly energetic projectiles: in cosmic radiation. And I do not wish to elaborate on the confusion that existed with regard to correlating the particles discovered in this cosmic radiation with the quanta of the nuclear forces. Many years later, it finally turned out that the particles scientists had initially believed to be the Pi mesons, the quanta of the nuclear force field, in the cosmic radiation were not Pi mesons at all, but only the decay product of the Pi meson. Later the Pi meson itself was also discovered in cosmic radiation. And I believe that the whole program into which physicists all over the world, wherever sufficient funds were available and other conditions were met, began investing great efforts in the post-war years - building particle accelerators in order to study these mesons - was founded on a firm belief: once we know these mesons, we can complete our picture of matter, we can then calculate all of the interactions and define them quantitatively, because in principle all chemical effects are already described by quantum mechanics and electrodynamics. Looking back now, I am always reminded of a verse from Goethe's Faust.
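The connection between the short range of the nuclear force and the finite mass of its quanta is Yukawa's relation; with the roughly known range of the nuclear forces one indeed obtains about 1/7 of the nucleon rest energy:

```latex
% A field whose quanta have mass m falls off exponentially with range r_0:
V(r) \sim \frac{g^2}{r}\, e^{-r/r_0}, \qquad r_0 = \frac{\hbar}{m c}
% With a range of about 1.4 fm (a typical value, not quoted in the lecture):
m c^2 = \frac{\hbar c}{r_0} \approx \frac{197\ \mathrm{MeV\,fm}}{1.4\ \mathrm{fm}}
      \approx 140\ \mathrm{MeV} \approx \tfrac{1}{7}\, m_N c^2
```

This is the estimate behind the "around 1/7 of the nucleon mass" just mentioned.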
When Faust journeys up the Brocken Mountain on Walpurgis Night and delivers the famous line: "There many a riddle must be solved." To which Mephistopheles icily replies: "But many a puzzle's knotted so." That was exactly the result of developing particle accelerators for these experiments. Namely, in addition to this meson, the Pi meson, which was believed to transmit the nuclear forces, it turned out that there was a large, large number of other particles in Nature, which we had by then already seen under these conditions, some of them also in cosmic radiation, and this opened up a whole new field of activity. First, back to the meson. As you know, the meson exists as a neutral, as a positively and as a negatively charged meson. That is its simplest intrinsic property. Furthermore, experiments have clearly shown that it does not have an intrinsic spin. In contrast, it does have a different intrinsic property, which was initially a source of peculiar unease for the physicists conducting the experiments - a property that can be described not so much in terms of the particles as in terms of the properties of the field corresponding to the particle, the Yukawa force field. This is the so-called intrinsic parity. Allow me to use this opportunity to tell you an amusing aside: if Dirac had been right with his hypothesis that free magnetic monopoles could possibly exist - if he had been right, or perhaps even is right - then this question of the intrinsic parity of particles would, by a hair's breadth, already have been present in the classical physics of fields and their sources. Namely, the magnetic pole - the pole, not the dipole - must have the opposite intrinsic parity to the electric elementary charge. Because you all know that under parity transformations the magnetic field behaves differently from the electric field. Furthermore, you know that magnetic dipoles behave like the magnetic fields.
Meaning that their plus/minus sign does not change when transitioning from a right-handed to a left-handed coordinate system. Yet if you write the dipole as a position vector multiplied by the pole strength, then you will see that the sign of the position vector changes, meaning that the sign of the magnetic monopole must also change when transitioning from the right-handed to the left-handed coordinate system. So, basically, all of this would have already been present in classical electrodynamics if there had been not only magnetic dipoles but also magnetic poles. Now, we were forced to ascribe this property to the Pi meson: it, too, had to behave as a magnetic monopole would have to behave. This in turn was imposed by a series of experiments, the details of which I cannot go into at this time. So, as I said, another one of the setbacks with regard to structuring the concept of elementary particles is the short lifespan of the meson, of this quantum that transmits the nuclear forces. Namely, it can decay again, and this has nothing to do with the nuclear forces, nothing to do with the interactions with the nucleons. A neutral meson decays within 10^-16 seconds into hard gamma radiation. A charged meson has a lifespan of 10^-8 seconds and, strangely enough - and this is where the confusion starts - it has two possible forms of decay. Very rarely - in one in ten million of all cases, almost in one in one hundred million of all cases - the end product is an electron and a neutrino directly. Otherwise, in the vast majority of cases, the end product is a particle that in many, many respects behaves exactly like an electron but has a mass 207 times greater, and which can in turn decay into an electron and a neutrino and an antineutrino. This intermediate product, the Mu meson, surely remains one of the most mysterious particles, and one of our most difficult tasks is to understand its role in the plan of creation. I don't know, Mr.
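The monopole-parity argument sketched above can be summarized in formulas; under the spatial reflection P:

```latex
% Under parity P: \vec{r} \to -\vec{r}, the electric field is a polar
% vector, the magnetic field an axial vector:
P:\quad \vec{E} \to -\vec{E}, \qquad \vec{B} \to +\vec{B}
% A magnetic dipole \vec{m} = g\,\vec{d} (pole strength g times the
% separation vector \vec{d}) transforms like \vec{B}, so \vec{m} \to +\vec{m},
% while \vec{d} \to -\vec{d}. Consistency therefore forces the pole
% strength to change sign:
g \;\to\; -g
```

The magnetic pole strength would thus carry an intrinsic parity opposite to that of the electric charge, exactly as the lecture argues.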
Heisenberg might disagree somewhat. It did reach a factor of 207. Yet as I just said, apart from this Mu meson puzzle, the hope - in line with the Newtonian program - of now using the electron, the neutrino and the Pi meson to obtain all the data needed to complete the picture of matter did not come true, because refined forms of experimentation brought up so many other particles. What was very peculiar - or rather, very remarkable - was that the first of these new particles was not in fact detected using artificial accelerators, but by a Manchester-based group studying cosmic radiation, most of them not in the bubble chamber but in the Wilson cloud chamber. Butler and his colleagues from the Blackett group in Manchester were the first to observe cases in which either a charged or an uncharged high-energy particle collided with a nucleus and caused a nuclear transformation, and then at a certain distance (though apparently in direct correlation) they also detected the traces of two charged particles, one proton and one Pi meson, for example. And if you take the total momentum carried by these two particles and extend it backwards, you end up exactly at the centre of this reaction. This shape is the reason - hmm, I should have drawn this diagram the other way around - why these particles are called V particles; just to give them a name to work with. I still remember very clearly that I happened to take part in a seminar in Pasadena, at which people from Anderson's group had systematically studied the generation of these particles and the relative frequency with which this group of particles, which leaves such a V-shaped trace, was detected under predefined conditions - known intensities of cosmic radiation, known thickness of the material in which these particles can be produced, etc.
And I remember so very vividly how Feynman, a very temperamental man, suddenly leapt up. The creation process had to consist of nucleons or available Pi mesons interacting in such a way as to create such an uncharged particle, which in turn could then decay into a Pi meson and a proton. And that brings us to the following: if these particles could disintegrate into a proton and a Pi meson, then the probability of their creation would have to be determined by the same interaction as the decay rate. That was the assumption. Then it turned out, however, that they are created so frequently that, if one conversely calculated the decay rate from this, the particle would already have to decay up here at the nucleus and could not travel a long distance of several centimetres. Feynman was extraordinarily temperamental and exclaimed: "This is impossible! It is simply inconsistent." Then there was a long discussion - of course Feynman was not the only one to voice this opinion - that lasted almost six months, as far as I know, until Pais suggested completely decoupling this decay process from the creation process by means of a very peculiar postulate, which, however, later proved to be completely true. Namely, occasionally one saw not one but two such V's, in which the second particle would decay into Pi+ and Pi-, for example - again a V particle, which did not decay into a proton and a pion, however, but into two pions. And Pais was the first to formulate the hypothesis that the creation process is perhaps coupled to a specific condition: that these two particles must always be created in pairs.
If they were still together, they could meet again and destroy each other. Yet once created, they drift far apart, and so these are two different types of particle, each of which no longer finds a partner with whom it could fulfil this condition of interacting in pairs in order to decay again; the decay is thus completely decoupled from the creation process. It is probably typical for modern-day physics that this postulate was immediately scrutinized closely. These particles were eventually named V particles; later the naming was turned around, and those that are always neutral and decay into at least a proton were called Lambda particles. The other particle was called a K particle, in this case a K0, which can only decay into particles without spin, or ultimately into light quanta, so that the total spin is an integer in any case. Now, it is typical that scientists immediately tried to express this principle quantitatively. And it was one of Mr. Heisenberg's colleagues, Nishijima, or at least a temporary colleague of Mr. Heisenberg, as well as Gell-Mann in Pasadena, who said that if this is supposed to be such a clearly formulated principle, then we can characterize this law by another property intrinsic to all elementary particles, assigning them simple numbers, positive and negative. This intrinsic property was named "strangeness", and it is chosen so that it is, first of all, an integer, positive or negative; for all hitherto known particles, for the protons, for the pions, for the nucleons, this strangeness should be equal to zero. Then comes an initially arbitrary assignment, for example giving this particle a strangeness of -1 and this other particle a strangeness of +1, so that a law of conservation applies to this intrinsic property, just as there is a law of conservation for the charges.
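As a worked illustration (not part of Jensen's own wording, but using the standard assignments he alludes to): the strangeness bookkeeping of pair creation and the subsequent decay can be written out as follows.

```latex
% Associated production (strong interaction): strangeness is conserved
\pi^- + p \;\longrightarrow\; \Lambda^0 + K^0,
\qquad S:\; 0 + 0 \;=\; (-1) + (+1) \;=\; 0

% Decay of the isolated Lambda: strangeness changes by one unit,
% so the decay can proceed only via the weak interaction and is slow
\Lambda^0 \;\longrightarrow\; p + \pi^-,
\qquad S:\; -1 \;\neq\; 0 + 0
```

Because the strong creation process conserves S, the Lambda (S = -1) can only be produced together with a K0 (S = +1), while the decay of either particle on its own violates S and must proceed through the far weaker interaction: exactly the decoupling of creation and decay that Pais postulated.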
If the zero point is shifted slightly, this intrinsic property is meanwhile often characterized directly as hypercharge. Now, I fear I am almost out of time. It turned out that these various, frequently studied, newly discovered particles could be arranged with regard to two new intrinsic quantum numbers, on the one hand the parity, that is, the transformation behaviour of the fields or of the assigned particles when passing from a right-handed to a left-handed coordinate system, and on the other hand this peculiar quantum number, the strangeness, into a schematic outline into which all the reactions could initially be inserted in such a way that no internal contradictions arise. No internal contradictions of the sort that I characterized earlier, and to which Feynman reacted in such a distinctive way. Strangeness is thus by no means an occult quality any longer. Well, I could go on and on in this vein. So there are meanwhile particles that need to be characterized using such quantum numbers. This means that we now have a large number of elementary particles, a number almost as large as the number of chemical elements known at the turn of the century, but which, using these few terms that can be formulated so precisely, can already be organized in a classification scheme, in a manner similar to the first attempts at classifying the chemical elements. There is hope that by using these terms, as well as this classification scheme (which is still largely qualitative in nature), it will become possible to make quantitative predictions about the relative frequency with which this or that particle is created when one bombards a nucleus with a particular type of particle, and so on.
This means that thanks to the concept of nucleons and antinucleons we got rid of the diversity of nuclei; yet all of these forces between the particles, also between the nucleons that make up the nucleus, are determined not only by this one force field, the Yukawa pion field, but also by the abundance of force fields that correspond to these "strange particles". Furthermore, I would first like to point out, precisely in connection with this concept of complementarity, that there are always particles, that forces act between the particles, that these forces, according to our programme, are apparently transmitted through fields, yet that these fields have quanta, and that these quanta lead to new particles between which interactions can once again exist, so that this could lead to an infinite regress. Now, the fact that very strong interactions do in fact also exist between the pions themselves, meaning between the pi mesons themselves, and also between the strange particles, has been experimentally verified. In particular, it is possible to create, temporarily, states of which one can later say with certainty, in accordance with the conservation of energy and momentum, that these pions temporarily existed as particles. If they then fly apart again only as pions, this must have been caused by forces present between the pions. So these forces do exist. One might therefore fear that there is no stopping it: these forces must be transmitted through fields again and again, these fields contain quanta, and there are forces between these quanta. And this is precisely the point of the matter, namely that Bohr's concept of complementarity already contains within itself the possibility of closing this regress. Because among the numerous particles discovered in this way, there are always already some whose assigned fields can transmit the interaction between the other particles.
And that is why this regress, this search for new particles, need not be infinite. In addition, perhaps we should also embrace the tradition of the Ionian natural philosophers, the new way in which questions were formulated back in the Ionian age. Unfortunately I cannot pursue this here; among other things, philological barriers stand in my way. Naturally we cannot adopt the early Ionian answers, meaning the various answers provided by the Ionian natural philosophers. Nevertheless, their dream has remained alive everywhere in physics, especially in light of this abundance of particles: the search for the primitive field from which all these other fields could be understood according to one unified principle. But I will leave it to the more qualified people present here today to talk about that topic in more depth.

J. Hans Jensen (1965)

Change in meaning of the term "elementary particle" (German presentation)


Comment

Jensen, who studied physics, mathematics, physical chemistry, chemistry and philosophy in Hamburg and Heidelberg, later worked as a theoretical physicist, investigating among other things the structure of atomic nuclei. In his 1965 Lindau lecture, given merely two years after he received the Nobel Prize, he presents an overview of the historical change in meaning of the term "elementary particle". This change of meaning was profound indeed, despite the fact that the definition of the term has remained practically unchanged since ancient times: an elementary particle cannot be divided or broken up any further and hence represents a basic building block of the matter in our universe.
Since the beginning of the 20th century, however, scientists developed impressive skills in breaking up particles which had previously been considered elementary. And so quite a few particles had to be disqualified from the "elementary" group. The first of them was the atom itself (the word atom translates to "the indivisible" in ancient Greek), which was shown to consist of electrons, protons and neutrons in the first decades of the 20th century. What seems unfortunate from the perspective of the particle was usually very fortunate for the scientists involved: various discoveries in the field of elementary and subatomic particles were rewarded with Nobel Prizes (for an overview please refer to the Mediatheque Topic Cluster Subatomic Particles). And the quest is still ongoing. Just one year before Jensen's talk (and unmentioned by him in his lecture), the Higgs boson was postulated as a new elementary particle. In fact, to date, the Higgs boson has remained the only widely accepted elementary particle that has never been detected experimentally. Currently (02/2012) more than 10,000 international scientists are working on the ATLAS and CMS experiments of CERN's Large Hadron Collider (LHC) to change that.
Looking at the scientific developments since Jensen's Lindau lecture, the rather philosophical questions he raises towards the end of his talk seem even more relevant. Will there be a never-ending regress? Will we, by building ever more powerful particle accelerators, go on to discover heavier and heavier elementary particles without limit? Jensen evidently harboured doubts about the more optimistic beliefs of some of his contemporaries, who hoped that a comprehensive and closed theory of elementary particles could be found. From today's perspective, however, the optimists appear to have been right, since the Standard Model, with only three families of quarks and leptons, remains so successful.

David Siegel
