by Joachim Pietzsch

"A new inhabitant of the heart of the atom was introduced to the world of physics today", the New York Times reported from the AAAS meeting in Pasadena in June 1931, "when Dr. W. Pauli Jr. of the Institute of Technology of Zurich, Switzerland, postulated the existence of particles or entities which he christened 'Neutrons'".[1] At a time when only protons, photons and electrons were known, this was the first public appearance of the particle which, after the discovery of the real neutron in 1932 by James Chadwick (Nobel Prize in Physics 1935), would become the neutrino. Wolfgang Pauli had already shared his suggestion with a smaller circle of acquainted scientists half a year earlier, in a letter from Zürich dated 4 December 1930. "Dear Radioactive Ladies and Gentlemen", he wrote to his colleagues who had gathered at a meeting in Tübingen, "I have hit upon a desperate remedy to save the ... law of conservation of energy ... namely, the possibility that in the nuclei there could exist electrically neutral particles, which I will call neutrons ... The continuous beta spectrum would then make sense with the assumption that in beta decay, in addition to the electron, a neutron is emitted such that the sum of the energies of neutron and electron is constant." [2]

A Daring Defense of the Law of Energy Conservation

The problem Pauli "desperately" strove to solve with his proposal had been puzzling physicists for almost two decades. It had to do with a major difference between the three types of radioactive emission, which Chadwick had first demonstrated in 1914. While the spectra of alpha particles (equivalent to helium nuclei) and gamma rays (highly energetic electromagnetic waves) were discrete, as expected, beta decay (which emits electrons) produced a continuous energy spectrum. That meant that the emitted electrons did not carry the same energy every time, a phenomenon that contradicted the fundamental law of energy conservation. Was this law perhaps not valid in the special case of beta decay? Even Niels Bohr seriously considered such a partial breakdown of energy conservation, much to the anger of the very outspoken Wolfgang Pauli, who struggled hard for a solution, as Rudolf Mößbauer (Nobel Prize in Physics 1961) vividly recalled in his Lindau lecture in 1982:


Rudolf Mößbauer (1982) - The world is full of neutrinos (German presentation)

Dear students, ladies and gentlemen. I would like today … I would like today to talk a little about neutrinos, a topic which has been very much at the focus of interest in physics in recent years. And would like, as someone who comes from Germany, to start with a few historical remarks. I won’t begin with the ancient Greeks, but somewhere around the year 1930 with Wolfgang Pauli, who is really the inventor of the neutrino. Inventor in the best sense of the word, at the time it was just a stopgap which later turned out to be correct. Let me quickly outline the problems that faced us in the 1930s. Radioactive beta decay had been observed, in which a nucleus of atomic number Z emits an electron and is transmuted into a new nucleus of atomic number Z+1. Since the original and the final nucleus are both characterised by a well-defined energy, one should assume that the electron, apart from its own mass, carries this energy difference, in other words that the emitted electron would have a sharply defined energy. But that was not the case, it was known at that time that the electron spectrum did not have a sharply defined energy, but that it was a whole spectrum ranging up to a maximal energy which corresponded with the expected sharply defined energy. That was a puzzle, understanding this spectrum, because it really looked as though the conservation of energy no longer applied, and in fact no less than Niels Bohr showed himself prepared to sacrifice the law of the conservation of energy in this special case. Wolfgang Pauli did not think much of this at all and he racked his brains a lot, and I would like perhaps to quote a letter which he wrote at the time, also to show the students something, that scientific progress mostly consists of false starts, retreats and a zigzag course, and that we only occasionally have the luck to find, or, I would almost say, to guess the right one. 
Wolfgang Pauli initially had assumed that, besides this electron here, something else was also emitted, that is a normal gamma quantum, and he believed that it had been overlooked in the experiment and he expressed himself on this in the following way, and I now quote from a letter that he wrote to Klein in February 1929. So, I quote: "... that gamma rays must be the cause of the continuous spectrum of the beta rays, and that Niels Bohr is completely on the wrong track with his remarks in this connection about a violation of the conservation of energy. I also believe that the experimenters who measured the energy made some kind of mistake and that they have so far missed the gamma rays simply as a result of their lack of skill. But I understand experimental physics too little to be able to prove this view and so Bohr is in the pleasant position, for him, to exploit my general helplessness in the discussion of experiments; he can invoke Cambridge authorities, incidentally without bibliographical references, and is able to fool himself and us however he likes." So much for the quote from Wolfgang Pauli. Now, in the following year Wolfgang Pauli had gained a little more faith in the assertions of the experimenters, who said they simply did not see any gamma rays, and then he invented the neutrino. Invented in the sense that he said that, besides the electron, another particle is emitted, a particle which he initially named the neutron. Later the name neutron was used for something else and this particle was then called the neutrino, from the Italian for "little neutron". So this neutrino was, so to speak, invented to save energy conservation, and this invention later turned out in practice to be right. But it still took until the year 1956 before Reines and Cowan provided direct evidence that this neutrino exists, in a direct experimental reaction that I have written down for you here and that I will discuss further.
Reines simply fired electron-neutrinos – this bar means antineutrinos, that is not so important now – at protons, which then produced a neutron and a positron, and showed, by detecting these two particles, that such a thing as a neutrino really exists. Today we believe we know three sorts of neutrinos: the electron-neutrinos, the muon-neutrinos and probably also the tau-neutrinos. The tau has already been seen, the associated neutrino not really yet, but we do believe that it exists. Neutrinos have, as I said earlier, a great significance at the moment in physics and I have briefly assembled the principal reasons for this here. First of all, they are very interesting, since they are solely subject to what is known as the weak interaction. Almost everything that we measure in physics today is subject either to the strong interaction, the interaction which is responsible for the stability of atomic nuclei, or at least the somewhat weaker electromagnetic interaction; whenever charges are involved there is an electromagnetic interaction. These are very strong interactions compared with what is known as the weak interaction, which is responsible for this radioactive decay up here and which one normally cannot see because the strong interactions I mentioned overshadow it completely. Only with the neutrinos is it the case that these other interactions are not there and that we can then study the weak interaction in isolation. A second important reason that we are so interested in neutrinos in these years is that the neutrinos are responsible for the reactions which take place in the Sun, that they are essential for these reactions. In the Sun, protons, hydrogen nuclei, are somehow fused together and you finally end up with helium. This kind of fusion of protons, which includes a transformation of protons into neutrons, only works if neutrinos exist. This means that our existence is ultimately dependent on the existence of neutrinos.
Further reasons are that after the great successes that Glashow, Weinberg and Salam had a few years ago, efforts are being made today to unify the weak, excuse me, the electromagnetic interaction with this weak interaction. Efforts are now being made to extend this schema and place the strong interaction under a common roof with these other two interactions. Here, too, the neutrinos could play an important role, insofar as a mass scale plays a role with these unifying principles, a role about which we know very little for certain even today. That focuses attention on the problem of what mass the neutrinos have, if they have one at all, and if so, which. And finally, really briefly to conclude, the astrophysicists and the cosmologists, in particular the astrophysicists, are very interested in the neutrinos because they could help them to solve and to understand a whole range of problems which are causing them great difficulties at the moment. There is not only the problem whether our universe is an open or closed one; we of course assume, as we just heard in Herr Alfvén's talk, that our universe is expanding, but we do not know whether that is so forever, or if it expands at a decreasing rate, if it finally slows down so much that it turns around and then, so to speak, contracts again. Something like that would require sufficient mass to be present in this universe. We can make a whole range of statements about the masses, we can deduce them; the so-called visible masses which we see today are not sufficient to make the universe into a closed one. But it could still be that there are very many neutrinos present, which I will come back to later, and that these neutrinos have sufficient mass and the universe could be a closed one. Perhaps a more aesthetically satisfying aspect, but otherwise perhaps not so enormously important.
A much more important aspect is that there is a whole range of indications in astrophysics that a large amount of hidden mass is present in the universe. Mass that we do not see. There is a whole range of phenomena, I just want to mention, for example, that the clusters of galaxies, this accumulation of galaxies, is hard to understand, what holds the whole thing together. How gravitation can hold it together, there is not enough mass present there in what we can see. One supposes that very much more mass must be present, which we don’t see. And this mass in turn could, that would be the simplest possibility, be present e.g. in neutrinos, and neutrinos would then be of decisive significance for the understanding of this phenomenon. There is a whole range of further astrophysical clues about such hidden masses, and there are many people who would like to attribute this hidden mass to the neutrinos. Now let me say a few words about the neutrino sources we have available today, we want after all to perform experiments, in the end we have to prove what we may be supporting here as a hypothesis, prove experimentally and for this we need neutrino sources for our measurements. One of our most important neutrino sources, one of our most interesting neutrino sources, as I have already mentioned, is the Sun. In the Sun, fusion processes take place, like the fusion of protons, the transmutation of protons into neutrons, we need neutrinos for that. And since we assume that all the Sun energy which we receive here essentially relies on such fusion processes, these neutrinos must participate in this, and these neutrinos, of course, come to us here on Earth. They have such a weak interaction that they don’t see the Earth at all, they go in at the front and out again at the back. 
Almost nothing happens there, the Earth hardly exists for this weak interaction of the neutrinos, but one can still detect them if one has a sufficiently large number, and the numbers really are large: 6 times 10 to the power 10 per cm² per second arrive just from this main fusion process, extraordinarily high numbers of neutrinos. One can, in principle, measure them with very sensitive detectors. I have written down for you here two groups of neutrinos. One corresponding to the main fusion reaction and another group which comes from a small side reaction, but which has the advantage that it delivers neutrinos of very high energy, 14 MeV, with very much lower fluxes, nearly 10,000 times lower than this main chain. This little side chain is still very interesting at the moment, since it is the chain which can be measured with terrestrial methods. These are the famous experiments of Davis and colleagues with headquarters in Brookhaven, which were performed in a deep mine in the USA, and where the attempt was made to measure this solar neutrino flux, which actually supplies us with the only direct expression of what goes on in the interior of the Sun; we only see the external Sun, after all, when we observe it optically. So the attempt was made to measure this solar neutrino flux, in order to see whether our ideas are correct about what is happening in the interior of the Sun, and it is one of the really great puzzles at the moment that this flux was not found. The flux that is found is a factor of at least 3 lower than what was expected, and recently it almost looks, and of course that all has to be checked experimentally much more exactly, as though everything that is measured may be explicable as background. That is really a great puzzle and it would really be a catastrophe if this flux were not there, or if we don't understand why it is not there. There are then all sorts of other possible excuses. I will have something to say about one of these excuses.
It would be a catastrophe if this flux were not there, since that would then mean that we don't understand how the Sun produces its energy. We believe that we understand this very, very well. But as I said, we have to be sceptical, and one of the great puzzles is this missing solar neutrino flux, at the moment. There are new efforts, in part German laboratories are also involved here, to address this low energy area here, these are very expensive experiments, the BMFT is already groaning about it. Experiments, which are planned to be performed in a few years’ time with the help of gallium. There the thresholds for the reactions are very much lower and one can hope to study these reactions. Very expensive experiments, but extremely important experiments, for this missing solar neutrino flux is one of the really great puzzles that we have to live with at present. It is highly probable that a further large source of neutrinos is those remaining from the Big Bang which many people believe in. Our world came into being in a Big Bang, originally these neutrinos, which were in equilibrium at this Big Bang with the other particles at very high temperatures, were present in large numbers, and then, as the temperature moderated, fell, because the whole thing had expanded, they were no longer in equilibrium, but they should still be here today, with very low energies, but in very large numbers. We assume, we have not seen them so far, we assume that there are around 500 photons per cm³ here on the Earth, in the whole universe on average. That has been measured. One assumes that about the same number of neutrinos is present per cm³, a little bit less for reasons of statistics. So you can see that the world is full of neutrinos, around 400 such neutrinos per cm³ from this Big Bang with very low energies, but from the Sun too come gigantic numbers of neutrinos which pass through us continually from all sides. 
The environmental lobby has not yet noticed this and so they have not yet done anything about it. Now something about the artificial sources of neutrinos. Here is the most important source of neutrinos, the nuclear reactor – not the fusion reactor, the fission reactor – which sends us so-called electron antineutrinos; whether they are anti- or neutrinos, that is a matter of definition. In this case, as it is defined, here they are antineutrinos, with energies of a few MeV and quite remarkable fluxes. The flux strengths which I have quoted here are realistic insofar as I give them at the location of the experiment. One obtains fluxes many powers of 10 higher if one goes to the centre of a nuclear reactor, but one cannot survive there, and neither can our apparatus; in the places where we can station detectors there are fluxes of this order of magnitude, and you can see that is much more than what comes from the other sources. Then there are also accelerators, meson factories etc. which supply such neutrinos with higher energies, but considerably lower fluxes. First of all I will talk about the neutrinos produced in nuclear fission reactors, which, as I said, are completely harmless for our human health, as far as we know – the Earth is absolutely transparent and we of course, since we are puny in comparison with the diameter of the Earth, are all the more transparent for these neutrinos. It is extraordinarily difficult to prove their existence. The only hope of detecting them in the laboratory is that their number is so extraordinarily high. Now I have already indicated several times the important aspect of the mass, the rest mass of the neutrinos, and would like to say a few words on what we know about that. Physics has lived quite happily for around 40 years with the opinion that the rest mass of the neutrino is zero.
There were, in other words, no experiments that contradicted this, and the theoreticians very much appreciated including the zero rest mass in their theories, since the theories were thus extraordinarily simplified. Now, critical as we must be as physicists, in recent years, also for the reasons that I mentioned earlier, the idea arose that there is actually no good reason for the rest mass to be zero. At least there would have to be a new principle there which we do not understand, and one should therefore measure what the rest mass of this neutrino is. Now, nobody has to date really been able to measure this neutrino mass. I can only give you limits here. We know that the mass of an electron-neutrino is something less than 35 electron-volts; how small it really is, no one knows. The muon neutrinos are below 510 keV, and the tau neutrinos are below 250 MeV. You can see that these are enormous energies we are talking about here, and about which one can in principle say nothing at all. Again, because this weak interaction is so incredibly weak, because these neutrinos manifest themselves so enormously badly. The reason why it is so difficult to measure these masses is that the kinetic energies with which we normally work, especially with the electron neutrinos, are extraordinarily high compared with the rest mass, if there is one, and that this small fraction is very difficult to measure next to this large fraction. There is a Russian measurement by Tretyakov and colleagues who believe they have seen the neutrino mass; they give values for the electron neutrino mass of between 14 and 46 electron-volts. But this experiment is a single experiment; it should definitely be verified by other laboratories or also by the same group. It is very difficult to make definite statements here, whether solid state effects might not be influencing matters and playing a dirty trick here, before these masses have been measured on a range of solid bodies.
Until that has been done with various solid bodies, it is really too early to say that this is a valid measurement. Finally I should mention that the cosmologists provide limits on the mass of neutrinos. They say the sum of all neutrino masses of the different types should be less than about 50 electron-volts. Then there is also the possibility that very heavy neutrinos exist, but they would probably not be very long-lived. All that is still very much open; in any case that is a limit which the cosmologists believe in, and some of us believe, more or less, that it is correct. So one has to look below this range if one wishes to find neutrino masses. Now there is an interesting possibility, first pointed out by Pontecorvo and a Japanese group, which is that the neutrinos we produce in the laboratory with beta decay are not single states of the weak interaction, in the context of which they were produced, but that these neutrinos may have more fundamental neutrinos behind them; in other words, the neutrinos produced in the context of the weak interaction are not stable, but can transform themselves into each other. That would lead to the possibility of so-called neutrino oscillations, and we have in fact conducted experiments in this direction, started a search for such oscillations in recent years. That would mean that neutrinos, say electron neutrinos, which are produced, in the course of time, as they fly with practically the speed of light, transform into muon neutrinos and then, e.g. – that is a particularly simple two-neutrino model here – change back into electron neutrinos, back into muon neutrinos, that such neutrino oscillations take place. In concrete terms, that means that if I have a fission reactor here in the centre of my circle producing electron neutrinos or antineutrinos for me, and they shoot out in some direction or other, in all directions of course, e.g.
in this one, then, after a particular time of flight or distance of flight, say at this red position, they would transform into muon neutrinos. Somewhat later they would again be electron neutrinos, a bit later again they would be muon neutrinos and so on and so forth. Thus you have such oscillations here, along this region here. And if you set up a detector here, that e.g. only reacts to the green sort, then you would find such neutrinos here, here you see nothing, here again you find these neutrinos, so you would observe an oscillation in intensity in this detector and therefore be able to establish the existence of these neutrinos directly. Now, what can we learn from such oscillations? What we in principle can learn from that is whether these oscillations appear at all, that means whether these neutrinos of the various types can transform into each other at all, whether they are mixed and I can express this mixing with what is known as a mixing angle, that is one quantity, and we also learn something about this neutrino mass, since the length of these oscillations depends, as can be shown quite simply, I don't have the time for this here, on the mass, more precisely on the difference in mass of the neutrinos involved. So if I assume I have, say, an oscillation between electron and muon neutrinos, then this involves the difference of mass between electron and muon neutrinos, or the fundamental neutrinos behind them. So I can learn something about the neutrino masses and I can learn something about the mixing of these neutrinos. Now, we have been performing such experiments for several years, firstly in an experiment, which was an American-French-German cooperation at the research reactor of the Institut Laue-Langevin in Grenoble. 
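The two-flavour picture sketched here can be put into a short numerical illustration. The function below is the standard two-flavour survival probability (with the usual factor 1.27 for a mass-squared difference in eV², a flight distance in metres and an energy in MeV); the parameter values in the example are purely hypothetical and are not results from the lecture.

```python
import math

def survival_probability(L_m, E_MeV, dm2_eV2, sin2_2theta):
    """Probability that an electron antineutrino is still an electron
    antineutrino after flying L_m metres (standard two-flavour formula):
        P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[m] / E[MeV])
    """
    phase = 1.27 * dm2_eV2 * L_m / E_MeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Hypothetical illustration: at the source (L = 0) the full flux is seen,
# and without a mass splitting nothing oscillates at any distance.
print(survival_probability(0.0, 4.0, 1.0, 0.5))   # 1.0
print(survival_probability(38.0, 4.0, 0.0, 0.5))  # 1.0
```

With the mixing angle set to zero the probability is 1 at every distance and every energy, which is exactly the non-oscillating case the lecture contrasts with a detector that sees the intensity rise and fall along the flight path.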
We took measurements at a fixed distance of 8.76 m from the reactor and essentially studied the energy dependence of our neutrinos; so we absorbed neutrinos and had a look to see whether this absorption behaved the same at all energies or whether it is energy-dependent. If it is energy-dependent, that would suggest neutrino oscillations. In the meantime – I told you three years ago that we planned this experiment – this experiment has now been carried out, and a second experiment in turn has also been completed, an American-Swiss-German joint project at the power reactor of the nuclear power station in Gösgen in Switzerland, where we carried out a new oscillation experiment at a distance of about 38 m, and I would like to tell you a few details of this experiment just so as to give you a feel for how such experiments run in detail. We use this reaction that I have already mentioned: we fire electron antineutrinos at protons. So we have a detector which contains very many hydrogen nuclei. In our case it is a liquid which contains a lot of hydrogen and which serves us both as a target and also as a detector for the neutrinos received; the protons undergo these reactions, they are transformed here into neutrons and positrons, and we measure these two particles together in coincidence, that is in both temporal as well as spatial coincidence. In temporal coincidence, in the sense that they have to appear simultaneously if this reaction occurs at all, that is when we observe neutrinos. And in spatial coincidence, which helps us to solve our substantial background problems. Because although we have very high neutrino rates, it is only very occasionally that one is caught, so we get very low count rates, and you can imagine that it is very difficult with such low count rates to fight against all the other processes which of course occur.
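The coincidence requirement just described can be sketched as a simple event selection: a candidate is a prompt positron signal followed shortly afterwards, in the same or a neighbouring detector cell, by a neutron capture. The event tuples, time units and window parameters below are hypothetical illustrations, not the actual Gösgen analysis.

```python
def find_candidates(positrons, neutrons, max_delay=100.0, max_cell_gap=1):
    """Pair each prompt positron (time, cell) with a later neutron
    capture (time, cell) that lies inside the delay window (temporal
    coincidence) and in a nearby cell (spatial coincidence)."""
    candidates = []
    for t_e, cell_e in positrons:
        for t_n, cell_n in neutrons:
            in_time = 0.0 < t_n - t_e <= max_delay        # temporal coincidence
            in_space = abs(cell_n - cell_e) <= max_cell_gap  # spatial coincidence
            if in_time and in_space:
                candidates.append((t_e, t_n, cell_e, cell_n))
    return candidates

# One genuine pair, one neutron too late, one neutron in a distant cell:
positrons = [(10.0, 5)]
neutrons = [(50.0, 5), (500.0, 5), (60.0, 20)]
print(find_candidates(positrons, neutrons))  # [(10.0, 50.0, 5, 5)]
```

The point of the double condition is background rejection: an uncorrelated gamma or cosmic-ray hit will rarely satisfy both the time window and the cell proximity at once, so genuine neutrino absorptions stand out even at count rates of a few per hour.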
You always have natural radioactivity from the environment, not from the nuclear reactor, that provides us with nothing at all, but from our detector itself; radiation comes from the glass in the multipliers, radiation comes out of the concrete walls which we use for shielding, and of course radiation pours down on us from space, cosmic radiation, one has to fight against all that, and that is served by this location-sensitive evidence which I am outlining here. Now, I don’t want to bother you with all the details, I just want to mention briefly once more that the oscillation which I have written up formally here, that is a simple function here, this cosine term is significant, a trigonometric function. It depends firstly on the energy of the neutrinos, as I said, if we had no neutrino oscillation it should not be dependent on the energy. Then it depends on the distance, then it depends on this mass difference which I mentioned and which I have written down exactly here, Delta^2. This quantity which occurs here in the argument of the trigonometric function, depends on the masses M1 and M2 of the two neutrinos which I refer to in the square expression as I have written it here, and finally then the mixing angle is involved. You can see immediately that when the mixing angle is zero then there is no neutrino oscillation, that the whole expression here is zero and then 1 comes out quite simply. That means that nothing at all happens, neither as a function of the distance nor as a function of the energy does anything at all appear. But when a mixing angle is present, that is when the phenomenon of oscillations exists, then an oscillation term appears and from the length of this oscillation, from the argument of the trigonometric function we obtain information about this quantity Delta^2, about this square of the mass difference of the neutrinos, which we are interested in. Now, how does that look in practice? 
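The quantities named here (the neutrino energy E, the distance L, the squared mass difference Delta² and the mixing angle θ) combine, in the standard two-flavour notation, into the survival probability below. This is a reconstruction in natural units, not a formula copied from the speaker's slide; his cosine form is equivalent via sin²x = (1 − cos 2x)/2.

```latex
P(\bar{\nu}_e \to \bar{\nu}_e) \;=\; 1 \;-\; \sin^{2}(2\theta)\,
\sin^{2}\!\left(\frac{\Delta^{2}\,L}{4E}\right),
\qquad \Delta^{2} = \left|m_{1}^{2} - m_{2}^{2}\right|.
```

For mixing angle zero the second term vanishes and the expression is simply 1, independent of both distance and energy, exactly as stated; a nonzero mixing angle produces the oscillation term whose wavelength encodes Delta².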
Such a detector looks very roughly like this sketch of mine here. We have here 30 such white boxes; they are the proton counters, and the whole thing, the inner counter, is roughly, say, 1 m by 1 m by 1 m. So we have 30 such proton counters in which the neutrinos which come from the reactor, which come from somewhere outside here and arrive here, occasionally experience a transformation, react with a proton and produce a neutron and a positron. And we now have to detect this neutron and this positron. We detect the positron directly in these counters; photomultipliers sit right at the end, and the flashes of light which the positron makes in this scintillation counter, in the scintillation liquid which we have in here, are detected and give us direct evidence of the positron. And the neutrons which are thereby produced, they come over here into these big helium-3 chambers marked in yellow, in which they are detected by neutron capture. This means that we detect the positron here, we detect the neutron here, and if the two occur simultaneously or practically simultaneously then we know that a real neutrino absorption event has occurred with high probability. The rest of this apparatus, and that is the greater part, consists of what are known as anti-coincidence counters, which tell us that something wrong is coming from outside, some particle from cosmic radiation; we can then exclude that, then we don't count it. Then there are also a lot of other things here which I don't want to go into. The decisive thing is that we have many metres of concrete around the whole thing. The whole detector weighs around 1000 t, so they are large items of apparatus. In fact the Swiss Army helped us to drive this 1000 t around the area, because we are not in a position ourselves to do this without help and we are not keen on spending our limited research funding on the transport of concrete. Now perhaps I will show you this inner counter once again more exactly, how it looks in detail.
Here you see once more an enlarged view of the inside, you can see these 30 counters here, here the covers are removed in some cases, those are the photomultipliers which sit in front and behind these counters, and allow us to detect the light impulses, here in between, yellow, are these big helium-3 chambers, we put 400 l of the very rare isotope helium-3 in our detector. Now I would like to show you the results very briefly. What you see here is up above the count rate, this upper curve, recording the count rate per hour. You see, we have typically about 2 per hour here as a function of the neutrino energy or the positron energy which is directly linked with the neutrino energy. You see this upper curve here for the case that the reactor was in operation. We spent about half a year measuring this curve. You see that one needs time with this low count rate, and here below you see the curve we get when the reactor is switched off. That means they are so to say undesired side-events which we have to subtract from the desired events here above. I may be allowed to mention on the side that it was very difficult for this curve here below, we could only make measurements for about 4 weeks on this lower curve, these power reactors in the generating stations have the unpleasant characteristic that they are always in operation and are very rarely switched off. As a physicist, one would rather have reactors which run half the time so that one can measure this curve, and are switched off half the time, so that one can measure this curve. But one must of course subordinate oneself to the conditions which really exist. Now, the difference of these two curves here is that what you see recorded here below, that is the real spectrum and a curve is also drawn here which is not something like a fit to these experimental data, but is the curve that we would have had to expect if we had no neutrino oscillations. 
So all deviations between this continuous curve and our experimental data, and you can see a little bit here and maybe there, with a lot of imagination, all this would indicate oscillations. What we can state in this case is not masses, we saw no neutrino oscillations in the framework of the statistics that we applied, I will justify that in a moment, but we can state limits for these masses, and that is shown in this figure here, they are the latest, not yet published data where I can state the most accurate limits for the neutrino masses and the mixing angle. What you see here, and that is the green curve, in fact the continuous curve drawn on the right hand edge of the green curve, that is the so-called 90% confidence level for the exclusion of neutrino oscillations. And what is excluded is the whole right-hand area, to the right of this curve, and permitted, on the basis of 90% confidence, is what lies to the left of this curve. Displayed here is up above, this Delta^2, this squared mass difference of the neutrinos, and to the right the mixing angle. So here mixing angle zero and here complete mixing on this side. You see that a further region here is excluded, but everything here that remains to the left of this curve, in the left area, that is the region where neutrino oscillations and thus neutrino masses and neutrino mixing angles are still possible as before. We are now involved in extending these measurements to still greater distances. Here our goal is firstly to examine in more detail this region here, which I will have something more to say about soon, and in particular to go still further downwards here below. 
That means, if neutrino masses are there, with relatively high values, that is in the region of eV as the cosmologists and astrophysicists would have it, then such masses could only appear with extremely small mixing angles, that is with mixing angles which are clearly smaller than the angle that we like to use today in high energy physics, the Cabibbo angle and the Weinberg angle, which are both a little greater than 0.2. So we are already below this angle in this region here. Of course there is no reason why this mixing angle should agree with this other angle here, but still one thinks a bit in this direction, maybe it is a gift that they are of this order of magnitude. But that is not the case; here they are already, as you can see, below this region. So we can already extrapolate up to arbitrary masses. We can do that because here in these high masses, in this high mass region our oscillation term is averaged out, and that is e.g. one of the hopes which one has to explain the absent solar neutrino flux. It could in fact be that the neutrinos which come to us from the Sun undergo oscillations, that they transform into other kinds of neutrinos, that the electron neutrinos from the Sun may become muon neutrinos and tau neutrinos and the whole thing is mixed up, so that then, when it is roughly equally mixed - and one would assume that, if the oscillation lengths are small compared with the distance from the Sun to the Earth – that we can then assume, if we have three types of neutrinos, that each type accounts for about 1/3 of the intensity, and that would be about what we are measuring at the moment. But that is still very strong wishful thinking, I would say, that must still be verified much more precisely. 
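The averaging argument Mößbauer sketches for the solar flux can be illustrated with the standard two-flavour vacuum-oscillation survival probability; the following is a minimal sketch, with illustrative parameter values that are not Mößbauer's numbers:

```python
import math

def survival_probability(delta_m2, sin2_2theta, length_m, energy_mev):
    """Two-flavour vacuum-oscillation survival probability.

    Standard formula: P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    with dm2 in eV^2, L in metres and E in MeV (sample values below are
    purely illustrative).
    """
    phase = 1.27 * delta_m2 * length_m / energy_mev
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# At the source (L = 0) nothing has oscillated away yet.
assert survival_probability(1.0, 1.0, 0.0, 4.0) == 1.0

# Averaged over many oscillation lengths -- the situation Moessbauer
# describes for the Sun-Earth distance -- sin^2 averages to 1/2,
# so P tends to 1 - sin^2(2*theta)/2.
distances = range(1, 100001)
avg = sum(survival_probability(1.0, 1.0, L, 4.0) for L in distances) / 100000
print(round(avg, 1))  # about 0.5 for maximal two-flavour mixing
```

With three neutrino types and complete mixing, the analogous average leaves about 1/3 of the flux in each type, which is the figure quoted in the lecture for the solar electron neutrinos.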
In any case, in this left region here oscillations are still possible, in this right-hand area they are excluded, here below with really small neutrino masses, and there could of course be some, it could be the case that the neutrinos have this small mass, there they are still fully compatible with all mixing angles. Now I would like to show you, perhaps without going into numerical detail, another picture which caused a bit of excitement about a year ago. And here I show you the same curve again which you just saw in green, in lilac here, and simultaneously I show you measurements here which were made by Reines and his colleagues, very famous measurements, in which neutral- and charged-current reactions were carried out with the help of neutrinos on the deuteron, and in which these measurements were interpreted under the assumption that neutrino oscillations exist. Here it is the case that the permitted region is to the left with us, here the permissible region is to the right. Here you can see the 90% confidence curve from Reines, to the right you can see the 90% confidence curve of our experiment, and there is no overlap of these curves, which means that the two experiments are, at least on the 90% basis, and that can be extended further, in clear contradiction to each other. So we and our experiments do not agree with the assertions of this American group. Now, finally, a last summary of those data which we obtained at the nuclear reactor in Gösgen. It shows you very nicely the difference of distance and also of reactors. I show you again here in blue what you have already seen in other colours, that is our measurements which we carried out in Switzerland, at the Swiss power reactor, and at the same time I show you here in red, once more…

Rudolf Mößbauer on Pauli’s Invention of the Neutrino
(00:00:42 - 00:04:27)


No Nobel Laureate has lectured more often on neutrino physics in Lindau than Rudolf Mößbauer, although this was not his original field of excellence. While doing research for his doctoral thesis, Mößbauer had discovered and explained the unexpected effect of recoilless nuclear resonance fluorescence, which bears his name and allows for the spectroscopic measurement of extremely small frequency shifts in solids. This earned him the Nobel Prize in Physics 1961 when he was just 32 years old. “I fooled around for 15 years in the Mößbauer effect and then I’ve had it and left the field for neutrino research”, he told his audience in the last of the eight neutrino lectures he gave in Lindau between 1979 and 2001. Just like Mößbauer, Wolfgang Pauli did not receive the Nobel Prize for his contribution to neutrino research but for the discovery of an important law of nature, the exclusion principle, which he had formulated at the age of 25. It bears his name and states that no two identical fermions can occupy the same quantum state simultaneously, or in other words “that there cannot be more than one electron in each energy state when this state is completely defined“.[3] When he was awarded the Nobel Prize in Physics 1945 for this achievement, the neutrino still had not been detected, as if his own prophecy were to be fulfilled: “I have done a terrible thing. I have postulated a particle that cannot be detected.”[4] Wolfgang Pauli is widely regarded as one of the most eminent and influential physicists of the 20th century. He attended the Lindau Meetings only once, in 1956, without lecturing, and passed away prematurely in 1958. His close friend Werner Heisenberg, who had almost all of his papers proof-read by Pauli before their publication, paid tribute to him in 1959:


Werner Heisenberg (1959) - Report on Recent Findings regarding a Unified Field Theory of Elementary Particles (German Presentation)

Ladies and Gentlemen, I would like to speak today about an attempt at a theory of elementary particles, a unified theory of elementary particles, and I would first like to say a few words perhaps about the physicists who worked on this. First I would like to mention the contribution made by my friend Pauli, who unfortunately passed away much too young, and whom we all sorely miss here. Pauli had, following the discovery by the two Chinese physicists Lee and Yang, taken a renewed interest in the elementary particle he himself had predicted about 30 years earlier, namely the neutrino. And he had discovered a new symmetry property in the wave equation of the neutrino. Now, the importance of symmetry properties for the smallest components of matter is known not just to physicists, but even to philosophers, who can read it in Plato. Thus, the symmetry property is one of the most important quantities or one of the most important things that we can talk about in physics today, just as 2000 years ago. And Pauli had, as I mentioned, discovered a new symmetry property in the neutrino wave equation. At about the same time, or even the previous year, we were involved with non-linear spinor theory in our group in Göttingen, a theory which was thought to be a model for a subsequent theory of elementary particles. Following the discovery by Lee and Yang, I had attempted to incorporate their thinking into this non-linear spinor theory and had come upon an equation that struck me as particularly simple, actually simpler than the equation that I had previously investigated. And after having given a lecture in Geneva on this equation about two years ago, I visited Pauli in Zürich. And Pauli immediately discovered then that the new equation was also invariant with respect to his symmetry group. 
And with that arose the possibility now for the first time that this extraordinarily simple equation - or I would like to say, very simple equation in a certain sense - could account for all of the aspects of elementary particles. Pauli was enormously enthusiastic about this new potential at first, but then afterwards was very disappointed because further difficulties indeed became apparent that he could initially neither solve nor answer. We discussed this equation a great deal one year ago in Geneva and even more extensively in Italy, in Varenna, at the so-called summer school of Italian physicists, and were in complete agreement about all the details of the theory that had already been worked out. But Pauli judged the subsequent possibilities more pessimistically overall than I did. However, in the middle of the discussion, which we always conducted in letters, he unfortunately passed away in December. In the meantime, a lot of mathematical work has been done in conjunction with this equation, and I would also like to now mention at this point the names of my colleagues in Munich: They are Dr. Dürr, Dr. Mitter, Dr. Schlieder, and Dr. Yamazaki. When I lecture today about this field, or more correctly, about the new results in this field, then I would like to divide them as follows: I first would like to explain the fundamental thinking behind this theory from a very general point of view. However, I do not want to explain the mathematical details at this point, which would be difficult to understand for such a broad audience, but would rather mention, in somewhat greater detail, the difficulties that still existed in this theory a year ago, and also state what answers to these difficulties we believe we are now able to give. And then I will go into the newer results that have been worked out by me and my colleagues, whom I have just mentioned, which will probably appear in detail in a German periodical a few days from now. 
Firstly, therefore: What are the basic ideas behind this theory? May I request the first image please, in order to clarify the problem? In this picture, which is a type thoroughly familiar to physicists, I would like to explain briefly what sort of problem is involved. Physicists research elementary particles by means of very energetic elementary particles that are either drawn from large machines or from cosmic radiation, and let these particles collide with other particles or with atomic nuclei. The particles split the atomic nucleus and thereby create new particles. I want to briefly explain what is roughly involved in the one case visible in the photographs here. Thus, for example, a proton from the left above collides with a proton in an atomic nucleus here. You can see that a large number of particles are ejected from this atomic nucleus. Most of them, namely the particularly wide black tracks, are protons that previously made up the atomic nucleus as its key components. However, there are also individual, narrower tracks, and you can notice for example a track moving perpendicularly downward - that is an elementary particle of a type that was discovered just about six to eight years ago, known as a tau meson. I may also perhaps mention: the photograph here comes from a balloon expedition that has been carried out jointly by English, Italian and our physicists from our institute in Sardinia, Italy. In this experiment, photographic plates are sent to high altitude and exposed there to the effects of cosmic radiation, and an image like this is obtained afterwards through microscopic examination of the plate. This elementary particle moving perpendicularly downwards, the tau meson, continues along in the photographic plate and at a subsequent location was photographed once again. You see this on the right. 
Again on the right, this particle enters once again from above and makes a pair of collisions at the point where its track becomes wider and displays a curvature, you can hardly see the collisions on the plate, and then finally comes to rest. After coming to rest, it decays into three more particles known as pi mesons, each of which decays again into a mu meson and an invisible neutrino. The mu meson in turn decays into an electron and two invisible neutrinos. You therefore can see an example here in which elementary particles are created through this kind of decay process, and that then decay radioactively and thereby mutate into other particles. This special image that has been selected here also shows yet another elementary particle whose discovery was somewhat later, I believe, about five years ago. That particle is known as a sigma hyperon, a particle heavier than a proton. It is moving horizontally to the right in the left-hand image. It is a wide, black track and at the end of this track, the particle decays into a pi meson and a neutron. This image is only meant to illustrate the experimental facts that must be explained. We see in the experiments that new particles are created by sufficiently energetic collisions between elementary particles, and that these new particles in turn radioactively decay into other particles, and so on. I do not wish to explain in detail here the experimental methods that allow us to decide why the one particle is a pi meson and the other a tau meson, and so forth. Rather, I prefer to draw a qualitative conclusion from this image. I believe what we learn from such images is that we may not view elementary particles as indestructible, unchanging, ultimate constituents of matter. For we certainly see that the particles transmute into one another. 
Obviously the most correct way of speaking about these processes is to say: all these elementary particles are made of the same stuff, so to speak, and this stuff is nothing other than energy or matter ..., let us say, than energy. One can also express it perhaps so: the elementary particles are only various forms in which matter can manifest itself. Energy becomes matter, in that it assumes the form of an elementary particle. And if we interpret the elementary particles in this way - and based on current experiments, we can no longer be in any doubt that we are describing the events correctly - then the question immediately arises for theoretical physicists: Well, why do exactly these forms of matter exist in nature, as manifestations of matter? Why do the elementary particles have precisely those properties that we observe experimentally, etc.? That is, why does matter have to occur in just these kinds or in this excess of forms? Apropos, I would like to mention: There are many different elementary particles. We currently know of about 25 to 30 different kinds. Thus, 25 to 30 different forms which energy can take in becoming matter. When we endeavour to bring some theoretical order to all of these aspects, we obviously hope that all of the different forms that we have before us as elementary particles in the experiments, that they spring from one simple natural law. Meaning, there is just one fundamental natural law leading to just these elementary particles being formed and no others. This same unified natural law must then also stipulate the forces between the elementary particles, it must in fact allow us to actually derive all of the properties of the elementary particles. Experimentally, a large number of these kinds of images are actually available to us as material for such an examination, as you have just seen, and those of related experiments. 
That is, observations of the transmutation of particles, of the forces that they exert upon one another, the life spans, etc. Now, the basic premises of the theory that I want to talk about here, of the mathematical representation of these experimental events, would be roughly the following: Obviously there would be no sense starting from the view that the elementary particles are something given, and then introducing mathematical symbols for these elementary particles, which we then associate with a natural law. That would be unreasonable because certainly the elementary particles should not be in any way prerequisite to, but rather the consequence of natural law. We want to have the elementary particles with all their properties derive a priori from natural law, and for that reason we cannot insert them as something already given. We also cannot, as is often done in the conventional theory, introduce a new wave function or wave operator for every sort of elementary particle and then attempt to represent the complicated train of events. No doubt, you obtain a mathematical description of individual processes with this kind of representation, but will hardly be able to encompass all the interrelationships. We will therefore have to assume that we represent matter in an arbitrary form. We must therefore introduce some kind of mathematical symbol for matter and from that say the theory simply begins with accepting that something like matter exists, and for this I may introduce a mathematical quantity representing matter. And since matter is additionally in space, is in space and time, this mathematical quantity that represents matter must therefore also somehow be related to space and time. Perhaps I may write this briefly on the blackboard. We can thus say: The initial prerequisite is the existence of matter, and it follows that we can introduce a quantity, I will call it Psi of X, with Psi so to speak standing for matter and X for space and time. 
And mathematically one will, according to what we know about quantum field theory - and we know that these statistical laws are evidently just the right description of nature - mathematically one will say that it must be a quantum field operator, and indeed - I want to briefly write in addition to this - that it must be a spinor operator. We mean the following by this: First, we understand a spinor to be a quantity having two components. The mathematics of spinors was studied very early on by Pauli and introduced into physics. It follows from an experimental fact that we need these sorts of spinors. We know there are many elementary particles that have what is known as spin one-half, that is, their moment or angular momentum is half of Planck's constant. To represent these kinds of elementary particles, you need spinors. And for that reason, the original field quantity for matter must also be a spinor, for otherwise we could not represent these spin one-half particles. That it is an operator ..., well, the term operator is naturally comprehensible only to physicists and mathematicians, but we can perhaps explain that this quantity Psi in mathematics, as it were, simply makes matter out of nothing. What I mean is, in the mathematical representation we need to somehow move from nothing, that is, from a vacuum to a state in which matter is present. And this transition is accomplished simply through the operator Psi. Now, not much is accomplished apparently through this general statement, but you will see that we, through very few steps, indeed arrive at very definite predictions through mathematics. The next premise that we will insert into this theory is simply: Something like natural laws must exist. And a natural law means that we are able to predict something about the future state of the world from the present one, or about the past state of the world as well. 
Formulated mathematically, that means this spinor operator Psi must be sufficient to describe any possible state of the world during a very short time interval - now - that is, we say between now and a moment later, over a very short time interval. For we must be able to predict something about the current state. And if you make this assumption, it means in mathematical language that you therefore can produce all existing vectors of a Hilbert space of quantum states by applying the operator Psi over the time interval between t and delta t from the vacuum. Now, this mathematical formulation says something of course only to the specialists in the field. Therefore, if we require this prerequisite, then something very important immediately follows, namely, that there must be a valid differential equation as a function of time for this spinor operator. I will represent it in this way: We know that natural law must exist. Consequently, there must be a differential equation for Psi in time, Psi as a function of t. Now we can go one step further. We know, of course, that interactions exist in nature, that forces exist. All of nature and the entire interplay of events in nature are certainly based upon forces being able to affect particles, and that it is through just this effect of forces upon masses that the dynamics of the world are possible. Therefore, there must be interaction. And from the interaction an important property of this differential equation follows: It must be a non-linear equation, for interaction is actually only ever described by non-linear terms in mathematics. There are a few exceptions to this rule, but I do not want to discuss them here. Therefore, on the left I want to write: Interaction and consequently a non-linear equation. Finally, we need one further very important and general premise. And this premise can, if you use the word carefully enough, be labelled with the word causality. 
Here I must make a proviso, however: You know recently from modern atomic physics that causality is no longer valid in a certain sense. The recent interpretation of Psi waves as probability waves, for example, comes from Born. This movement in modern physics is not touched on in the theory discussed here. All mathematical processes are actually meant as predictions about the probability that something happens. However, the word causality has yet another aspect. What we mean with the word causality is that the effect cannot be earlier than the cause. And when we take this prediction together with the prediction from the theory of special relativity, that effects cannot propagate faster than the speed of light, that therefore space and time have an actual structure, which was discovered by Lorentz and Einstein fifty years ago, then the word causality in this special form therefore contains a very specific prediction about how the differential equation for this spinor operator must look. Namely, that it must be a Lorentz-invariant equation, that is, a differential equation that satisfies the theory of special relativity, and in addition, we can state a further property of this spinor operator that is extraordinarily important. Namely, that we are able to determine that for space-like intervals, these spinor operators must always be commutable or anticommutable with one another. For if this were not the case, then this simplest concept of causality, namely that the effect must always be later than the cause, and that effects can only be propagated at the speed of light, would not be satisfied. Therefore, I want to write here again: causality. 
And from causality comes a Lorentz-invariant differential equation, or invariant differential equation (if I may abbreviate it), and further, another prediction follows about the commutation relation (I also want to write this down for the specialists in the field - it looks like this: the commutator of Psi of x and Psi of x' should be equal to 0 for space-like intervals, that is for (x - x')^2 greater than 0). And actually, the entire theory follows from what is now on the board. There are no further premises in it. The next thing ... Yes, I want to mention one more mathematical consequence, although I do not want to deal with the mathematics in detail today. The circumstance that we are dealing with a non-linear differential equation, and that in addition the commutation functions should vanish for space-like intervals - these two circumstances already establish how the commutation relations have to be for time-like intervals and in particular for when we are on what is called the light cone, meaning when we are observing the location where the effect propagates with the speed of light. It follows namely from the two premises - a non-linear equation and space-like commutation - that we do not have that kind of singularity on the light cone that we are familiar with from the usual linear differential equations, that is, what are called the Dirac delta functions, so that these Dirac delta functions must be replaced by another kind of singularity, indeed by a smaller singularity. That has immediate, very broad mathematical consequences. One must take seriously a suggestion Dirac made about 15 or no, 17 years ago. One must also contemplate an indefinite metric in the so-called Hilbert space of quantum-theoretical states. Perhaps I may include a few words about the history of this suggestion made by Dirac. As I mentioned, Dirac had suggested in January 1942 that one would have to resort to this indefinite metric in Hilbert space in order to avoid a mathematical singularity in quantum field theory. 
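For readers who want the board notation spelled out: the causality condition Heisenberg describes is, in a common modern form (a reconstruction from the spoken description, keeping the lecture's sign convention for space-like intervals),

```latex
\left[\,\psi(x),\,\psi(x')\,\right]_{\pm} \;=\; 0
\qquad \text{for } (x - x')^{2} > 0 \quad \text{(space-like separation)} .
```

The subscript plus/minus covers both the commutator and the anticommutator, matching the lecture's "commutable or anticommutable" for space-like intervals.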
Then Pauli was able to determine a short time thereafter that this suggestion suffers from a serious objection. Namely, that then the quantities, which usually occur in the mathematical interpretation of quantum theory ..., in the physical interpretation of the probabilities in quantum theory, that these quantities can then become negative. Now, a negative probability makes no physical sense of course, and cannot be interpreted. For that reason, based on the investigation by Pauli, we thought that this indefinite metric could not be any sort of starting point for describing nature. Nevertheless, we once again took up this suggestion by Dirac about eight years ago in Göttingen and have been successful in showing special examples where Pauli's objection simply does not actually occur. Therefore, we now know at least that there are certain cases in which Pauli's objection does not necessarily arise, so that indeed the introduction of the indefinite metric does not represent a priori an insurmountable flaw for a theory. If you now take these premises as they are represented on the board seriously, then you can simply say: ok, let us try to formulate a natural law from them, and then we need to take a look at whether it can approximate the real world of elementary particles. We therefore have to take a non-linear differential equation that is Lorentz-invariant, that is valid for a spinor operator of Psi as a function of X, and the simplest possible equation of this kind is the equation that I want to show in the next image. That is the equation. It looks somewhat complicated as it appears here, the gamma mu and gamma 5 matrices introduced by Dirac occur in it. However, one can write the equation in other mathematical forms that then also demonstrate that it is in principle the simplest non-linear spinor equation that exists. And it is indeed the simplest because it has the greatest symmetry. 
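The equation shown on the slide is, in the form in which the Göttingen group's work is usually quoted (reproduced here from the standard literature rather than from the slide itself, so the overall sign should be treated as indicative), the non-linear spinor equation

```latex
\gamma_{\mu}\,\frac{\partial \psi}{\partial x_{\mu}}
\;\pm\; l^{2}\,\gamma_{\mu}\gamma_{5}\,\psi\,
\bigl(\bar{\psi}\,\gamma_{\mu}\gamma_{5}\,\psi\bigr) \;=\; 0 ,
```

where the gamma matrices are the Dirac matrices mentioned in the talk and l is a fundamental length, the only constant entering the theory.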
And this prompts the question: Is this simplest spinor equation actually the correct equation? And correct would mean: From this equation, one can derive the elementary particles with all of their properties. And the equation's simplicity is matched by the difficulty in treating it mathematically. For this reason, a great deal of mathematical work had to be done at this point before we could give a few answers to the questions I have just posed. Now, the most important empirical data about elementary particles that we know of are the data concerning the symmetry properties of elementary particles. In this regard, I was reminded a short time ago once again of the ancient Greek philosophers; symmetry is always the principal feature, so to speak, of a physical structure. And thus the symmetry properties are the most important properties. Now I would like to request the image with the large table. This table is meant to provide you with a brief overview of the results of experiments. On the left side of the table shown vertically are the masses of the elementary particles and you see the entire list, so to speak, of the elementary particles in the form of points entered along this table. The lightest of the elementary particles are those with rest mass 0, those are the light quanta, they are shown at the bottom as points. Also very light elementary particles are the electrons and positrons. Then come particles that are about 200 times heavier, what are called the mu mesons, and then above them the pi mesons, which are about 270 times heavier than electrons. The pi mesons are most closely associated with the forces that hold the atomic nucleus together, they are the particles first predicted by Yukawa in conjunction with the nuclear forces. They were observed as particles actually occurring in nature by Powell in England. Then above them come the K mesons, which are about 700 times heavier than ..., no, 960 times heavier than electrons, then the ... 
a short while ago for example you saw such a K meson in the images, that is the tau meson, it is a special sort of K meson. Then come the protons and neutrons, they are the longstanding components of the atomic nuclei, and finally further up the hyperons. Now, this entire menagerie of elementary particles, that is, this quite complicated train of events, can be systematically arranged, in that you can introduce quantum numbers for the elementary particles that simultaneously represent their symmetry properties. One can state in a somewhat simplified fashion: For every symmetry property of an elementary particle, there exists a quantum number, or perhaps more correctly the other way around: every elementary particle is characterised by an entire series of quantum numbers, and if you know these quantum numbers of the elementary particle, then you know what symmetry properties the particle has. Simultaneously, these quantum numbers are also an expression of the conservation laws that apply to the elementary particles. Therefore, the quantum numbers customarily play a role similar to the spin. We say that the spin is an integer multiple of, or a half-integer multiple of Planck's constant. And the total spin is conserved for any particle transmutation processes or creation processes. The quantum numbers that we have empirically found are entered here in the table on the right. I must mention at this point that this kind of representation originates from the theory. It comes from work which Pauli and I jointly wrote back then. But it is basically just a representation of the facts discovered previously by the experimental physicists. In the first column on the left are the elementary particles with the labels that are customary for physicists. On the right as the first quantum number is the electric charge, labelled as Q. That is a longstanding property of elementary particles, of course. Then comes a quantum number called the isotopic spin, or isospin. 
This property of elementary particles was discovered about 30 years ago. The isotopic spin has turned out to be a very important symmetry property, because it still poses certain riddles for physicists, since it is not a complete symmetry property. That is, the conservation laws that result from this symmetry property are only approximately valid, and that means that the symmetry in nature is also only approximately valid. Then come the additional quantum numbers that are labelled here with L, which I want to say something about later. Then comes a quantum number N, that is what is known as the baryon charge or baryon number. This is 1 for protons and neutrons, for example, but 0 for electrons or mu mesons. Then comes a quantum number known as IN one-half, which has a simple relationship to the baryon number. You can therefore say that the quantum numbers IN and LN also mean the baryon number and lepton number. We therefore arrive at a table of quantum numbers like this empirically. A few are left out, for example the spin is not listed here. And for physicists the question now arises, also for theoreticians: can one explain all these quantum numbers through the symmetry properties of this wave equation for matter that we have provisionally written down? To answer the question, one has to examine what symmetry property the equation itself has, in other words, under what transformations the equation is invariant. And as I had said already, the equation must be invariant under the Lorentz transformation. That is necessary so that the structure of space and time, which we are familiar with from Einstein, is correctly represented. The equation actually is invariant under the entire inhomogeneous Lorentz group as well, and the usual conservation laws for energy, momentum, angular momentum, and motion of centre of mass, etc. then follow from this invariance. 
However, we have not represented so far anywhere near all of the symmetry properties and quantum numbers that are shown here. One very important additional quantum number is the isotopic spin, which was discovered about 30 years ago. And I said previously, that Pauli had established a new symmetry property in the wave equation for the neutrino, precisely the transformation he discovered, and Gürsey then associated this group with isotopic spin based on work by his predecessor Schremp, whom I would like to mention here. The equation is also invariant for these transformations discovered by Pauli, and Gürsey has just been able to show that these transformations allow the isotopic spin quantum number, or isospin to be represented. Thus, some of the quantum numbers that you have seen presented empirically in this image have been explained. The lower group, I still have to mention for the sake of historical accuracy, the lower transformation had first been discovered by Touschek, and Touschek had already announced that one may be able to associate them with the baryon number. And once one has recognised all of these transformations as genuine transformation properties of the equation, then it follows that you really can explain some of the quantum numbers shown in the table. However, two quantum numbers that caused difficulties in the first place still remain. And the difficulties of which I spoke a short time ago, which still remained in the theory a year ago, had to do with the existence of precisely these two quantum numbers. I now therefore come to this question of the difficulties. The quantum number table that you have just seen still contains two quantum numbers named LQ and LN there. And at least one quantum number of the two must be able to take on all values from minus infinity to plus infinity. 
And this means that there still must be one additional transformation property of the equation, if the equation is correct, that is of the same kind as, let us say, a rotation about an axis – that is, a continuous one-parameter transformation group. Besides this, there is still the difference, so to speak, between LN and LQ. That is a quantum number which has been named the strangeness of the particle, one which characterises the K mesons in contrast to the Pi mesons, for instance. This strangeness is a quantum number that does not have to take on all values from minus infinity to plus infinity; it is apparently only capable of taking on a couple of values, that is, it can assume the values 0, ±1, or ±2. For this quantum number, discrete transformation groups would still be sufficient. And indeed, the equation also remains invariant under one additional single-parameter group, which is referred to as the scale transformation. If I may write it on the board: ψ(x, l) – l is the quantity that occurs in the equation – can transform into η to the power three-halves times ψ(xη, lη). In ordinary language: the entire world, so to speak, can expand or contract, and we would not notice this if the entire transformation took place uniformly. That a scale transformation is conceivable was also recognised very early on; it was reported to me during a visit to Yugoslavia that the philosopher Bošković already had the idea of this transformation in the 18th century. And this transformation is actually contained in this equation as well, and it appears to be sufficient to explain at least one of these two quantum numbers. And for the explanation of so-called strangeness, the discrete transformation properties of the equation, of which there are several, will probably suffice. To this extent, we can therefore give a thoroughly satisfactory answer now to the first difficulty. 
A second, still greater difficulty was that there exist particles – the strange particles, in fact – that have a half-integer isotopic spin and simultaneously an integer ordinary spin. Or, put the other way round, an integer isospin and a half-integer ordinary spin. That was at first incomprehensible in such a theory, because the operator Psi always creates a half-integer spin, so to speak. If it is therefore applied an odd number of times, it creates a half-integer value for both kinds of spin, and if it is applied an even number of times, it creates an integer value. Now, this problem is associated with another very peculiar problem, which I have also already touched on earlier, that indeed certain ...

Werner Heisenberg remembers his friend Wolfgang Pauli
(00:00:15 - 00:03:42)


Nature Rejects Fermi’s “Theoretical Masterpiece”

After his return from the AAAS meeting and other lectures in the US, Wolfgang Pauli attended a conference on nuclear physics, which Enrico Fermi and his group had organized in Rome to enhance their knowledge in this promising new area of research. They did so very successfully, and already in 1938 Fermi would receive the Nobel Prize in Physics “for his demonstrations of the existence of new radioactive elements produced by neutron irradiation, and for his related discovery of nuclear reactions brought about by slow neutrons”. As Pauli later recalled, Fermi “immediately expressed a lively interest in my idea and a very positive attitude toward my new neutral particles”.[5] After the discovery of the neutron by Chadwick in early 1932[6], it was Fermi who proposed to name the much smaller particle postulated by Pauli “neutrino”, i.e. the little neutron. On the occasion of the 7th Solvay Conference on Physics, which was dedicated to the structure and properties of the atomic nucleus and took place in Brussels in October 1933, the neutrino issue was discussed, and Fermi hypothesized that the neutrino does not per se exist within the nucleus but rather emerges as a result of beta decay, an assumption that laid the foundation for the theory of the weak interaction. Since Nature rejected Fermi’s paper on this subject as too “speculative”, he first published it in an Italian journal. Fermi’s first doctoral student Emilio Segrè, who shared the Nobel Prize in Physics 1959 with Owen Chamberlain “for their discovery of the antiproton”, regarded this paper as Fermi’s “theoretical masterpiece”, as he mentioned in Lindau in his historical account of those years:


“The neutrino was christened by us in Rome”, says Emilio Segrè
(00:28:18 - 00:32:23)


According to Fermi’s theory of beta decay, a neutron transforms into a proton by emitting an electron and an (anti-)neutrino. Likewise, a proton can transform into a neutron by emitting a positron and a neutrino. The former happens during nuclear fission in reactors, the latter during nuclear fusion in the sun. Billions of solar neutrinos per square centimeter reach and cross the earth every second. Second only to photons, neutrinos are the most abundant particles we know. Some of them were also created in the Big Bang, others stem from supernova explosions or from the interaction of cosmic rays with the earth’s atmosphere, or can be generated in high energy accelerators. Neutrinos are everywhere. For every atom in the universe, there are approximately a billion neutrinos. But why are they then so difficult to detect that they once were dubbed “ghost particles”? Because they are the only elementary particles that are subject solely to the weak interaction, undisturbed by any of the other fundamental forces (aside from gravitation). They hardly ever interact with matter. They almost never collide with a proton to form a neutron and a positron. They travel through sun and earth unhindered and fly through our bodies without doing harm or leaving traces. Nevertheless, they are of utmost importance for our existence. Without the weak force and without neutrinos, the sun would not shine but explode and no life could have developed, as Rudolf Mößbauer points out:
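The two conversions described in this paragraph can be written out as reaction equations – a standard rendering of Fermi's beta-decay theory, added here for clarity:

```latex
% beta-minus decay, as in reactor fission products:
n \;\rightarrow\; p + e^{-} + \bar{\nu}_{e}
% beta-plus conversion, as in solar fusion:
p \;\rightarrow\; n + e^{+} + \nu_{e}
```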


Rudolf Mößbauer (1988) - The solar neutrino problem

Ladies and gentlemen, Our good, old Sun – and a lot of physics is contained in these two adjectives – our good, old Sun usually stands in the sky, and doesn't attract much attention. Sometimes this changes. In the last few days we've read in the newspapers that there were large solar protuberances that also affect communications on Earth, but only from time to time. Today, too, the Sun will also affect you a bit, because you'll come half an hour later for lunch because of the Sun. Today I'd like to tell you something about the Sun, but not about the Sun’s surface. We know that it shines, we know that it emits a lot of light, which means that we know how much energy it emits. The Sun's luminosity is well known. But the light that the Sun emits is very old light. It needs hundreds of thousands of years to strenuously make its way from the place where it was generated in the Sun's interior to the Sun's outer surface. Today I don't want to talk to you about this outer surface, but instead about the Sun's interior. And we still know relatively little about the Sun's interior. We have very precise theoretical ideas, and we're also very convinced that these ideas are right. The Standard Solar Model was mentioned briefly. However, we actually still have relatively few technical measurement data. What do we know about the Sun's interior? Well, we have two experimental possibilities; one is more recent, and the other is a bit older. The more recent one is this helioseismography, where, in the final analysis, the Doppler effect is utilised that has already been mentioned today, and where we measure local wavelength shifts in the light emitted from the Sun. So, you have the Doppler effect. The biologists and chemists had it explained earlier, they now know how the Doppler effect works when a bear approaches you, and then goes away again – this was demonstrated. 
I'm going up to somewhat higher frequencies, we can take an example – just for the chemists and biologists again – where we say: if you drive through a red light in your car and the police catch you, then you can tell the policeman that, at the speed you were driving, you saw the light as green. You can only do this in Germany where we don't have speed limits, but in America you generally pay 1 dollar or something for every mile above the speed limit. And if you approach the speed of light, then it gets very expensive. Then it's better to just pay for the red light instead. So, I don't wish to say anything about this seismographic effect, not least because we continue to have major problems with interpreting these data, although there's a lot of such data. I'd like to say something about the reactions occurring in the Sun's interior, and specifically about the nuclear fusion processes that occur there. The Sun is of course the only star, the only star out of an incredible number of stars that is so near to us in our Universe that we can really conduct experiments, and really perform measurements on the Sun relating to its interior. So, it's the prototype of a star, and everything we imagine about stars, such as with nuclear reactions, with energy production and so on occurring in the Sun's interior, at the moment we can only test this in experiments on our Sun. And that's what I want to talk about now. Now, the medium we test with are those strange particles that come from the Sun's interior – only around a quarter of the Sun's radius is burning, creating nuclear fusion there – down to us here on Earth, whose existence we can prove, and which we call neutrinos. These neutrinos were introduced into physics as a hypothesis around 60 years ago by Wolfgang Pauli. 
It was then around 1956 or so when Cowan and Reines were able for the first time to show in an experiment that they really exist, but even today we know relatively little about these particles, and so solar experiments are interesting not only in order to find out something about the Sun's interior, but also to learn something about neutrinos themselves. Now, if I speak about neutrinos, then these are interesting particles to the extent that they are the only particles that are exposed only to the weak interaction. I'm not talking about gravitation, that's even weaker; so, I'm talking only about the electromagnetic, strong and weak interactions, and of these the neutrino feels only the weakest. It has no charge, it's not exposed to the other interactions, and so you can study the weak interaction in its pure form with neutrinos, and this has been done at the big accelerators, meaning CERN, Fermilab, and Stanford and so on. I'm speaking here about relatively low-energy neutrinos, let's say MeV or typical nuclear-physics energies that are much more difficult to measure. Here, too, I can also now mention, for example, that there are particles that are even more difficult to measure, namely that neutrino background radiation that must have remained from the Big Bang in the same way as the electromagnetic background radiation – if the entire Big Bang theory is real, and not just a hypothesis. In other words, we have the measurement of Penzias and Wilson, which was already mentioned here at the conference: the electromagnetic background radiation has been found. The neutrino radiation has not yet been seen because of its extraordinarily low energy. Measuring neutrinos at 3 degrees or so is very, very difficult, and particularly for the younger among you I can say that there are a lot of Nobel Prizes just lying around waiting to be won; if a good idea occurs to you, then just go with it. Now, the neutrinos I'm talking about here are neutrinos from the Sun's interior. 
They reach us in the range of some MeV energies, and they tell us what's going on in the Sun's interior. I've already said that these are particles that are subject only to weak interaction. And to make this clear to you, I'd like to say that these neutrinos, for example, enter the Earth at the front, and exit it at the back, and don't even notice that the Earth is there at all. And instead of this small, measly planet I could also take larger formations such as the Sun. They enter the Sun at the front if they come from the cosmos, and exit the Sun at the back – practically unaffected – these neutrinos lose almost nothing at all. Now, how can we then measure such particles at all with terrestrial experiments, perhaps with a detector, let's say, typically with 1 cubic metre of volume. In other words, incredibly tiny compared with this huge formation I've just mentioned. You can only measure it by having enormous numbers of these particles available, by letting them run through your detector, and perhaps every once in a while one gets caught. And you hang on to this one particle, and write lots of papers about it, and you then speak of this rare event. So, if you want to conduct neutrino experiments you need huge volumes of such particles. And there are, in practice, two sources to get such numbers of particles, such large numbers of particles, and these two sources that we have are, firstly, nuclear reactors and, secondly, solar fusion. In nuclear reactors we have the situation that we have nuclear fission, in other words, uranium-235 or plutonium-239 or one of these unpleasant things. So, they are split in the nuclear reactors, and as the heavy nuclei are very rich in neutrons and as they decay into medium-sized nuclei during this fission, we have too many neutrons, and the nuclei need to get rid of them in their excited states, and this occurs through this beta decay, the neutrons decay into protons, electrons and electron-antineutrinos. 
An analogous beta decay process occurs in the Sun. Here it's the other way round; you take protons and make neutrons, positrons and electron-neutrinos out of them because fusion processes occur in the Sun. In other words, you make one helium-4 nucleus out of 4 protons in each case. In the final analysis, this is what happens in the Sun, I've written this down here. It says, if I want to use 4 protons to make one helium-4 nucleus, which consists of 2 protons and 2 neutrons then, of course, I need to convert 2 protons into 2 neutrons in each case, that is to let this process run. So, the nuclear reactors produce electron-antineutrinos in large quantities because we have many such reactions per second, and the Sun makes exactly as many neutrinos because we have many of such processes per second in the Sun, in line with its size. That these reactions here exist is not something I want to talk about today. We've been trying for many years to measure with these things. We're now at the point of doing experiments with these here. That there are many of such processes is something we know from the theories of Bethe, Weizsäcker etc., but we'd like to verify this through experiments, of course. So, these neutrinos are what matter, and you might now first ask – "Why is the process so complicated?" That is, why is the situation such that we can't just forget the neutrinos at this point? That beta decay, for example, works with these two particles alone – this could also be possible. It's troublesome that nature had to add this third particle, troublesome also where tests are concerned that we have to take such things into account. It would be much simpler if we had just two of such particles. Now, I can say straight away that it's very, very important that these particles also come out here. 
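The bookkeeping Mößbauer sketches here – four protons in, one helium-4 nucleus out, two of the protons converted into neutrons – corresponds to the net pp-chain reaction. Written out (the released energy of roughly 26.7 MeV is a textbook value, not quoted in the lecture):

```latex
4\,p \;\rightarrow\; {}^{4}\mathrm{He} + 2\,e^{+} + 2\,\nu_{e} \quad (+\,\approx 26.7\ \mathrm{MeV})
```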
And, in fact, Wolfgang Pauli introduced them for the following reason: at first it was thought that there were actually only two decay products in this transition, in this beta decay. If this were so, then the well-defined energy of the initial state, which primarily goes into kinetic energy, would have to manifest itself in this particle also carrying a well-defined energy, in other words, a monochromatic line. Exactly as here. But this wasn't so. We've known since the 1920s that the beta spectrum is a continuous spectrum. Which led Wolfgang Pauli to say, in order to rescue the principle of the conservation of energy in physics, that something else needs to come out. And this "something else" that was supposed to come out was first thought to be a photon, which was the obvious choice. So it was said, this beta process is a three-body decay and not a two-body decay, and there's also a third particle that comes out. And so now, experimental physicists among you, have a go at trying to find this photon … Many people, including Lise Meitner, spent many years trying to find this photon and couldn't find it, and Wolfgang Pauli, who, as a theoretician, was a particularly sarcastic man, naturally at first ascribed this to the experimenters' ineptitude. As it became clear over the course of the years that this point of view was untenable, Wolfgang Pauli simply invented a new particle, and this was a major achievement at the time – namely, a particle that was such that experimental physicists couldn't see it, and which has only the weak interaction, so that it escapes their observations. And so the weak interaction entered physics, and this particle is what we refer to today as the neutrino – the neutron had not yet been discovered then, and so on, but all of that is unimportant for now. Now, why are neutrinos so important for us, rather than merely making the experiments more difficult? They are important for us because these processes are weak interaction processes. 
If we did not have this, then it could be attributed to an electromagnetic or strong interaction process, but this is now really a weak interaction process, in other words, it proceeds extremely slowly. I explained before what weak interaction really means, and extremely slowly means that in this manner we have found a way to exist, because our life has a certain timespan, of course, it doesn't happen in just seconds. Although it's ridiculously short compared with the age of the cosmos, it's still quite a lot compared with the time unit of the second. I'd like to remind you that we have processes in the cosmos that actually occur very much faster. Let me just remind you that we had a supernova explosion last year, and such a thing explodes in a matter of seconds. Such a star explodes in a matter of seconds, and if the stars that we have in the cosmos were to turn to dust within seconds, then our existence would be extremely imperilled. The fact that our Sun has already been living for around 4.5 billion years, and will live for as long again, means that we had time to develop ourselves, and we can thank weak interaction for this, we can thank these neutrinos for our existence. So, it's quite crucial that this weak interaction is not just a caprice of nature, but that it has made life possible. Now, people have been looking for these neutrinos. I'd now like to briefly show you an image of the Sun, and I'm going to show you this proton-proton fusion again. I'd like to remind you again, we know, of course, that we not only know about the existence of this proton-proton fusion from the calculations of Bethe and Weizsäcker and so on, but we also know that there is such a thing as a hydrogen bomb. It's not a very pleasant example, but the only thing that I can provide here that we have directly available. Here, the same processes are underway, the same that are responsible for energy release in the Sun. 
And for decades attempts have been made here on this Earth to get this controlled nuclear fusion going, this fusion of protons to helium. This would be the energy source that would make it possible for us to then get rid of nuclear energy, but it doesn't exist yet. Although we're getting ever closer to it, we've not yet got far enough to ignite. And even if we could ignite, which might perhaps be the case within the next ten years, then from there to the actual exploitation of this energy source, this practically inexhaustible energy source, could take an incredibly long time indeed. Now, in the Sun it was the case that it reached a state around 4.5 billion years ago that corresponds roughly to the state we have today as far as temperature and radius are concerned. So, it has been burning pretty much constantly for 4.5 billion years, and will continue to do so for about as long again. What happens, of course, is that the hydrogen – indicated here in red – gets consumed over the course of time, so that the helium portion grows. The burning core is indicated here, this is just the actual interior of the Sun, the interior of the Sun becomes ever richer in helium at the expense of the Sun's hydrogen. You see, a lot of the hydrogen has been burnt off – here's where we are today – but there's still a lot left, and we don't need to worry about the near future, about the Sun running out of fuel. Now in the next image I want to show you a little more precisely the neutrino spectrum that comes to us from the Sun. And here on the lower scale I'm first showing you the energies that are emitted there. Here I'm showing you the typical range of solar neutrinos – let's say from 0.1 MeV up to 10 MeV – and written up here in some units is the flux, the number of neutrinos that arrive here on this Earth per second and square centimetre. I'd just like to draw your attention to two ranges of this neutrino spectrum, namely this green branch and this blue branch. 
The blue branch is the actually important proton-proton fusion process. You see the logarithmic units here; this is absolutely dominant, in other words, almost everything, 98 percent of the solar energy, is connected with the blue branch. But the green branch is also important. It's a quite small, uninteresting ancillary process of the Sun, but it's currently important for experimental physicists because it's the only branch that we've been able to measure. It has not been possible to measure the blue branch to date. In other words, we had to make do with this green range here that arrives with paltry intensities, but high energies. And the high energy is important, firstly because the reactions that are used there have thresholds, and unfortunately they are so high that we only get the upper part as far as energy is concerned. And what helps us in this context is that the effective cross sections grow quadratically with energy, meaning all our apparatuses prefer the high energies. It's easier, or only possible at all, to measure them. The reaction that was used here is chlorine-37 + neutrino gives argon-37 + electron. This is the famous experiment conducted by Ray Davis and his colleagues, which was performed in South Dakota in a gold mine. You need to go so deep below the Earth because the count rates are ridiculously low due to the weak interaction. So you get a magnitude of around, let's say, 10^10 neutrinos per second and square centimetre, and the count rate that Ray Davis got in a massive detector of 620 tonnes, 620 tonnes of a chlorine substance, was around 1 neutrino every two days that manifested in this detector. Now, I've given this reaction here according to its threshold value. The threshold lies here for this reaction, and we can only measure what's above it, and you see it is essentially this green branch that was measured, because of the high threshold. 
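The numbers quoted here – a 620-tonne tank and roughly one captured neutrino every two days – can be turned into a per-atom capture rate. The following back-of-the-envelope sketch uses standard values for the molar mass of C2Cl4 and the natural chlorine-37 abundance, which are not given in the lecture:

```python
# Order-of-magnitude check of the Homestake count rate quoted in the lecture:
# 620 tonnes of C2Cl4, about one captured neutrino every two days.
# Molar mass and Cl-37 abundance are standard reference values, not from the lecture.

AVOGADRO = 6.022e23
MOLAR_MASS_C2CL4 = 165.8          # g/mol
CL37_ABUNDANCE = 0.242            # natural isotopic fraction of Cl-37

mass_g = 620e6                    # 620 tonnes expressed in grams
molecules = mass_g / MOLAR_MASS_C2CL4 * AVOGADRO
cl37_atoms = molecules * 4 * CL37_ABUNDANCE   # 4 chlorine atoms per molecule

captures_per_second = 1 / (2 * 86400)         # one event every two days

# Capture rate per target atom in Solar Neutrino Units
# (1 SNU = 1e-36 captures per target atom per second).
snu = captures_per_second / cl37_atoms / 1e-36
print(f"Cl-37 target atoms: {cl37_atoms:.2e}")
print(f"Implied capture rate: {snu:.1f} SNU")
```

The result comes out at a few SNU, which is indeed about a third of the roughly 8 SNU then predicted by the Standard Solar Model – the factor-of-3 deficit Mößbauer goes on to discuss.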
The shocking thing about this experiment – I've got a few details here, perhaps I should show you this first before I report about the shock that this generated in physics … So, this is Ray Davis and his colleagues in the Homestake Mine in South Dakota. Here's the reaction again. I also said that the detector is very big. Perhaps it's not uninteresting to say that this liquid that you see here, this C2Cl4, is the same liquid that chemical cleaning companies use to remove fat stains from your jackets. It's a very cheap liquid, which made it affordable to create these enormous detectors in this gold mine. And the shocking thing was that, of course, you can now, in principle, calculate according to the Standard Solar Model how many neutrinos should arrive, and only about a third of them actually arrive. Now, you might say that this is not so shocking, but the shocking thing is the fact that the Standard Solar Model is a very tricky model. It includes relatively few parameters, and if you fiddle around a little with the parameters then the entire Sun goes up in smoke, then it doesn't have the radius, temperature and energy radiation and so on that it has. Which means, the Standard Solar Model is a model that many, many people believe in. There are many others, mainly theoreticians, who derive a living, of course, from the fact that they diverge from the Standard Solar Model. People don't quite believe in these theories, and they're unwilling to give up the Standard Solar Model. But this factor 3 that is there is quite disturbing. Deviations of 10 percent or so can be explained, but a factor of 3 is not so easy to accept unless you want to throw overboard a lot of physics in which we believe. Now, the difficulty with explaining this factor of 3 is this: one explanation could be that the Standard Solar Model isn't right. And the problem is that this green branch, this small side branch – we're not quite sure that we understand it. 
It depends to an incredible extent on the temperature in the Sun's interior. It goes roughly as the 17th power of the temperature. And so we still have a few excuses, but we would no longer have these excuses if we could measure the blue branch, because the blue branch is directly connected with the Sun's luminosity, which is very well known. In other words, if we could measure the blue branch and did not find the full flux of the spectrum there, then actually nothing can be wrong with the Sun any longer. Then something must be happening to the neutrinos on their way from the place where they are generated in the Sun's interior to the Earth; they must be doing something strange. This means that, if we could find such deviations, we would have a chance to learn something about the neutrinos, especially about the mass of neutrinos, because this is something that we haven't known at all to date. Today, we all assume that neutrinos have mass; it's just that we have no idea how big it is. We assume that they have a mass because, if they had no mass, there would need to be a symmetry principle in nature that disallows this mass, exactly as is the case with photons. There we have the gauge invariance that says that the photon rest mass is zero. We would need precisely such a symmetry principle for neutrinos, but nobody has seen one anywhere to date, and it's a bit unlikely that such a principle would have escaped us up to now – so a mass of exactly zero is improbable. For this reason, they should have a mass, but we don't know how big it is. And theoreticians give us some indications, with an upper limit currently, let's say in the case of electron neutrinos, at 100 electron volts, and a lower limit perhaps at 10^-6 electron volts, so that whenever we measure one, the theoreticians can say, "Look, that's what we predicted" – but it's also the case that these limits can be readily extended upwards or downwards if need be. 
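The "17th power" Mößbauer mentions is worth making concrete: with such a steep power law, even a small uncertainty in the core temperature is amplified enormously in the predicted boron-8 ("green branch") flux, while the pp ("blue branch") flux, pinned to the well-known luminosity, is not. A minimal sketch, assuming a pure T^17 scaling:

```python
# Sensitivity of the boron-8 neutrino flux to the solar core temperature,
# modelled as a pure power law flux ~ T^17 (the exponent quoted in the lecture).

EXPONENT = 17

def flux_ratio(temp_ratio: float) -> float:
    """Relative change of the B-8 neutrino flux for a given relative
    change of the core temperature, under the T^17 power-law model."""
    return temp_ratio ** EXPONENT

print(f"1% hotter core: flux x {flux_ratio(1.01):.2f}")
print(f"5% cooler core: flux x {flux_ratio(0.95):.2f}")
```

A core only a few percent cooler than the model assumes would thus cut the predicted green-branch flux by more than half – which is why this branch alone leaves the Standard Solar Model "a few excuses".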
In other words, as an experimental physicist you have no indications of what these masses should actually be, except that they are not incredibly large if the particles are stable. So, if we could measure the blue branch, we would have a chance to verify whether the Standard Solar Model is right, or whether something's wrong with it. Either way we learn something: if we find the full flux, then the model is right, and if we don't find the full flux, then something must be amiss with the neutrinos. Now, what is the problem, why don't we measure the blue branch? Well, it can be measured. And down here I've written a reaction with which it would be possible, namely the reaction gallium-71 + solar neutrino gives germanium-71 + electron, and this reaction has a very low threshold. It's here, and you see that, if I go up here, then I get most of this spectrum if I use this reaction instead of that one. The reason why this has not been done long ago is, as I said previously, that the one substance is very cheap whereas the other is incredibly expensive – so it's purely a question of price. And indeed, almost ten years ago, an American-German collaboration – the group at the Max Planck Institute for Nuclear Physics in Heidelberg on the German side, and Brookhaven National Laboratory on the US side – came together to conduct this experiment, which is one of the important experiments of this decade. The German side was able to raise its one-third of the money for the gallium. They needed around 50 tonnes of gallium at the time, which was equivalent to around 20 million dollars back then. The Americans failed to raise their two-thirds. This was probably due to the fact that it wasn't easy to raise 20 million dollars; it's relatively easy to find a million, and also relatively easy to find 500 million, but in between there's a range that's difficult to negotiate within committees. It's too high for low-energy physics, and too low for high-energy physics. 
And after years of frustration, this collaboration was dissolved, and the decision was then taken to try to raise the money in Europe, and this is precisely what has now happened. There's now a European collaboration, the European Gallex Project – Gallex stands for gallium experiment – and this collaboration consists of the following members. First there's a big group, the Max Planck Institute for Nuclear Physics in Heidelberg. This group is responsible for making the counters. Then there's a lot of chemistry in this experiment, and the responsibility for this lies with the Karlsruhe Nuclear Research Centre, the groups in Rehovot at the Weizmann Institute in Israel, and here at the end is the Brookhaven National Laboratory again. They still haven't paid their entry subscription, which has now shrunk, as I said, from 20 million dollars to 1.5 million dollars. They still haven't paid it yet, and we very much hope that they will do so in the not too distant future. So, the chemistry is being performed by these three groups. Then there's our group in Munich; we're responsible for the data processing. Then there are the Italian groups, because the experiment – as I will come on to explain – is being conducted in Italy, in Milan, and particularly in Rome. And then there are the French groups here in Grenoble, in Paris and in Nice, who are responsible for the so-called source experiment, which I'd also like to talk about if I have time. Now, what does this Gallex Experiment consist of? I've already shown you the reaction; it's able to register the low-energy portion of the neutrinos. It's an experiment – the reaction is up here again – that uses just 30 tonnes of gallium. These 30 tonnes of gallium are used in the form of gallium chloride, a highly acidic solution; the whole thing amounts to around 83 tonnes. And the idea is now to capture the neutrinos from the Sun in this tank of 83 tonnes that we're going to use. 
In this tank of this substance, we have around 10^29 gallium nuclei, and of these 10^29 gallium nuclei one will be converted into a germanium 71 nucleus every few days, if all goes well. And the problem is first to fish out this one nucleus from these 10^29 nuclei, then to hound it through a certain amount of chemistry, then to get it into a proportional counter, and then to measure in this counter the transition from germanium 71 back to gallium 71. This then provides the indication that the process actually occurred previously from left to right, and that an absorption of a neutrino actually happened. The half-life of this germanium 71 is of the order of 11 days. In other words, this fishing of nuclei will be conducted every 14 days, so you have to get them out relatively quickly; the extraction takes around a day. And so we'll fish out not even 14 nuclei, perhaps just 10 nuclei, which then need to be treated chemically and put into a proportional counter. Now, how is it at all possible to fish out an individual nucleus from 10^29 nuclei? It's not as difficult as it seems. You can do it with efficiencies of around 98 percent, and in the following manner: gallium chloride has the pleasant property that the neutrino absorption converts it into germanium 71 chloride, and since gallium is trivalent while germanium is tetravalent, the tetravalent germanium doesn't feel at home in the trivalent environment, which gives it a tendency to move out relatively easily. A bit of neutral germanium carrier is then put into this solution, and the entire thing is blown out with air or helium gas. When you have this mixture of gas with a few germanium chloride molecules in it, you then need to clean it – this is where all the chemistry comes in – and you need to take care that you don't lose it somewhere along the way.
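The interplay of production and radioactive decay here lends itself to a quick back-of-the-envelope check. The following minimal Python sketch (the one-capture-per-day production rate and the 11.4-day half-life are illustrative round numbers consistent with the talk, not official Gallex specifications) reproduces the "perhaps just 10 nuclei" expected per 14-day extraction cycle:

```python
import math

half_life_days = 11.4  # 71Ge half-life ("of the order of 11 days" in the talk)
lam = math.log(2) / half_life_days  # decay constant per day

def atoms_at_extraction(production_per_day, exposure_days):
    """Saturation curve for constant production P against decay:
    N(t) = (P / lam) * (1 - exp(-lam * t))."""
    return production_per_day / lam * (1.0 - math.exp(-lam * exposure_days))

# Assumed rate: roughly one neutrino capture per day, extracted after 14 days.
n = atoms_at_extraction(1.0, 14.0)
print(round(n, 1))  # -> 9.4
```

Because decay eats into the inventory, the yield saturates well below the naive 14 atoms, which is why frequent extraction and near-perfect chemical recovery matter so much.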
So we have a quite large series of chemical processes, which I only understand in part so far and about which I would be better advised to say nothing in this lecture, finally resulting in a conversion of germanium chloride into germane. This substance is called germane, and we think it's very fitting that in this experiment in particular a French element, gallium, stands at the start, and a German element, germanium, in the form of germane, at the end. This is already a very good omen for this experiment. Now, this germane is then mixed with xenon, the actual counter gas, in a counter, and the counter – I must say this – has an active volume of around half a cubic centimetre; the Heidelberg-based Max Planck Institute has been developing such counters for ten years. This is also an interesting story in fact. If you have one nucleus that is activated per day, you of course need to take care that you work in an extremely background-free environment, that you work very cleanly – in other words, you don't have any students with dirty fingers around. You need to work extremely cleanly, but even this is not yet enough; it also needs to be backed up by refined electronics that make it possible to distinguish between correct and false pulses. I should perhaps in this connection also point out that there is a competitor to this Gallex experiment, namely a Russian group that is currently conducting this experiment in the Caucasus, in the so-called Baksan Valley. I've been there a few times, and I'm very familiar with the situation there. The Russians have the huge advantage over us that they already have the gallium. In fact, they have 60 tonnes of gallium available there in large tanks. We're not going to have our full amount of gallium until the end of next year; in fact, we still need approximately the equivalent of one year's global production of gallium for this experiment.
But this will soon change, because gallium production is now operated at many sites. In particular, it's also underway in Japan now, because gallium arsenide seems to be an important substance in the semiconductor industry, which is why people are interested in it. It's perhaps also quite interesting from the financial side. A lot of money was raised for these 30 tonnes of gallium, partly by the German Federal Ministry of Education and Research, and partly by the Max Planck Society – around 22 million deutschmarks, which is quite a lot of money – but the gallium isn't used up by this experiment, of course. So, every few days we convert a nucleus, but even that nucleus returns, as you'll have noted, so that when the experiment is finally concluded – it will start at the end of next year and, according to our extrapolations, last for around four years – we can return this gallium to the market after around six years, and it may well be that we then get a lot more money for it than was originally invested. Quite apart from the experiment, it was perhaps a good investment. In any case, the Russians already have their gallium, so they're two years ahead of us; but we have the detector, and the Russians don't have their detector yet. And it's not yet quite clear what will happen. It's also interesting that Brookhaven National Laboratory, with its wealth of experience in chemistry, is involved with us, while Los Alamos National Laboratory is involved with the Russian group, so that now, in the age of perestroika, we have the interesting situation that, for the first time, two American national laboratories face each other across the Iron Curtain, and what comes of that is something we will see over the course of some years. It's certainly also very good that this important experiment is not being done twice in the same way – the Russians use a different chemical extraction.
Namely, they're not working with gallium chloride, but instead with gallium metal. It's a very good thing that such an important experiment is being conducted at two different locations, as this will of course boost the results' credibility massively. If we find the full flux in this experiment, then everything will be okay: the Standard Solar Model is correct, and nobody will give two hoots about it, if you will. If we then find less flux, then we must ascribe this to neutrino properties – neutrino masses, neutrino mixing angles – and, of course, nobody will believe us if this isn't confirmed by independent experiments, although we have another possibility, which I will speak about shortly, that we can use to make this experiment more credible. I'll now briefly show you an image of how an artist imagines we conduct the experiment. So, we're going to have a large tank here in the middle where we put the substance, the gallium chloride. As it's a highly acidic liquid, this is not all that trivial. We then have the major chemistry going on here, and we also have, of course, for an emergency, a catch basin down here, so that if something happens we don't lose the gallium – after all, it's a lot of money that we can then recover. The entire thing happens underground, where we have the huge advantage that we don't need to go into a gold mine, where the operating conditions are very difficult: it's very hot, and the access routes are very narrow. You need to spend a huge amount of time talking with students to persuade them to go and spend a long time down there – this type of doctoral work lasts five years, of course, and it's not much fun. So, we have a much more elegant method: we do it in Italy, in the Gran Sasso National Laboratory that's being built there.
And first I'll show you – for my American colleagues – the whole of Italy here, and for the Europeans the central section enlarged. So this is central Italy; here is Rome, and around 150 kilometres east of Rome lies the Abruzzo massif, in which the Gran Sasso is located. Through this Gran Sasso the Italians have built a motorway, and in the middle of this motorway tunnel, where the rock above it is at its highest, large underground caves were built into the sides of the mountain – the largest caves that have ever been created artificially – and this is the Italian neutrino laboratory. Assergi is the nearest town, and L’Aquila is the nearest larger city, and this is where our experiment, among others, is currently being set up. A lot of other experiments are also planned at this site, and if Mr. Rubbia were here, he'd be able to report on another such experiment. This underground laboratory is extraordinarily large; I'm just giving you a small picture of it here. So, here are the caves that have been excavated, and here's one side of the motorway; the other isn't shown. Cars travel on the other one, while this side is still closed to traffic so that these laboratories can be constructed undisturbed. It would be impossible to do such a thing in Germany, where once a motorway is finally completed it naturally goes straight into operation. Here it has in principle already been finished for two years, but until now we've had the opportunity to get on with building undisturbed. Our laboratory is here, this small cavity that you see; I'll show it to you next as it was when it was still a building shell. A lot of large cavities are planned for other experiments, including for laser experiments and so on and so forth. I'm showing you just an old picture of how it looked in its rough state, and just this small cave here in which we will work; it looks something like you see it here.
Here you see one of our physicists, whom we have included to give you an idea of the dimensions of this cave. This cave has the advantage compared with gold mines that we can of course enter it with extraordinarily large vehicles. We can put all of our gallium in one tank, and this is extremely important because the ratio between outer surface and volume is thereby very favourable. The outer surface affects us because, of course, radioactivity intrudes everywhere – from the walls, from below, radon and so on, and naturally from the tank itself – and so this experiment offers a fantastic opportunity to minimise this problem. Now, I briefly mentioned that we are going to do something to boost this experiment's credibility, and what we will do is this: twice during the experiment we'll place into the middle of our detector – I'm showing you it again, here's the tank – an artificial neutrino source, which must compete with the Sun, in order to calibrate the experiment. We know to some extent, and in many cases a great deal, about the many, many individual steps that are required for a gallium atom that is converted here into germanium to be pulled out, pushed through the chemistry and finally land in the counter, and with what probability each step occurs. We know all, all, all of these steps very precisely, with the exception of the effective cross section, which we know only to perhaps around 10 percent, and nobody will really believe you if it hasn't been verified. For this reason, we'll introduce an artificial neutrino source, for which we can calculate precisely what production rate, how many neutrinos, we should measure in our counters. And if this number then agrees with the prediction, the credibility of this experiment is naturally immensely boosted.
So, this source has to be produced, and our French and American colleagues are mainly involved in this: the French colleagues in that they will carry out the irradiation in a reactor, producing an activity of the order of around 1 megacurie, and the Americans in that they will carry out the isotope enrichment for the source. We need around 120 kilogrammes of chromium, which we will use as natural chromium as the starting material. We will then enrich it, and we will then work with around 40 kilogrammes – and these are huge quantities – and this 1.5 million dollar admission price for the Americans is the price of manufacturing this enriched isotope that we need for the source experiment. And now, at the very end of my talk, I'd just like to show you what we can learn about neutrinos if things go well – if we measure less flux than the Standard Solar Model gives us. I'm showing you here a drawing with two simple parameters, a quite simple model, the simplest I can use. Drawn down here is the neutrino mixing angle. In the case of the quarks, this would be called the Cabibbo angle; here it's the mixing angle theta for potential lepton mixing, in logarithmic units. And drawn up here is the squared mass difference, the quantity that we can measure – in other words, this would be, for example, M1^2 minus M2^2, where M1 and M2 are the masses of two potential neutrino mass eigenstates. What can namely happen on the way from the Sun to the Earth is that the neutrinos that are produced as electron neutrinos convert into muon neutrinos, revert into electron neutrinos, but perhaps also convert into tau neutrinos, or into others that we don't know about yet, so that we lose electron neutrinos – and this would mean that our counter measures less, because it's sensitive to only the one type. This conversion – and that's a long story – occurs not so much in the vacuum or quasi-vacuum between the Sun and the Earth, but in the Sun's interior.
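The two parameters of this plot are exactly what enters the standard two-flavour vacuum oscillation formula. As a hedged sketch (this is the generic textbook expression, not Mößbauer's own analysis, and the example numbers are arbitrary illustrations):

```python
import math

def p_survival(sin2_2theta, dm2_ev2, L_m, E_MeV):
    """Two-flavour vacuum survival probability:
    P(nu_e -> nu_e) = 1 - sin^2(2*theta) * sin^2(1.27 * dm2[eV^2] * L[m] / E[MeV])."""
    phase = 1.27 * dm2_ev2 * L_m / E_MeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Illustrative values only: maximal mixing, dm2 = 1e-10 eV^2,
# Earth-Sun distance ~1.5e11 m, a 0.3 MeV pp neutrino.
p = p_survival(1.0, 1e-10, 1.5e11, 0.3)
```

With zero mixing angle the survival probability is exactly 1, which is why a measured deficit points directly at nonzero mixing and mass differences.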
The reason is that electron neutrinos interact differently with the electrons in the Sun than all other neutrinos do, and this results in the conversion, or can result in it. How sensitive the experiment is to this conversion depends on these two parameters: on the mixing angle, which says how readily electron neutrinos convert into others, and on the neutrino masses, which enter once the mass eigenstates are not necessarily eigenstates of the weak interaction. Now, I've indicated here in blue the sensitivity range of the Gallex experiment to these two parameters. You see that, as far as the mixing angle is concerned, we're extraordinarily sensitive in the mass range around 10^-4, down to very small mixing angles; in the case of even smaller masses, we're less sensitive. I've also indicated for comparison, in red, another experiment that we conducted a few years ago, and about whose beginnings I've already spoken here: the Gösgen reactor experiment, in which we measured electron antineutrinos and looked for neutrino oscillations. We found nothing, but thanks to our measurement accuracy we could nevertheless set limits: these parameter combinations are no longer possible, and this red area is excluded by those earlier experiments. Back then it looked much more dramatic: if you cleverly show not a logarithmic scale but a linear one, the excluded region naturally overlaps with most of this image, and you see that neutrinos can then only exist in small ranges of these parameters. But now, when we replace the distance between the detector and the reactor that we had then, of up to 65 metres, by the distance between the Earth and the Sun, we gain a great deal more sensitivity, which is why I make use of a logarithmic scale here, and you see that this is the range that is accessible to our Gallex experiment.
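The gain from stretching the baseline from 65 metres to the Earth-Sun distance can be estimated with the usual rule of thumb that oscillations become visible once the phase 1.27·Δm²·L/E is of order one, so Δm²_min ≈ E/(1.27·L). A rough sketch (the 4 MeV and 0.4 MeV energies are typical illustrative values for reactor and pp solar neutrinos, not the experiments' exact thresholds):

```python
def dm2_reach_ev2(E_MeV, L_m):
    # Oscillation visible once 1.27 * dm2 * L / E ~ 1  =>  dm2_min ~ E / (1.27 * L)
    return E_MeV / (1.27 * L_m)

reactor = dm2_reach_ev2(4.0, 65.0)   # Goesgen-like: ~4 MeV antineutrinos at 65 m
solar = dm2_reach_ev2(0.4, 1.5e11)   # pp solar neutrinos over the Earth-Sun distance
print(f"{reactor:.1e}  {solar:.1e}")
```

The solar baseline wins roughly ten orders of magnitude in Δm² reach, which is the whole point of turning the Sun into the source.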
So, if mass differences lie in this range, then we will see them; but we might be unlucky with the whole thing – it could be that the neutrino masses lie precisely in the gap in between, which is accessible neither to this experiment, because its distance is large, nor to that experiment, because its distance is small. We would then need to make a particular effort. And there are actually a lot of groups that currently want to get into this range. I'm somewhat against this, because I think we should first wait for this experiment, which is in any case being conducted for quite other reasons. If it supplies a positive result – in other words, if we are in the blue range – then those experiments are unnecessary. If that's not the case, then they can still be done a few years later. The reason why I'm against experiments in this range at the moment is, firstly, that you can bring the limits down by at most a power of ten relatively easily, and secondly, that beyond this it becomes incredibly expensive. So, the reason is essentially of a financial nature. The money would currently be better given to solid state physicists and biologists until this experiment is done, and if we then see that we must get into this range, we can still do it in a few years' time. But this is all still a long way off. As I said, in around two years we'll begin, and around four months after the start we'll have the first data that tell us whether the full flux comes from the Sun or not. And we will then measure for a total of around four years, and perhaps during this time I'll have the opportunity to report on the current status of the experiment. So, in around six years from now – if all goes well – the experiment will be concluded, and we will know whether we lie in the blue range with our neutrino characteristics or not. Since I don't wish to look that far into the future, I'd like to conclude at this point.

Rudolf Mößbauer on the Importance of Neutrinos for Life
(00:12:02 - 00:13:33)



“We Have Definitely Detected Neutrinos”

Chargeless as they are, neutrinos cannot be detected directly. In 1956, however, Frederick Reines and Clyde Cowan, probably encouraged by previous considerations of Bruno Pontecorvo, succeeded in proving the neutrino’s existence. They set up their experiment close to the Savannah River nuclear reactor, where sufficient amounts of (anti)neutrinos were produced to give them a chance to hit upon a signal in the scintillator fluid of their detection tank once in a while. Their reasoning was as follows: When an (anti)neutrino collides with a proton, a neutron and a positron are generated in an inverse beta decay. The positron meets an electron in the fluid, resulting in their annihilation and a gamma-ray signal. The neutron tumbles through the fluid until, after typically five microseconds, it is absorbed by a nucleus, which consequently flashes out a second gamma-ray signal. Hence, if they recorded two gamma-ray signals separated by an interval of about five microseconds, they had indirectly detected a neutrino. Carlo Rubbia (Nobel Prize in Physics 1984 together with Simon van der Meer “for their decisive contributions to the large project, which led to the discovery of the field particles W and Z, communicators of weak interaction") explained this experiment in one of his Lindau lectures:
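The delayed-coincidence logic Reines and Cowan relied on can be sketched in a few lines. The code below is a toy illustration with invented timestamps, not their actual analysis: it scans a time-ordered list of gamma signals for a prompt/delayed pair separated by a few microseconds.

```python
def delayed_coincidences(times_us, window=(3.0, 8.0)):
    """Return pairs of signal times (in microseconds) whose separation falls
    inside the window, mimicking the prompt (annihilation) plus delayed
    (neutron capture) signature. The window bounds are assumed values."""
    times = sorted(times_us)
    pairs = []
    for i, t0 in enumerate(times):
        for t1 in times[i + 1:]:
            dt = t1 - t0
            if dt > window[1]:
                break  # times are sorted, so later hits are even further away
            if window[0] <= dt <= window[1]:
                pairs.append((t0, t1))
    return pairs

hits = [2.0, 7.1, 40.0, 100.0, 105.2]  # hypothetical hits: two candidates, one lone signal
print(delayed_coincidences(hits))  # -> [(2.0, 7.1), (100.0, 105.2)]
```

The isolated hit at 40 µs is rejected, which is exactly how the timing signature suppresses random background flashes.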


Carlo Rubbia (2012) - Neutrinos: a Golden Field for Astroparticle Physics

Thank you very much. Today I will be describing something different to you – something which I think is getting progressively more important, which is the nature and the presence of neutrinos as particles in astroparticle physics. Now, let me say the discovery of the Higgs boson at the CERN LHC, as said by Professor Veltman a minute ago, will crown the successful standard model, and will call for verifications of the Higgs boson couplings to the gauge bosons and to the fermions. The neutrino masses and oscillations represent today a main experimental evidence and a possibility, a real possibility, for physics beyond what he has so carefully described to you as the standard model. The only elementary fermions whose basic properties are still largely unknown, neutrinos, must naturally be one of the main priorities to complete our knowledge of the standard model. Still unexplained, the incredible smallness of the neutrino masses, compared to those of the other elementary fermions, points to some specific scenario which is all to be elucidated. The astrophysical importance of neutrinos of course is immense. So is their role in cosmic evolution. The beginning of experimental neutrino physics starts in 1956 with the first observation of antineutrinos at a reactor, the Savannah River reactor, by Cowan and Fred Reines. And it took about 40 years precisely in order to get the Nobel Prize: Fred Reines got the prize in 1995. The basic idea: the source is a reactor, rich in fission products, whose beta decays produce electron antineutrinos. And it is essentially uranium isotopes and plutonium isotopes which are mainly responsible for producing these beta decays. There are 6 electron antineutrinos produced for each fission, which gives you an enormous amount of neutrinos. There are 2*10^20 electron antineutrinos per GW thermal, each and every second. And we are talking about a very large amount of power. Reactors are very large.
Therefore the number of neutrinos is absolutely astronomical. The detection is relatively simple. It is based on a scintillator; it's based on inverse beta decay. The electron antineutrino plus a proton becomes a positron plus a neutron. And then you measure the rate and you observe the spectrum – the only possible kind of experiment here is a disappearance experiment. In this graph here you see how the detection occurs. The neutrino comes in. It hits a proton and produces a positron, which is detected, and a neutron, and the neutron wanders around for a few microseconds and is then captured, producing capture gamma rays. So the delayed gamma-ray capture signal of the neutron and the prompt signal of the positron are used together to detect the events in question. In these experiments you don't see any other neutrinos, as you know, because the other neutrinos behave differently from this. Now many years have passed by; today we have much larger, huge experiments working on neutrinos. Let me mention a few of them. One is the Daya Bay experiment, which uses a set of 6 commercial reactor cores with something like 14.4 GW of total thermal power. And there are antineutrino detectors with a total mass of as much as 120 tonnes. And the experiment is shown in this graph here. Here are the reactors, and these are the places where the detectors are located. They are located underground, under an overburden of the order of several hundred metres of water equivalent, in order to remove the cosmic-ray background. This is a main experiment going on in China. Another experiment worth mentioning is the RENO experiment. Similar situation: you can see there 6 huge reactors producing neutrinos. Those neutrinos are detected by a near detector at 250 metres, and by a far detector under the mountain at 1.3 kilometres. And you can see here how the thing is located physically. And this whole thing is now presently operating, and it is in Korea.
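The astronomical reactor rate follows directly from the energy released per fission. A quick arithmetic check under the standard assumptions (~200 MeV per fission, ~6 antineutrinos from the beta-decay chains of the fragments) lands close to the 2×10^20 per gigawatt per second quoted above:

```python
# Energy per fission: ~200 MeV expressed in joules (1 eV = 1.602e-19 J)
E_per_fission_J = 200e6 * 1.602e-19

# Fissions per second needed to sustain 1 GW of thermal power
fissions_per_s = 1e9 / E_per_fission_J

# ~6 electron antineutrinos per fission from the fragments' beta decays
nu_per_s = 6 * fissions_per_s
print(f"{nu_per_s:.1e}")  # -> 1.9e+20
```

A few tens of percent of variation in the per-fission numbers moves this around, but the order of magnitude of 10^20 per GW per second is robust.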
Another important thing I would like to mention at this point is the fact that countries like Korea and China are becoming very strong and very successful in producing most important results in this field of neutrino physics. And finally the other important thing is the combination between accelerators and neutrino detectors. Here you can see a picture of the CERN to Gran Sasso neutrino beam. A beam from CERN, which you see here by the well-known lake already described by Martinus Veltman a minute ago, is emitting neutrinos which are traversing the Alps and travelling underground, up to 20 kilometres below the Po Valley. And then, because of the curvature of the earth, they come up again and they emerge in a laboratory, a very large laboratory, which is located at Gran Sasso, where you can see here the many various areas. And this is the place where the neutrinos arrive. So the neutrinos travel for something like 730 kilometres between the production point and the arrival point, and over this distance many, many new phenomena, as you will see, can occur – oscillation phenomena like this. Now briefly speaking, how many neutrino species are there in nature? This is the question that Martinus has already mentioned, with the fact of 3 quarks in 3 families. Neutrino oscillations established a picture consistent with the mixing of 3 physical neutrinos – neutrino electron, neutrino muon and neutrino tau – with the help of 3 mass eigenstates, v1, v2, v3, in a matrix operation. The actual masses of neutrinos are, believe it or not, so far unknown. All we know now are the differences in the squared masses between the eigenstates. But we do not have the actual masses, although we know they are very small. The sum of the strength of the couplings of all possible invisible neutrino states is observed by taking the Z0 particle.
Take the Z0 particle, let it decay; in the decay process some events are not visible, because neutrinos are not detectable. By counting also the events which go into invisible states, you can determine how many types of neutrinos are available, since all types of neutrinos should couple to the Z0. And you find their number is actually 3. It's not 35, it's only 3. However, the conclusion that the number of neutrinos is 3 holds only if the neutrinos, in similarity with the leptons, couple with unitary strength. You can also have some additional elements which spread the number 3 out to a much larger number of objects. At present the 3 experimentally measured weak coupling strengths are rather poorly known, leaving lots of room for more exotic alternatives. And there may be some evidence for the presence of a number of anomalies, what we call anomalies, related to neutrinos, which of course are highly speculative at the present moment. They are quite interesting, they are quite exciting, but very speculative. And the next years will be needed to confirm them either yes or no experimentally. And the story which we are going to mention, which is more or less the title of my talk, is the question of sterile neutrinos. What are sterile neutrinos? Sterile neutrinos are a hypothetical type of neutrino that does not interact via any of the fundamental interactions of the standard model except gravity. Since per se they will not interact electromagnetically, weakly or strongly, they are extremely difficult to detect. If they are heavy enough, they may also contribute to cold dark matter or warm dark matter in the universe. Sterile neutrinos may mix with ordinary neutrinos via a mass term. Evidence may be building after several experiments. There are 2 fundamental experimental anomalies which appear to be present in the experimental data.
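The counting Rubbia describes amounts to dividing the measured invisible width of the Z0 by the standard-model width per light neutrino species. The numbers below are the familiar LEP-era values, inserted here for illustration rather than taken from the talk:

```python
gamma_invisible_MeV = 499.0  # measured invisible width of the Z0 (LEP-era value)
gamma_per_nu_MeV = 167.2     # standard-model partial width per light neutrino species

# Number of light neutrino species coupling to the Z0 with full strength
n_nu = gamma_invisible_MeV / gamma_per_nu_MeV
print(round(n_nu, 2))  # -> 2.98
```

The result sits right at 3 – but only under the stated assumption of unitary coupling strength, which is exactly the loophole that sterile neutrinos exploit.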
One is a light sterile neutrino, with a mass squared of the order of 1 eV^2, from the observation of electron neutrinos starting from an initial muon-neutrino beam in an accelerator experiment. And a second effect is neutrino disappearance. When you count the number of neutrinos coming from a nuclear reactor, you find that you actually detect fewer neutrinos than should have been there given the production process. And therefore there is room for some neutrinos to have disappeared – to be there and not to be directly detectable. And the same situation is also indicated with very intense, megacurie electron-capture neutrino sources, which are also objects in which you take the neutrino emission from a source: you know how many neutrinos are coming from the source, you know how many you detect experimentally, and you find a deficit. So those deficits are at the present moment one of the main reasons of interest and concern about the possibility of sterile neutrinos. How many sterile neutrinos are there? Again, nobody knows. The argument here is an example, which you see in this drawing. You have one single sterile neutrino here. You have the 3 neutrinos – neutrino tau, neutrino mu, neutrino e, in this order; maybe they are also arranged in a different order, but anyway they are there – and then there is an additional, fourth sterile neutrino up there. This is a 3+1 model. Now, neutrino mu to neutrino e transitions at a very tiny level are expected from this process, because neutrino mu and the sterile neutrino may mix with each other, and neutrino e and the sterile neutrino also may mix with each other. And the fact is that these 2 processes are possible, and therefore you can have a situation in which you see an apparent neutrino e appearance in an abundant, dominant neutrino mu beam, because of the intermediary passage through a single sterile neutrino state, neutrino s.
Now current measurements seem to indicate that perhaps not only 1 sterile neutrino is available; there could be 2 sterile neutrinos. This is in some theoretical models. You can see here 2 sterile neutrinos getting together with the standard 3 normal neutrinos. These models are more complicated, more difficult, and nobody really knows whether nature has decided to be one way or the other. Let me also point out to you that if you want to have CP violation in the neutrino sector in your hands, the presence of at least 2 sterile neutrinos is preferable. Now, all that is theory. Now, what about experiment? The experimental story about sterile neutrinos started from the so-called LSND anomaly. This is an experiment which was performed some 20 years ago, in which a very high intensity proton beam coming from the proton accelerator at LAMPF was put on target. The pions produced in this way produce muons, which are then decaying into electrons, and the decays are associated with antineutrino e and antineutrino mu in this form. And the signal which you detect here is the presence of an anomaly: electron antineutrinos in a familiar situation which started from pions and muons, in which you should have antineutrino mu – antineutrino electrons instead of antineutrino mu. This is a very striking fact. You can see plotted here the ratio of the length of the path divided by the energy in MeV of the neutrinos – this is metres per MeV, about 1 or 2 metres per MeV, so it's a relatively short baseline, a few metres. And you can see here a very substantial peak; this is the signal. And all the other backgrounds are very much smaller. So you get a very strong excess of antineutrino electron events of unknown source – 87 plus or minus 22, etc. etc. – which has a probability of occurring by chance that is relatively small, but not that small: 0.2%. And this gives you a 3.8-sigma evidence for the oscillation.
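The significance quoted here is roughly the excess divided by its uncertainty. Using only the numbers mentioned in the talk (87 ± 22 events; the "etc. etc." stands for further uncertainties that pull the published figure to 3.8 sigma), the statistical-only estimate comes out close by:

```python
excess, stat_error = 87.0, 22.0  # events, as quoted in the talk (statistical error only)

# Naive significance in standard deviations, ignoring systematic uncertainties
significance = excess / stat_error
print(round(significance, 1))  # -> 4.0
```

Folding in systematics would lower this somewhat, consistent with the 3.8-sigma value Rubbia cites.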
Such a result was there for a number of years, unchallenged. A new experiment was then developed at Fermilab, called MiniBooNE. This experiment is more conventional. You take a standard booster accelerator, you put the protons on a target, the protons produce pions, the pions become muons, and the resulting neutrinos go through dirt and are caught by the detector. And this experiment has also shown the presence of the LSND anomaly, both in neutrinos and antineutrinos. You can see that from this graph. Here is the L over E, the length divided by the energy, and you can see the neutrino and antineutrino data both indicating a clear signal. The result is incompatible with an ordinary 2-neutrino fit, indicating an anomalous excess. And the excess occurs at an L over E of about 1 metre per MeV. Now this reported effect is, essentially within the experimental error, compatible with the LSND experiment, which as we well know was originally dominant in the antineutrino channel. You can see here both the old LSND results and the new MiniBooNE results; you see, within the very large statistics and errors, maybe something is there. This is certainly a big question mark which we have to understand better. Now back to the reactor experiments. Reactor experiments are the ones which were studied initially by Cowan and Reines. You can see a general plot of all we know about from present-day nuclear reactors. You can see here the distance from the reactor itself: this is 10 metres, this is 1 kilometre, this is 100 kilometres. And you can see here essentially the ratio of observed over predicted neutrinos. I mean, if nothing happened, if no oscillation occurred, the neutrino would be a stable object and the ratio would be 1: you make so many neutrinos and you collect the same number. But indeed this is not so, and this represents a very large amount of pieces of information which have become available.
You can see from this graph for instance that there is a solar neutrino anomaly, which is well known, well measured, and which occurs at a mass difference which is quite small, 10^-5 eV^2, in which essentially the neutrino electrons and neutrino mus are in fact oscillating with each other. And the phenomenon of this oscillation is producing a drop in the rate of neutrino electrons, which are the only ones seen by the reactor experiments. They don't see neutrino mu, they don't see neutrino tau, because the tau is too heavy and the mu is too heavy, but still the neutrino E is present there. So you see a cancellation, a very huge cancellation, in the number of neutrinos, produced by the solar neutrino anomaly. Another anomaly was discovered in the period from '86 to '98, and this is the so-called atmospheric neutrino anomaly, which relates essentially the neutrino mu with the neutrino tau. And you can see here a second peak right there in the experimental data, and the results are in agreement. And then you have the reason for the argument today. There is a certain part which is essentially terra incognita, in which you find lots of experimental results. You see them here, from many, many different groups, and they all happen to be some percent below the predicted value. You can see that all these results are compatible with an event rate R which is not 1 but is 0.93. And this is essentially an anomaly which we have to explain: Why do we have fewer neutrinos out of any and every reactor than we expect to have? The reason for that must be explained and so far it is unknown. A third piece of information is coming from the so-called gallium anomaly. The gallium anomaly comes from experiments which were used to study solar neutrinos. Neutrinos from the sun were detected in experiments, for instance in the Gran Sasso laboratory.
And in these experiments some measurements were done with a radioactive source, a source from which neutrinos are emitted with a megacurie intensity. And a similar thing was also done in Russia. The calibration signal is produced by intense artificial capture processes in chromium and argon. And you can see there the way in which the detector has been detecting, a relatively simple detector. And there again you find the same thing: there are fewer neutrinos than you would like to have. You can see that the ratio between the neutrinos you actually detect in the experiment and the number of neutrinos which you have produced by your source, that you have "bought", you calibrated, you generated, is less than 1. And this best fitted value may favour the existence of an undetected sterile neutrino, with a certain evidence of a few standard deviations. And a broad range again, roughly the same values which correspond to the LSND experiment and to the MiniBooNE experiment. So the mass difference squared is of the order of one eV^2, which is much greater than what the oscillations indicated in the curve. So the question which comes about is: Is there a unified approach here? You can see here the 2 kinds of experiments. These are the LSND anomalies, indicated there, which expect an antineutrino from an accelerator-driven source. And this is a possibility which is a whole spectrum now, not well identified, in the plane between the mixing angle sine squared theta and Delta m^2. So all this area is allowed by the results coming from the LSND and MiniBooNE experiments. And then you find the gallium and the reactor sources, which are producing something there. The most remarkable thing is that both these elements are within this very huge area, compatible with a new situation which occurs at the order of about 1 eV^2. And therefore the question is: Are they all true, are some of them true, or is nothing true? Are they induced by a common origin?
If they have a common origin, it's perfectly normal to expect that the total phenomenon of disappearance has to be much greater, because it is disappearance to any possible final state. Well, here we have a specific channel in which in fact the neutrino mu initially produced is transformed through the sterile neutrino into a neutrino electron, and the probability for that has to be smaller. So the question is: is there a unified approach to that? Now, the evidence for all this, as I said, is mounting. Let me very briefly summarise what I said. LSND, the old experiment, gave a 3.8 standard deviation. MiniBooNE was performed both on neutrino and antineutrino; the combination of these events is 3.8 standard deviations. The gallium absorption experiments have given us 2.7 sigma. Reactor experiments, as I've shown before, the beta-fission, beta-decay, give about 3 standard deviations. And there are some indications, which are not real proof but some sort of indication, from cosmology, which say that perhaps some sterile neutrinos are not incompatible with cosmology; maybe in some circumstances they could even be connected to some cosmological arguments. And the combined evidence of all these things is that we find a possible overall anomaly of about 1 eV^2. And let me tell you, the number for this is remarkable: it's 3.8 + 3.8 + 2.7 + 3 + 2. This is something which is worth looking at and understanding well. Will any of these observations survive a more complete analysis of the questions? For that we need some detector development. There is a major new detector development which is occurring today, a new powerful detector which is called the liquid argon bubble chamber. Those people who remember the old days, old neutrino physicists, can remember that about 30 years ago an experiment was performed with a bubble chamber called Gargamelle. And that was the first real evidence that neutral currents exist.
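One common way to make arithmetic sense of a list of independent significances like "3.8 + 3.8 + 2.7 + 3 + 2" is Stouffer's method, z = sum(z_i) / sqrt(n). This is a generic meta-analysis recipe used here only as an illustration, not the statistical treatment actually applied by these experiments (which are in any case not fully independent):

```python
import math

def stouffer_combined_z(z_values):
    """Combine independent significances (in sigmas) with Stouffer's
    method: z_comb = sum(z_i) / sqrt(n). Illustrative only; assumes the
    inputs are independent, Gaussian, one-sided significances."""
    return sum(z_values) / math.sqrt(len(z_values))

# The individual significances quoted in the talk:
# LSND 3.8, MiniBooNE 3.8, gallium 2.7, reactors ~3, cosmology ~2.
z = stouffer_combined_z([3.8, 3.8, 2.7, 3.0, 2.0])
print(f"combined significance ~ {z:.1f} sigma")
```

Under these (strong) assumptions the five hints together would sit well above the individual ones, which is the sense in which the combined evidence is "worth looking at".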
It is because of Gargamelle, where some neutrino-electron events were observed which were not supposed to be there, that we knew the Z0 particle was indeed a reality. And this started the whole program, and experimentally this was the argument which was used to establish essentially the standard model as a result of it, as beautifully explained a few minutes ago by Martinus Veltman. Now, the Gargamelle bubble chamber is a very complicated device. Now we have a new development which is coming out, which is called an electronic chamber, which has very similar properties of density, radiation length and collision length; in fact, it gives very similar events, except it's done electronically. And it uses a very large amount of target: many kilotons of detector instead of a few tonnes; 3 tonnes was the mass of Gargamelle. And it is essentially very similar from the point of view of the dimensions. Very quickly I will tell you what the basic idea of this is. Maybe I should skip all this because of time and show you more clearly how the thing works. In this chamber you have a sort of detector in which you measure essentially the drift time on a family of drift wires. The numbers of the drift wires are indicated there, and the drift time of the free electrons in the liquid is indicated there. And whenever there is a pulse coming after a certain delay, you have a signal which you record. Those signals are all collected by the electronic system and they become a way of reconstructing the track of an event. Let me show you. There is a very large ICARUS-type detector, a liquid argon detector, now operating underground in Hall B of LNGS. And it is quite a substantial piece of hardware, as you can see, very large in terms of equipment, materials and everything else. And the basic element here is the fact that to make such an experiment work you need to have incredible purification of the liquid argon.
Liquid argon, which comes initially as part of air, contains a lot of oxygen. Oxygen is of course an electronegative gas. In order to be able to run this experiment you have to have on site liquid argon with an oxygen content of a few parts per trillion of the mass of the system. And this is of course quite a small number. Events are reconstructed with this system. These events are perfectly identical to the events you can see in the Gargamelle bubble chamber. You can see here on top an event with a shower, on the bottom an event without a shower. The events are contained for instance in this volume here, and when you expand it you can find each and every track. So it's really a true visual detector which allows you to see everything which happens. And it will be the basic tool which will be used in order to reach this kind of explanation of these phenomena of sterile neutrinos. Now let me also point out to you the key element in this: in a liquid argon bubble chamber you can separate extremely nicely the single ionising events, which are electron events, from a pi zero event, which produces 2 gammas. And you can see them separated here. And the separation is absolutely superb. These experiments are in fact the only ones which can do it in this way. There is one more point which is important. We need at least 2 detectors in order to be able to detect this situation, and essentially, as I said, the L over E has to be the same as observed by the previous anomalies. You need imaging detectors. There are magnetic spectrometers in there. You have to detect both neutrinos and antineutrinos. And you have to collect a very huge number of events; we are talking about millions of events to study and answer these questions and identify very clearly both the neutrino mu and neutrino E phenomena. The neutrino facility at CERN, which is under discussion now, is described here. There are 2 detectors available.
One in the near position and one in the far position, and they are both liquid argon detectors. Fermilab is also advancing with a similar program. Here you can see 2 liquid argon detectors, one very close, 200 metres from the accelerator, the other one at about 700 or 800 metres from the accelerator, which is also doing a similar program. Now, the important thing about this is that these accelerator experiments are looking at 2 identical detectors at different distances, and therefore all Monte Carlo calculations cancel out. The neutrino spectra in the near and the far position are closely identical, as you can see from this graph, and therefore any deviation from perfect proportionality is a signal. If you find perfect proportionality between the 2 positions, then neutrino oscillations in this region are actually eliminated. Now the experiments are in fact going to look both at the LSND and MiniBooNE neutrino and antineutrino anomalies. They are going to look at the GALLEX plus reactor oscillatory disappearance. An oscillatory disappearance may also be present in neutrino mu signals. And we have to compare neutrino and antineutrino to see whether there is something associated with CP violation which behaves differently between neutrinos and antineutrinos. As I said at the beginning, in the absence of these anomalies the signals of the two detectors should be a precise copy of each other for all the experimental signatures, and without the need of any calculation. If you see a difference, that is the signal. Now these events are in fact quite remarkably different. You can see in this graph here the line of the LSND prediction window. You can see there are various places there, 1, 2, 3, 4, which are possible places in which the event may occur. And you see from the graph to the left that indeed the various values 1, 2, 3 and 4 correspond to vastly different angular distributions observed by the liquid argon TPC.
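The cancellation in the two-detector comparison can be sketched as follows: the unknown flux and cross-section enter the near and far event rates identically, so their ratio reduces to a ratio of survival probabilities. The oscillation formula is the standard two-flavour one, and the distances and energies below are illustrative assumptions, not the actual experimental layout:

```python
import math

def survival_probability(delta_m2_ev2, sin2_2theta, L_m, E_MeV):
    """Two-flavour survival probability:
    P = 1 - sin^2(2 theta) * sin^2(1.27 * dm^2 * L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * delta_m2_ev2 * L_m / E_MeV) ** 2

def far_over_near(delta_m2_ev2, sin2_2theta, L_near_m, L_far_m, E_MeV):
    """Ratio of far to near event rates for two identical detectors.

    The unknown flux and cross-section appear in both rates and cancel,
    leaving only the ratio of survival probabilities; this is the sense
    in which the Monte Carlo calculations drop out of the comparison."""
    return (survival_probability(delta_m2_ev2, sin2_2theta, L_far_m, E_MeV)
            / survival_probability(delta_m2_ev2, sin2_2theta, L_near_m, E_MeV))

# With no oscillation the ratio is exactly 1 at every energy; any
# energy-dependent deviation from 1 is the oscillation signal.
print(far_over_near(1.0, 0.0, 200.0, 800.0, 1000.0))   # no oscillation
print(far_over_near(1.0, 0.1, 200.0, 800.0, 1000.0))   # oscillation: below 1
```

The design choice this illustrates is exactly the one stated in the talk: if the two positions are perfectly proportional the oscillation hypothesis is excluded, with no absolute flux prediction needed.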
And by the way the intrinsic neutrino E background is also shown, and you can see from that clearly that in fact these systems are quite remarkable. This really would be a definitive experiment. There are lots of curves showing that things are quite correct; I will skip those. And essentially the disappearances or anomalies are also indicated there. So the whole system is now coming to a situation in which this kind of new experiment will give us all possible solutions. There is a real revolution in the sterile neutrino searches. And there are several other experiments, besides the one I have described, which are being developed from this. There are radioactive sources in a very large number of different experimental proposals throughout the world. There are the reactors, which are repeating the measurements again. There are stopped-pion beams which are also being provided. And there are decay-in-flight beams like the one we are doing now at CERN or Fermilab and others. All this enormous amount of effort is coming together. We have seen here today that there are 114 institutions which are working on this subject. This is something of the magnitude of an LHC-type experiment in terms of a total community of people. Clearly this population is now working extremely hard in order to find out whether the phenomena are true or not. Let me briefly use the 2 more minutes I have left to show you some of the other examples. Here for instance is the example of a reactor experiment which is called Nucifer, which is characteristic of studying the neutrino oscillations very close to the target. This is done in France, at CEA. And it certainly is a device which will improve quite a bit on the measurement of this terra incognita. The other example here is Daya Bay in China, which I mentioned initially, where they want to produce a very large megacurie-type radioactive source which will produce neutrinos.
Those neutrinos will be produced inside this block, and there will be detectors around it to collect the information at multiple source locations to probe sterile oscillations. And you can see here a very large detector; this is a very large swimming pool with a tremendous amount of equipment and money and effort involved in this kind of project. In fact, what you can observe with this method is a tiny little difference between what it should be with the oscillation and without the oscillation. So the experiments are not simple. They are difficult and complicated. But the technology is really quite well advanced. You can see here the graph showing where the sterile neutrino appearance should be. And this is the line which you predict theoretically from the experiment; you can see, maybe, maybe not, you will get an answer. Putting together all these people working on sterile neutrinos, you have a complete list of them here, and this list shows that in fact there is a very large, rich scientific community which is now working on this subject. The idea is that the sterile neutrino area is this one, and each and every one of these experiments is introducing some kind of additional information, which in fact will be of great value in order to understand whether this is just something fake or whether there is a real window of opportunity beyond the standard model. In fact you can see here the various alternatives, which are the various types of experimental programs, the vast neutrino program. I think I will stop here. Thank you very, very much.

Carlo Rubbia on the first detection of neutrinos in 1956
(00:01:29 - 00:03:20)


“We are happy to inform you that we have definitely detected neutrinos from fission fragments by observing inverse beta decay of protons”, Cowan and Reines wrote in a telegram to Wolfgang Pauli on 14 June 1956. “The message reached Pauli at a conference in Geneva. He interrupted the proceedings and read the telegram out loud.”[7] On the agenda of the Royal Swedish Academy of Sciences, though, neutrinos had no priority then. Reines received a share of the Nobel Prize in Physics almost forty years later, in 1995, when his colleague Cowan had long passed away. The first Nobel Prize in Physics which was explicitly related to neutrino research was awarded in 1988 in equal shares to Leon Lederman, Melvin Schwartz and Jack Steinberger, amazingly not for the discovery of the first, but of the second type of neutrino, the muon neutrino. The muon is an unstable particle with a lifetime of roughly two microseconds. It has more than 200 times the mass of the electron and had originally been discovered in the mid 1930s. The discovery of its neutrino in the early 1960s was important because it established the existence of a second family of elementary particles. In the mid 1970s Martin Perl discovered the tau as the major lepton of the third family of particles. For this achievement he shared the Nobel Prize in Physics 1995 with Frederick Reines. In 2000, finally, the tau neutrino was discovered by the DONUT collaboration at Fermilab. In contrast to electron neutrinos, muon and tau neutrinos are not produced in the sun or in nuclear reactors, but in laboratory accelerators or in exploding stars. Along with their respective six-packs of quarks, the three lepton pairs constitute the three families of elementary particles, which are themselves integral parts of the standard model of particle physics. Its basic construction was elegantly outlined by Steven Weinberg (Nobel Prize in Physics 1979), one of its founding fathers, in the first of his two Lindau lectures:


Steven Weinberg (1982) - Prospects for further unification in the theory of elementary particles

Physicists naturally try to see phenomena in simple terms. You might say that the primary justification for doing elementary particle physics with all its expense and difficulty is the opportunity it gives us of seeing all of nature in somewhat simpler terms. Great progress had been made a few years ago, say from the late 1960’s to the mid 1970’s, in clarifying the nature of the elementary particles and their interactions. Then starting about the mid 1970’s, we began to confront a series of problems of much greater difficulty. And I would have to say that very little progress has been made. I would like first of all today to remind you of what the present understanding of elementary particle physics is, as it was already formulated by the mid 1970’s. And then for the larger part of my talk discuss the attempts that have been made since the mid 1970’s to go beyond this to the next level of simplicity. The present understanding of elementary particle physics, I would say, is based on three chief elements. First of all, there is a picture of the interactions of the elementary particles as being all extremely similar to the one interaction that was earlier well understood, the electromagnetic interaction. You know that the electromagnetic interaction is carried by a massless particle of spin-1, the photon, which for example is exchanged between the electron and the proton in the hydrogen atom. In the present understanding of elementary particle physics there are 12 “photons” - and I should write it like this, meaning that the word is put in quotation marks. There are 12 “photons” which carry all the forces that we know of between the elementary particles. These 12 “photons” comprise first of all the familiar old photon, which is emitted say by an electron or by any charged particle, and then three siblings, three family members called intermediate vector bosons, a W-, a W+ and a Z0, which are emitted for example when leptons change their charge.
When electrons turn into neutrinos or neutrinos turn into electrons. And the neutral one, the Z0, is emitted by electrons and neutrinos when they don’t change their charge. Similarly the W and the Z are also emitted by quarks when they do or do not change their charge respectively. In addition to the four “photons” of the electroweak interactions, there are eight similar particles known as gluons, that Sam Ting has already mentioned, which are emitted when quarks change, not their charge, but a different property which has been humorously named their “colour”, so that a green quark may turn into a red quark emitting a red-green gluon. There are three colours and hence there are eight gluons. You may ask why not nine gluons and I will tell you if you ask me privately later. Now, in electromagnetism we have not only the general idea of a photon but a very clear picture of a symmetry principle of nature which determines the properties of the photon and determines in particular all of its interactions. The principle known as gauge invariance. In a sense, from the point of view of the theoretical physicist, the photon is the way it is because gauge invariance requires it to be that way. The 12 “photons” of the elementary particle interactions as we know them are also governed by a principle of gauge invariance, but the group of gauge transformations is larger and it is known mathematically as SU(3) x SU(2) x U(1). The SU(2) x U(1) is a 4-dimensional group which governs the four particles of the electroweak interactions; the W and the Z transmit the weak nuclear force, which gives rise to radioactive beta decay. So this whole set of interactions is called the electroweak interactions. And the SU(3) governs the eight gluons which give rise to the strong nuclear forces. This is sometimes called the 321 theory. The theory of the eight gluons by itself is what is known as quantum chromodynamics.
The electric charge of the electron, say, is in fact just a peculiar weighted average of coupling constants G and G prime associated with these two groups, SU(2) and U(1). G and G prime play the same role for these groups of gauge transformations that the electric charge played in the older theory of quantum electrodynamics, and the electric charge in fact is given by a simple formula in terms of them. And similarly there is another coupling constant. A coupling constant is just a number that tells you the strength with which these particles are emitted and absorbed. There’s another coupling constant that tells us how strongly gluons are emitted, say when quarks change their colour, known as G sub S for the group SU(3). Now, this is a very pretty picture, especially since, based as it is on familiar old ideas of gauge invariance, it requires us really to learn very little new. Always to be preferred. But there’s an obvious difficulty with it, that is that gauge invariance requires that the vector particles, the spin-1 particles that transmit the force, have zero mass. Just as for example electromagnetic gauge invariance requires the photon to have zero mass. But of course the W and the Z do not have zero mass. They have masses which are so large that no one so far has been able to produce them, although we have strong reasons to think we know where they are. The explanation for this is now universally believed to be that the gauge symmetry, although precise and exact, in no sense approximate, is subject to a phenomenon known as spontaneous symmetry breaking. That is, these are symmetries of the underlying equations but they are not realised in the physical phenomena. This is a lesson that elementary particle physicists learned from solid state physicists who understood it much earlier than we did. That symmetries can be present at the deepest level and yet not apparent in the phenomena.
The symmetries are just as fundamental as if they were not broken but they’re much harder to see. Because the electroweak symmetry of SU(2) x U(1) is spontaneously broken, the W and the Z have masses, the W mass is greater than 40 GeV, the Z mass is greater than 80 GeV, and the precise values are determined by an angle which just basically tells you the ratio of these two coupling constants. The angle is measured in a great variety of experiments and on the basis of that we believe the W will be at a mass of about 80 GeV and the Z will be at a mass of about 90 GeV. Of course we anxiously await confirmation of that. The second of the three ingredients, or the three elements on which our present understanding is based, is the menu of elementary particles. I won’t dwell on this, there are six different varieties of quarks, of which five have been discovered and the sixth is anxiously awaited. Each one of these varieties, sometimes called flavours of quarks, according to quantum chromodynamics comes in three colours, so that altogether there are 18 quarks. And then in parallel to the quarks there are doublets of leptons, neutrino-electron, and then the muon, behaving like a heavier electron, has its own associated neutrino and the tau lepton has its associated neutrino. Physical processes are most simply seen in terms of these quarks and leptons. So for example, when a neutron decays, a state which was originally an up quark and two down quarks of three different colours turns into two up quarks and a down quark of three different colours, a W- being emitted, which then turns into an electron and an antineutrino. This menu of elementary particles is in no sense forced on us by theory, except for the structure of doublets of quarks and leptons and colour triplets of quarks. The fact that there are 6 flavours is just taken from experiment. And it has to be regarded as just an independent empirical foundation of our present understanding.
The third of the foundations of the present understanding of physics is more mathematical but I think equally important: the idea of renormalisability. Renormalisability is very simply the requirement that the physical laws must have the property that whenever you calculate a physically relevant quantity, you don’t get nonsense, you don’t get divergent integrals, you get an integral which converges, that is, a finite number. I think we’ll all agree that that is a desirable quality of a physical theory. The physical theories that satisfy that requirement are actually very easy to distinguish. If an interaction of some sort has a coupling constant G, like the coupling constants G and G prime and GS that I discussed earlier, and if that coupling constant has the dimensions of mass to some power minus D, let’s say a negative power, D is positive. And when I talk about dimensions, I will always be adopting the physicists’ system of units in which Planck’s constant and the speed of light are one. Then, because the coupling constant has the dimensions of a negative power of mass, the more powers of the coupling constant you have in the matrix element for any physical process, the more powers of momentum, which have the dimensions of a positive power of mass, mass to the first power, you will have to have in the integrals. So that as you go to higher and higher order in the coupling constant, you get more and more powers of momentum in the integrals and the integrals will therefore diverge worse and worse. That’s a bad thing, that’s not a renormalisable theory. The allowed interactions, the renormalisable theories, are therefore those with coupling constants which are not negative powers of mass, but which are either dimensionless like the electric charge of the electron or a positive power of mass like for example any mass. A physically satisfactory theory ought to be one which contains only such coupling constants.
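This power counting can be written out explicitly. In four spacetime dimensions (with hbar = c = 1) a boson field has mass dimension 1, a fermion field 3/2 and a derivative 1, and the coupling of an operator of dimension D carries mass dimension 4 - D. The sketch below applies this bookkeeping to two classic examples; it is an illustration of the counting rule, not anything computed in the lecture:

```python
from fractions import Fraction

def operator_dimension(n_boson_fields, n_fermion_fields, n_derivatives):
    """Mass dimension of an operator in 4 spacetime dimensions
    (hbar = c = 1): each boson field counts 1, each fermion field 3/2,
    each derivative 1."""
    return Fraction(n_boson_fields) + Fraction(3, 2) * n_fermion_fields + n_derivatives

def is_renormalisable(dim):
    """The coupling has mass dimension 4 - D, so the interaction is
    renormalisable only if the operator dimension D is 4 or less."""
    return dim <= 4

# QED vertex (psi-bar gamma psi A): two fermion fields, one boson field.
qed = operator_dimension(1, 2, 0)     # D = 4, dimensionless coupling e
# Fermi four-fermion interaction: four fermion fields.
fermi = operator_dimension(0, 4, 0)   # D = 6, coupling G_F ~ 1/mass^2
print(qed, is_renormalisable(qed))
print(fermi, is_renormalisable(fermi))
```

The four-fermion case is exactly the kind of dimension-6 operator, suppressed by two powers of a heavy mass, that reappears later in the lecture in the discussion of baryon-number violation.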
Now, that is enormously predictive because the energy densities or the Lagrangians that determine physical laws always have a fixed dimensionality, mass to the 4th power. And I remind you that I am using units in which Planck’s constant and the speed of light are one. So energy has the unit of mass and length has the unit of inverse mass. So therefore, if a coupling constant appears in an energy density and it multiplies an operator, a function F with a dimensionality mass to some power D, then the dimensionality of the coupling constant must be just the dimensionality of the Lagrangian minus D, that is, 4 minus D. So therefore, in order to keep the dimensionality of the coupling constant positive or zero, we must have the dimensionality of the operators 4 or less. But almost everything has positive dimensionality: fields have dimensionality 1 for boson fields or 3/2 for spinor fields, space-time derivatives have dimensionality 1, and therefore, as you make an interaction more and more complicated, its dimensionality inevitably increases. But the allowed interactions have dimensionality only 4 or less. And therefore the principle of renormalisability limits the complexity of interactions that are allowed in physics. This is just what physicists need: they need something to limit the set of theories they allow themselves to think about. The limited set of simple theories that we allow ourselves to think about are those with interactions whose dimensionalities are 4 or less and therefore are sharply limited in the number of fields and derivatives that they can have. In fact so powerful are these limitations that principles A, B and C determine a physical theory uniquely, except for a few free parameters. The free parameters are things like the electric charge of the electron, the Fermi coupling constant of beta decay, the mixing angle between the Z and the photon, a scale parameter of quantum chromodynamics which tells us where the strong coupling constant begins to become very large.
And of course all the quark and lepton masses and masses for other particles called Higgs bosons, that I haven’t mentioned. But aside from this fairly limited set of free parameters, not as limited as we would like but still not an enormous number of free parameters, the physical theory of elementary particles in their observed interactions is completely determined. And not only determined but agrees as far as we know with experiment. One of the features of this understanding, which I think is perhaps not as widely emphasised as I would like - to me it seems one of the most satisfactory aspects of what we know about physics - is that the conservation laws of physics, that were painfully deduced from experiment in the 1930’s, 1940’s, 1950’s and 1960’s, are now understood as often approximate consequences of these deeper principles. The theory, as constrained by gauge invariance and by renormalisability and other fundamental principles, cannot be complicated enough to violate these symmetry principles. So for example, as long as you assume that certain quark masses are small, the strong interactions must obey the symmetries of isotopic spin invariance and chirality in the "eightfold way" of Gell-Mann and Ne'eman, which were previously deduced on the basis of data. Whatever the values of the quark masses, the strong and the electromagnetic interactions must conserve the quantities known as strangeness, charge conjugation invariance and, with certain qualifications, parity and time reversal invariance. And independently of the values of the quark masses, and without any qualifications at all, the strong, weak and electromagnetic interactions must conserve baryon and lepton number; there is no way of writing down a theory complicated enough to violate these conservation laws, a theory that would be consistent with the principles that I’ve described. This understanding of the physical origin of the symmetry principles leads us to a further reflection.
We now understand why, let us say, strangeness is conserved. Strangeness, the property that distinguishes a K meson from a Pi meson or a hyperon from a nucleon, is conserved not because the laws of nature contain on some fundamental level a principle of strangeness. Strangeness is conserved as a more or less accidental consequence of the theory of strong interactions known as quantum chromodynamics. The theory simply cannot be complicated enough to violate the principle of strangeness conservation. Because strangeness conservation can be understood without invoking strangeness as a fundamental physical conservation law, we are led to reflect that perhaps it is not a fundamental symmetry, and perhaps when you widen your scope beyond the strong interactions you will see that strangeness is not conserved. That is in fact of course true, and it has been known to be true since the first days that people started talking about strangeness conservation. The weak interactions don’t conserve strangeness; for example a hyperon is able to decay into an ordinary proton or neutron, violating the conservation of the strangeness quantum number that distinguishes the two. In exactly the same way we can now treat baryon and lepton number. By the way, baryon number is just a number which counts the number of quarks, it's 1/3 for each quark, and lepton number is a number which counts the number of leptons, it's 1 for each lepton. The conservation of baryon and lepton number for example prohibits processes like the proton decaying into a positron and a Pi 0, which would otherwise be allowed. Because the conservation of baryon and lepton number is understood as a dynamical consequence of the present theory of electroweak and strong interactions and the principle of renormalisability, there is no reason to doubt that, when we go to a wider context, this conservation law will be found to be violated. Because it is not needed as a fundamental conservation law.
It is understood without its being needed on a fundamental level. A great deal of attention has been given to this possibility, that baryon and lepton number are not conserved. Suppose for example that there are exotic particles with masses much greater than the W or the Z. Let me take the mass capital N as just some characteristic mass scale for a new world of exotic particles that have not yet been discovered, and by exotic I mean rather precisely particles with different quantum numbers under the gauge symmetries SU(3) x SU(2) x U(1) than the known quarks and leptons and gauge bosons. The world that we know of is just the world of those particles that have much smaller masses than this new scale capital N. And that world is described, since we’re not looking at all of physics but only at part of physics, not by a fundamental field theory but by what's called an effective field theory. We should describe our present physics in terms of an effective Lagrangian. That effective Lagrangian, since it’s not the ultimate theory of physics, might be expected to contain non-renormalisable as well as renormalisable terms. In the same way, when Euler and Heisenberg in the mid 1930s wrote down an effective Lagrangian for the scattering of light by light at energies much lower than the mass of the electron, they wrote down a non-renormalisable theory, because they weren’t working at a fundamental level but only with an effective theory that was valid as a low energy approximation. The effective theory should contain non-renormalisable terms, and as I indicated before, these are terms whose coupling constant has the dimensionality of a negative power of mass. That is, we have operators O with dimensionality D and coupling constants with dimensionality 1 over mass to the D minus 4. And what mass would this be?
Well, it would have to be the fundamental mass scale of the particles that have been eliminated from the theory, the same way the electron is eliminated from electrodynamics in the Euler-Heisenberg theory of the scattering of light by light. This tells us then that the reason that physics appears to us to be dominated by renormalisable interactions at low energies is not because the non-renormalisable interactions aren’t there, but because they’re greatly suppressed by negative powers of some enormous mass. And we should expect in the physics of low energies to find not only the renormalisable interactions of the known electroweak and strong interactions but much weaker, subtler effects due to non-renormalisable interactions suppressed by very large masses in the denominator of the coupling constant. There has been work by myself and Wilczek and Zee to catalogue all possible interactions of this sort up to dimension-6 or -7 operators. The lowest dimensional operators that can produce baryon violation turn out to be dimension-6 operators and hence, according to the general rules I’ve given you, are suppressed by two powers of a super large mass. A catalogue has been made of these dimension-6 operators, which have the form quark, quark, quark, lepton, and it turns out that they all satisfy the principle that, although they violate baryon and lepton conservation, they violate them equally, so that for example the proton can decay into an antilepton. The neutron can also decay into an antilepton, the neutron can decay into e+ Pi-, but the neutron cannot decay into a lepton, the neutron cannot decay into e- Pi+. And there are other consequences of the simple structure of these operators, something like a Delta I = 1/2 rule: the decay rate of the proton into a positron is 1/2 the decay rate of the neutron into a positron.
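Weinberg's dimensional argument can be sketched in a few lines. The numbers below (a 1 GeV probe energy and a 10^14 GeV heavy scale) are illustrative assumptions for this sketch, not values fixed by the lecture:

```python
# Suppression of non-renormalisable operators in an effective field theory.
# An operator of mass dimension D carries a coupling ~ 1/M**(D-4), so its
# low-energy effects at scale E are suppressed by roughly (E/M)**(D-4).

def suppression(D, E, M):
    """Dimensionless suppression factor (E/M)**(D-4) of a dimension-D operator."""
    return (E / M) ** (D - 4)

E = 1.0    # GeV, a typical low-energy (hadronic) scale, assumed
M = 1e14   # GeV, an assumed super-heavy mass scale

# Dimension-5 operator (the kind that generates neutrino masses): one power of 1/M
print(suppression(5, E, M))   # 1e-14

# Dimension-6 operator (quark-quark-quark-lepton, baryon violation): 1/M**2
print(suppression(6, E, M))   # 1e-28
```

The dimension-5 operator is suppressed by only one power of the heavy mass, which is why lepton number violation (neutrino masses) is expected to be a much larger effect than dimension-6 baryon number violation.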
We can say all these things with great confidence without being at all sure that protons and neutrons actually decay, that is, decay at an observable rate. The decay rate of the proton, let us say, will be suppressed in the matrix element by two powers of a super large mass, and there’s sure to be a coupling constant factor like the fine structure constant. You square the matrix element and multiply by a phase space factor, the proton mass to the 5th power, to get a decay rate. The big unknown in this formula for the proton decay rate is of course the super heavy mass scale. We know the proton is more stable than, well, its lifetime is longer than 10 to the 30th years, and therefore this mass must be very large indeed, it must be larger than about 10 to the 14th GeV. There are other effects that violate known symmetries. Lepton number is violated by an operator that has dimensionality not 6 but only 5. And this produces neutrino masses of the order of 300 GeV squared divided by the super heavy mass scale; that’s a very low neutrino mass, less than 1 electron volt if the mass scale is greater than 10 to the 14th GeV. Now, there is other debris which might be expected to be found in the low energy world, and I simply won’t have time to discuss this. In a sense gravity itself can be regarded as the debris in our low energy effective field theory of a more fundamental theory that describes physics at a mass scale above 10 to the 14th GeV. Why in the world should there be a mass scale so much larger, twelve orders of magnitude larger than the highest masses that we’re used to considering? A possible answer comes from the general idea of grand unification. Grand unification is very simply the idea that the strong and electroweak gauge groups are all parts of a larger gauge group, which is here simply denoted G.
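The order-of-magnitude arithmetic behind these two numbers can be checked directly. This is a rough sketch of the rate formula just described, with round values for the constants and all order-one factors ignored:

```python
import math

# Order-of-magnitude estimate from the talk: proton decay rate
#   Gamma ~ alpha**2 * m_p**5 / M**4      (natural units, GeV)
# Given the bound tau > 1e30 years, solve for the super-heavy scale M.

alpha = 1 / 137.0            # fine structure constant
m_p = 0.938                  # proton mass in GeV
GEV_INV_SECONDS = 6.58e-25   # hbar in GeV*s: 1 GeV^-1 = 6.58e-25 s
YEAR = 3.156e7               # seconds per year

tau_bound = 1e30 * YEAR / GEV_INV_SECONDS    # lifetime bound in GeV^-1
M = (alpha**2 * m_p**5 * tau_bound) ** 0.25  # invert Gamma = 1/tau for M
print(f"M > ~{M:.1e} GeV")                   # roughly 1e14 GeV

# The dimension-5 operator gives m_nu ~ (300 GeV)**2 / M
m_nu_eV = (300.0**2 / M) * 1e9               # convert GeV to eV
print(f"m_nu < ~{m_nu_eV:.2f} eV")           # below 1 electron volt
```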
Just as the electroweak gauge group is spontaneously broken down to the electromagnetic gauge group, giving masses to the W and the Z, and that’s why the W and the Z are so heavy and have not yet been discovered, the grand gauge group G is assumed to be broken at a very high energy scale, M, into its ingredients SU(3) x SU(2) x U(1). And one coupling constant will hopefully turn out to generate the strong and the electroweak coupling constants, in the same way that the two electroweak coupling constants combine together to give the electric charge. Another hope here, equally important, is that the quarks and leptons would be unified into a single family, so that we wouldn’t have red, white, blue and lepton colours but we would have one quartet for each flavour of quarks and leptons. Models which realise some of these ideas were proposed beginning in 1973, starting with the work of Pati and Salam, and then Georgi and Glashow, and then Fritzsch and Minkowski, and then many other people. But an obvious difficulty with any model of this sort is the fact that the strong coupling constant, as its name implies, is much stronger than the electroweak couplings. Sam Ting told us that the fine structure constant for the strong interactions is about 0.2, and we all know the fine structure constant for the electromagnetic interactions is 1 over 137. How can two such different strengths of force arise from the same underlying coupling constant G sub G? The answer which is now, I think, the most popular was proposed in 1974 by Georgi, Quinn and myself. Our answer was that these coupling constants are indeed related to a fundamental coupling constant, but they’re related only at a super large mass scale M. The strong and electroweak couplings, which are indicated here as these three curves, are not really constants, they’re functions of the energy at which they’re measured.
This is well known in quantum electrodynamics; for the strong interactions, the property of asymptotic freedom means that the coupling constant shrinks with energy. Of the coupling constants of the electroweak force, one of them shrinks, one of them increases. One imagines there might be a point at which they all come together, at some very high energy. Indeed there is such a point, but since this variation with energy is very slow, it’s only logarithmic, the energy at which the coupling constants come together is exponentially large. It’s given by the formula that the logarithm of the ratio of this fundamental mass scale to the W mass is something like 4 Pi square over 11 e square, where e is the electric charge, with a correction due to the strong interactions. And that comes out to be about 4 to 8 times 10 to the 14th GeV. So we see now why there has to be a scale of energies greater than 10 to the 14th GeV: it’s to give the strong interactions time to get as weak as the electroweak interactions. These three curves are coming together at one point; it’s not easy to get three curves to intersect at a single point, and in fact the way it’s done is by carefully adjusting the data at the low energy end to make them aimed in the right direction so that they’ll all hit at the same point. That careful adjustment of the data I can put in a slightly more complimentary way, as saying that we predict, and this was done by Georgi, Quinn and me, certain ratios of these coupling constants, which can be expressed as a prediction of the value of the mixing angle between the Z and the photon. That prediction was in conflict with experiment in 1974 and in agreement with experiment now, and it’s the experiment that changed, not the theory. There are a great many problems; I would say in fact that this prediction of the mixing angle is the only tangible, quantitative success so far of grand unification. There are a number of problems with further progress.
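The logarithmic meeting of the couplings can be illustrated with a one-loop running sketch. The inputs below are approximate modern couplings at the Z mass and Standard Model one-loop coefficients (assumptions of this sketch, not the 1974 inputs), so the crossing comes out near 10^13 GeV rather than the 4 to 8 times 10^14 GeV quoted in the talk; the qualitative point, an exponentially large unification scale, is the same:

```python
import math

# One-loop running of the gauge couplings, Georgi-Quinn-Weinberg style:
#   1/alpha_i(mu) = 1/alpha_i(M_Z) - (b_i / (2*pi)) * ln(mu / M_Z)
b = {1: 41 / 10, 2: -19 / 6, 3: -7}   # SM one-loop coefficients (GUT-normalised U(1))
alpha_inv = {1: 59.0, 2: 29.6, 3: 8.5}  # approximate inverse couplings at M_Z
M_Z = 91.0  # GeV

def alpha_inv_at(i, mu):
    """Inverse coupling of group i at scale mu (GeV)."""
    return alpha_inv[i] - b[i] / (2 * math.pi) * math.log(mu / M_Z)

# Scale where alpha_1 and alpha_2 cross, from equating the two straight lines:
L = (alpha_inv[1] - alpha_inv[2]) * 2 * math.pi / (b[1] - b[2])
mu_cross = M_Z * math.exp(L)
print(f"alpha_1 = alpha_2 near {mu_cross:.1e} GeV")   # around 1e13 GeV
```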
One problem is that we have had no convincing explanation of the pattern of quark and lepton masses. By convincing explanation I mean more than a theory in which you have enough free parameters to rearrange things the way you want, but something that really gives you a feeling you understand it. There’s been no convincing explanation of the pattern of generations: that we have not only an up down electron generation and a charm strange muon generation but a third generation, maybe a fourth generation; we don’t know why any of that is true. Perhaps the most puzzling problem of all, we have no fundamental explanation of the hierarchy of forces, that is, that there is a ratio of 12 orders of magnitude between the symmetry breaking scale of the grand gauge group and the electroweak gauge group. We think we know it’s true because the strong force at low energy is so much stronger than the electroweak force. But where this number of 10 to the 12th comes from is endlessly speculated about; there are many, many ideas but there’s no one idea that really convinces people. And finally there’s no one model that stands out as the obvious model. There are many candidate models of grand unification, but since all of them leave A, B and C un-understood, we can’t really attach our allegiance to any one of these models. There is a further development which started in 1974 and which has occupied a large part of the attention of theoretical physicists in the succeeding years; this is a symmetry called supersymmetry, invented (although there were precursors) by Wess and Zumino and then carried on by Salam and Strathdee and many other people. Supersymmetry is a symmetry which operates in a sense orthogonally to the symmetries that I’ve been discussing up till now. The electroweak gauge symmetry for example connects the neutrino and the electron, both particles of the same spin but different charge.
Supersymmetry connects particles of different spin but the same charge, flavour, colour, etc. For example supersymmetry would connect the electron, which has spin 1/2, with another particle that might have spin-0 or spin-1. It had been thought that such symmetries were impossible, and in fact they’re almost impossible. There is a theorem by Haag, Lopuszanski and Sohnius, a terribly important theorem, that tells us that the kind of supersymmetry which was invented out of whole cloth, just out of the richness of their imagination, by Wess and Zumino turns out to be unique. It is the only mathematically allowed way of having a symmetry that connects particles of different spin. And therefore, without too much arbitrariness, we can fasten our attention on a particular kind of symmetry which is simply called supersymmetry and explore the consequences of it. And we know that, whether it’s right or wrong, there isn’t anything else that could unify particles of different spin. Now, we don’t see any trace in nature of supermultiplets of this sort; that is, the electron does not seem to have a partner of spin-0 or spin-1. Well, that in itself should not convince us the idea is wrong; we’re used by now to the idea that symmetries can be true at a fundamental level and yet spontaneously broken. Supersymmetry must be spontaneously broken. In fact there is not a trace in nature of any supermultiplet that is visible to us. Sometimes it’s said that supersymmetry is the symmetry that unifies every known particle with an unknown particle. Supersymmetry is surely spontaneously broken; the big question is: Where is it broken? I’ve been giving a lot of attention to this question lately, and I think I will close by just summarising my conclusions. You might think perhaps that supersymmetry is broken at the same sort of energies at which the electroweak gauge symmetry is broken, that is, energies like M sub W, of the order of 100 GeV. There are many reasons why that doesn’t work.
The partners of the quarks, which are scalar particles, spin-0 particles and hence are called squarks, would give very fast proton decay. This could be avoided by inventing new symmetries. The squarks and the sleptons would be too light, that is, they would be light enough to have been observed, and as I already told you they haven’t been observed. This also can be avoided by inventing new symmetries; in particular Fayet has explored this possibility. These unwanted new symmetries, and other new symmetries that you can’t avoid in such theories, lead to light spin-0 particles called Goldstone bosons, which are apparently impossible to get rid of, and Glennys Farrar and I have been exploring their properties. And we have come to the conclusion, or I should say I have come to the conclusion, I’m not sure if Glennys agrees, that supersymmetry broken at these low energies is really hopeless. Another possibility is that supersymmetry is broken at medium high energy. That is, supersymmetry is broken at energies which are much larger than the W mass and much smaller than the mass scale at which the grand unified symmetry is broken. Now, supersymmetry is a gauge symmetry like the electroweak gauge symmetry; it has an intermediate vector boson, but because of the peculiarity of supersymmetry it’s not a vector particle of spin-1, it’s a spin-3/2 particle called the gravitino. The gravitino, like the W particle, has a mass, and the mass is related to the scale at which the supersymmetry is broken. In fact the mass of the gravitino, which is the super partner of the graviton, the quantum of gravitational radiation, is the symmetry breaking scale squared divided by 10 to the 19th GeV. If the symmetry breaking scale is in a very broad intermediate range, from 10 to the 6th GeV up to 10 to the 11th, it would wreck the universe: its mass density would produce much too large a deceleration of the expansion of the universe.
And yet the gravitinos would be much too light to have decayed by now, so that they would be present in very large numbers. It would be a cosmological disaster. We can conclude therefore that this large range of intermediate scales of supersymmetry breakdown is forbidden, and therefore that supersymmetry, if valid at all, can only be broken at the very highest energies, energies comparable to the energies at which the grand unified group is broken or perhaps even higher. This means that supersymmetry is going to be primarily of importance not to the experimentalist but to the poor theorist who is struggling to produce a more satisfactory theory. Nevertheless I feel that supersymmetry must survive as an ingredient in a final theory, because sooner or later we must ultimately have a unification, not only of electromagnetism with the weak interactions and electrons with neutrinos, but a unification of all the particles of nature: the W boson with the electron, with the graviton. And within a very broad mathematical context we now have a theorem that tells us that the only way that that’s possible is through supersymmetry. So in the end I’m very optimistic about supersymmetry, but I’m afraid that the problems of supersymmetry are going to be solved only when a larger class of problems having to do with grand unification are solved. Theoretical physicists in the last few years have not been making rapid progress in solving these problems. I would say that this has been one of the most frustrating periods that I have seen during my time in physics. At the present moment the theorists, somewhat through exhaustion, are waiting for the completion of a series of heroic experiments which we hope will break the logjam and get us started again. There are a number of key experiments that deal with the problems that I have been discussing here.
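The relation quoted above, gravitino mass equal to the breaking scale squared divided by 10^19 GeV, is easy to tabulate. The three sample scales below are illustrative choices spanning the ranges discussed in the talk:

```python
# Gravitino mass from the talk's relation: m ~ (breaking scale)**2 / 1e19 GeV
M_PLANCK_SCALE = 1e19  # GeV, the denominator quoted in the talk

def gravitino_mass_GeV(breaking_scale_GeV):
    """Gravitino mass implied by a given supersymmetry breaking scale."""
    return breaking_scale_GeV**2 / M_PLANCK_SCALE

for scale in (1e6, 1e11, 1e16):
    print(f"breaking scale {scale:.0e} GeV -> gravitino mass {gravitino_mass_GeV(scale):.0e} GeV")
# 1e6 GeV  -> 1e-7 GeV (~100 eV): extremely light, in the excluded intermediate window
# 1e11 GeV -> 1e3 GeV: still in the cosmologically dangerous range
# 1e16 GeV -> 1e13 GeV: very heavy, consistent with breaking near the grand unified scale
```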
Well, first of all of course there are the experiments like the ones that Sam Ting was discussing, and other experiments to find the W and the Z particle as isolated physical particles, which are crucial in pinning down the parameters and in confirming the existing understanding. But to go beyond the existing understanding, I would say there are three chief classes of experiments which are being actively pursued. There are experiments on proton decay, which test the conservation of baryon number; there are experiments on the mass of the neutrino, which test the conservation of lepton number. And finally there is the question of whether the magnetic monopole, which was discovered perhaps a few months ago at Stanford, is real. That will be tested by continuing those experiments at Stanford with a larger apparatus. The grand unified theories, as pointed out by Polyakov and 't Hooft, characteristically predict the existence of magnetic monopoles with masses of the order of 137 times the grand unified mass scale, so roughly speaking 10 to the 16th GeV. The grand unified theories themselves of course don’t tell us how many of these particles there are in the universe; they just tell us that they can exist. At Stanford there is a possibility that such a particle has been discovered, and we eagerly await confirmation of its existence. These experiments have all so far in fact yielded positive indications: there are a few candidate proton decay events from an experiment in the Kolar gold field in India. There are preliminary indications of a neutrino mass in an experiment in Russia and in another one in California, or rather in Georgia. And then of course there is the magnetic monopole, for which we have exactly one candidate. We are now in the position of sitting at the edge of our chairs, waiting to see whether these preliminary indications of proton decay and neutrino masses and magnetic monopoles will in fact be borne out by further experiments.
And I will be happy to tell you my predictions for this after the experiments are completed.

Steven Weinberg explains the standard model
(00:01:11 - 00:10:37)



An Amazing Contradiction Between Theory and Experiment

In the standard model, which was completed in the mid 1970s, neutrinos are massless particles. Yet this assumption had already been questioned in 1957, when Bruno Pontecorvo suggested that neutrinos might have some mass. This "neutrino mass problem" remained unsolved until the beginning of the 21st century, much like the "solar neutrino problem". The latter was revealed by John Bahcall and Raymond Davis in 1968. They built an experiment that drew upon a method suggested by Pontecorvo and used a big chlorine tank to detect solar neutrinos. Whenever a chlorine atom interacted with a neutrino, it would be transformed into radioactive argon, whose signal could be detected. Their original intention was "to test whether converting hydrogen nuclei to helium nuclei in the Sun is indeed the source of sunlight".[8] In each of these fusion reactions, four protons are transformed into two protons and two neutrons while emitting two positrons and two neutrinos. Using a detailed computer model of the sun, Bahcall calculated both the number of neutrinos that should arrive on Earth and the number of argon-mediated signals that they should trigger in the 380,000-liter tank filled with the common cleaning fluid tetrachloroethylene. The tank was located one mile below the ground in an old gold mine, to shield it from disturbing signals and enable the scientists to distinguish true neutrino signals from false ones. To their big surprise, however, the experiment failed to confirm the prediction: they detected only about one third as many radioactive argon atoms as predicted. Where had all the missing neutrinos gone? In 1969, it was again Pontecorvo who first proposed an answer to this question that pointed in the right direction.[9] But most theoreticians were not prepared to accept this answer, and the experimental possibilities were not yet advanced enough to provide the required data. When Raymond Davis (at the age of 88!), whose experiment ran until 1994, and Masatoshi Koshiba shared one of the two Nobel Prizes in Physics 2002 "for pioneering contributions to astrophysics, in particular for the detection of cosmic neutrinos", both expressed their regret that Bahcall had been left out. Bruno Pontecorvo had died nine years earlier.
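The fusion bookkeeping described above, four protons turning into a helium nucleus plus two positrons and two neutrinos, can be checked against exactly the conservation laws this topic keeps returning to. A small sketch:

```python
# Bookkeeping check for the solar fusion step: 4 protons ->
# (2 protons + 2 neutrons) + 2 positrons + 2 neutrinos.
# Each particle carries (electric charge, baryon number, lepton number).
particles = {
    "proton":   (+1, 1, 0),
    "neutron":  ( 0, 1, 0),
    "positron": (+1, 0, -1),   # antiparticle of the electron: lepton number -1
    "neutrino": ( 0, 0, +1),
}

def totals(side):
    """Sum charge, baryon number and lepton number over one side of a reaction."""
    return tuple(sum(particles[name][k] * n for name, n in side) for k in range(3))

initial = [("proton", 4)]
final = [("proton", 2), ("neutron", 2), ("positron", 2), ("neutrino", 2)]

print(totals(initial))  # (4, 4, 0)
print(totals(final))    # (4, 4, 0): charge, baryon and lepton number all balance
```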

Masatoshi Koshiba and his team had constructed the Kamiokande detector in a mine in Japan. It consisted of an enormous tank filled with water. "When neutrinos pass through this tank, they may interact with atomic nuclei in the water. This reaction leads to the release of an electron, creating small flashes of light. The tank was surrounded by photomultipliers that can capture these flashes. By adjusting the sensitivity of the detectors the presence of neutrinos could be proved and Davis's result was confirmed."[10] In February 1987 a supernova explosion was observed that had taken place some 160,000 years before in the Large Magellanic Cloud, a neighboring galaxy of the Milky Way ("it was the nearest and the brightest supernova seen in 383 years – since Johannes Kepler observed a supernova in our own galaxy with his naked eye in 1604"[11]). Such an explosion releases most of its enormous energy as neutrinos, and Koshiba’s group was able to detect twelve of the estimated 10,000 trillion neutrinos that passed through their water tank. This was the birth of neutrino astrophysics, as Koshiba titled his talk in Lindau in 2004, where he also introduced the construction of a larger detector, the Superkamiokande, with a much increased sensitivity to cosmic neutrinos.


Masatoshi Koshiba (2004) - The Birth of Neutrino Astrophysics

X-ray astronomy had to wait for the development of space transportation, the reason being very simple: the x-rays that we’re looking at are absorbed in the atmosphere up to about 100 kilometres. So you have to put your detectors above 100 kilometres. This was done first by Herbert Friedman, who used V2 German rockets, which had been captured after World War 2, and put detectors on them to go up and start looking for x-rays from the sun. This went on from ’48 to ’58 or so, and a lot of information was obtained on the sun. But any attempt which was being made to discover sources from stars other than the sun or outside the solar system had failed. I fell into the field, so to speak, by chance. I had been hired by a corporation, a small corporation of 28 people at the time, and I was given the job to design a program of space research. This was in response, of course, to the fact that the United States was getting very nervous about the Soviet Union flying rockets and we had to catch up. And so there was a lot of opportunity for space research. And I started looking at different things. Bruno Rossi, who was the chairman of this corporation and also, of course, a professor at MIT and also chairman of the space science board at that time, had followed the discussions which were occurring at the space science board. Several people, Friedman, Leo Goldberg and others, had discussed the fact that it would be nice to look at the sky in x-rays, because the x-rays would penetrate large spaces, interstellar spaces, and would be an index of violent processes, high temperature processes, occurring in the universe. I loved the geometry, as my mother had taught me. Geometry was great, God plays geometry. And I happened to come across a statement in a Pfluger encyclopaedia, that you could have total external reflection by x-rays impinging at grazing incidence. And then of course, if you know geometry, you construct in your mind a paraboloid.
And so the first thing I did in x-ray astronomy was to design an instrument, a telescope, to actually collect and focus x-rays, so that you could have a tremendous improvement of the signal-to-noise ratio, a factor of about a hundred thousand or a million. What are you looking at? Well, the rocket goes up; this is already summing all of the rotations that occur while you are above the region in which the x-rays are absorbed. So you're looking around the sky from 0 to 360 degrees during a spin, and we had 2 detectors with different windows: this was the thicker mica window, this was the thinner window. Here is the magnetic field. Here is the moon; remember that the air force was interested in us looking for the moon, so we had to have the moon up. And then the number of counts. Now what do we observe? We observe an enormous peak here in counts, which we didn’t expect at all. That is, we expected at most to see something, you know, at this level, if we were lucky, if the crab nebula actually emitted as we were hoping it would, and so forth. And this enormous peak was totally unexpected. As it turns out, the reason why it was so large and it was unexpected is that we were seeing a new class of objects. These were binary x-ray stars. I now want to skip along and say, ok, so I told you about the ability to slow down the rotational rate. This is slowed down a lot; the distance here is 4 seconds. So looking at this particular source, Centaurus X-3, in May of ’71, we found what we believed to be pulsations. Now this was somewhat unexpected, because pulsars had been discovered by Hewish, but we didn’t expect the radio pulsars to be emitting x-rays at this rate and with this kind of periods. So in that sense it was strange. The other thing that we noticed - and this is where the long observation time given to you by a satellite mattered - was that the satellite stayed up for years. So for every hour that you were up, you were doing as much x-ray astronomy as had been done until then.
Here was a use of time. Here are the days of May ’71, and what we noticed was that the intensity of the x-ray source that we were seeing was going up, staying steady for a while, then going away, then coming up, steady for a while and going away. Now the thing that became very interesting - that I think is fundamentally important - was that when we actually measured this period over 3 years, we found that the period was decreasing. The pulsating source was acquiring energy rather than losing energy. And how could this happen? Well, the way this happens is that here is a cut in the gravitational plane of the 2 sources. This shows what is called a Rho …, that is, an equipotential. The normal star has gas in its atmosphere. It can fall, after appropriate rotations in order to lose angular momentum, onto the compact object, and as this is a cut in the vertical plane, as it does this, a proton will acquire more energy in the infall than it can actually produce by nuclear fusion. So this has become the explanation for all of the compact sources that we are seeing, and then extrapolated to very large dimensions to supergalactic, supermassive black holes. I won’t go through that, except to say that the magnetic field and rotation of the compact object for neutron stars explain why you see pulsations. I’ll go to the next one, which is: we saw another source, which was very different from the regularly pulsating one. That was Cygnus X-1. It created some excitement: there was an x-ray determination of the position, then radio, then optical, and an identification with a source. Webster and Murdin measured the mass of this object to be something like 6 solar masses. Now it had been shown that neutron stars cannot have a mass that large, so what we were seeing was an indefinitely collapsing object, which we call a black hole.
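The claim that an infalling proton releases more energy than fusion can extract is simple to verify. The neutron star mass and radius below are typical assumed values, not numbers from the lecture:

```python
# Energy released per proton falling onto a neutron star, compared with
# the ~0.7% of rest mass that hydrogen fusion releases.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 3.0e8           # speed of light, m/s
m_p = 1.673e-27     # proton mass, kg
M_SUN = 1.989e30    # solar mass, kg

M_ns = 1.4 * M_SUN  # assumed neutron star mass
R_ns = 1.0e4        # assumed neutron star radius, 10 km

accretion = G * M_ns * m_p / R_ns   # gravitational energy released in the infall
rest = m_p * c**2                   # rest-mass energy of the proton

print(f"accretion yield: {accretion / rest:.0%} of rest mass")  # roughly 20%
print("fusion yield:    ~0.7% of rest mass (hydrogen to helium)")
```

Accretion onto a compact object is thus tens of times more efficient per proton than fusion, which is the point of the period measurement described above.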
For lack of understanding of what physics goes on inside it, this is an object in which the density has much exceeded the density of even a neutron star, 10 to the 15th grams per cubic centimetre, and it is a black hole. I will just show you 2 pictures. This is a picture in x-rays, real life - there is also a movie of it - of the pulsar in the Crab nebula. You see the acceleration of the jets, you see shock waves being propagated in the interstellar medium. So this tells you that with this resolution you can now do dynamic studies of plasmas and shock waves in galaxies, in clusters of galaxies, in supernovas and so forth. But the last slide I wanted to show you is this one. This is one of the longest exposures, not the longest anymore, but one of the longest exposures done on a fixed field, which was not known to contain any x-ray source and no particular visible light object. It’s in the south, the Chandra Deep Field South. It was obtained in the year 2000, and what you are seeing here is a collection of objects of very high density. It’s 3000 objects per square degree, so you're looking at something that fills the night sky at the level of a hundred million objects or something. What are these objects? To be brief, they are all supermassive black holes accreting from accretion discs around them. We are seeing these objects at distances which are at times greater than those at which you can follow them in visible light. So we can study them early in their evolution and formation. And just to close, I’ll show you one picture which tells you where we are in this kind of studies today. This is the x-ray contour plot of a source superimposed on one of the deepest exposures of Hubble. There was none to be seen. So that means that this object here is less than 27 magnitudes. When you look at it with Keck you can’t see it; when you look at it with VLT you cannot see it.
But with the arrival of Spitzer, the new satellite that works in the infrared, we can now see it very clearly. And we conclude from this that what we are seeing is a QSO, a quasi-stellar object, an active galactic nucleus, which is very absorbed, very darkened by gas and dust around itself, at a redshift of about 6, which is fairly early on in the life of the universe. So this is simply to say that there is interesting physics to be done in x-ray astronomy and there is tremendous power for further observations, which are of relevance to evolution, cosmology and so forth, and I’ll stop here.

Masatoshi Koshiba on the Advantage of Neutrino Astrophysics
(00:00:01 - 00:04:44)


A Convincing Solution Around the Turn of the Century

The Superkamiokande (Super-K) is designed to capture neutrinos that are created in reactions between cosmic rays and the Earth’s atmosphere. It can detect both neutrinos coming straight from the sky above and neutrinos coming from below after having travelled through the entire globe. The balance of both should be equal, because neutrinos pass through the Earth unhindered. For electron-neutrinos this was indeed the case, but not for muon-neutrinos, as Super-K researchers first reported in 1998. They detected more muon-neutrinos coming from the atmosphere a few kilometers straight above than from below. The latter, of course, had a much longer journey behind them. This had obviously given some of them the time to change their identity and switch into tau-neutrinos, which could, however, not be observed in the Super-K. Yet this was possible in the Sudbury Neutrino Observatory (SNO) in Canada. In contrast to the Super-K, it is designed to measure solar neutrinos, and its tank is filled not with ordinary but with heavy water. The deuterium isotopes therein allowed scientists to measure either the amount of electron-neutrinos alone or the grand total of all three neutrino types. Both sums should have been equal, because the sun emits electron-neutrinos only. Yet they weren’t: only one third of the expected electron-neutrinos arrived. The grand total, on the other hand, matched the expected number of incoming neutrinos. This provided evidence that solar electron-neutrinos oscillate and partly change their identities in flight. When the SNO scientists published their results in 2001 and 2002, John Bahcall’s theoretical calculations were vindicated.[12] The solar neutrino problem was solved: it is caused by neutrino oscillations. Such oscillations are only possible when neutrinos have mass. Although we still cannot determine how much mass they have, the Super-K and the SNO experiments had killed two birds with one stone.
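The identity changes described above follow, in the simplest two-flavour picture, a standard oscillation formula. The sketch below uses illustrative parameter values; the mass-squared difference and mixing angle are assumptions for the example, not the measured Super-K or SNO fit values:

```python
import math

# Two-flavour neutrino oscillation sketch (not the full three-flavour
# treatment used in the actual analyses). Survival probability of a
# neutrino of energy E (GeV) after travelling L (km), with mass-squared
# difference dm2 (eV^2) and mixing angle theta:
#   P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E)

def survival_probability(L_km, E_GeV, dm2_eV2, theta):
    return 1.0 - math.sin(2 * theta)**2 * math.sin(1.27 * dm2_eV2 * L_km / E_GeV)**2

def mean_survival(L_km, dm2_eV2, theta, energies):
    """Average over an energy band, as a detector effectively does."""
    return sum(survival_probability(L_km, E, dm2_eV2, theta) for E in energies) / len(energies)

# Illustrative atmospheric-style parameters (assumed for this sketch):
dm2 = 2.5e-3          # eV^2
theta = math.pi / 4   # maximal mixing
band = [0.5 + 0.05 * i for i in range(91)]   # 0.5 to 5 GeV energy band

print(mean_survival(20, dm2, theta, band))     # short path from overhead: near 1
print(mean_survival(12800, dm2, theta, band))  # through the Earth: near 0.5
```

The asymmetry between the short overhead path and the long through-Earth path is exactly the pattern the Super-K team reported for muon-neutrinos.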
For this achievement, Takaaki Kajita and Arthur McDonald were awarded the Nobel Prize in Physics 2015.
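The identity switching described above can be summarised in the standard two-flavour oscillation formula, a textbook result not spelled out in the article; here θ is the mixing angle, Δm² the difference of the squared neutrino masses, L the distance travelled and E the neutrino energy:

```latex
P(\nu_\mu \to \nu_\tau) \;=\; \sin^2(2\theta)\,
\sin^2\!\left( \frac{1.27\,\Delta m^2\,[\mathrm{eV}^2]\; L\,[\mathrm{km}]}{E\,[\mathrm{GeV}]} \right)
```

Because the probability depends on L/E, muon-neutrinos arriving from below (with L of the order of the Earth's diameter) have had time to oscillate, while those coming from a few kilometers above have not; and the probability is non-zero only if Δm² ≠ 0, that is, only if neutrinos have mass.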

Clues to the Big Unanswered Questions

Since the breakthrough of Kajita, McDonald and their teams, neutrinos have taken center stage in physics. For many decades, neutrino research was regarded as cutting-edge but somewhat esoteric. This has changed dramatically, since neutrinos apparently offer clues to some of the major unanswered questions in both particle physics and cosmology. How much attention they have attracted, and what prospects for future research they offer, became visible in several lectures at the Lindau Nobel Laureate Meeting 2015, three months before the Nobel Prize for Kajita and McDonald was announced.

Brian Schmidt, for example, who shared the Nobel Prize in Physics 2011 with Saul Perlmutter and Adam Riess "for the discovery of the accelerating expansion of the Universe through observations of distant supernovae", mentioned neutrinos (is there a fourth type?) in conjunction with the mystery of dark matter:


Brian P. Schmidt (2015) - The State of the Universe

Welcome everyone. As we all rearrange ourselves. Today we're going to talk, in this talk, about the state of the Universe. Now, you've had the benefit of several talks of cosmology. I'm going to try to tie it all together. So, in 2015, what is the state of the Universe? Well, the Universe is expanding. The Universe is 13.8 billion years old. The Universe is close to being geometrically flat. And the Universe is composed of a mixture of things, including dark energy, dark matter, atoms, neutrinos, photons, and a few other things of lesser amounts. Now those are words, and I want to explain why we know and think that we understand the state of the Universe so well. So let's first start with the expanding Universe. Now Saul Perlmutter talked about this, but in case people missed that, I figure it's good to review just a little bit. So the expanding Universe goes back a long time. At least to 1929, when Edwin Hubble went out and looked at galaxies, we didn't really know what galaxies were, except for collections of stars, until about this time. And he looked at how bright the stars appeared in those galaxies. And he noticed that the galaxies, some had brighter stars, some had fainter stars. And he attributed that brightness to their distance, of course, the further away an object is, the fainter it's going to appear. Another person, Vesto Slipher, most of you will not have heard of, but he's a great astronomer, and his family gave me a scholarship, it turns out, so that was one of the reasons he's great, for me. Vesto realized that he could go through and see an effect, the Doppler Shift. He thought it was the Doppler Shift; we now call it Red Shift. The galaxies' light was stretched. And some more, some less, but almost all of them had this effect. And if you plotted the amount of stretching of light, which is equivalent to their velocity, or apparent velocity, away from us, you get this plot that Hubble made where you have a trend.
And what I love about this plot, is this is the plot that makes us realize the Universe is expanding. And that's not beautiful-looking data. It's data that a biologist might be proud of. But physicists, maybe not so much. Yet it tells you the further away an object is, the faster its apparent motion. So, from that, we learned that the Universe was expanding. Why? Well let's think what it means to be the further, the faster the motion. I have a toy Universe here, which I'm going to expand, alright? So if I take the Universe, and I make it a bit bigger, in between before and after, and then I overlay those two things, what do you see? You see that for nearby objects, their motion has been small, they would have a low velocity. Distant objects, well, they will have moved a lot in that same amount of time. They'll have an apparent high velocity. And it doesn't matter where you are in the Universe, everyone sees the same thing. In a Universe that's expanding, the further away something is, the faster the apparent motion. Now it turns out, this was all predicted, sort of, by theory. Theory from none other than Albert Einstein. Now before we get to that, let's think what it means to be expanding. If I'm expanding, well, you're getting bigger and bigger apart. But I can run the experiment in reverse. What do I get? I get things getting closer and closer and closer, until everything in the Universe is on top of everything else. That's the thing we nominally call the Big Bang. So if you measure how fast the Universe is expanding, that's giving you an idea of how old the Universe is. And this was such an interesting thing to do that I decided to do it for my PhD thesis. And from a graph you can think of it this way. Right now, you look at galaxies, and they have some separation. I run the Universe back in reverse, based on how fast it's expanding. And I can simply read off, when the two objects are on top of each other, to the age of the Universe. 
And so, here I am, at the end of my PhD thesis, three years, eleven months and four days, but I wasn't counting. Showing my PhD supervisor Bob Kirshner the answer I got. So the answer I got was that the Universe was roughly 14 billion years old. Now I'd like to say that I get credit for measuring the age of the Universe. The answer is, I didn't convince that many people with my thesis. But I did get the correct answer, which is important. Actually it's not that important, but that's OK. OK, so the theory that ties this all together comes from Einstein. Einstein had this revelation in 1907, which was that acceleration due to moving faster and acceleration due to gravity, in his mind, must always be essentially the same, they always were equivalent. Kind of a small thought, that took him eight years to sort out. He had to learn a lot of mathematics, which I still find challenging to this day, but essentially the idea is, imagine you're in a box, or in a room. Are you being accelerated by 9.8 meters per second squared by Earth's gravity? Or are you actually in a rocket ship, being accelerated by 9.8 meters per second squared. There's no way to tell the difference under his assumption. And that assumption has never been shown to be wrong. So astronomers are the ones who actually vindicated Einstein's theory of gravity. How? Well, by looking at eclipses, and this is the real image from 1919. It's terrible, it's out of focus, but it was good enough to show that stars next to the Sun were displaced due to the gravity of the Sun and the way that the Sun curved space. Because Einstein, to solve this seemingly small problem, had to say that space was curved by mass. And that was a huge leap forward. Many people were, of course, sceptical. Others weren't. But it's been vindicated, and that basic idea has never been shown to be false. We know it doesn't work with quantum mechanics, so there is at least something that's probably not right somewhere.
Alright, so Einstein actually became a household name due to, not special relativity, but to general relativity. He was all over the front pages of the newspapers. And so it was really an idea that came out of pure thought. And that is very, very rare within science. Normally you have an idea, there's a problem, you need to solve it. There was no problem at this point that Einstein was thinking about, he just said, this must be the way the world was. So, in the coming years, de Sitter and Einstein first tried to figure out what this meant for cosmology. Newton could not solve cosmology, that is, how the Universe behaves, with his equations of gravity. The only way they made sense is if the Universe had nothing in it. So de Sitter and Einstein tried, and then Alexander Friedmann in St. Petersburg came along and said, let's assume the Universe is the same everywhere. Homogeneous. And isotropic, it doesn't rotate, or something. And he came up with a series of solutions. Einstein knew about those. But, at the same time, Georges Lemaître, a Belgian astrophysicist, came up with them independently. And then said, the mathematics means that the Universe is actually expanding. So, unfortunately for Lemaître, he met up with Einstein in 1927, showed him his work, presented it to him. And Einstein said, your mathematics is correct, it's already been done by Alexander Friedmann, but your understanding of physics is abominable. Lemaître was forgotten, for a while. Hubble came along, and said, the Universe is expanding. He didn't tell Einstein, he just put it on the front page of the New York Times. And so we remember Hubble, we don't remember Lemaître. But you should, because Lemaître did it, as part of his PhD thesis. So what do Friedmann's equations say about the Universe? Well, they say that if the Universe is empty, it just keeps on getting bigger and bigger and bigger and nothing really happens. It just, it's on this straight line.
Not dissimilar to a ball, if you threw it out in space. On the other hand, if you have a light Universe, gravity is going to slow down the Universe a little bit. But the Universe will be able to expand and expand and expand forever. On the other hand, if you fill it full of stuff that Einstein had already thought about, dark energy, energy that's part of the fabric of space, this stuff makes gravity push rather than pull, and you might get a Universe that would exponentially expand, very very quickly. Finally, there was the, kind of most intriguing model, the heavy Universe. The Universe that has so much gravity, that it expands, stops expanding due to the effects of gravity, goes in reverse. So all the universes seem to start with a Big Bang, but only the one that is heavy ends with a gnaB giB, that's the Big Bang backwards. Alright, so another feature of curved space is that a heavy Universe literally curves around itself in four dimensions. Given enough time, you can imagine heading out in this direction, and eventually coming back to where you stand. What's the Universe bending into? Well, it's bending into this time thing. On the other hand, if the Universe is light, it bends the other way, the shape of a saddle. A Universe that has the shape of a saddle, triangles, for example, and this is an experiment I've always wanted to do, you send a graduate student off this way, another one off that way, and in a billion years, travelling at about 99% of the speed of light, we get together, and we measure the angles of a triangle. And we ask, what are they? In the light Universe they add up to less than 180 degrees. In the heavy Universe, they add up to more than 180 degrees, just like they do if you did this experiment on Earth. Get a globe, if you don't know what I mean. Finally, there's the "just right" Universe. That Universe precariously balanced between the finite and the infinite, where triangles add up to 180 degrees. 
So there is a geometry to space that is inherent with general relativity. So, when I finished my PhD, I went off to Australia, and on the way to Australia I was able to work with several of my colleagues, and specifically Nick Suntzeff in Chile, to devise an experiment to measure the Universe's past. And that was to look back in time and literally see how the expansion rate of the Universe was changing over time. We can do this, because when we look a long ways away, it takes billions of years for light to reach us. And in 1994 there was new technology, and there were new ideas of how to use supernovae to go through and see the trajectory of the Universe. So this is what Saul Perlmutter talked about in his talk. And essentially we wanted to see how the expansion rate of the Universe changed over time. And we could see whether or not it was going to, for example, exist forever, or collapse into something analogous to the Big Bang in reverse. So the supernovae were the new ideas done by a group in Chile. The technology was developed for six years by Saul Perlmutter, and we sort of married these two together in what was quite a competitive experiment. Saul and I always had a lot of fun with it. I'm not sure that all of our team members did. But competition was great. Because it meant we always knew the other team was going to push us, in terms of how efficient we acted, but also point out any mistake we might make. And so, after several years, we came up, this was our data, and we noticed immediately that the data was in the wrong part. The distant data was in a place of the diagram where the Universe was expanding slower in the past, and seemed to be accelerating. Einstein told us we need something funny for that, stuff we call dark energy. This is the work that produced the two papers of the two teams in 1998. And, of course, Saul and I are here, but we represent about 60 people who did this work, our team and Saul's team are both represented in Stockholm. 
A great party for us all. Alright, so what's pushing on the Universe? Well, Einstein told us it's this stuff called dark energy. Well, what is dark energy? He called it the cosmological constant. It really is energy that is part of space itself. And it turns out, it does, through his equations, cause essentially gravity to push rather than pull. Now, if you go through and did a detailed analysis, as we did in 1998, you could see that you needed a mix of stuff, you needed about 30% of the Universe to be gravity that pulls, and 70% to be gravity that pushes. Alright, so that was the conclusions there. Now, let's think, again, if we take the Universe, and we compress it, what does that do? Well, if you do that experiment here on Earth, for example you'd take a little piston and I'm going to compress it, or this gentleman is going to compress it, things get hot. So there's a little piece of cotton wool in there, and he just took a glass piston and pushed it, and it heated up, to the point where you could set the cotton wool on fire. The Universe is very similar. It has a certain temperature now, when the Universe was more compact it was hotter. And if you go back in time, you reach a point, to when the Universe was roughly 3,000 degrees. So, 3,000 degrees is when hydrogen becomes ionized, so, before that, the electrons were essentially not attached to their protons, and, to a photon, that means they look like a giant target; photons like to scatter off electrons. So the Universe was opaque. Right now it's not, we can see through it. And so that led to the discovery, which hopefully Bob Wilson, you would've heard him describe, and this is one of the most remarkable discoveries of the twentieth century, where you look out in all directions and the Universe is glowing. And it's glowing because it was opaque and hot, and things that are opaque and hot glow. A lot like your stove, or the Sun. And so, 13.8 billion years ago, this is what the Universe looked like.
And in this image, we see many interesting things. The one thing that we can imagine is that there are bumps and wiggles in the Universe. They're going to create sound waves, and those sound waves are going to move out, at the speed of sound. And they're going to get out to, essentially, 380,000 years times the speed of sound, and they get dragged around a little bit by the Universe, but let's not worry about that. And so those sound waves have a scale, which we can calculate using physics, to about one part in 10^4. It's a ruler of incredible precision. Especially for cosmologists, who normally are used to dealing with factors of two. And so, if you have a ruler, you can look and say, ah, I can figure out how far away that object is. And how does it work? Well, it turns out there are many things that cause that ruler to have a size, but the most important one of interest is the geometry of the Universe. If the Universe is curved, the ruler gets distorted. It's not dissimilar to looking in a rear-view mirror. The entire Universe acts as a lens. If it's flat, it doesn't get distorted. So you can look at those sound waves and you can say, how big are they? Well, you do that experiment, and this was done, starting in 1998, just the same time as our experiment, and you can say, OK, if the sound waves are small, it's a Universe that's light. A heavy Universe, they're big, and somewhere in between, they are somewhere in between. So, when you do that experiment, it turns out the Universe is just right. It has got that perfect amount of material so that space is neither curved on to itself nor away from itself. Fantastically accurate experiment. This is why we say the Universe is geometrically flat. Alright, now, this, there's so many things we can do with the cosmic microwave background, it's sort of our best laboratory. One of the things, you know, is that these bumps and wiggles are going to become the galaxies of today.
So in Australia, back in the early 2000s, some of my colleagues went out and mapped out 221,000 galaxies. People in the United States did a million galaxies a few years later. But you can go through and measure, essentially, gravity in action, because gravity's going to take those bumps, and it's going to make a pattern. We use computers to see what that pattern looks like. And it depends on, it turns out, what the Universe is made of. And how much gravity it has. So you can look up here, at the real Universe, where it says "Observed", and here are four different universes, made up of different things. And your eye is pretty good at doing the Fourier transforms we really do to compare those data sets, and, if you're looking at that, you can all figure out which one you think is right. If you said that one, you're right. That is what the data looks like. And that is a mixture of five parts dark matter, which I haven't talked about yet, one part atoms. What's special about dark matter? Dark matter doesn't interact, except by gravity. It goes right through itself, it goes right through the Earth, it goes right through atoms, it essentially is invisible to itself and everything else in the Universe. It only has gravity. That makes it insanely easy to calculate, because you literally just have to put Newton's laws in, and it works. You don't have to worry about pressure or anything. So it's very convenient that the Universe made the Universe out of this, because if it didn't, it would be really hard to calculate. So, dark matter. Well, we don't quite know what dark matter is. It has gravity, but we think it's probably some undiscovered particle. And we can sort of get a sense to say this stuff really exists. Here are two clusters of galaxies which are made up of atoms and dark matter. When you crash atoms together, what do they do? They light up, make x-rays, and they condense in the centre. If you look at where the atoms are in this picture, well, they're in the centre.
If you map out the mass, by using how space gets distorted through what we call gravitational lensing, you can see where the mass is. And the dark matter seems to have gone right through itself, as these galaxies have collided, and the atoms are stuck in the centre. Just like we see. Every way we look at it, we see dark matter in this five to one ratio. Not just here, but, again, in the cosmic microwave background. Why? Because those sound waves, the wave action in the early Universe, are dependent on what the Universe is made out of. If you plop something into a pond, and it's made out of honey as opposed to water, the waves look different. And so, the Big Bang essentially, as I'll show you, threw gravel into the pond, those waves fluctuated out, and we get a snapshot as the Universe became transparent. And that's what the Universe looks like. So what does the wave action look like? Well, we measure it in the form of, essentially, how many waves there are at different scales. So you get this very complex curve. The curve is the theory, the dots are the data. You don't get many experiments better than this. And, if you change anything by even small amounts, the theory no longer fits the data. And it says the Universe is 25% dark matter, same answer we get in the other place. It says it's 5% atoms, or baryons, as we like to say. It tells us, approximately, though in fact very accurately, that there are roughly 10^9 photons for every atomic nucleus. And that there are three low-mass neutrinos; heavy-mass neutrinos, or four neutrinos, all screw things up. Alright, that's what we get by doing that wave action. So, when you put this all together, you get this quite remarkable but unexpected story. Flatness, that means all of the matter in the Universe adds up to 100% of enough to be flat. Turns out about 30% of it, when you look at how much gravity there is, is stuff that's attractive.
So we need mystery matter, 70%, the same stuff that the supernovae experiments that Saul and I have been talking about seem to require, to push the Universe apart. So we apparently have our Universe, which we are only 5% of. Because we have all these constituent parts, and we know how they work together, we can very accurately trace the Universe back to the time of the Big Bang. Back to 380,000 years, and then it's a very small extrapolation. And it tells us that the Universe is 3.8 billion years old. Sorry, it's 13.8 billion years old. The best number is 13.82, people get into a fight if you say it's 13.78 now. So we've really got it aged very accurately. Now let's go before the CMB. You make the Universe smaller, it gets hotter and hotter. Eventually you get to the point where the photons are so hot that they can create things. For example, they might create an electron-positron pair. Now an electron-positron pair, they like to get back together and produce two more photons. One of the interesting issues that all of the particle physics people are looking at is that they have this self-contained model, which seems to all work, but we know from astronomy there are a bunch of unanswered questions from this point. Why do atoms exist? Because, in a hot universe, all the equations are such, that every time photons get together, they make and destroy matter evenly. Yet we have this ratio that there's 10^9, a billion photons, per every atom. There is an asymmetry in the equations we don't understand. Because, as they stand now, we aren't here. The entire Universe is just photons. So something's going on we don't understand. We of course haven't discovered dark matter. We think it's a particle, but we haven't seen it. What is this dark energy stuff? We really don't know there. We have learned almost nothing about dark energy since our discovery in 1998. It's a sad state of affairs. Why do neutrinos have mass?
There are reasons from astronomy, and from experiments here on Earth, that we're pretty sure they have a little bit of mass, but the equations don't tell us that either. And finally, how does gravity work with other forces? That's sort of the huge question in physics, the big one, of unification. Now, if we run the Universe back to a little more moderate temperature, 800,000 degrees, the whole thing acts as a nuclear reactor. And so you can very clearly predict what the nuclear reactor's going to do. You go through and do the equations, same ones that we use for doing nuclear bombs and other things, and you sort of get a universe that produces a set of elements: hydrogen 75%, helium 25%, and trace amounts of other things. Exactly what we see in the Universe. Everything else, the carbon, all the stuff that makes biologists excited, well that's all happened lately in the stars that we look at. So we know the Universe was hot, it cooled. Here, those little sound waves are only fluctuations, in one part in a hundred thousand. They grew by gravity, and something happened. About probably 100 million years later, first stars formed, and suddenly, 13 billion years ago, we had this really exciting Universe that looks not that dissimilar to today. We're at the point of being able to go and ask questions. So one of the things I've been working in, is taking and looking at every single star in the Milky Way, and asking, can I find stars that only are made up of hydrogen and helium? Those would be the ones made right after the Big Bang. So, we're looking at billions upon billions of objects, and last year we found the closest thing yet. It's an object that has no iron in it whatsoever. Exploding stars make iron, they also make other things. So this object has literally no iron. One part in 100 million. Less than the Sun. And so this is a spectrum where you can identify the iron, there really is nothing there. It does have a little bit of calcium, and other things. 
So we think this star was created out of the ashes of one of the original stars of the Universe. We only have one, we're going to be, over the next five years, hopefully finding a whole bunch of these. And piecing together how the Universe was formed from the fossil relic. But we can also, with the next generations of telescopes, look literally back to this epoch, and hopefully see the objects being created. In real time. With the James Webb Space Telescope, and the extremely large telescopes. So, we can tell, though, that the Universe right now is 2% made up of heavier atoms. So stars have taken that initial mixture and created 2% of the Universe and transformed it through nuclear reactions, into the things that make, quite frankly, life here on Earth quite interesting. Except for those of you who like to play with helium. OK, another big question, what is the Big Bang? This picture is really funny. It's very lumpy here, but we have turned the contrast way up. The temperature is the same on both sides. How can we understand that? It turns out, if you go through and make a diagram, or you sort of look at how light travels, you see that if I look in that direction, and I look in the other direction, and I go back to a distance of 380,000 years after the Big Bang, you can go through and then make those light cones and you can say that piece of light and that piece of light can never see each other, and yet they're the same temperature at one part in 10^4, almost. Alright, so we need a trick. And the trick is something called inflation. And inflation allows a time in the Universe where the Universe exponentially expanded and stopped for some reason. And so that makes that diagram look a little funny. And I seem to have lost one of my slides unfortunately. It also takes tiny fluctuations caused by quantum mechanics, and blows them up to the size of the Universe. Seems crazy?
Well, if you don't have this, the Universe, instead of looking like what we see, would look like this: there would be no structure from which gravity could grow things. It also turns out to be a way of taking a very lumpy universe and, as it accelerates, flattening it out. It matches all these observations we've made of the Universe. But it does mean that the Universe had this funny time, where we think it essentially expanded by, you know, umpteen orders of magnitude, in a tiny fraction of a second, and then essentially made the Universe as we see it today. So that brings the question, what is the Big Bang? Well, I don't know what the Big Bang is. To me, the Big Bang is whatever caused the Universe to inflate. Now there's a time, probably, before that, but essentially that magnification has washed out almost anything we can know. And it may well be that the Big Bang is the inflation, or maybe there's something before, we don't know yet. So I'm afraid I don't know what the Big Bang is, I only know what happened after the Big Bang. And what's our future? Well, I'm sad to say, that our Sun's nuclear reactor's getting stronger and stronger. In about the next 500 million years it's going to get very hot on Earth. Not the little degree of fluctuation or so we're seeing right now; we're talking 50 degrees hotter. So, in 500 million years, we're going to have to find another home, or we're going to cease to exist. Eventually, 5 billion years from now, the Sun really does run out of nuclear fuel, and we are definitely doomed at that point. So, as the Sun expands and becomes hotter, life's going to be uncomfortable. But I have hope that we will be able to move to another planet, maybe not quite like Interstellar, but we will continue on and probe the Universe. And what are we going to see? Well, it turns out, we have lots of places to go. There's 100 billion stars in our own galaxy.
And if you go through and zoom in from our galaxy, and look at a tiny piece of sky, we see literally, in this tiny little piece, postage stamp, you know, 20 thousand galaxies. And so that is a very rich Universe. But it's finite, quite remarkable. Because, if we look over the entire sky, you see essentially everything. And so, if you want to go through and ponder how insignificant we are, you can just do a quick little back-of-the-envelope calculation here. So, how many stars in the Universe? Well there's about 260 billion galaxies, if I add up all those pictures. There's about 100 billion stars in each one of those. It turns out about one in five of them, we think, has an Earth-like planet, from latest measurements. So there's a lot of stars in the Universe. More stars than there are grains of sand on planet Earth. So we are insignificant. But think that, 17,300 years ago, in a cave not too far away from here, our forefathers made this picture. We have the Pleiades stars, the same ones we see today, almost every human's been able to see them, they have Taurus, the Bull, the same place where it is on the sky today. The same description we use today. We call these the Seven Sisters, the Aboriginals of Australia called them the Seven Sisters, they'd been isolated from Europe for tens of thousands of years. Stars are our history. And what is remarkable is that we have been able to go through, as insignificant humans, and piece together the story I have shown you today. But time is of the essence, both because I've run out, but also because the Universe is exponentially expanding, we're at the beginning of it. And so, the more space expands, the less significant we are. The more space expands, the more dark energy can push it apart. Eventually, the expansion of space is going to happen faster than light's ability to traverse space. So galaxies we see today will be lost in the future.
So when you read my grant proposals, please fund now, because we have limited amounts of time. So, to be clear, we're not expanding. I just want to make sure you understand that. We're in a very dense part of the Universe. And gravity has collapsed our part of the Universe, we are not expanding. However, when we look out, as cosmologists, to the rest of the world, the Universe will be accelerated out of sight. Now we don't understand what dark energy is, but unless dark energy suddenly fades away, for some reason we cannot foresee, the Universe will, at an ever increasing rate, expand, fade away, leaving me and my fellow colleagues unemployed. Thank you very much (applause)
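Two of the back-of-the-envelope calculations in Schmidt's lecture can be sketched in a few lines of Python. The Hubble constant of 70 km/s/Mpc and the unit conversions are assumptions by the editor, not values taken from the talk:

```python
# Back-of-the-envelope cosmology, as sketched in the lecture.
# H0 and the unit conversions below are assumed round values, not from the talk.
H0 = 70.0                  # Hubble constant in km/s/Mpc (assumed)
KM_PER_MPC = 3.0857e19     # kilometers in one megaparsec
SECONDS_PER_YEAR = 3.156e7

# Naive age estimate: run the expansion back at a constant rate, t ~ 1/H0.
hubble_time_s = KM_PER_MPC / H0
hubble_time_yr = hubble_time_s / SECONDS_PER_YEAR
print(f"Hubble time ~ {hubble_time_yr / 1e9:.1f} billion years")

# Star count from the lecture: ~260 billion galaxies, ~100 billion stars each,
# and about one in five stars with an Earth-like planet.
n_galaxies = 260e9
stars_per_galaxy = 100e9
n_stars = n_galaxies * stars_per_galaxy
n_earthlike = n_stars / 5
print(f"Stars in the observable Universe ~ {n_stars:.1e}")
print(f"Earth-like planets ~ {n_earthlike:.1e}")
```

Running the expansion back at a constant rate gives a Hubble time of roughly 14 billion years, close to the 13.8 billion years of the full analysis, and the star count underpins the "more stars than grains of sand" comparison.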

Brian Schmidt sheds light on unanswered questions
(00:19:32 - 00:24:34)


David Gross, co-recipient of the Nobel Prize in Physics 2004 "for the discovery of asymptotic freedom in the theory of the strong interaction", showed how the discovery that neutrinos have mass casts doubt on the standard model:


David J. Gross (2015) - The Future of Particle Physics

Every year is a celebration of many anniversaries. There are three wonderful celebrations that we celebrate this year, in a way. Of course, one is 100 years of general relativity, which transformed our notions of space and time and changed fundamental physics forever. And we're celebrating this centenary year here, although I must say this occasion has not been mentioned in Lindau until now. It's also 90 years of quantum mechanics, which sort of came to its complete form in 1925. Perhaps the greatest conceptual revolution of the 20th century and one that informs all of microscopic, atomic, subatomic, fundamental physics. And finally in a sense, it is the 40th anniversary of the completion of the standard model of elementary particle physics. The comprehensive theory, developed mostly in the 20th century, that gave us a rather complete understanding of the basic constituents of matter and the forces that act on them. OK. Well, this time does not count against my meagre allotment here. Now let me see, let me unplug your monitor. Is that the problem? Which I don't like because then I can't use my thing. It's not suitable but that's, that's fine, we'll get that. This is irrelevant. So as I was saying, 2015 is a celebration of many things, perhaps the most important being the centenary of general relativity, but also 90 years of quantum mechanics and 40 years of the standard model of elementary particles. I want to say a bit about Einstein because this is perhaps the only scientific meeting in the world this year that hasn't mentioned or celebrated Albert Einstein's remarkable contribution which occurred at the end of 1915. In one week, he published four papers. At that time, he could submit a paper to the Prussian Academy and get it published within two days.
And in the final paper, he wrote down in final and complete form the so-called Einstein's equations that describe the curvature of space-time sourced by matter; it is the energy and momentum of matter that curves space-time and gives rise to what we otherwise call gravity. Einstein left a legacy that is unmatched since perhaps Newton, one that will persist for generations. Beyond the specific form of the equations that describe gravity at its core, there are really three contributions that will live on way beyond his specific equations, which will be superseded. First, he finally realized his dream of making space-time into a dynamical, physical object. Not an inert frame that is set down in some Kantian fashion, but rather a dynamical, physical object whose metric is subject to variation. It responds to the presence of energy and matter, curves, and that then affects matter. And we are still struggling to understand what it means, especially in quantum mechanics, to have space and time itself be a dynamical, fluctuating entity. He also made possible physical cosmology. Before Einstein, cosmology was the domain of religion and theology and philosophy. After Einstein, it became physics, and immediately following his theory people began to construct models of the universe, and that has made possible 100 years of amazing developments in astronomy, astrophysics and cosmology. And finally, he dreamed, and motivated generations of physicists after him to be as ambitious as he was and to try to unify the forces of nature and get to the core of physical reality. The first point, space-time being dynamical, was a major change in our notion of what space and time are, and the dynamics of course is that space and time responds to energy and matter, curves, and then gives rise to gravity. So it's the curvature of space-time that causes the earth to orbit the sun, and Einstein's equations describe how you understand this quantitatively. 
It also predicted strange new objects in the universe that might occur when there's so much matter that it curves space and time so dramatically, creating regions from which light cannot escape, otherwise known as black holes. Black holes appeared months after Einstein's equations were put down, by Karl Schwarzschild, a brilliant theoretical physicist who died shortly afterwards on the western front. We're celebrating Einstein's equations, which were written down during World War I. But Einstein never believed in the existence of such crazy objects and most physicists were quite suspicious, although now we know that they are abundant throughout the galaxy. Indeed, throughout the universe. Indeed, at the centre of every galaxy, as far as we can tell, there is a big black hole, including our own; here it is, and we can measure the orbits of stars around this black hole and confirm that in the middle here, which you can't really see, there's a very small region which is entirely black and contains a mass of a million suns and is responsible for these orbits. Black holes also appear in other places in the galaxy. They form when stars collapse, supernova, leaving behind black holes which power, through these accretion discs, the ultra-relativistic jets we see as quasars or gamma ray bursts. And they are the subject of continual thought experiments, since their properties are so weird. So for example, we still struggle with the following paradox: One takes a well-defined quantum-mechanical system, say two particles that are correlated, or, as we would say in quantum mechanics, entangled, and then we drop one of them into the black hole. It can never be in communication with us again according to the classical laws of general relativity. We don't know what state it's in, therefore we've lost information and can't predict a final state. 
Naively, information is lost, and around 40 years ago Hawking posed a paradox in describing the quantum-mechanical properties of black holes, and suggested that this phenomenon is information loss at the basic level and that indeed quantum mechanics and general relativity were somewhat inconsistent. He advocated giving up some aspects of quantum mechanics, such as the preservation of information. People like me and others who came from the particle physics community believed that we'll have to change and modify Einstein's theory, as he always suspected, and we'll keep quantum mechanics. I think our side won; in fact Hawking admitted as much and paid off some debts a few years ago. Physical cosmology is the other great achievement of the Einsteinian framework. Before Einstein, we knew nothing and understood nothing, to a first approximation, about the universe. There were all these bright things there, stars; we thought the Milky Way was the whole universe, we thought it was static, unchanging, we didn't know what stars were, anything. But after Einstein one could construct mathematical models of the history of the universe; that is now a physical question, and over those 100 years we have mapped out that history in extraordinary detail. We understand the 13.8 billion years of expansion. First rapid, then slowed down and now accelerated. The formation of structure from the hot Big Bang that is seen about 400 million years after the beginning. And finally, unification, one of the most astounding demonstrations of our unified, or partially unified, theory of elementary particles, which is devoted to discovering, observing and understanding the basic building blocks of matter and the forces that act on them. We've made extraordinary progress in roughly 75 years. 
I'm almost 75 years old, so in one lifetime we've gone from no elementary particles and no understanding of the forces acting on them, except for electricity and magnetism, to a rather complete theory, quite remarkable. In a sense, experimental elementary particle physics began with Rutherford's discovery of the nucleus of atoms. He wanted to understand what goes on inside an atom, so he invented a technique which we still use today: he bombarded gold foil, gold nuclei, gold atoms with alpha particles, which were nuclei of helium emitted from radioactive substances, and his students observed little dots of light on fluorescent screens and could measure the deviation of particles scattering off the gold nucleus. He deduced from this using theory. Rutherford was a good theorist too, he knew electromagnetism, and assuming the force between these alpha particles and the nucleus was electromagnetic, he could determine the size of the locus of the positive charge of atoms and their mass. He came to the conclusion that in the centre of atoms, 100,000 times smaller than the size of the atom, was where all the mass, most of the mass, and all of the positive charge was located. That was the discovery of the nucleus of atoms. And of course since then, for over 100 years, we have been exploring experimentally and constructing theories of what goes on inside the nucleus of atoms. But using exactly, conceptually, the same idea: if you want to discover what something like this is made of, you take something else like this, you smash them together, see what comes out, and try to figure out what's going on. Make new particles, try to figure out the laws. We of course use much bigger accelerators than Rutherford had available. This is the LHC at CERN; this is Geneva airport, as you know. That's a massive 27-kilometre accelerator which accelerates protons, smashing protons at around a trillion, a million million electron volts. 
And then we detect the pictures of what comes out with these massive detectors and try to figure out, find one event out of a hundred billion that might be some indication of new physics. Well, developing this theory, the standard model as it's called, has been recognized by Nobel prizes, many Nobel prizes. I did the exercise of counting how many, and there are 52 Nobel Laureates who've contributed to the development of the standard model. And I'm leaving out, by the way, people like Einstein, Dirac, Heisenberg, Schrödinger, who created the theory of quantum mechanics, the framework in which of course the standard model is embedded. So even leaving them out, there are 52 laureates spread over 30 Nobel Prizes in the last 75 years. Also interesting, 20 of these prizes are for experiment, 10 for theory. So the lesson is, if you want to get a Nobel Prize, experiment is the way. On the other hand, in particle physics, if you want to do experimental high-energy physics, you have to join a group often consisting of thousands, and the Nobel Prize is limited to three, so it's a bit of a problem nowadays. Out of these 52 laureates, four are present here in Lindau. Why so few? Good question. This is an illustration of the standard model of elementary particle physics. It's the list and properties of the basic constituents of matter, quarks, leptons, here you see the electron, the neutrino, the up and down quarks that make up the nuclei in our body, two other families, and three forces that act within the atom and the nucleus. Very similar at a very basic level. But different because of the strange quantum properties of the vacuum. Electromagnetism, that was already there in the 19th century, of course, and the weak and strong nuclear forces that act within the nucleus. And then of course the Higgs sector, or the Brout-Englert-Higgs sector, which has been added on to account for properties of the weak nuclear force. So these are the people who contributed. Starting with J.J. 
Thomson, who at the end of the 19th century discovered the first elementary particle, the first basic constituent of matter, the electron. And then Rutherford, who discovered the nucleus, although actually he never got the physics prize; he got the chemistry prize for radioactivity, but he clearly deserves to be on this list. Niels Bohr, the theorists are marked in red, who constructed the first model of the atom, of the structure of matter, based on E and M, electricity and magnetism, and quantum mechanics, which was essential. Chadwick, who discovered the neutron. It took a long time from the nucleus to protons to the neutron. Carl David Anderson, who discovered the first antiparticle, predicted by Dirac, whom I haven't put on this list: the positron, the anti-electron. Ernest Lawrence, who developed the cyclotrons, the modern particle accelerators we use. Blackett, who developed cloud chambers to use the cosmic rays accelerated throughout the universe as accelerators. Yukawa, who made the first attempt to construct a theory of the nuclear force and predicted the existence of a new particle called the pion, which was then discovered by Powell. Then we have, after World War II, the big development that followed the new scientific tools that were made available, like radar. Of course then Lamb, who discovered anomalies in quantum electrodynamics. Lee and Yang, who proposed that parity might be violated in the weak force, in the weak interactions. Glaser, who developed the bubble chamber. Hofstadter, who probed the structure of the nucleon. Segrè and Chamberlain, who discovered the antiparticle of the proton, the antiproton. And then Tomonaga, Schwinger and Feynman, who perfected, completed the understanding of quantum electrodynamics. Luis Alvarez, who built a bubble chamber and the modern way of analysing high-energy physics experiments. 
Murray Gell-Mann, who discovered symmetry patterns among the nuclear particles that were being produced experimentally. Burt Richter and Sam Ting, who discovered the J/Psi particle, or the charmed quark. Glashow, Salam and Weinberg, who were the developers of the electroweak theory, the weak nuclear force. Jim Cronin, who is here, the yellow stars are people who are here, the few, and Fitch, who discovered CP, or time-reversal, non-invariance in particle decays. Carlo Rubbia and van der Meer, who discovered the carriers of the weak force, the W and Z particles. Lederman, Schwartz and Steinberger, who discovered the two neutrinos. Friedman, Kendall and Taylor, who did the experiments that for me were totally crucial, illustrating that inside protons there really are quarks; this prize is in a sense for the discovery of quarks. Georges Charpak, who developed many crucial experimental detectors for high-energy physics. Perl and Reines. Perl for discovering the tau lepton and Reines for the neutrino. 't Hooft and Veltman, again a team here, for understanding how to renormalize the gauge theories we use in the standard model. Finally the 21st century: Davis and Koshiba for discovering neutrino oscillations, the fact that neutrinos do have some mass. This is one of my favourites. The discovery of asymptotic freedom and the theory of the strong nuclear force. Oops, uh, where were we? And then uh... And then... And last but not least, and for the first time on the Nobel website where I took all these pictures in colour, Englert and Higgs for the discovery of the Brout-Englert-Higgs mechanism and again, Englert is not only here, but here. OK, and what came out of these 52 men, of whom I'm proud to be one (it's quite a crowd and, unfortunately, includes not one woman), is the standard model. But it really is a theory, a theory you can see, because you can put it all on one t-shirt. 
And this Lagrangian, as far as we know, describes just about everything in a fundamental reductionist sense, all of science. I add in here Einstein's general relativity, which so far we truly only understand classically, and I'm adding here the cosmological constant responsible for the acceleration of the expansion of the universe. It's an unbelievably successful theory. The goal of millennia of science, of course, but the real development of this took only 75 years, from J.J. Thomson to 1975-ish. Unbelievably successful: as far as we know it works down perhaps to the smallest conceivable scales and out to the edge of the universe. In a reductionist sense, and all physicists are reductionists, all of physics and therefore chemistry and biology et cetera, all are contained here, if you only work hard enough to solve these equations. There are elements of this totally beautiful t-shirt that are still somewhat mysterious. For example, this term here is the one that accounts for the quark and lepton and neutrino masses. We don't understand its origin, it has a lot of parameters we have to measure and cannot calculate. Something is missing in our understanding of this term. Then there's this term which is... this term is another problem we don't truly understand. Then there's Einstein's term, which we don't truly understand how to quantize, how to make consistent with quantum mechanics. Then there's dark energy, whose form was predicted by Einstein and is well tested, a great triumph of general relativity, but the magnitude of this term is an incredible theoretical mystery. And then there are of course many, many measurements, direct and indirect, and puzzles that inform us, like Einstein, that even this fantastic standard theory must be provisional, that there must be physics beyond it, including dark matter, which I'll come to, neutrino masses, baryons (why are there baryons left after the Big Bang?), and the acceleration of the universe. 
And theoretical mysteries, like how do the forces unify? The various enormous disparities of the scales of fundamental physics. The properties of the quarks and gluons, their masses and so on, and the theory of the universe. Some of those puzzles seem to be resolved by a very beautiful idea, supersymmetry, which I've discussed in Lindau before, and for which we still await any evidence at the LHC. Dark matter, however, is there for sure; it has been observed indirectly throughout the universe by astronomers. They see something that affects matter and light and therefore they know there's matter there, and indeed most of the matter in the universe is not made of the stuff that we are made out of, and therefore it's called dark, it doesn't radiate. There are intensive searches to detect or produce such matter. I have no doubt that will happen in the next decade. Theorists want to unify, following Einstein's dream, and we're in a position to do so because we understand all of these forces now, and when we extrapolate their properties, we find that they all sort of come together and look very similar and fit together at an extraordinarily high energy, or short distance. This happens to be very close to where gravity becomes a strong force and we must take it into account. This is the strong hint that has motivated us for the last 40 years to try to go beyond the standard theory of the strong, weak and electromagnetic forces towards a unified theory, perhaps of all the forces, leading us to string theory, for example. Where we could imagine that all the different forces and the quanta of the fields that describe these forces, all the particles and forces, are due to different vibrations of a single superstring. Now in that picture, that enormously high energy, enormously short distance, is known as the Planck scale. Discovered by Max Planck when he discovered Planck's constant. He realized he now had three parameters which were clearly fundamental in physics. 
The velocity of light, the strength of gravity, and the constant he needed for his radiation law. And with three dimensionful units, you can construct natural units for physics. And he did. And he advocated using these to communicate with E.T., extra-terrestrial civilizations, you know? They would say, "How big are you guys?" And a million years later we would tell them, "Well, we're two meters." Come on, what's a meter? No, we would tell them we are 10^35 basic units of length, and any physicist anywhere in the universe would know what that meant. That's the Planck length, it's awfully small. The Planck time is awfully fast. The Planck energy scale is awfully big, but that's a fact of nature. It's not a choice of theorists who like to probe domains which are inaccessible, it's a fact. And we have to live with it, and it's what I call the curse of logarithms. So if you try to measure energy using a scale that is relevant to physics, meaningful, then you really should measure the logarithm of energy. You increase the energy by factors of 10, 10, 10. That's what we're always doing, by the way, in particle physics. We always want to build a bigger accelerator by a factor of 10, so you should really use a logarithmic scale. On that scale, physics changes from energy scale to energy scale. So on that scale, in units of billion electron volts, Rutherford was down at 10^-3 when he was probing the structure of the atom. The strong interactions are characteristically probed at 10 to 100 GeV. A proton weighs 1 GeV, and the weak interaction scale, maybe a TeV, a trillion electron volts, is being probed at the LHC, or 10 TeV, and we would really like to get to the unification scale, the Planck scale, and that's 10^19 GeV. 
We're hoping to build an accelerator that'll go to 100 TeV, that goes a bit farther, but on a logarithmic scale, you see, compared with the 75 years from Rutherford to the standard theory, this is, you know, a big step, but it's only about as much as we've done before. And that's why we can, as theorists, speculate and work without being able to measure directly. And one of the reasons we can't directly measure up here is that there's another scale, called dollars. Oops, something again happened. I'll get back. God Almighty. Let's go to... God Almighty. Play, play. OK, back to the curse of the logarithms. So the real scale, the scale of physics, goes as the log of the energy; we've made a lot of progress, we have to get to here, but society uses dollars. Now it turns out that the dollars for building accelerators increase like the square of the energy at best, and that's exponential in the scale of physics. And exponentials are really bad. So on the same scale, Fermilab, which cost about a billion dollars, is down here, and then the LHC, which cost maybe six billion dollars, is up here, that's a long way. And then the new machine that we would like to build at 100 TeV costs about 10 billion dollars, and the machine we really need to probe the Planck scale... I don't know what's in this direction, Munich? So on this scale, it's probably in Munich. So that's a fact of life and we have to deal with that, and there are all sorts of strategies. And it certainly affects the way we look at the future, because if you look at the present and future of particle physics, you can either be extremely optimistic, as I tend to be, or extremely pessimistic, as even I tend to be sometimes. From the extremely pessimistic point of view, you could say, well, "The standard theory works so well." In fact, the recently, finally confirmed Higgs sector agrees extraordinarily well with the simplest predictions of the simplest models that were considered 50 years ago. 
And that's disappointing in a way; it works, but of course it doesn't tell us anything we didn't know, in a way. There also is no signal for these new particles and new symmetries we imagine, dark matter has not yet been directly observed, and we're not guaranteed that it will be in the next decade or two. We have no direct, experimentally provable indication of where the next new threshold is, and it might be as high as the Planck scale. What do you do if that turns out to be the case? And we'll know that in the next three to five to ten years for sure. Now the extremely optimistic scenario, which I subscribe to more fervently, is that, well, there probably are some deviations from the simplest Higgs model. Supersymmetric particles will be observed in the next run of the LHC, which began a few weeks ago, and dark matter will be detected in the sky or underground, or produced at the LHC. And that'll give us enormously strong guidance for the next steps, and there are many experimental steps, various colliders. In both cases, however, in both scenarios, the lesson I take away is that we must fully explore the next scale, 10 times greater. This really gets us into the scale where the electroweak sector of the standard model can be really understood. And we can do it; in fact, the United States did this 20 years ago. Then Newt Gingrich came along and shut it down. We had the SSC. If it had not been killed, it would, having gone through its first upgrade, be running at about 100 TeV. Its design energy was 40 TeV, which would've been easily extendable with modern magnets to 100 TeV, so we had already built it before Newt Gingrich got his hands on Congress. And there are all sorts of plans at Geneva and, most excitingly, by China, which is now, perhaps if not today then next week, the biggest economy in the world. They can afford to do this. There is a very exciting Chinese proposal to build a 100 TeV collider around here... 
and that decision will be taken probably by the end of this year. CERN has its own plans but a longer time schedule, since they must finish with the LHC. Meanwhile, theorists can go on speculating, as we have been doing so successfully in the last 100 years, but now the questions we're asking are really profound in some sense, though they build on previous knowledge. Space-time was altered in a totally fundamental way 100 years ago by Einstein, but once we add quantum mechanics to the game, it remains one of the deepest mysteries. And many of the properties of space and time we take for granted, which we construct as infants as a model to navigate the world, appear to have no real fundamental meaning, and many of us are convinced that space and time is truly an emergent concept. At the Planck scale, it's simply not a good way of describing things; there's something more fundamental, and space and time are emergent. We're now beginning to understand how that can happen and how to see what properties of a quantum-mechanical system underlie our usual space-time descriptions. Space-time emerges; of course gravity is just dynamical space-time, so it's also an emergent force. But this is difficult, very difficult conceptually. We have to, in a sense, imagine: how do you start from a more fundamental basis for physics in which space and time are not there to begin with? How do you formulate the rules of physics without postulating space and time? And then both cosmologists and fundamental physicists of all types are now faced, given this understanding of the history of the universe, a 100-year story, with understanding the Big Bang. We can't avoid it anymore. That again is now taken away from religion and philosophy and becomes a matter of physics. We have to find a solution that goes beyond, you know, saying, "Well, at some point it was hot and dense." But to really go back to the Big Bang requires confronting a question that physics has been able to avoid for millennia, which is, "How did the universe begin?" 
And, "Is this a question physics can address?" Can we determine the initial condition? I believe that this question has to be confronted, because all our current speculations about unifying the forces of nature with gravity, the theory of space-time, must in the end consistently describe how the universe began. Or we must modify the question, as we transform our notion of what space-time is, to one that can be answered. But it's obviously very difficult. And it's very difficult to see signals from very early times. We had a lot of hopes earlier this year with an experiment called BICEP2, looking for relic gravitational waves, ripples in the metric of space-time that came very close to the very beginning. Unfortunately it turned out that they ignored the dust, and we haven't yet seen such gravitational waves, but eventually we'll probably see them. So I am very optimistic, and all the young people here should be optimistic, for the following reasons. My reading of history and my own life's experience is that once a fundamental question becomes a well-formulated scientific question, which means that it can be approached by experiment, by observation and by theory, mathematical modelling, it will be answered in your lifetime. So I've posed some of these really interesting questions; they will be answered in your lifetime, I didn't say my lifetime. Also, once an important scientific instrument is technically feasible and addresses a fundamental scientific question, like the 100 TeV collider, it will be built in your lifetime, maybe not mine but, you know... So I'm optimistic for fundamental particle physics, because new discoveries, I believe, are around the corner. There are new tools being developed to try to deal with this incredible hierarchy of scales, there are wonderful new ideas and theoretical experiments. 
We have a wonderful theory of elementary particles but the most exciting questions remain to be answered and as always in science, well, there are incredible experimental opportunities. To be specific, I am very excited about the possibility of China coming into the game. The new experiments and new accelerators that are coming and new discoveries around the corner. So the best is yet to come, thank you.
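Gross's aside about telling extraterrestrials "we are 10^35 basic units of length" lends itself to a quick check. The sketch below derives Planck's natural units by dimensional analysis from the three constants he names; the numerical values are rounded CODATA figures, not numbers taken from the lecture.

```python
import math

# Approximate CODATA values (assumption: rounded, not from the lecture)
c = 2.998e8        # speed of light, m/s
G = 6.674e-11      # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34   # reduced Planck constant, J s

# Planck length: the unique length built from hbar, G and c
planck_length = math.sqrt(hbar * G / c**3)        # ~1.6e-35 m

# Planck time: the light-crossing time of one Planck length
planck_time = planck_length / c                   # ~5.4e-44 s

# Planck energy, converted from joules to GeV (1 GeV = 1.602e-10 J)
planck_energy_GeV = math.sqrt(hbar * c**5 / G) / 1.602e-10  # ~1.2e19 GeV

# A two-metre human measured in Planck lengths
height_in_planck_units = 2.0 / planck_length      # ~1.2e35
```

The last line recovers the figure Gross quotes: a two-metre human is roughly 10^35 Planck lengths tall, and the Planck energy comes out at his unification scale of about 10^19 GeV.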

David Gross on missing elements of the standard theory
(00:23:06 - 00:25:05)



François Englert, finally, who shared the Nobel Prize in Physics 2013 with Peter Higgs “for the theoretical discovery of a mechanism that contributes to our understanding of the origin of mass of subatomic particles, and which recently was confirmed through the discovery of the predicted fundamental particle, by the ATLAS and CMS experiments at CERN's Large Hadron Collider”, offered his perspective on the unknown beyond the discovery of the Higgs boson and put neutrinos in the context of quantum gravity and the birth of the universe:


François Englert on Challenges Beyond the Discovery of the Higgs Boson
(00:27:55 - 00:31:50)


Revisiting Majorana’s Legacy

Given the importance of the questions raised in these lectures, it is quite understandable that the experimental setup of neutrino research is getting more and more complex. In December 2010, the IceCube Neutrino Observatory in Antarctica was completed. It is designed to detect neutrinos from cataclysmic cosmic events that have energies a million times greater than those of nuclear reactions, and thus to explore the highest-energy astrophysical processes. Its 5,160 digital optical sensors are distributed over one cubic kilometer under the Antarctic ice. Its experimental goals include the detection of point sources of high-energy neutrinos, the identification of galactic supernovae, and the search for sterile neutrinos, a hypothetical fourth type of neutrino that does not interact via any of the fundamental interactions of the Standard Model except gravity and could help to explain dark matter. About 300 scientists at 47 institutions in twelve countries are involved in IceCube research. Between 2010 and 2012, the IceCube detector revealed the presence of a high-energy neutrino flux containing the most energetic neutrinos ever observed. “Although the origin of this flux is unknown, the findings are consistent with expectations for a neutrino population with origins outside the solar system”[13].

One of the most exciting questions of current neutrino research is whether neutrinos are their own antiparticles. Ettore Majorana had already suggested this possibility in the 1930s. The experimental proof of this daring hypothesis would be the discovery of a phenomenon called “neutrinoless double beta decay”, in which two neutrons decay together, so that “the antineutrino emitted by one is immediately absorbed by the other. The net result is two neutrons disintegrating simultaneously without releasing any antineutrinos and neutrinos”.[14] In this case, in violation of a cardinal rule of the standard model, lepton number would not be conserved, and the standard model would have to be fundamentally revised. More than that: the neutrino’s two-sides-of-one-coin identity could explain the asymmetry between matter and antimatter shortly after the Big Bang and thus establish the reason for the existence of our universe.
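The lepton-number bookkeeping behind this argument can be made explicit. The sketch below is only a didactic aid, not from the source; the particle names and lepton-number assignments follow standard convention.

```python
# Lepton numbers per particle (illustrative table; nucleons carry L = 0,
# the electron L = +1, its antineutrino L = -1)
LEPTON_NUMBER = {
    "neutron": 0,
    "proton": 0,
    "electron": +1,
    "antineutrino": -1,
}

def total_lepton_number(particles):
    """Sum the lepton numbers of a list of particle names."""
    return sum(LEPTON_NUMBER[p] for p in particles)

# Ordinary beta decay: n -> p + e- + anti-nu_e conserves lepton number.
beta_initial = ["neutron"]
beta_final = ["proton", "electron", "antineutrino"]
assert total_lepton_number(beta_initial) == total_lepton_number(beta_final)

# Neutrinoless double beta decay: 2n -> 2p + 2e-, no antineutrinos emitted.
nubb_initial = ["neutron", "neutron"]
nubb_final = ["proton", "proton", "electron", "electron"]
delta_L = total_lepton_number(nubb_final) - total_lepton_number(nubb_initial)
```

Ordinary beta decay balances at L = 0 on both sides, while the neutrinoless mode changes L by two, which is why observing it would force the revision of the standard model described above.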

While the discovery of the Higgs particle has been hailed as a vindication of the standard model, the discovery that neutrinos have mass shows that the standard model is not yet sufficient to describe the world by physical equations. “The experiments have revealed the first apparent crack in the Standard Model”, the Royal Swedish Academy of Sciences stated with regard to the Nobel Prize in Physics 2015.[15] This is a good sign for future insights and a strong motivation for young researchers – or, as the legendary singer-songwriter Leonard Cohen once put it: “There is a crack in everything. That’s how the light gets in.”



[1] quoted by Jayawardhana R. The Neutrino Hunters. Oneworld Publications, London, 2015, p. 42

[2] For the full text of the original German letter, in which Pauli’s humor is captured best, see Mößbauer R. (Ed.) History of Neutrino Physics: Pauli’s Letters. Proc. Neutrino Astrophysics, 20, 24 Oct. 1997, p. 3-5.
Both the full German text and its English translation are accessible online.


[4] Pauli reportedly had said so to his colleague Walter Baade, cf. Jayawardhana R., l.c., p. 42 and 132

[5] Cf. Jayawardhana R., l.c., p. 44

[6] Chadwick J. Possible Existence of a Neutron. Nature 129, 312 (27 February 1932)

[7] Cf. Jayawardhana R., l.c., p. 72

[8] cf., p.2 (John Bahcall. Solving the Mystery of the Missing Neutrinos)

[9] Gribov VN and Pontecorvo BM. Neutrino astronomy and lepton charge. Phys. Lett. B 28, 493-496 (1969)

[10] cf.

[11] Cf. Jayawardhana R., l.c., p. 120

[12] “For three decades people had been pointing at this guy”, Bahcall said in an interview, “and saying that this guy is the guy who wrongly calculated the flux of neutrinos from the Sun, and suddenly that wasn’t so. It was like a person who had been sentenced for some heinous crime, and then a DNA test is made and it’s found that he isn’t guilty. That’s exactly the way I felt.” Quoted by Jayawardhana R., l.c., p. 107

[13] Aartsen et al. Evidence for High-Energy Extraterrestrial Neutrinos at the IceCube Detector. Science 342, Issue 6161, 22 November 2013

[14] Jayawardhana R., l.c., p. 160




Additional Mediatheque Lectures Associated with Neutrinos:

Rudolf Mößbauer 1979: "Neutrino Stability"

Rudolf Mößbauer 1985: "Rest Masses of Neutrinos"

Hans Dehmelt 1991: "That I May Know the inmost force…"

Leon Lederman 1991: "Science Education in a Changing World"

Melvin Schwartz 1991: "Studying the Coulomb State of a Pion and a Muon"

James Cronin 1991: "Astrophysics with Extensive Air Showers"

Rudolf Mößbauer 1994: "Characteristics of Neutrinos"

Martin Perl 1997: "The Search for Fractional Electrical Charge"

Rudolf Mößbauer 1997: "Neutrino Physics"

Rudolf Mößbauer 2001: "Masses of the neutrinos"

Martinus Veltman 2008: "The Development of Particle Physics"


