When Chen Ning Yang and Tsung-Dao Lee received the 1957 Nobel Prize in Physics for their theoretical prediction of parity violation, they were still in their thirties and could look forward to long careers as theoretical physicists. They were both invited to Lindau for the physics meeting in 1959, but neither of them accepted the invitation. When Yang gave the present lecture in 1973, it was his first visit to a Lindau Meeting, and Lee didn't lecture in Lindau until 1994. So for many generations of students and young researchers participating in the Lindau Meetings, Yang and Lee were only names on the long list of Nobel Laureates who did not accept the Lindau invitations, like Erwin Schrödinger, Richard Feynman and several others. So when Yang actually came in 1973, it was an event worth noting down. His lecture on the structure of the proton could also have been given an alternative title, e.g., a scattering-theoretical description of proton-proton collisions. In the audience was Robert Hofstadter, who had received the 1961 Nobel Prize in Physics for his studies of electron scattering on atomic nuclei. Hofstadter had discovered that the proton has a roughly Gaussian charge distribution with a long tail, and this is the starting point for Yang in the first part of his lecture, which describes results from experiments studying elastic scattering of high-energy protons on protons at Brookhaven and CERN. In particular, the results from the Intersecting Storage Rings at CERN are referred to repeatedly. In 1973, the quark structure of the proton had not yet been accepted, but experiments that would eventually lead to its acceptance (and a Nobel Prize in Physics 1990 to Jerome Friedman, Henry Kendall and Richard Taylor) were ongoing. Yang shows himself as a master theoretician in managing to describe almost all aspects of proton-proton collisions in a general physical approximation which circumvents the need for massive detailed field-theoretical calculations.

Anders Bárány

The proton is the smallest nucleus. That physicists should want to study its structure and should want to take it apart is of course in the best tradition of physics. We recall that from about the year 1890, for 40 years, physicists studied the structure of the atom. Through a series of brilliant experimental and theoretical discoveries during those 40 years, one was led to a comprehensive and accurate understanding of the structure of the atom. We all recall that this culminated in one of the great revolutions in the history of physics, the discovery of quantum mechanics, and I'm sure it is a satisfaction on the part of everybody here that we have with us some of the principal architects of this whole enterprise. Since that time physicists have of course tried to continue this tradition and to study the next smaller building block of matter. And that of course led us to the study of the structure of the proton, at which physicists have been working for many years. What is it that physicists have learned? Have physicists understood the structure of the proton? What are the prospects in the next few years? It is these questions that I should like to report to you this morning very briefly. I will discuss the following items: First, elastic scattering, that means proton-proton collisions going into only proton-proton. And second, inelastic scattering, or the fragmentation process. And thirdly, I will report to you some very exciting recent discoveries which I shall label as the darker, or blacker and blacker, proton. And lastly some remarks about the future. So this would be sort of the brief outline of what I will discuss with you during the next 40 minutes. Let us begin from the end of the Second World War. 
There was a period of perhaps what we could call the age of innocence, in which people thought that if we hit two protons against each other and studied the elastic scattering of those two protons, we would know everything about the elastic scattering and hence the force between the two protons. And from that, in the same way that people learned about the structure of the atom, we would know everything about nuclear forces and the subject would be successfully carried to an end. It very rapidly became evident that that was not so. Because when two protons collided with each other at energies somewhat higher than the lowest energies, an enormous number of other particles were produced. And in many senses high-energy physicists' attention was diverted to the topic of classification and the study of the intrinsic properties of these other particles. And that led to great excitement in the late '40s and throughout the '50s and early '60s. But the structure of the proton itself eluded these studies for a long time. Now during the '50s, an enormously important set of experiments was started by Bob Hofstadter, and I think he referred to it this morning, trying to find the electromagnetic structure of the proton. And for simplicity, let us just say that what he was trying to do was to study the charge distribution inside of the proton. And that was an extremely important line of research, very successfully carried out, first by him and then by various laboratories all over the world. So that now we have a detailed understanding, experimentally, of that structure. Now it was found that the proton was not a single point charge. It had a charge distribution which at small dimensions is somewhat like a Gaussian, though not exactly a Gaussian. And then it has a long tail, and the precise shape of that tail is not very easy to completely determine. 
But in general it looks like that, with a characteristic size of about 0.7 x 10^-13 centimetres. The next topic that one would study, of course, was to go back to the original proton-proton elastic scattering idea. Except that by the '60s people had already become sophisticated and knew that the scattering was not to be described as a kind of force between the two protons. But anyway, such experiments were started, first at Brookhaven in the early '60s, then at CERN and now at Serpukhov and at Batavia. Such studies led to very complicated results, but it is fortunate for physicists that the results had rather dramatic and perhaps unifying features to them. In order to give you a general impression of this, let me present to you a composite picture which was recently put together of what the elastic proton-proton scattering angular distribution looks like. One thing good about this is that you can focus it yourself. What is plotted on the abscissa is the three-momentum-transfer squared in the centre of mass system, t = 2k^2(1 - cos theta), where k is the centre of mass momentum and theta is the centre of mass scattering angle. And what is plotted on the ordinate is the differential cross section d\sigma/dt, where t is the abscissa. So d\sigma/dt is of course proportional to d\sigma/d\Omega, where \Omega is the solid angle. But you see that these were the experimental data. That's for an incoming 3 BeV collision on the proton at rest. That's for 5 BeV, 7 BeV, etc., etc. Now this set of data points, through which no line is drawn, is at the highest energy, which is the equivalent of a 1,500 GeV proton colliding on a proton at rest. Actually it is a 27 GeV proton on a 27 GeV proton at the Intersecting Storage Rings at CERN. Now I call your attention to two important features of the slide. The 3, 5, 7, 10 GeV data were actually something like 10 years old. 
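The kinematic variable on the abscissa can be checked in a few lines. This is a minimal numerical sketch (the numbers are illustrative, not taken from the slide): it evaluates t = 2k^2(1 - cos theta) and shows the small-angle limit t ≈ (k·theta)^2, which is the "fixed t means fixed k·theta" statement used later in the diffraction argument.

```python
import math

def t_of(k, theta):
    """Three-momentum-transfer squared in the centre-of-mass frame:
    t = 2 k^2 (1 - cos theta), k = CM momentum, theta = CM angle."""
    return 2.0 * k**2 * (1.0 - math.cos(theta))

k = 3.0                                   # GeV/c, illustrative
# At 90 degrees, t = 2 k^2:
print(t_of(k, math.pi / 2))               # ~ 2 * k**2 = 18
# At small angles, t ~ (k * theta)^2, so fixed t means fixed k * theta,
# which is the diffraction statement used later in the lecture.
theta = 0.01
print(t_of(k, theta), (k * theta) ** 2)   # nearly equal
```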
But it was already obvious from the first experiments at Brookhaven, up to about 30 GeV, that there are characteristic features of elastic scattering which one can summarise as follows. The first important feature that one sees is that d\sigma/dt at large angles is very small. Now to impress you with this fact, let us go back to the previous slide. The previous transparency. You notice that the ordinate is a logarithmic plot; these points where the curves end are 90 degree scattering. If you take, let's say, a 10 BeV incoming energy, you find that the 90 degree scattering compared to the incoming is down by something like a factor of 10 million. If you go to a 30 BeV scattering, it's down by a factor of about 10^12. Nobody has yet studied it for the 1,500 BeV data because the cross section at 90 degrees is so vanishingly small, it presumably would not be able to be detected for a long time to come. If you try to extrapolate the 90 degree cross section, it is clear that it falls like the exponential of some power of the incoming energy, perhaps the square root of the incoming energy. So I think it is very easy to observe this fact from the experimental data. In fact what was extremely startling to physicists 10 years ago, when these data were first found, was the following: proton-proton scattering at 90 degrees at 70 GeV has not yet been done. But if you do it and extrapolate from the lower energy data, lower energy being up to 30 GeV, you would find that the elastic proton-proton cross section, which of course derives from strong interactions, is smaller than that to be expected from weak interactions at the same energy. Now to a physicist who had traditionally thought that the weak interaction is just extremely weak compared to the strong interaction, this was a most startling fact. A second important feature of elastic scattering at high energies, as studied in the early '60s, was true not only for P-P scattering. 
Both of these two features were true not only for P-P scattering but for all the other elastic scatterings which were later studied. The second feature is the statement that d\sigma/dt for a fixed t does not depend much on the incoming energy. In other words, for a fixed t it seems to become a function of t alone, independent of the incoming energy. Let us go back to the slide and see whether we would see that feature. Well, that statement is clearly not true when you go to, let us say, t = 2, because you see that the 3 BeV cross section is higher than the 5 BeV. However, if you look at a small t value, let us say t = 0.2, these curves fall very accurately on top of each other. And this was studied more accurately at small scattering angles by blowing up these diagrams, and it was indeed a general feature. There are slow dependences on the incoming energy, perhaps logarithmic, which we shall neglect in this discussion. Because we are here talking about the general features. If we do regard these as the perhaps characteristic general features of elastic high energy scattering between two hadrons, what is the a priori orientation that we should adopt in order to approach theoretically this question of the very small distance structure of the hadrons? Now many schools of thought were developed trying to approach very high energy collisions, and what I will be reporting to you is only one particular way of looking at it, which in my opinion has had considerable success. And that is that these two important features are in fact not at all difficult to understand once one accepts the view that the proton, as already demonstrated by Hofstadter's experiment, is not a point particle but an extended object with a finite dimension, with many internal degrees of freedom. 
If you accept that view, I would submit that these two important features are rather natural a priori consequences, almost independent of the subsequent detailed mathematical structure, which however can later be supplied. But even without the detailed mathematical structure, you would see that they can easily be accommodated in this picture. In fact it was natural from this picture, because the first feature merely says that if you have two extended objects with many internal degrees of freedom colliding with each other, and you want to study the 90 degree elastic cross section, the elastic cross section is very small. For if you ask: if I have two drops of water, each droplet of course having many internal degrees of freedom, and I shoot them at each other at very high energies and ask what is the elastic 90 degree cross section at high energies, the answer is of course extremely small. The a priori reason is that the many internal degrees of freedom in each droplet, each having its own energy and momentum, would not collaborate with each other if you want to transfer a large momentum simultaneously to the whole droplet. And therefore the probability of keeping the two droplets intact becomes exponentially small. So therefore, a priori, once you adopt seriously the theoretical structure of two extended objects with many internal degrees of freedom, with localised energy and momentum, it becomes almost inevitable that you would have feature 1. Feature 2 is also very easily a part of this picture, because of the fact that t is nothing but 2k^2(1 - cos theta) by definition. And for fixed t at large k, theta is of course very small, and this is of course nothing but k^2 theta^2. Fixed t therefore merely fixes k theta, and fixing k theta is of course a well-known phenomenon: that's diffraction. 
Because fixing k theta, and having a cross section which depends only on k theta, merely means that as you go to higher and higher wave numbers, you go to smaller and smaller angles. That's of course nothing but the well known diffraction phenomenon. Therefore if you have two objects which have finite sizes, and if there is any coherence in the scattering picture, you naturally would have a diffraction phenomenon, which is a zero-order approximation to the statement. Such a picture can in fact be extremely easily accommodated in what is known as the 'eikonal approximation'. The eikonal approximation is not new. It was already something which was discussed in optics, because as you understood from what we just said in the last few minutes, what we are saying is that feature 2 is merely wave propagation at very small wavelengths. And that of course was the subject of optics. So the idea of the eikonal approximation in optics is very easily borrowed into high energy physics. In particular, the eikonal approximation in the variety called the approximation of stationary phase was used by Molière in the 1930s to study high energy electron scattering. It was further sharpened and more clearly and rigorously worked on in the mid-1950s by Schiff and by T. T. Wu independently. But anyway, we have a very good mathematical description of the eikonal approximation, which was also made use of by Serber in the late '40s. Only Serber's language was expressed more in terms of a force, which is not the language that is used in recent times. But the spirit is the same. In the eikonal approximation what one does is the following. First let's imagine that there is a scattering centre, and let us have a plane wave impinging on it. Now the eikonal approximation is sort of a cross between the wave picture and the particle picture. So it has part of the characteristics of rays, but yet it retains some of the coherence of the wave picture. 
It is in fact precisely the intermediary between the ray picture and the wave picture. And you can show that wave propagation under specific circumstances at high energies does rigorously give an expansion which to the leading order gives the eikonal approximation. But roughly the idea is that if there is a region where there is absorption, then the incoming wave front in going through this is described by rays. When it arrives at this point, it no longer has the same amplitude; here the amplitude is 1, here the amplitude is less. It is not very much different from 1 at this point, but it is very much different from 1 at this point, because it has gone through a lot of matter. And we describe that by s; s is of course also the high energy physicists' jargon, namely the s-matrix, or the survival amplitude. Then one would have a survival amplitude which is a function of b, where b is the impact parameter. For large b, s is essentially 1 because there is nothing to obstruct it. For small b there is a lot of absorption. It turns out that for proton-proton scattering the characteristic distance in b is about 0.7 x 10^-13 centimetres. When you are much beyond that, s is essentially 1. When you are at the dead centre, s turns out to be approximately 24%. Now when you write s as (s - 1) + 1, that's a tautology. However, we can now use Huygens' principle, because if you start from this 1 here and propagate further, of course you get the undamped incoming wave. If you now use s - 1, which vanishes at large distances and is finite at small distances, that's precisely what comes out of a wave in going through a slit. So the subsequent scattering after that goes by the ordinary Huygens principle. And if you remember Huygens' principle, you would immediately find the following result: d\sigma/dt = pi times the absolute square of the Fourier transform of this s - 1. Because in a grating, or in any slit experiment, the outgoing amplitude is always the Fourier transform. 
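The construction just described can be sketched numerically. The following is an illustrative NumPy-only sketch, not the actual analysis: it assumes a Gaussian opaqueness omega(b) with omega(0) = 1.4 (chosen so that s(0) = e^-1.4, about the 24% quoted above) and a 0.7 fm width, forms the survival amplitude s = e^-omega, and takes the two-dimensional Fourier transform of 1 - s (a Hankel transform for an azimuthally symmetric profile) to get d\sigma/dt up to constants.

```python
import numpy as np

def trap(y, x):
    """Trapezoidal rule along the last axis."""
    return np.sum(0.5 * (y[..., 1:] + y[..., :-1]) * np.diff(x), axis=-1)

def j0(x):
    """Bessel function J0 via its integral representation (NumPy-only)."""
    phi = np.linspace(0.0, np.pi, 4001)
    return trap(np.cos(np.multiply.outer(x, np.sin(phi))), phi) / np.pi

# Gaussian opaqueness omega(b) with omega(0) = 1.4, so that the survival
# amplitude at the centre is s(0) = exp(-1.4), about 24% as quoted in the
# talk; the 0.7 fm width is the proton scale. All numbers are illustrative.
b = np.linspace(0.0, 6.0, 601)               # impact parameter, fm
omega = 1.4 * np.exp(-b**2 / (2 * 0.7**2))
s = np.exp(-omega)

# Huygens: the scattering amplitude is the two-dimensional Fourier
# transform of 1 - s, which for an azimuthally symmetric profile is a
# Hankel transform, and dsigma/dt is its square (up to constants).
q = np.linspace(0.0, 5.0, 101)               # momentum transfer, fm^-1
amp = np.array([trap((1.0 - s) * j0(qi * b) * b, b) for qi in q])
dsdt = amp**2

# The exponentiated opaqueness makes amp change sign at some q:
# a diffraction dip in dsigma/dt, absent for a purely Gaussian 1 - s.
print("s(0) =", round(s[0], 3), "; dip:", bool(amp.min() < 0 < amp.max()))
```

Note that the sign change of the amplitude, and hence the dip in d\sigma/dt, comes from the exponentiation of the opaqueness; this is the mechanism behind the dip prediction discussed further on.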
So this now is the formula which derives from the eikonal approximation, which expresses d\sigma/dt as a function of the s-matrix. Now feature 2 that I emphasised before says that this is a function of t alone. Translated into this equation in the box, that merely says that s becomes a function of b alone. Let me repeat: feature 2 merely says that the opaqueness of the proton, when you go to very high energies, becomes dependent only on the impact parameter. It's then a geometrical picture. So from the experimental data on d\sigma/dt, which we exhibited in one of the earlier transparencies, you would be able to invert this equation and obtain s. And that's why I said before that s is about 24% at the centre. Now this kind of study has led to a picture of what the proton looks like, and I will not go into the further mathematical details, but they're all extremely elementary; it borrows from Huygens' principle and the eikonal picture and doesn't need any really drastic new mathematics. But among other things, a most important question is what this s(b) should be. The previous formula relates the differential cross section to the survival coefficient s as a function of the impact parameter. But what is s as a function of the impact parameter? Here one has to do some hand-waving argument, and an essential feature which we had previously not built into the discussion must come to the fore. And that is that when we were talking here, we were thinking of the passage of a structureless wave through a medium. And we would get this formula. However, we know that in proton-proton scattering we have two extended objects going through each other. Therefore this picture has to be modified a bit, or rather it has to be supplemented a bit, by the simultaneous existence of the finite size of each of the colliding objects. Now this is a topic which is extremely complicated, and there is no universally accepted picture at this time. 
But let me first present you with a hand-waving argument which in the last five years has received amazing support from field theoretical calculations. And the idea is roughly the following. Let us think of P-P scattering, and let us now emphasise that we have a stationary proton in the laboratory system. And we have an incoming proton, which I now draw as a flattened disc because of Lorentz contraction. And this incoming proton goes through the stationary proton. Now you immediately recognise, of course, that different parts of the incoming proton would see different thicknesses of the stationary proton. And as a consequence, this picture of the survival coefficient becomes a much more complicated matter. Well, a hand-waving argument, a very simple one, would say the following: the survival coefficient first has to be an exponential. That is in the simple tradition of saying that if you double the thickness, the survival coefficient goes down geometrically; when the thickness increases arithmetically, the survival coefficient goes down geometrically. And that is the exponentiation. The next statement is what this omega should be as a function of b, where b in this case is the distance between the two centres. This b is now a complicated thing due to the fact that you have two objects, but a moment of reflection would convince you that the easiest assumption is to say that there is a density distribution in the first proton, there is a density distribution in the second proton, and this is nothing but a convolution integral of the two. It's an overlap; the convolution integral means the overlap integral. And that's of course the averaging process. So one would write this as the density distribution in the first proton convoluted with the density distribution of the second proton. Now this, when written down, is of course not based on a detailed field theory. Nor was this one. 
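The overlap rule just stated has a simple quantitative consequence that is easy to verify numerically. This is an illustrative sketch only, in one dimension for simplicity (the two-dimensional impact-parameter case works component-wise the same way), and the 0.7 widths are merely proton-scale placeholders: the convolution of two Gaussian densities is again a Gaussian, with the variances adding.

```python
import numpy as np

# The talk's hand-waving rule: the opaqueness omega(b) is the overlap,
# i.e. the convolution, of the two protons' density distributions.
# For Gaussian densities the convolution is again Gaussian, with the
# variances adding. Widths of 0.7 are illustrative proton scales.
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
s1 = s2 = 0.7
rho1 = np.exp(-x**2 / (2 * s1**2)) / (s1 * np.sqrt(2 * np.pi))
rho2 = np.exp(-x**2 / (2 * s2**2)) / (s2 * np.sqrt(2 * np.pi))

# Overlap integral of the two densities as a function of separation
omega = np.convolve(rho1, rho2, mode="same") * dx

# Width of the result from its second moment: should be sqrt(s1^2 + s2^2)
var = np.sum(omega * x**2) * dx / (np.sum(omega) * dx)
print(round(np.sqrt(var), 3), round(np.sqrt(s1**2 + s2**2), 3))  # both ~ 0.99
```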
These two were hand-waving arguments made some six years ago. The very interesting thing was that through a series of theoretical studies of the infinite energy behaviour of field theory in high energy scattering, diagram by diagram, both of these two features were found to be in amazing agreement with these studies. Unfortunately these studies are not rigorous. They have been carried out at great length, requiring enormously lengthy and powerful calculations, but nobody has succeeded in summing all diagrams. Nor is it clear what is meant by the summation of all Feynman diagrams, because there are so many of them; there is therefore a question of in which order you should sum them. But let it be stated that these intuitive ideas were very much supported by much of this very complicated mathematical analysis of Feynman diagrams. Now what I want to say is that from the experimental data one can obtain this, and from this one can obtain the omega, which is the opaqueness of the proton-proton collision. If you substituted that in here, you would be able to get the density distribution of the proton. It was already speculated about 10 years ago that perhaps this density distribution is proportional to the charge distribution of the proton, which one has measured à la Hofstadter. And this was in very good agreement with experiment. It is not precise, but considering the crudeness of these arguments, the agreement was really amazing. In particular, if you do this process backwards and feed the experimental charge distribution in here, and go through this computation backwards to arrive at the elastic scattering, you would find that the elastic scattering has a dip, which up to the moment of the calculation, which was in 1968, had not been found. But that dip was later discovered at high energies, because when one went at CERN to about 20 GeV, one began to see what was known as a shoulder, at t = 1.3. 
Now the most recent ISR result, which to my knowledge has not yet been published, in fact shows an extremely good dip at this point. It is natural to extrapolate this and argue that at higher energies this dip would become more pronounced. All current ideas to try to fit this type of data proceed, with minor variations, along the lines of approach that we have been discussing in the last few minutes. Departing now from the question of elastic scattering, next I would like to mention inelastic scattering. And this inelastic scattering, which is also called the fragmentation process, was what Professor Hofstadter referred to when he was discussing the inclusive reaction process. The idea behind this discussion of the inelastic scattering is a direct generalisation of the ideas that we have been discussing about elastic scattering. I should say I recall with great pleasure especially the fact that in developing the idea of the fragmentation picture at Stony Brook, one of our collaborators was Benecke of the Max Planck Institute at Munich. Now let me merely outline the general idea here. The idea is that if you have a proton with, let's say, a pion impinging on it at high energies, we said that for elastic scattering the survival amplitude s becomes a function only of the impact parameter. That's the zero order picture that we have been emphasising up to now. However, we all know that elastic scattering is only a very small fraction of the total cross section. At 1,500 equivalent GeV/c, elastic scattering is only about 16 or 17% of the total cross section. The rest all goes into inelastic scattering. That's of course a very natural thing in this picture too, namely you can draw it symbolically like this. After the scattering, this is the pion, this is the proton. This proton becomes vibrating or doing something on its own. 
And the question I would like to raise now is: is it conceivable to have a picture in which this incoming pion, after passage through the proton, remains intact in the original elastic state, with this amplitude becoming a function of b alone, independent of the incoming energy, while at the same time for an inelastic process the inelastic s-matrix does not become a function of b alone, independent of the incoming energy? In other words, if the elastic s is independent of the incoming energy, shouldn't the inelastic s also be independent of the incoming energy? Now, if you try to make any mathematical model with many constituent parts here, with any kind of interaction between them, you will rapidly come to the conclusion that it is not possible for a theory to have this approaching a limit and this not approaching a limit. Because in the process of passage there is a lot of energy and momentum and quantum number exchange between the constituents here and the constituents here. And the elastic scattering is only one of the outcomes. If the elastic scattering invariably comes out independent of the incoming energy, then the inelastic, which represents the other channels, must also do the same. If you do accept this, then of course after this pion has passed, this vibrating proton would not remain stationary for very long. It would finally disintegrate, and that process was called fragmentation. And you would in particular arrive at the statement that the fragmentation products would have a momentum distribution in the laboratory system which is independent of the incoming energy. This we called the hypothesis of limiting fragmentation. Let me repeat: what the hypothesis of limiting fragmentation says is the simple statement that if in a laboratory there is a proton and I hit it with another hadron, after that hadron has passed by, this passing hadron would fragment. 
This laboratory proton would fragment, but the laboratory proton's fragment distribution, the momentum distribution, would not depend on the incoming energy. If you hit the laboratory proton with 10 times the incoming momentum, the fragmentation products would still remain the same. Now this idea has recently gone through a precision test at the ISR. During the last 3 or 4 years it has gone through many other tests, some of which were very much like the data that Professor Hofstadter was presenting to us. But at the ISR, through an idea due to ..., one can do this experiment in a precision way. Because the statement we made before can be translated into the following picture. If I consider a 27 GeV collision on a 27 GeV proton beam, in that picture this proton is not at rest. But in its rest frame it will fragment into particles, and therefore in the ISR frame this will become outgoing particles like this, and this would become outgoing particles like this. There are confusing fragments in the centre, but let's forget about them. These forward moving ones are a spray which comes from the fragmentation of this, and these are the fragmentation of this. Now suppose I collide them in the following way: in an asymmetrical collision with 15 GeV on one side and 27 GeV on the other side. The fragmentation of this 27 GeV particle is the same for this one and this one; that's the concept that we have just been developing. And the Lorentz transformation from the rest system of this proton to the ISR frame is exactly the same as that from this proton at rest to the ISR frame, independently of this one. So therefore this forward distribution in this case should be the same as in this case, while the distribution on this side would not be the same. Now this is something which can be tested in a precision way, because the geometry of the detectors for these and the geometry of the detectors for these are exactly the same. 
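The frame argument above can be made quantitative with rapidities. This is a minimal sketch under the limiting-fragmentation assumption (the fragment rapidity value is illustrative): a fragment distribution fixed in the fragmenting proton's rest frame appears in the collider frame rigidly shifted by that proton's rapidity, so it cannot depend on the opposing beam's energy, and lowering the beam itself from 27 to 15 GeV merely translates the distribution by roughly ln(27/15).

```python
import math

M = 0.938  # proton mass, GeV

def rapidity(E):
    """Rapidity of a proton of energy E (GeV): y = 0.5 ln((E+p)/(E-p))."""
    p = math.sqrt(E**2 - M**2)
    return 0.5 * math.log((E + p) / (E - p))

# A fragment whose momentum distribution is fixed in the fragmenting
# proton's rest frame appears in the collider frame rigidly shifted by
# that proton's rapidity. So the forward spray of a 27 GeV beam is the
# same whether the opposing beam carries 15 or 27 GeV, and switching
# the beam itself from 27 to 15 GeV merely translates the distribution.
y_frag = 0.5                      # illustrative fragment rapidity at rest
shift = (y_frag + rapidity(27.0)) - (y_frag + rapidity(15.0))
print(round(shift, 4), round(math.log(27.0 / 15.0), 4))  # nearly equal
```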
And therefore you need no geometrical corrections, and that is an experiment which has recently been performed. I think the papers have not yet been published. I have several slides, but I don't think I have the time. So I'll just present you with one, and this is not the most interesting one. This is the angular distribution of the charged fragments of two ISR beams, of the outgoing products in an ISR collision. And the circled points are 15 GeV on 15 GeV; this is a forward angle, these are backward angles. It's plotted against log tan(theta/2) for convenience, but for our purpose we need not worry about that. Let us look at the 15 GeV on 15 GeV, which are the circles, and look at the 27 GeV on 27 GeV, which are the triangles. Then they switched to the asymmetrical mode of 27 GeV on this side and 15 GeV on this side, and they found the black points. You see the black points follow very closely the triangles on this side and follow very closely the circles on this side, showing that the fragmentation of a 15 GeV particle or of a 27 GeV particle is in each case independent of the energy of the particle which is hitting it. And that's of course the basic premise that we were discussing. Now there were other tests, which I shall not bother you with. I will now skip to the third topic, which is the question: is there a darker and darker proton? In a totally unexpected manner, within the last half year there has been great excitement at CERN because of the simultaneous results of two experiments. One is a Rome-CERN collaboration, the other is a Pisa-Stony Brook collaboration, both of them measuring the total cross section at the highest ISR energies. And it had been sort of generally believed, without real grounds, that the cross section would perhaps remain fixed for P-P at about 39 millibars. One believed this because the lower energy data were clearly pointing in this direction. But the two sets of ISR data both showed that that does not seem to be true. 
What is plotted here is the data from one of the two collaborations; this is the Pisa-Stony Brook collaboration. The other one gave exactly the same result within comparable errors. What is plotted here is the laboratory momentum; that's 100 equivalent GeV, that's 1,000 equivalent GeV. And what is plotted here is the total cross section; this is not 0, this is 35, 40 and 45. And these were the earlier data, including the Serpukhov data up to about 70 GeV. These circles are the not very accurate National Accelerator Laboratory data with bubble chambers. And as you know, bubble chambers do not give good enough statistics, so these have large error bars. These are the new ISR data. If this is about 39 millibars, these have clearly indicated a 10% rise to about 43 millibars. Currently these experiments are being pushed to the limit of the CERN ISR machine at 31 GeV, which would be equivalent to about 2,000 GeV. Preliminary indications were that indeed the cross section has risen a little bit more. Now that has led to many, many discussions, because while it was not generally expected, it had not been much discussed either. The first person who seems to have discussed this was Professor Heisenberg, who in an article, I think it was in the early '50s but I am not absolutely sure, discussed the possibility of a total cross section which, through the increasing importance of bremsstrahlung-like processes, would rise with incoming energy like the square of the logarithm of the laboratory incoming energy. To my knowledge this discussion was not much picked up later, and then in about 1968, '69, '70, T. T. Wu of Harvard and Hung Cheng of MIT, in their massive studies of the high energy limit of Feynman diagrams which I previously referred to, were led to some sort of impasse. The impasse derived from the fact that some insight they had about how to sum the diagrams led to results which were inconsistent with unitarity. 
By looking at this for a long time and debating between themselves for a long time, they finally became very bold and came to the conclusion that everything would be consistent if you make the hypothesis that the total cross section would in fact rise like the square of log E, as Professor Heisenberg had discussed. The picture that they discuss is very complicated, because it comes from asymptotic summations of an enormous number of Feynman diagrams, and they have not been able to make any rigorous arguments at all. I personally am pessimistic that any rigorous argument will come out of such a kind of picture. But nevertheless the general features, the physical features, are quite obvious from their discussions. And it is that the eikonal approximation, the general exponentiation picture that we were discussing, and the convolution-integral ideas were all amazingly borne out by these insights that derived from their mathematical studies of Feynman diagrams. If you take these pictures, then the picture is in fact a physically extremely simple one. The increasing cross section at high energy derives from the fact that the proton appears to another proton as a darker and darker object. So that if I now plot the opaqueness as a function of the impact parameter, you would find that it becomes a darker and darker object. I didn't draw it symmetrically; it should be symmetrical of course. If you go to higher energies, very much higher energies, the opaqueness rises. Now the opaqueness, as we define it, is a dimensionless quantity. When it becomes very large compared with 1, of course e^-omega is quenched and becomes very small. So if I now plot the survivor coefficient S, which is nothing but e^-omega, then those two pictures translate into nothing but this; this is 1, that means no opaqueness. At the centre there is absorption. Now if the opaqueness becomes very large, of course it would look like this.
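The eikonal picture can be made concrete with a small numerical integral. In this standard picture the total cross section is σ_tot = 2∫d²b (1 − e^(−Ω(b))) and the elastic cross section is σ_el = ∫d²b (1 − e^(−Ω(b)))². The Gaussian opaqueness profile used below is an illustrative assumption (the real profile must come from data), chosen to show how raising the central opaqueness darkens the target:

```python
import math

def eikonal_cross_sections(omega0, b0=1.0, bmax=12.0, n=4000):
    """Cross sections from a Gaussian opaqueness profile
    Omega(b) = omega0 * exp(-b**2 / b0**2) (illustrative assumption).
    sigma_tot = 2 * integral d^2b (1 - e^-Omega)
    sigma_el  =     integral d^2b (1 - e^-Omega)**2
    Units are arbitrary (b0 = 1); midpoint rule over impact parameter."""
    db = bmax / n
    tot = el = 0.0
    for i in range(n):
        b = (i + 0.5) * db
        s = math.exp(-omega0 * math.exp(-(b / b0) ** 2))  # survivor e^-Omega
        tot += 2.0 * (1.0 - s) * 2.0 * math.pi * b * db
        el += (1.0 - s) ** 2 * 2.0 * math.pi * b * db
    return tot, el

# As the central opaqueness grows, the proton looks blacker and the
# elastic-to-total ratio climbs toward the black-disc value of 1/2.
for omega0 in (0.5, 2.0, 10.0, 100.0):
    tot, el = eikonal_cross_sections(omega0)
    print(omega0, round(el / tot, 3))
```

The ratio never quite reaches 1/2 for a Gaussian profile at finite opaqueness, because the grey edge always transmits a little.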
Now, if the opaqueness becomes enormous, then this picture becomes a dish; eventually it approaches something like this. This radius increases because the opaqueness is everywhere pushed up. But as soon as the opaqueness is large compared with 1, S is insensitive to it, because S is already 0. And this is S. Now if you take 1-S, that is the shadow; that is how it scatters, and that is what gives the elastic scattering. So if you take this picture, and you put a little bit more mathematics into it, you would find that this radius increases like the log of the incoming energy. And therefore the total cross section, which is two times the geometrical area of this disc, would increase like the square of log E. The ratio of elastic scattering to the total cross section would be like that of a black disc, which is well known to be 1 to 2. There would be minima of scattering which are just the zeros of a Bessel function. These were all predictions which were made by Cheng and Wu in 1970, and there is now a lot of discussion whether one must buy the whole package from them, if we now believe that the total cross section is increasing. Now this is a topic which is under intensive discussion and there is no generally accepted consensus yet. Also the question of the implication of the increasing blackness for the elastic events is another topic which is under intensive discussion. Now I don't have much more time, so let me skip to the fourth item, which is some general remarks. What is it that one has learned? Now if you address this question to high-energy theorists today, you would get different answers from different people, because, as is evident to everybody who has observed the field, the field is by no means near the end of its efforts.
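The diffraction minima mentioned here can be located numerically. For a fully absorbing disc of radius R, the elastic amplitude is proportional to J1(qR)/(qR), so the minima sit at the zeros of the Bessel function J1, the first at qR ≈ 3.83. A minimal sketch using J1's integral representation (pure Python, no special-function library assumed):

```python
import math

def j1(x):
    """Bessel function J1 via its integral representation
    J1(x) = (1/pi) * integral_0^pi cos(t - x sin t) dt  (midpoint rule)."""
    n = 400
    dt = math.pi / n
    return sum(math.cos((i + 0.5) * dt - x * math.sin((i + 0.5) * dt))
               for i in range(n)) * dt / math.pi

def first_minimum(step=0.005):
    """Scan the black-disc intensity |J1(x)/x|^2 (x = qR) upward from
    x ~ 0 and return the first local minimum on the grid."""
    x = step
    prev = (j1(x) / x) ** 2
    x += step
    while True:
        cur = (j1(x) / x) ** 2
        nxt = (j1(x + step) / (x + step)) ** 2
        if cur < prev and cur < nxt:
            return x
        prev, x = cur, x + step

print(round(first_minimum(), 2))   # first zero of J1: qR = 3.83
```

Translating qR ≈ 3.83 into a momentum transfer requires the disc radius, which in the Cheng-Wu picture itself grows like log E, so the dip position slowly moves to smaller |t| as the energy rises.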
We have learned a number of very interesting things, with lots of details, which do fall into some sort of picture, but different people would put emphasis on different points of this rather complex phenomenon which is high-energy collision. And as a consequence, different people would make different predictions of what the future will tell us about the structure of the proton. So what I will be saying to you is a highly personal view of how things perhaps would be going. In my opinion, what is clear is that the concept that a hadron is a particle of a dimension of 0.7 x 10^-13 centimetres with many internal degrees of freedom is so obvious from all the experimental data that perhaps it should be taken as the zeroth-order approximation. Now you may object, and you may say that that may be a dangerous thing to do, because after all, when we say that there is a geometrical size of 0.7 x 10^-13 centimetres, that is a theoretical statement. What we study is always the angular distribution; namely, we study the momentum transfer. The momentum transfer is canonically conjugate to the coordinate-space variable. And since nobody has succeeded in lining a series of protons up together, crystal fashion, how do we know that the relation between the momentum-transfer variable and the coordinate-space variable is not just a theoretical convenience? In other words, is the coordinate variable a really meaningful concept, other than the fact that it is a Fourier transform of the momentum transfer, which is directly experimentally observable? I think that is a very important question, but I believe that there is ample reason to believe that we should take the coordinate-space description seriously.
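The point about inferring a size from momentum-transfer data can be made concrete. What is measured is a form factor F(q²), and the mean square radius follows from its small-q slope, ⟨r²⟩ = −6 dF/dq² at q² = 0. A toy check, assuming a Gaussian form factor (the shape Hofstadter used in his fits) with a chosen rms radius, recovers that radius purely from "data" near zero momentum transfer; the 0.8 fm value is an illustrative assumption:

```python
import math

R_RMS = 0.8  # fm: assumed "true" rms radius of the toy charge distribution

def form_factor(q2):
    """Toy Gaussian form factor F(q^2) = exp(-<r^2> q^2 / 6).
    For a Gaussian charge density the Fourier transform is again a
    Gaussian, and the small-q expansion F = 1 - <r^2> q^2 / 6 + ...
    encodes the rms radius."""
    return math.exp(-R_RMS ** 2 * q2 / 6.0)

# "Measure" the slope at q^2 = 0 numerically, the way an experimenter
# extracts it from small-angle scattering data, then invert for the size.
eps = 1e-6
slope = (form_factor(eps) - form_factor(0.0)) / eps
r_extracted = math.sqrt(-6.0 * slope)

print(round(r_extracted, 3))   # recovers the assumed 0.8 fm
```

The coordinate-space "size" is thus exactly what Yang says it is: the Fourier-conjugate reading of a directly observable momentum-transfer distribution.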
Because, as we heard from Professor Hofstadter, quantum electrodynamics experiments, independently of hadronic structure, have told us that quantum electrodynamics, namely the marriage of quantum mechanics, field-theoretical concepts and Faraday-Maxwell electrodynamics, has given a picture of electromagnetism in space-time which is accurate down to at least, let us say, 10^-14 centimetres, perhaps better in fact. That means that the classical concept of space and time, at least à la electromagnetism, certainly holds down to much smaller sizes than the 0.7 x 10^-13 centimetres which is the characteristic hadron size. So therefore coordinate space can meaningfully be described, through electromagnetism, as a Euclidean space, as we have been extrapolating from atomic sizes. Once you accept this, you will then try to describe hadronic structure as a manifestation of additional hadronic degrees of freedom on the space-time structure which is based on field theory and electrodynamics. At least I believe that this is a reasonable attitude. However, we must bear in mind that this hadronic structure, about 0.7 fermis in size, is a drop and has many degrees of freedom. So in that sense it has many features which are very similar to what we have been expecting from a drop, and we have lots of experience with drops: we have experience with drops of water, we have experience with drops of nuclear matter. But I want to emphasise that we must always bear in mind that this drop which is the hadron has some features which are also extremely different from those of the earlier droplets that we have been familiar with, for example a drop of water or a drop of nuclear matter. And that difference is best described in the following fashion. In the earlier drops you can always think of cutting that drop into two.
You can take a larger nucleus and cut it in two, and it becomes two geometrically smaller nuclei. In fact that is nothing but fission, or rather, fission is nothing but one type of that cutting-into-two process. Similarly you can do the same thing for a drop of water, but not for a proton. If I take a proton and take an imaginary knife and cut it into two, what would happen is that each region would then grow back to the original size, and one of them would become perhaps a proton and the other would become a pion; or one of them would become a neutron and the other would become a pion or a rho. But the proton and the rho, or the proton and the pion, would each have the original size. So in some sense this 0.7 fermis is a minimum quantum size of hadron physics. There are variations to it, but it is of this order of magnitude, about 0.7 fermis. And therefore any theory with which you try to explain hadron physics must have this feature in it. Otherwise you are having constituents which are not in agreement with what we have so far observed. Is that inconceivable? Is it a consistent idea that you can have some sort of matter with many degrees of freedom inside, and yet it exhibits a minimum size below which you cannot go? The answer is yes; this is not strange at all. When you have a system with infinitely many degrees of freedom, strongly coupled with each other, it is very easy, in fact the most natural thing, to have elementary excitations which have a finite size, with a finite minimum order of magnitude. However, this is an extremely complicated mathematical subject and we are only beginning to look into it. But ideas like this give many of the practicing high-energy physicists great excitement, and we hope that slowly we will learn to deal with this and understand more about the structure of the proton. Thank you. (Applause)

# Chen Yang (1973)

## The Structure of the Proton



Related Content

Chen Ning Yang | Physics | 1957

23rd Lindau Nobel Laureate Meeting | Physics | 1973