Paul Dirac attended the first 10 physics meetings organised in Lindau and gave talks at all except one. In 1979 he chose to talk about a subject based on a more than 40-year-old love of his, the so-called Large Numbers Hypothesis from 1937. This hypothesis emanated from the fact that the strength of the electric force between an electron and a proton is about 40 orders of magnitude larger than the gravitational force between them, and that this number is of the same order of magnitude as the age of the universe expressed in atomic time units. In his lecture Dirac assumes that the two numbers have always been proportional to each other. From this he draws a number of interesting conclusions. One of them is that the gravitational constant, measured in atomic units, varies with the age of the universe and is decreasing. Another conclusion is that the only viable model of the universe is the one that Albert Einstein and Willem de Sitter published in 1932. Dirac’s assumptions lead to effects that should be observable. One of them is a difference between time as measured by an atomic clock and time as measured by the motion of the earth. Another is a very slow spiralling of the planetary orbits towards the sun. Thanks to a revival of interest in cosmology in the 1950s, observational techniques for high-precision measurements of astrophysical effects had been developed: Irwin Shapiro had bounced radar signals off the planets and measured various parameters, and the Apollo programme had bounced laser pulses off the moon. At the end of his lecture, Dirac discusses the status of the observations and concludes that they can neither confirm nor rule out his theory yet.

Anders Bárány


I am very happy to be here in Lindau again and to have this opportunity of talking to you about scientific questions that interest me. Today I want to talk about the possibility that quantities which are usually considered to be constants of nature are really not constant but are varying slowly with the time. It may be that they do vary and that this variation is so slow that it does not show up in ordinary laboratory experiments. This idea, that maybe the constants of nature are really varying, was first introduced by Milne 50 years ago. It’s not a new idea. Now, Milne supported his arguments with philosophical considerations, which I believe are not very reliable, but still he introduced a new idea into physics, an idea which has excited many people since then and which is being very actively studied at the present time. Now, if we are going to think about constants of nature, we must first of all make sure that the quantities which we are interested in are dimensionless. That is to say that they don’t depend on what units we use. If we have a quantity which depends on whether we use centimetres or inches, then such a quantity will not be very fundamental in nature; it will depend on our choice of units. So we have to make sure that we discuss only quantities which are independent of the units, dimensionless quantities. Now, many people who write about this question rather forget this. You see printed papers now where people discuss whether it can be that the velocity of light varies. Now, the velocity of light depends very much on your units, a unit of distance and a unit of time, and the question whether the velocity of light varies or not will depend on how these units are defined; it is not a fundamental question, not a fundamental problem. In theoretical work one usually takes one’s units of space and time to make the velocity of light equal to 1. Then of course there is no question, it has to be a constant.
Let us now focus our attention on dimensionless quantities. There are some which are well known in physics. One of them is the fine structure constant, whose reciprocal is ħc/e²: Planck’s constant over 2π, times c the velocity of light, divided by e². A very famous constant in atomic theory, and its value is approximately 137. Another constant of nature which will immediately spring to your mind is the ratio of the mass of the proton to the mass of the electron, mp over me, and that is something like 1840. Then there is another constant which suggests itself at once. If you consider a hydrogen atom, where there is an electron and a proton, they attract each other with a force inversely proportional to the square of the distance, an electric force. There is also a gravitational force between them, inversely proportional to the square of the distance. The ratio of these 2 forces is a constant. It is a dimensionless constant; it has the value e² over G me mp. G here, capital G, is the gravitational constant. If you work this out you get roughly 7 times 10 to the power of 39, an extremely large number. Now, physicists believe that ultimately they will find an explanation for all the natural numbers that turn up. There ought to be some explanation for these numbers. If we had a better theory of electrodynamics it would presumably enable us to calculate this number 137. If we understood elementary particles better, there ought to be a way of calculating this ratio of the masses and getting this number here. These calculations would just involve simple mathematical quantities, 4π’s and similar factors like that. So we should imagine these numbers constructed from these simple mathematical numbers. But what about this number here, this enormous number? How can that ever be explained? You can’t hope to explain it in terms just of 4π’s and things like that.
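These three dimensionless numbers are easy to check numerically. A minimal sketch in Gaussian units with modern rounded values; note that the electric-to-gravitational force ratio comes out nearer 2 × 10³⁹ than the 7 × 10³⁹ quoted from the podium, but the order of magnitude, which is all the argument needs, is the same.

```python
# Numeric check of the three dimensionless constants (Gaussian/cgs units,
# modern rounded values; Dirac's spoken figures are podium roundings).
e = 4.803e-10      # electron charge, esu
G = 6.674e-8       # gravitational constant, cm^3 g^-1 s^-2
m_e = 9.109e-28    # electron mass, g
m_p = 1.673e-24    # proton mass, g
hbar = 1.055e-27   # Planck's constant over 2*pi, erg s
c = 2.998e10       # velocity of light, cm/s

inv_alpha = hbar * c / e**2           # reciprocal fine structure constant
mass_ratio = m_p / m_e                # proton-to-electron mass ratio
force_ratio = e**2 / (G * m_e * m_p)  # electric / gravitational force in hydrogen

print(f"1/alpha         ~ {inv_alpha:.1f}")   # ~137
print(f"m_p/m_e         ~ {mass_ratio:.0f}")  # ~1836
print(f"e^2/(G m_e m_p) ~ {force_ratio:.1e}") # ~2.3e39
```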
Maybe this number should not be explained by itself, but should be connected with another large number which is provided by the age of the universe. It seems that the universe had a very definite beginning, a Big Bang as it is called; that is pretty well universally accepted, so that there is a definite age of the universe. That age is provided by Hubble’s constant, which gives the ratio of the speed of recession of distant matter to its distance. We get in that way an age of the universe of somewhere around 18,000 million years. It is not very well known; there might possibly be an error of a factor of as much as 2 in it, but still it is somewhere of that order, let us say 18 times 10 to the 9th years. Well, this involves the unit of years, and we ought to have something more fundamental in our theory. Let us take a unit of time provided by electronic constants, e² over mc³, let us say. This is a unit of time. If we express the age of the universe in terms of this unit of time we get the number 2 times 10 to the 39. We get a number just about as big as this number here. Now, you might say that is a remarkable coincidence. Well, I don’t believe it is a coincidence. I believe there is some reason behind it, a reason which we do not know at the present time but which we shall know at some time in the future, when we know more about atomic theory and about cosmology. We shall then have a basis for connecting these 2 numbers. These 2 numbers should be expressed, one of them as some simple numerical factor times the other. Now, I am assuming that there is such a connection and that it is not just a coincidence; that is the whole basis of the theory which I am now going to give you. Now, this number here is not a constant; it is continually increasing as the universe gets older, and if these 2 numbers are connected, it means that this number here must also not be a constant and must be continually increasing as the universe gets older.
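The age-of-the-universe number can be checked the same way. A sketch assuming the electron mass in the atomic time unit e²/mc³ and Dirac's 18,000 million years; with these particular choices the ratio lands near 6 × 10⁴⁰ rather than the quoted 2 × 10³⁹, but given the factor-2 uncertainty Dirac allows in the age and the freedom in choosing the mass in the unit, it is the order of magnitude, comparable to the force ratio, that carries the argument.

```python
# Age of the universe in the atomic time unit e^2/(m_e c^3)
# (Gaussian/cgs units, modern rounded values; illustrative only).
e = 4.803e-10      # electron charge, esu
m_e = 9.109e-28    # electron mass, g
c = 2.998e10       # velocity of light, cm/s
SECONDS_PER_YEAR = 3.156e7

t_atomic = e**2 / (m_e * c**3)         # ~9.4e-24 s
age_seconds = 18e9 * SECONDS_PER_YEAR  # Dirac's 18,000 million years
large_number = age_seconds / t_atomic

print(f"atomic time unit  ~ {t_atomic:.1e} s")
print(f"age / atomic unit ~ {large_number:.1e}")  # ~6.0e40
```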
We see that this number must be increasing proportionally to the time t, the age of the universe. These things on the left here are usually considered as constants. Now we can no longer consider all of these things as constant. It is usual to suppose that the atomic ones are constant, e and the two m’s, and then we have G varying proportionally to t to the minus 1. To express this result accurately, one should say that the gravitational constant in atomic units is decreasing according to the law t to the minus 1. You mustn’t just say generally that the gravitational constant is decreasing, because by itself it is a quantity with dimensions, and one must avoid talking about quantities with dimensions; one must specify it as G in atomic units varying according to this law. Another way of looking at it is like this: the gravitational force is very weak compared with the other forces known in physics, very weak in particular in comparison with the electric force. Why is it so weak? Well, we might suppose that when the universe was started off these forces were roughly equally strong and that the gravitational force has been getting weaker and weaker according to this law. It has had a long time in which to get weaker, and that is why it is now so weak. We get quite a unified principle in that way. Now the question arises, how are we to fit this assumption of G varying in with our standard physical ideas? That is one question. Another question is: is there any experimental evidence in favour of it? These are the 2 questions which I shall want to discuss at length. First of all there is the serious question of how we are to fit this in with Einstein’s theory of gravitation. Einstein’s theory of gravitation has been extremely successful; we can’t just throw it overboard.
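The step just described can be written out in one line; a minimal sketch of the dimensional bookkeeping:

```latex
\underbrace{\frac{e^2}{G\,m_e m_p}}_{\text{large number}} \propto t,
\qquad e,\ m_e,\ m_p \ \text{fixed in atomic units}
\quad\Longrightarrow\quad G \propto t^{-1}.
```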
We have to amend our theory in such a way as to preserve all the successes of the Einstein theory, and for that purpose the obvious thing to do, in fact the only suggestion that has been put forward, is that we have 2 different metrics which are important in physics: one metric which one has to use for the Einstein theory and then a different metric provided by the atomic constants. So that will be a basic idea of this new theory, these 2 metrics, which are both of importance in physics. The idea of 2 metrics was first introduced by Milne 50 years ago, but his relationship between the 2 metrics is not in agreement with the relationship which I am going to propose now. I would like to say a little bit more about this connection of a universal constant with the time. We are here making a new assumption, and we should make this assumption quite generally and say that all the fundamental constants of nature which are extremely large are connected with the time. Not just this one; any other constant which is extremely large should also be connected with the time, otherwise it is a very artificial assumption. Now, there is one other constant which immediately springs to mind, namely the total mass of the universe. Now, it could be that the universe is of infinite extent, and then the total mass is infinite, but in that case we must modify our number and make it precise in this way. Let us talk just about that part of the universe which is receding from us with a velocity less than half the velocity of light. I take this fraction, a half, arbitrarily, but it would not affect the argument if we used another fraction, 2/3 or 3/4; the argument would run the same. It is just to have some definite quantity which we can use even if the universe is infinite. Now let us consider the total mass which is receding from us with a velocity less than half the velocity of light, and express this mass in, let us say, proton units.
We get a very large number, a large number which is not very easy to measure, because we don’t know how much dark matter there is, intergalactic gas or black holes or things like that. But it seems that this number is somewhere around 10 to the power of 78. I will call this number N, the number of proton masses in that part of the universe which is receding from us with a velocity less than half the velocity of light. It is roughly 10 to the 78, and according to this assumption that we are making about the large numbers, the Large Numbers Hypothesis I call it, this must vary according to the law t². This assumption is just as necessary as the previous one if we are to have a consistent picture. Now, how can we understand this continual increase in the amount of matter which is in the observable part of the universe? People for a long time supposed that there was continuous creation of new matter. I made this assumption myself, but I feel now that it is a bad assumption; it is very hard to develop the theory in any consistent way on that basis, and there are also observational grounds for disbelieving in it. So I want to keep to the assumption that matter is conserved, the usual old-fashioned conservation of mass. Then this continual increase in N is to be understood in this way: we suppose that the galaxies are not receding from us uniformly but are continually slowing up, so that the number of galaxies included within this sphere of recession velocity less than half the velocity of light is continually increasing. We can then easily arrange to have this number N increasing proportionally to t² and at the same time keep conservation of mass. We have now quite a definite cosmology, where we have the galaxies always moving apart from each other but with their velocities decelerating all the time, so that there are always more and more galaxies being included within this sphere v less than a half c.
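In the same spirit as the first large number, the hypothesis fixes the power of t here too: N is roughly the square of the 10³⁹ numbers, so it must grow as the square of the time. Schematically:

```latex
N \approx 10^{78} \approx \bigl(10^{39}\bigr)^{2},
\qquad 10^{39} \propto t
\quad\Longrightarrow\quad N \propto t^{2}.
```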
Now, these ideas which I have been talking to you about immediately give us some definite results concerning the model of the universe which is consistent with them. We get a model of the universe when we imagine all the local irregularities to be smoothed out, all the stars and galaxies to be replaced by an equivalent continuous distribution of matter. What model of the universe must we adopt? People have often supposed a model of the universe which increases up to a certain maximum size and then collapses again. Very many models have been worked out consistent with Einstein’s equations. But we can now assert that all those models in which the universe increases to a maximum size and collapses again are wrong, because such a model would provide a large number, namely the maximum size before the collapse starts. This large number, expressed in atomic units, would be a constant; it would not be something that can change as the universe gets older. Now, any constant large number is ruled out by our general hypothesis that all these very large numbers must be connected with the age of the universe. So that means that a large number of models are immediately ruled out. For quite a while people were working with a steady state model of the universe, but that must also be ruled out, because a steady state model cannot have G varying like this. There are many models which do get ruled out, and one finds that there is only one model left which is in agreement with this Large Numbers Hypothesis. This is a model which was proposed in 1932, jointly by Einstein and de Sitter; let us call it the Einstein-de Sitter model. It is not very much heard about, but I think that it should be given much more importance, and I believe that it is the correct model. This model involves a metric ds² equal to dτ², I am using τ for the Einstein time, minus τ to the power 4/3 times (dx² plus dy² plus dz²).
Now, you may wonder why we have this strange power of the time occurring here. With any such model, with any function of the time here, we would get a uniform density and a uniform pressure; the pressure is caused just by electromagnetic radiation or similar radiation, and in our actual universe the pressure is very small, very much smaller than the effects produced by the static matter. This particular power of the time occurring here is the one which we have to choose in order to have zero pressure; that is the reason for this rather strange factor. Now, this model of the universe gives an average density roughly in agreement with the observed density; that was pointed out by Einstein and de Sitter when they introduced the model in 1932. This model I would like to propose to you as the correct model for describing our universe; it is the only model which fits in with the Large Numbers Hypothesis. What is the law according to which the galaxies separate from us, what is the law of expansion of the universe? If we take the distance of a galaxy and consider it as a function of the time, let’s say a galaxy corresponding to a velocity of recession of half the velocity of light, then the distance of this galaxy is proportional to t to the power 1/3. That is quite a big change from our usual ideas that the galaxies are receding from us roughly uniformly. It means that the distance, although it always continues to increase, increases according to a slower and slower law as time goes on. The galaxies never stop receding, but they continue to recede from us more and more slowly. This is a consequence of this model which we are using here. Now, I talked earlier about the 2 metrics which we have to use: an Einstein metric, ds with the suffix E to say that this is referring to the Einstein unit of distance, and then a dsA, referring to the atomic unit of distance. What is the relationship between them?
That relationship is very easily worked out just from the requirement that G is proportional to t to the minus 1. From a purely dimensional argument one finds that dsE equals t dsA. That is the relationship between these 2 metrics, and the argument is simple, definite and just involves a discussion of the dimensions of quantities. We may take this ds to refer to a time interval, and then, if we use the variable τ to stand for time in the Einstein picture and use t to stand for time referred to atomic units, as I have been doing up to the present, we have dτ equals t dt, so τ is proportional to t²; in fact τ equals a half t². We have then these 2 times which come into physics: a time which one has to use in connection with the Einstein theory, the time τ, and a time which is measured by atomic clocks and things like that, the time t. Perhaps I had better pass on to the discussion of whether this theory is going to lead to any effects which one can check by observation. Our usual application of the Einstein theory is concerned with the Schwarzschild solution of the field equations of Einstein. This Schwarzschild solution was worked out on the assumption that at great distances from the singularity where the mass is, space becomes flat, like Minkowski space. But according to our present picture that must be modified by the requirement that at great distances space goes over to the space of the Einstein-de Sitter metric described by this equation here. One can work out how one has to modify the Schwarzschild solution; I have done that in a paper which was published recently. Then one gets the result that if you take a planetary orbit which is roughly circular, even in the new theory that planet will continue to circulate around the sun with constant velocity and with a constant radius for its orbit.
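Putting the two-metric relation together with the Einstein-de Sitter metric reproduces the t to the power 1/3 recession law stated earlier; a compact sketch, writing R_E for a galaxy's distance in Einstein units and R_A for the same distance in atomic units:

```latex
d\tau = t\,dt \;\Rightarrow\; \tau = \tfrac{1}{2}\,t^{2};
\qquad
R_E \propto \tau^{2/3} \propto t^{4/3},
\qquad
R_A = \frac{R_E}{t} \propto t^{1/3}.
```

The same division by t turns an orbit of constant Einstein radius into one whose atomic-unit radius shrinks as t to the minus 1.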
These results are not changed by the new theory. However, there are some results which are changed: these results, that the velocity remains constant and the radius of the orbit is constant, refer to the Einstein units. If we pass to atomic units, then the velocity will still remain constant; the velocity is the same in all units, because we are keeping the velocity of light equal to unity and any velocity is just a certain fraction of that, independent of what units we use. But the radius of the orbit will be affected by this transformation, and it means that the radius of the orbit of a planet expressed in atomic units is continually getting less; the radius is proportional to t to the minus 1. The planets are all spiralling inwards; that is a cosmological effect which is to be superposed on all other physical effects. That is an effect which one should be able to observe if one makes sufficiently accurate observations of the planets. Then another effect which one should be able to observe is in connection with this formula which gives a difference between dτ and dt. dτ, the time which one uses in the Einstein equations, is the same as the time which is marked out by the motion of the earth around the sun. That time is what astronomers call ephemeris time, and they have been using ephemeris time for centuries in order to express the results of their observations. This ephemeris time, according to this theory, should not be the same as time measured by atomic clocks. Well, there is an effect which one should be able to observe, and the inward spiralling is another effect. I would like to discuss what the experimental information is about these subjects. If you want to compare the 2 time scales, the best way is to study the motion of the moon.
The moon’s motion through the sky can be observed very accurately, but it is subject to very many perturbations: perturbations caused by the tides, perturbations caused by the other planets. These perturbations are not negligible for the accuracy which we need in this work. This study of the motion of the moon has been pursued for a good many years by Van Flandern, who works at the Naval Observatory in Washington. I have put down here a summary of the results. N is the symbol used for the angular velocity of the moon; the units it is expressed in are seconds of arc per century, cy meaning century. The acceleration, N dot, is in seconds of arc per century squared. This acceleration is negative, which means a deceleration, and here are figures which have been obtained recently. Can you see it, or is the light too strong; can you perhaps reduce the lights? There is a whole set of figures which have been worked out by different investigators. There is a result given by Oesterwinter and Cohen in 1972, where the figure is 38 plus or minus 8; I have ignored the still earlier estimates, which are not so good. There is a figure given by Morrison and Ward in 1975, obtained just from observing the transits of Mercury, the times when Mercury passes in front of the sun. Then there is a figure given by Muller from studying ancient eclipses. There are records of eclipses going back to before the time of Christ. Now, you might say that observations made so long ago will not be useful to us, because the methods of observation were very primitive; they had no accurate instruments. But still, if you see a total eclipse at a certain place on the earth, that is a very definite piece of information. You know just when the eclipse took place: it took place when the earth, sun and moon were all in a line. And if you are told that at a certain place on the earth’s surface the eclipse was total, that is a very definite piece of information.
Now, Muller has been studying the records in monasteries about eclipses going back more than 2000 years, and one has to get a suitable understanding of the language used by the people recording eclipses in those days. These observations have the advantage that they are made over a very long time base. Well, the result that Muller gets is 30 plus or minus 3. Then there are some recent observations which were made with the help of lunar models and also of parameters which were obtained from satellite observations, and 2 different workers have given the results 30.6 and 27.4. Well, Van Flandern has considered all of these figures for the acceleration of the moon, and he considers the weighted average of the above to be 28 plus or minus 1.2. These are all observations with ephemeris time. There are also observations with atomic time, which have been made since 1955; since 1955 people have been observing the moon with atomic clocks. The most recent result of Van Flandern for the atomic time figure is 21.5. Van Flandern in previous years had given results differing quite a lot from that figure, but he finds that there were systematic errors in his earlier results, and the systematic errors have been, he hopes, eliminated by new calculations and also by recent new observations. Now, a different kind of observation has been made by people working at the Jet Propulsion Laboratory using the Lunar Laser Ranging technique. They send light to the moon, observe the reflected light, see how long it took for the journey, and get very accurate estimates of the distance of the moon. Calame and Mulholland have been working on that, and they got the figure 24.6. Now, I just heard a few weeks ago from Van Flandern that these people think there was an error in their calculation, and with a new correction that should come down to 20.7. Williams and others, also using Lunar Laser Ranging, get the figure 23.8.
Now, Van Flandern has considered all these results and gives as the weighted mean 22.6. Now, there is a difference between this figure and the one above, this figure referring to atomic time and the other figure to ephemeris time. That difference provides evidence for the correctness of this theory. If G were not varying, if one just stuck to the old ideas, then one should get the same lunar acceleration whether one uses ephemeris time or atomic time. And the observations do seem to support the idea that there is a difference. Now, there is another way of checking on this effect, by seeing whether there is this inward spiralling of the planets. That has been worked on by Shapiro and his assistants; some years ago, in 1975, they published some figures. Their method is to send radar waves to a planet on the far side of the sun and then observe the reflected radar waves. These reflected waves are extremely weak, but still they can be observed, and one can measure the time required for the to-and-fro journey, and that gives us accurate information about the distance of the planet; one measures this time in atomic units. The figures published by Shapiro and his assistants in 1975 give N dot, the acceleration, divided by N, the angular velocity, for the 3 inner planets, Mercury, Venus and Mars. Well, you notice that the results are all plus; a plus result there means that the planet is spiralling inwards, and if the planet were spiralling outwards it would be minus. These results are all in favour of inward spiralling, but you see that the probable errors are quite large, as big as the effect that is being measured. Shapiro rather emphasizes that he has really no evidence for inward spiralling; his results would be consistent with the old theory of constant G. Still, there is perhaps some weak evidence for inward spiralling.
Now, at the present time one can make very accurate observations of the distance of Mars with the Viking lander, which was put onto Mars in 1975. Shapiro is working on the results of this lander. He says that just the time since the lander was put on Mars is not a sufficiently long time base to be able to work out this effect; one has to combine these Viking observations with the earlier Mariner observations of Mars. I met Shapiro last March, and he told me that he is now working on this problem of combining the recent Viking observations with the Mariner observations, and that it will be 6 months before he has a definite answer. Shapiro, I might say, is a very careful observer and won’t publish anything until he is pretty certain that it is right. So we have to wait a few more months, and then we will have definite results on this question of whether there is this inward spiralling of the planets or not, and that would probably make it pretty certain whether this new theory is right or not. I wonder how much more time I have; can I go on another 5 or 10 minutes, or should I cut it short? Another question which I might refer to is concerned with the microwave radiation. There is this microwave radiation, of just a few centimetres wavelength, which is observed to be coming out of the sky in all directions and falling on the earth. It is a very cold radiation, a black body radiation so far as can be observed, corresponding to a temperature of about 2.8 degrees absolute. People have explained this radiation by saying that it is the result of a big fireball which was in existence in the early stages of the universe: extremely hot to begin with, but cooling all the time because of the expansion of the universe, and this very cold radiation of 2.8 degrees is what is left of the original fireball.
Now, I would just like to show you how this microwave radiation gives strong support for the ideas I have been presenting here, in particular for the Large Numbers Hypothesis. Let us consider this radiation temperature of 2.8 degrees. That provides us with a large number: kT, with k the Boltzmann constant, the sort of average energy of this microwave spectrum, divided by, let us say, the rest energy of the proton, mpc². If you work this out you get a number somewhere around 2.5 times 10 to the minus 13. You may consider that 10 to the minus 13 is the reciprocal of a large number, and that this should therefore be varying with the time, varying according to the law t to the minus 1/3, being roughly the reciprocal of the cube root of 10 to the 39. That means that the temperature of this microwave radiation should be decreasing according to the law t to the minus 1/3. Now, that fits in with the law of expansion which we had previously. We had the expansion with the distance of a galaxy proportional to t to the 1/3; the wavelength of each of the components in the black body radiation should also be increasing in proportion to t to the 1/3, the frequency as t to the minus 1/3, and the temperature of the black body radiation should therefore be decreasing according to the law t to the minus 1/3. We do get consistency in that way. These figures are quite different from the old theory, according to which we had the galaxies receding with a roughly constant velocity and the microwave radiation having its temperature going down according to the law t to the minus 1. Well, the t to the minus 1/3 law is provided by the Large Numbers Hypothesis, and it fits in with the law of expansion. I feel that this is quite a strong confirmation of our ideas; we might easily have had things going wrong. It means that the fireball has been cooling according to this law since a time very close to the Big Bang.
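The microwave number is easy to verify as well; a sketch with modern values, taking T = 2.8 K as in the lecture:

```python
# kT / (m_p c^2) for the 2.8 K microwave background (Gaussian/cgs units).
k = 1.381e-16      # Boltzmann constant, erg/K
T = 2.8            # radiation temperature, K
m_p = 1.673e-24    # proton mass, g
c = 2.998e10       # velocity of light, cm/s

ratio = k * T / (m_p * c**2)
print(f"kT / (m_p c^2) ~ {ratio:.1e}")  # ~2.6e-13, matching the quoted 2.5e-13
# Its reciprocal, ~4e12, is within a factor of a few of the cube root of
# 10^39 (~1e13), which is what suggests the t^(-1/3) law for the temperature.
```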
According to the older ideas, this law for the cooling of the black body radiation only started at a certain time, when the fireball became decoupled from matter. This idea of a decoupling taking place at a certain stage in the evolution of the universe is of course quite foreign to the Large Numbers Hypothesis, according to which no particular time must come into the theory in an important way, in such a way as to provide a large number which is a constant. There is a lot of further work to do in connection with the development of the theory, and several people have been studying these questions. Unfortunately, a good many people have also been writing papers about it who have not correctly understood the basic ideas of the theory and who have come to wrong conclusions. People have said that this theory is untenable because it would mean that in the geological past the temperature of the earth would have been much too hot to allow life at a time when life is known to have existed. Well, they have not taken into account the inward spiralling effect; in fact, I think the inward spiralling was only worked out a little over a year ago, and that is going to help quite a lot with this temperature of the earth, because it means that at those times in the past the earth was quite a bit further from the sun than it is now. I had better conclude here. I would like to say that there are many people working on this subject; in particular there is a whole school run by Canuto at the Space Research Center in New York, and the result of a lot of calculations that he has made is that up to the present no irreconcilable discrepancy has shown up. Thank you.

# Paul Dirac (1979)

## Does the Gravitational Constant Vary?


