Heidelberg Lecture: Vinton G. Cerf (ACM A.M. Turing Award 2004)  (2016) - The Origins and Evolution of the Internet

Well, I can see already, I have to convince the computer that it's me, so one moment. (Laughter) There we are. First of all, you have no idea what this is like, I think. Well, some of you have been up here before. But for me this is very daunting. I’ve never had this many Nobel Prize winners and smart people in the audience at the same time, at this scale. So I hope I do reasonably well. Stefan Hell did an absolutely spectacular lecture on optical microscopy last year at the Heidelberg Laureate Forum. So I’m feeling a certain amount of pressure. Let me start by saying, I’m going to try to cover some of the history of the internet, just to give you a sense for how it came about. And then some ideas about where it’s going in the future. Let’s start out by reminding you that there was a predecessor system called the ARPANET. The Defence Advanced Research Projects Agency (ARPA), which is part of our Department of Defence in the US, began to explore the possibilities of computer networking. And it had a very practical reason for doing it. It was funding a dozen computer science departments during the 1960s to do research in artificial intelligence and computer science. And everybody kept asking for the latest computing equipment every year. And they said, we can’t buy 12 new computers for 12 computer science departments every year. So we’re going to build a network and you’re going to have to share your resources. People were reluctant to do that initially, because they thought, well, if we have to share resources, the other people will see our work. And they’ll steal our software or steal our code or use up all the cycles. And ARPA said, just relax, we’re funding all of you. This is no longer a competitive issue. We want you to share your experiences and your expertise, so that we can accelerate the rate at which the research progresses. So they built the ARPANET, and it was successful.
Then they realised, after the success of the ARPANET, that computers might be useful for command and control. Because if you could manage your resources better than an opponent, owing to the use of computers, you might actually be able to defeat a larger opponent with a smaller-sized group, because you were using your capabilities more effectively. We call that a force multiplier. But, if that were going to be the case, then the computers would wind up having to be in aircraft and ships at sea and mobile vehicles, and, at that moment, only fixed installations had been built for the ARPANET. So that’s sort of the background for why the Defence Department, ARPA in particular, was interested in pursuing this. So this is all based on the concept of packet switching. And you’re all using it, whether you know that or not. But some people don’t fully appreciate how it works, and it’s actually very simple. If you know how postcards work, you know how the basic internet packets work as well. First of all, just like a postcard, they have no idea how they’re being carried. A postcard doesn’t know if it’s going over in an aeroplane, or a ship at sea, or on the back of a postman, or a bicycle. In the internet, packets don’t know how they’re being carried. They don’t know whether it’s an optical fibre, or a satellite link, or a radio channel, or a hard-wired Ethernet - and they don’t care. The design of the system was carefully done to make sure that these internet packets were unaware of how they were being transported. Also, like a postcard, the internet packets don’t know what’s written on them. So that’s turned out to be a very powerful tool. Because the meaning of the internet packets is only interpreted at the edges of the net, by the computers that are either sending or receiving them. What that means is that when you develop a new application, you only have to change the software at the edges of the net where the host computers are.
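The "electronic postcard" idea can be sketched in a few lines of Python. This is a hypothetical toy, not the real IP header format: the point is only that the network reads the addressing information and treats the payload as opaque bytes it never interprets.

```python
from dataclasses import dataclass

@dataclass
class Packet:
    src: str        # sender address, e.g. "10.0.0.1"
    dst: str        # destination address -- the only thing routers act on
    ttl: int        # hop limit, decremented at each router
    payload: bytes  # application data; the network never looks inside

def forward(packet: Packet, routing_table: dict) -> str:
    """Pick the next hop using ONLY the destination address.
    The payload could be email, video, or a web page; the router can't tell."""
    packet.ttl -= 1
    if packet.ttl <= 0:
        return "drop"  # postcard discarded; the sender has to cope with loss
    return routing_table.get(packet.dst, "default-gateway")

# Two very different applications get identical treatment by the network:
table = {"10.0.0.2": "link-A"}
web = Packet("10.0.0.1", "10.0.0.2", ttl=64, payload=b"GET / HTTP/1.1")
mail = Packet("10.0.0.1", "10.0.0.2", ttl=64, payload=b"MAIL FROM:<a@b>")
print(forward(web, table), forward(mail, table))  # both: link-A
```

Nothing in `forward` depends on what the payload means, which is why new applications need no changes inside the network.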
You don’t have to change the network, because the network doesn’t care what the application is, which opens up a huge number of possibilities. When you want to invent a new application, you don’t have to change all of the network. Postcards don’t stay in order either. You put 2 postcards into the post box. And if they come out at all, they might come out in a different order. And, in fact, sometimes they don’t all come out. And so you have this disorderly, unreliable system of electronic postcards. And that’s basic packet switching. So you might wonder, how would anybody make anything useful out of that. And I hope I’ll show you how that can work. The initial implementation of the ARPANET had 4 nodes. I was at UCLA at the time, as a graduate student like many of you are, and wrote the software to connect the Sigma 7 computer to the first packet switch of the ARPANET in about 1969. The Sigma 7 is in a museum now somewhere, and some people think I should be there too, but I’m here. So that’s the beginning - just the 4 node network. It rapidly grew over time. What you see on the left is what a packet switch looked like. IMP stood for interface message processor. We call them packet switches now or routers. This was the size of a refrigerator at the time. And you can tell that things have changed over the intervening 40 years or so. You can hold a router in your hand now, it’s about the size of a simple Ethernet connector. And, of course, it costs a lot less - these were $100,000 devices back in the day. So that’s how things have changed over a 40 year period. This, by the way, is a classic observation: usually anything that you do that’s new is big and expensive. And with experience, over time, if it continues to work, it gets less and less expensive and often a lot smaller.
And so you see this trend from expensive equipment, that’s owned by institutions, to maybe just departments, and then eventually individuals who own these devices and carry them around. We were also concerned, of course, about mobile communication. And so we built a packet radio network in addition to the original ARPANET. This ran in the San Francisco Bay area. We had repeaters up on the mountain tops, and this particular nondescript van that was carrying the equipment inside, which I can show you here. If you see, these things over here and over here are packet radios. They’re about a cubic foot in size. They cost $50,000 each, back around 1975. And they were the sorts of things that you stick in your pockets now. But back then the equipment was not quite as dense. We were able to get something on the order of 100 to 400 kilobits a second, operating at 1710 to 1850 megahertz. And we were modulating this stuff at fairly high speeds. One of the things that we tried to do, in addition to transmitting just data around: we recognised that for command and control we’d need voice and video. And so back in the 1970s, we were experimenting with packetised voice and packetised video. I have to tell you that the voice experiments were kind of interesting, because a typical voice channel is 64,000 bits a second. You sample the voice stream 8,000 times a second, taking 8 bits of information per sample. And we only had 50 kilobits per second available in the ARPANET backbone, and 100 kilobits in the packet radio net. So we decided, in order to get more voice channels in the system, we would compress the voice down to 1,800 bits per second. And in order to do that we modelled the vocal tract as a stack of 10 cylinders that would change their diameters as the voice, or speech, was being generated. And that little stack of cylinders was excited by a signal at the pitch frequency. We sent the diameters of all of those model cylinders over to the other side.
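The bit-rate arithmetic works out like this. The per-frame packing below is an illustrative assumption, not the actual parameters of the 1970s vocoder: the point is that sending model parameters instead of samples is what buys the roughly 35x reduction.

```python
# Standard telephone (PCM) voice channel: 8,000 samples/s x 8 bits/sample.
pcm_rate = 8_000 * 8
print(pcm_rate)  # 64,000 bits per second

# Model-based compression: send the model parameters, not the samples.
# Illustrative assumption: ~10 cylinder diameters plus pitch/energy packed
# into 40 bits per frame, at 45 frames per second.
bits_per_frame = 40
frames_per_second = 45
vocoder_rate = bits_per_frame * frames_per_second
print(vocoder_rate)             # 1,800 bits per second
print(pcm_rate / vocoder_rate)  # ~35x fewer bits on the wire
```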
And that got the data rate down to 1,800 bits per second. Although you can imagine that the quality of the speech was reduced somewhat, when you went from 64 kilobits down to 1,800. So anyone who spoke through this system sounded like a drunken Norwegian - and I hope I haven’t insulted any Norwegians in the audience. They were understandable but it was a very peculiar kind of sound. So the day came when I was, by this time, working in the Defence Department, and I had to demonstrate this system to some generals at the Pentagon. And I remember thinking, ok, how am I going to do this? And then I remembered that one of my colleagues, who was working on the packet voice, was Ingvar Lund from the Norwegian Defence Research Establishment. So we had Ingvar speak first through the ordinary voice switching system. And then we had him speak through our packet voice system, and it sounded exactly the same. We didn’t tell the generals that everybody would sound that way through this system. (Laughter) Well, as you can see on the right hand side, we’ve gone from these big bulky pieces of equipment, that required a huge van to carry around, to things that we put in our pockets or even strap to our wrists. And once again, this trend of going from big and expensive to small, portable, and often affordable by an individual, is still holding true. In addition, because we also wanted to deal with ships at sea, we decided that satellites would be the appropriate technology, because you could go long distances. So we used a standard satellite system, Intelsat IV-A - we leased some capacity on that network. And we had multiple ground stations, all contending for the same satellite channels, so it was kind of like an Ethernet in the sky. And that allowed us to experiment with wide area packet switching over a satellite channel. So we now had 3 different kinds of networks to deal with: packet radio, packet satellite and the ARPANET.
And they all operated at different speeds, they had different error rates, they had different packet sizes. And yet, the problem that Bob Kahn and I had to solve was, how do we make all of those diverse networks appear to be one network, even though they were all very distinct. So just to give you a sense of the course of this effort: In 1969, the ARPANET begins construction. And then in ‘73/’74, after having about 3 or 4 years of experience with the ARPANET, Bob Kahn and I did the first design of the TCP protocol, which later became TCP/IP. And then during the 1975 to ’78 period, we went through multiple iterations of implementation, testing, refinement and correction of the protocols. And we found a number of mistakes that we had not anticipated. We had many different institutions working with us at the same time. And so, if someone tells you that, you know, the internet is purely an American invention - that’s incorrect. We had colleagues from everywhere: people in Europe, people in Asia, some in my lab at Stanford before I came to ARPA, and elsewhere. So there was a lot going on. And then, finally, after we settled on the versions that you’re mostly using today, we began implementing those protocols in every operating system that we could find. So that in 1982, we could announce to everyone who was on all of the systems, that they would have to switch over to the new TCP protocols in order to stay in the programme. So on January 1st 1983, we turned the internet on, and it has been running since 1983, although it wasn’t widely visible to anyone except the research community and the military. Now just to give you another important point about the design of the system: it’s layered. So the lowest layers are, you know, physical transport over optical fibre or radio links, and things like that. The internet protocol layer is the electronic postcards. And those are the things that are forwarded through the network, and go back and forth between the hosts.
Above that layer are the protocols that make this a more disciplined environment. And, finally, protocols that implement applications. So this is a layered architecture. And it's roughly 5 layers, if you like: the physical layer, the data link layer that disciplines the bits, then the IP layer, and finally transport and application. So the way it physically works is that the hosts at the edge of the net implement all of the layers of protocol, up to, and including, the application layer. But you’ll notice that the things in the middle of the net, that are responsible for switching internet packets, don’t know anything about transport layer protocols or application protocols. All they see are internet packets, those little electronic postcards. So their job is very simple. When they get a postcard, they look at it to see where it’s supposed to go. They look in a table which is generated by a routing protocol running in the background, and they just send it in whichever direction that table tells them to go. And so it’s a very simple concept. And the simplicity, I think, has helped make this a system which has not only scaled over time, but has persisted over a 30-plus-year period. So this is another picture of the layered protocol architecture. And what’s important is the little guy in the middle called the internet protocol. The thing which I want to emphasise in this picture is that the internet protocol layer doesn’t care how the underlying layers work. The assumption is that if you hand the underlying layers a packet, they will somehow pass it along that channel, and it goes from router to router to router to the destination host. The consequence of that decision is that every time a new communication protocol came along, or a new transmission technology came along, we just swept it into the internet. The internet didn’t care, didn’t notice that it was anything new, it’s just another way of carrying bits.
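That table lookup can be sketched as follows. One refinement the prose glosses over: real IP routers pick the most specific matching route, known as longest-prefix match. The table entries and link names here are hypothetical, standing in for what a routing protocol would have installed.

```python
import ipaddress

# A hypothetical forwarding table: prefix -> next hop, as a routing
# protocol running in the background might have installed it.
table = {
    ipaddress.ip_network("10.0.0.0/8"):  "link-1",
    ipaddress.ip_network("10.1.0.0/16"): "link-2",   # more specific route
    ipaddress.ip_network("0.0.0.0/0"):   "default",  # catch-all route
}

def next_hop(dst: str) -> str:
    """Look up the destination and pick the longest matching prefix."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return table[best]

print(next_hop("10.1.2.3"))  # link-2 (the /16 beats the /8)
print(next_hop("10.9.9.9"))  # link-1
print(next_hop("8.8.8.8"))   # default
```

The router never inspects anything beyond the destination address; everything else on the "postcard" passes through untouched.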
By the same token, the absence of knowledge of the applications in the internet protocol meant that when somebody wanted to invent a new application, they didn’t have to go get permission from every internet service provider in the world. All they had to do was to go implement it at the edges of the net and proceed to send packets back and forth. So Larry and Sergey, when they started Google in their dorm room at Stanford University, did not have to negotiate with every internet service provider in the world. They just put the service up on the net and let people try it out. That’s why we’ve had such a cornucopia of applications coming out of the internet, despite the fact that there are literally hundreds of thousands of internet service providers all around the world. So in order to make the internet protocol layer more disciplined - remember, it’s lossy: it loses things, it gets them out of order and everything else - we put another layer of protocol on top, called TCP. And you can understand very easily how it works, if you imagine the problem of sending someone a book through a post office that only carries postcards. So imagine, what would you do? Well, the first problem is you have to cut the pages up to get them to fit on the postcards. And then you’d notice that not all the postcards have page numbers on them, because you cut the pages up. And you know they’re going to get out of order, and so you number every postcard 1, 2, 3, 4, 5, 6, 7, 8, 9, 10. And you also know that some of the postcards are going to get lost. So you hang on to copies in case you have to retransmit them. And then you want to know, how do I know when to stop sending, how do I know when to throw away the copies I’ve kept? Because, you know, the guy on the other end has received them. And you get this brilliant idea: you have the guy at the other end send you a postcard saying, I’ve got everything up to, you know, postcard number 420.
And then you realise, that postcard might get lost. So now you decide, well, I guess I’ll just look at my watch. And if I haven’t gotten any responses back from the other guy, I’m going to start sending copies until I DO get a postcard that says, I got everything up to number 420. So that’s the basic timeout retransmission recovery mechanism. You can filter things out, because you know which postcards you got. If a duplicate shows up, you can ignore that. Then the only other thing to worry about is the case where you have a 1,000 page book, and you cut it up into 2,000 postcards. And you take them to the post office, and you give them to the post office all at once. And by a miracle the post office tries to deliver all of them at the same time on the same day. And, you know, they don’t fit in the post box at the destination, and some of them fall on the floor and the dog eats them or they get blown away by the wind. So you have an agreement with your friend that you won’t send more than 200 at a time. Until you get a postcard back saying, I got all of those, you can send some more. That’s called flow control. So now you know how the internet works, that’s all there is to it. Well, I left out a little bit. There’s this domain name system. You know, when you type URLs and things like that, in the middle are domain names like www dot google dot com. There’s a whole hierarchical structure of servers scattered throughout the internet that your browser, or your email application, essentially consults, saying, I’m trying to send an email to someone at google.com. Or I’m trying to go to google.com to do a search. You can’t get there using the domain name by itself. Your computer actually has to go and consult with the domain name service and say, what’s the numerical IP address of the destination? So domain name look-ups produce a numerical address, which the TCP/IP protocols use in order to send packets back and forth across the network.
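The whole postcard protocol - numbering, keeping copies, cumulative acknowledgements, duplicate filtering, and the 200-postcard window - can be sketched as a toy simulation. The lossy "post office" here is seeded randomness, not real networking code; it only illustrates why the book always arrives intact and in order.

```python
import random

WINDOW = 200  # never more than 200 postcards outstanding: flow control

def send_book(pages, loss_rate=0.3, seed=42):
    """Deliver every page over a lossy postcard channel, in order."""
    rng = random.Random(seed)
    delivered = {}    # receiver's reassembly buffer, keyed by postcard number
    next_needed = 0   # receiver's cumulative ack: "I have everything below this"
    while next_needed < len(pages):
        # Send (or resend) every unacknowledged postcard in the window;
        # the sender kept copies, so retransmission is just sending again.
        for seq in range(next_needed, min(next_needed + WINDOW, len(pages))):
            if rng.random() > loss_rate:     # this copy survived the trip
                delivered[seq] = pages[seq]  # duplicates simply overwrite
        # Advance the cumulative acknowledgement past everything received.
        while next_needed in delivered:
            next_needed += 1
    return [delivered[i] for i in range(len(pages))]

book = [f"page-{i}" for i in range(1000)]
assert send_book(book) == book  # all pages arrive, in order, despite 30% loss
```

Real TCP counts bytes rather than postcards and adapts its timers and window, but the recovery logic is recognisably this.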
And so the domain name system turns out to be a very important part of the network as well. So you have the basic transport stuff. You have the internet packets. You have the TCP protocol to maintain things in order and to recover from failures. And you have the domain name system to figure out where things are supposed to go, because it’s easier to remember the domain name than it is the number. And finally you have the routing algorithms. And those are basically the components of the internet that make it work. So in 1977, I’m now at the Defence Department running the programme - and I’d been at it since 1973 at that point. And I really, really wanted to be able to demonstrate that this stuff actually worked. We’d only done pairwise tests among the various nets. And to be honest with you, if you take 2 packet networks and you’re just trying to connect 2 of them together, you could probably build a box in between them and do some crazy thing, and it would make it work. But I wanted to show that a standardised box - which we called a gateway at the time, because we didn’t know they were supposed to be called routers - would work with multiple networks of different types. So we had the mobile packet radio network in the Bay area, with the van going up and down the Bayshore Freeway, radiating packets through the gateway into the ARPANET. By this time, the ARPANET had been extended by a synchronous satellite link all the way to Sweden and Norway, and then down to London by landline, where there was another gateway that connected the extended ARPANET back across the packet satellite network over the Atlantic, back to the ARPANET in the US, into another gateway, and then all across the ARPANET to USC Information Sciences Institute in Los Angeles. Now, between the mobile packet radio van and Los Angeles it's about 400 miles.
But if you follow the path of the packet, it’s gone 100,000 miles, because it’s gone up and down through the synchronous satellite links twice, and then all across the Atlantic and the US as well. And it worked. And I remember jumping up and down saying, it works, it works - like it couldn’t possibly have worked. It's software, and it’s a miracle when software works, so I was pretty excited about that. So this was the most important demonstration in the 1977 period. And, as I say, we standardised shortly thereafter. So another kind of time line, which I wanted to share, shows you some of the other participants in the growing internet programme. The ARPANET comes in ’69. The University of Hawaii did the ALOHANET system, based on a shared radio channel, in 1971. And they called it ALOHA because, basically, you transmitted whenever you wanted to. And if you didn’t hear a response, you assumed there was a collision and you retransmitted. And you were careful not to retransmit at a fixed time later, otherwise you’d have continuous collisions. So, instead, what they did was to have a variable timeout, so that if you had a collision with someone, the next time you each retried, it would be at a different time. And so ALOHA is a sort of very hang-loose kind of network. The packet satellite system was based on that same idea, using the synchronous satellite channel. A guy named Bob Metcalfe visited the ALOHA facility and decided that he could do the ALOHA system on a piece of coaxial cable. And he invented the Ethernet at Xerox PARC in 1973, at 3 megabits a second - that was fairly impressive. And, of course, then we were going through all of our software development. We turned the internet on, on January 1st 1983. The National Science Foundation in the US decided, maybe this packet switching idea would be good to connect all of the universities in the US.
So they built the NSFNET backbone to connect 3,000 universities around the United States, including some intermediate level networks. NASA and the Department of Energy also decided that they would adopt the TCP/IP protocols. The Department of Energy had a High Energy Physics Network, HEPNET, for doing experiments with high energy physics. And they replaced all of their networks with TCP/IP. And then, around 1989, I got permission to connect some commercial systems up to the government backbone. And in that year several commercial networks got started: UUNET, PSINET and CERFNET. There’s a whole story behind CERFNET, and I won’t bother you with it right now. But I didn’t build it, they just borrowed my name and stuck it on it. They did ask permission. And I remember thinking, if they mess it up, will I be embarrassed? And then I thought, wait a minute, people name their kids after other people, and if the kids don’t come out right, they don’t blame the people they named them after. (Laughter) So I said, go ahead. Go ahead and do it, it's ok. And it actually worked out alright. So this is what the internet looks like now. It’s this giant big ball of hundreds of thousands of networks. The reason that I wanted to show you this is that the colours represent different networks. We call them autonomous systems, formally. But the thing is, this is not a top-down system. And so the people that run pieces of the internet pick which hardware they want to use. They pick which software they want to use. They decide who they’re going to connect to, and on what terms and conditions. And it just all comes together almost organically - which is what Bob Kahn and I hoped in the first place. We published our results in 1974 saying, if you can build something that works this way and find someone to connect to, it should work. And so what happened is the network grew in a very organic way.
It only works because everybody is using the same protocols, even though their implementations might vary. So that’s how the internet has grown up until now. The World Wide Web is one of the most important applications of the network. And I want to distinguish between the 2, because some people get confused about that. The World Wide Web is what many of us use all the time. Tim Berners-Lee and Robert Cailliau at CERN built the first hypertext markup language and hypertext transfer protocol implementation of browsers and servers, and nobody noticed, to be quite honest. However, 2 guys did notice. They were at the National Centre for Supercomputing Applications: Marc Andreessen and Eric Bina. They did what was called Mosaic, which was the first graphical user interface for a browser. It made the internet, or the World Wide Web, look like a magazine: It had formatted text, it had images; eventually it got video and audio and things like that. It was a very spectacular change in the interactions that people could have. And so a guy named Jim Clark, who had built Silicon Graphics in the Silicon Valley, saw this and brought Marc Andreessen and others from NCSA to the West Coast. They started Netscape Communications around 1994. They had their initial public offering in 1995, and the stock went through the roof. It was the most spectacular IPO in history. And that started the Dot-Boom, which meant every venture capital company in the Valley and elsewhere would throw money at anything that looked like it had something to do with the internet. It didn’t matter if they had a business model or anything else. There was a huge amount of money that went in. And a lot of companies failed around April of 2000; that was called the Dot-Bust. But what was interesting, as I was tracking how big the internet was and how much it was growing, was that it was doubling every year for quite a long time - even through the Dot-Bust.
Because people had real use for the underlying internet, even though a lot of companies failed because they didn’t really have a realistic business model. Just to point out to you some economics 101: Don’t spend any more money than you have, that’s point number 1. And the second one is: Don’t confuse capital with revenue. Revenue is continuous and capital runs out. And when you run out of capital and you don’t understand the difference, then you’re surprised that you’ve just gone bankrupt. So the system grew very, very rapidly in spite of the Dot-Bust. And, of course, all of you are using it today for many different purposes, including much of your scientific research. The internet itself, even though it was designed 40 years ago, is not static. It has continued to evolve in many ways, especially new protocols and new applications. Now I have to confess to you, that Bob Kahn and I did make at least one fairly major mistake – apart from little details in the protocol design. We were trying to figure out how many termination points we were going to need for this internet thing. And so, remember, it's 1973, and we’re writing our little design. And so we said, ok, how many networks will there be per country, because we were already thinking global. We’d just finished doing the ARPANET. And it was not cheap to build that, you know, on a nationwide scale. So we thought, well, maybe there’ll be 2 networks per country, because there’s sure to be some competition. And then we said, how many countries are there. We didn’t know, and there wasn’t any Google to ask. (Laughter) So we guessed at 128 because that’s a power of 2, and that’s a programmer's thinking. So 2 times 128 is 256, and that’s 8 bits. And then we said, how many computers will there be per network. And we thought, you know, let's go crazy, 16 million, and that’s another 24 bits. So that’s 32 bits of address space that’s required.
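That back-of-the-envelope arithmetic checks out:

```python
# The original 1973 guesswork behind the 32-bit IPv4 address:
networks = 2 * 128            # 2 networks per country x 128 countries
assert networks == 2**8       # fits in 8 bits

hosts_per_net = 16_000_000    # "let's go crazy": 16 million per network
assert hosts_per_net <= 2**24 # fits in 24 bits

address_bits = 8 + 24         # = 32 bits, the IPv4 address size
print(2**address_bits)        # 4,294,967,296 -- the ~4.3 billion addresses
```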
And we were making this decision at a time when computers were great big expensive things. They were in air-conditioned rooms and they did not move around - so 16 million was pretty ambitious. And that actually worked ok. The 4.3 billion terminations of IP version 4, which is what you’re currently mostly using, lasted until 2011, and then we ran out. So, fortunately, the engineers, including me, started to get panicky around 1992, when we started seeing Ethernets everywhere. And, as I say, we developed the new version of the protocol called IP version 6. And it has 128 bits of address space. I don’t have to tell you, you can do the math. You know, 3.4 times 10^38 addresses, which is a number that’s, you know, big enough for – even the Congress would appreciate that number. So we ended up – oh, by the way, some of you might wonder what happened to IP version 5? That was a protocol that was designed to do streaming video and audio. But it didn’t scale very well, so we abandoned that, and the next number was 6. So IP version 6 is the production version of the network. And that’s what you should be using. And if your ISP is not providing you with IPv6 service, please pound on the table and say, give me a date certain when I can have IPv6 addresses. Because I need them for the internet of things, and the mobiles and everything else. So we were a bunch of Americans, and we only spoke English anyway. So all we had was ASCII characters for the domain names. But it was pointed out to us later that there are some languages that can’t be expressed with ASCII characters. And we said, oh yeah, we forgot about that. So we added Unicode, which is what’s used in the World Wide Web in the domain names. So now you can have domain names in Cyrillic and Arabic and Hebrew and so on. The original generic top level domains were only 7: .com, .net, .org, .int, .edu, .mil and .gov.
But the Internet Corporation for Assigned Names and Numbers decided to open up the generic top level domain space a couple of years ago. And they got 2,000 applications for new generic top level domains, things like .travel, .corporation and so on. Oh, and they charged $185,000 each, so $350 million came in upon opening up the top level domain space. There are additional things, which I don’t have time to go into much, except to say that there were security risks in the system, which had been designed in a very friendly environment - mostly engineers, and we didn’t want to ruin everybody else’s stuff, so we weren’t attacking anything. But once you release the internet into the global community, the bad guys are out there too. So we’ve been adding more mechanisms for defending against various forms of attack: against the domain name system and against the routing systems - those are the last 2 on the list. We’ve also been pushing 2-factor authentication, especially at Google, where you have to have a device that will generate a cryptographic password, in addition to your user name and password. So even if somebody guesses your user name and password, they don’t have the little gadget that does the crypto password generation as well, and so they can’t penetrate the account. We’ve added transport layer security, which encrypts the traffic on top of the TCP layer. And that inhibits the ability of somebody to snoop on what you’re sending through the network. And, of course, mobile smart phones and the internet of things have become part of the environment. Just a brief footnote on smart phones: The mobile phone was actually developed in 1973 by a guy named Marty Cooper working at Motorola. Bob Kahn and I didn’t know about that. But in 1983, Marty Cooper turned on the first mobile phone service. And, of course, his phone was about this big, it weighed 3½ pounds, and had a whip antenna on the top. And I called him up to ask some questions about one of his phones.
And one question that I asked him was, how long does the battery last? And he said, 20 minutes - but it's ok, you can’t hold the phone up longer than that anyway. (Laughter) They’ve gotten better since then. So we got launched at the same time. Mobile phones and the internet started, officially and formally, in 1983. But they really didn’t have anything to do with each other until 2007, when Steve Jobs came up with the iPhone. And at this point now the phone is capable of interacting with the internet. And what’s interesting about this, of course, is that the 2 systems mutually reinforce their utility. The mobile phone's apps use computing power on the internet. The internet is accessible from any mobile phone and any smart phone, and so the 2 make each other more useful. And finally, of course, as time has gone on, people have started to use software instead of electromechanical devices for control, and that leads to the internet of things. Now, I will confess to you that in 1973, it did not occur to me that someone would want to attach their refrigerator to the internet, or a picture frame. I used to tell jokes that someday every light bulb will have its own internet address, ha, ha, ha. Except now I can’t tell those jokes anymore. Philips makes one called Hue, and you control both the intensity and the colour of the bulb from your mobile, through the internet. So I did wonder, you know, what would you do with an internet-enabled refrigerator? Well, in America, the way we communicate in our families is to put paper and magnets up on the refrigerator door. So this improves things, because now we can communicate with websites and blogs and email and things like that. But we also thought, what would happen if the refrigerator knew what it had inside? I mean, what if everything had a little RFID chip on it, and you could sense what was in the refrigerator.
So when you’re out at work or off at school or something, the refrigerator is surfing the internet, looking for recipes that it could make with what it has inside. So when you come home you see the list of recipes on the display. That sounds pretty good, but a good engineer will always extrapolate to see what other things might happen. So you can imagine that you’re on vacation and you get an email. It’s from your refrigerator and it says that the milk has been in there for 3 weeks, and it’s going to crawl out on its own if you don’t do something about it. Or maybe you’re shopping and your mobile goes off - it’s the refrigerator calling. And it says, don’t forget the marinara sauce, I have everything else I need for spaghetti dinner. I’m sorry to tell you that our Japanese friends have spoiled this idyllic vision of the future, because they invented an internet-enabled bathroom scale. And when you step on the scale it figures out which family member you are based on your weight. And it sends that information to the doctor and it becomes part of your medical record. Which is probably OK except for one thing: the refrigerator is on the same network. So, you know, you come home and you see diet recipes coming up on the display. Or maybe it just refuses to open, because it knows you’re on a diet - it’s really bad. So in the lower right you see version 1 Google Glass being modelled by Sergey Brin, the co-founder of Google. The reason I put this up is, of course, one thing: it’s an internet-enabled device. But what’s interesting about it is that it allows the computers to see what you see and hear what you hear. And that’s an interesting experiment. Because the possibility that the computer could understand what it was seeing and hearing, which again is pushing some of the limits of artificial intelligence, might mean that the computer could become part of the conversation.
And so while you are having a dialogue with your colleagues and trying to argue over some particular design point or other speculation, you might be able to invoke the computer, which would have context as a result. So it’s very much like Star Trek, you know, when Captain Kirk would say, computer. And you would hear Majel Barrett-Roddenberry’s voice floating down from the ceiling. So this is actually an important experiment. And we’re in the middle of designing a new version of Google Glass. I left the guy in the middle for laughs though. This is an internet-enabled surfboard. I’ve not met this fellow. But I imagine him sitting on the water, waiting for the next wave, thinking, if I put a laptop in the surfboard, I could be surfing the internet while I’m waiting for the next wave. (Laughter) So he put a laptop in the surfboard and he put a WiFi server back at the rescue shack, and now he sells this as a product. So what else is coming? Well, sensor networks are already with us. Some of you have them at home already: Sometimes it’s a security system, sometimes it’s heating, ventilation, and air conditioning. In my case I have an IPv6 self-organising radio network at home. Each room in the house has a sensor which also doubles as a little radio router. And every 5 minutes it’s sampling temperature, humidity and light levels in the house and reporting that information through the network to a server down in my basement. I know, only a geek would do this. But the whole idea is that at the end of the year, I now have good engineering information about how well the heating, ventilation, and air conditioning works, so we can make adjustments instead of relying on anecdotal information. Now, one room in the house is the wine cellar. And there are 2,000 bottles of wine in there. So I care a great deal about keeping the temperature below 60 degrees Fahrenheit and the humidity up above 30 or 40%, to keep the corks from drying out.
That room has been alarmed: if the temperature goes above 60 degrees Fahrenheit, I get an SMS on my mobile. And at one point, my wife Sigrid and I were away from the house, and I got the message saying, your wine is warming up. And nobody was there to reset the cooling system. So every 5 minutes for 3 days, I kept getting the message saying, your wine is getting warmer. So it got up to like 70 degrees or something, which is not the end of the world, but it’s not great. So I called the guys that made this system and I said, do you guys make remote actuators? And they said yes. So, you know, I’m thinking, I could remotely reset the cooler. And I said, do you do strong authentication? And they said yes. And I said, good, because there’s a 15-year-old next door and I don’t want him messing around with my heating and air conditioning system. So we installed that. And then I got to thinking, I can tell if somebody went into the wine cellar because I can see that the light went off and on. But I don’t know what they did in there. So I thought, what can I do about that? And I said, aha, I’ll put an RFID chip on each bottle, and then I will put an RFID detector in the wine cellar. So I can do a remote inventory, no matter where I am, to see if any bottles have left the cellar without my permission. So I’m boasting to one of my engineering friends about this brilliant design and he says, you have a bug. I said, what do you mean I have a bug? And he says, well, you could go into the wine cellar and drink the wine and leave the bottle. (Laughter) So now I have to put sensors in the cork. (Laughter) And as long as I’m going to do that I might as well sample the esters, which tell you whether or not the wine is ready to drink. So before you open the bottle you interrogate the cork. And if that’s the bottle that got up to 75 or 80 degrees, that’s the one you give to somebody that doesn’t know the difference.
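The alarm logic in a setup like this is simple threshold polling. A toy sketch of the idea - the function names and hooks here are hypothetical stand-ins for the real sensor network and SMS gateway, not the actual product's code:

```python
SAMPLE_SECONDS = 300        # sample every 5 minutes, as in the talk
ALARM_FAHRENHEIT = 60.0     # the wine cellar's alarm threshold

def monitor(read_temp, send_sms, samples: int) -> int:
    """Poll the sensor `samples` times; send one SMS per breaching reading.

    `read_temp` and `send_sms` are hypothetical callables standing in for
    the real sensor hardware and messaging service."""
    alarms = 0
    for _ in range(samples):
        if read_temp() > ALARM_FAHRENHEIT:
            send_sms("your wine is warming up")
            alarms += 1
        # a real deployment would sleep SAMPLE_SECONDS between readings
    return alarms

readings = iter([58.0, 61.0, 70.0])   # three samples, two above threshold
messages = []
print(monitor(lambda: next(readings), messages.append, 3))  # 2
```

Nagging every 5 minutes, as described, is just this loop with no suppression of repeated alarms - which is exactly why the messages kept arriving for 3 days.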
So this is actually - I mean, the future really is going to be heavy in sensor systems. All around, the buildings will be instrumented, the cars will be instrumented, manufacturing facilities. And even ourselves, our bodies, will be instrumented as well. And that will all be part of this vast quantity of information flowing around in the network. If we look over the next 20 years’ time - these are, of course, just guesses. But today we think there could be 10 to 15 billion devices that are capable of communicating on the net. They typically are not all on at once, necessarily, but there could be that many. And in 2036, 20 years from now, the numbers could reach a trillion. There will be on the order of maybe 8½ billion people in the world in 2036. And they might have anywhere from 100 to 110 devices, either on their persons, or at home, or in other places that they inhabit. So these numbers are not totally crazy. But they do certainly motivate the need to get to IPv6, because we need all that address space for all these devices. Here are some of the things that we’re experimenting with at Google. Our Verily company is experimenting - it’s not manufacturing, but experimenting - with a contact lens which can sense the glucose level in the tears of your eyes. That’s related to the glucose level in your blood, although there’s a delay function associated with going from blood glucose to the tears of your eyes. The idea is that if you’re a type-1 diabetic and you’re tired of pricking your finger to take blood samples all the time, this is an alternative way of gathering the data. And since it’s a potentially continuous monitoring system, we can establish a baseline of what is normal for you. And then excursions away from the baseline can be detected very quickly. And so you can recover either by adding more insulin or eating something with sugar in it.
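The address-space arithmetic behind that motivation is easy to check, using the talk's own guesses for 2036:

```python
ipv4 = 2 ** 32          # ~4.3 billion IPv4 addresses
ipv6 = 2 ** 128         # ~3.4e38 IPv6 addresses
devices = 10 ** 12      # the trillion-device guess for 2036
people = 8_500_000_000  # ~8.5 billion people in 2036

assert devices > ipv4       # IPv4 cannot even assign one address per device
print(devices / people)     # roughly 118 devices per person
print(ipv6 // devices)      # ~3.4e26 IPv6 addresses still free per device
```

A trillion devices exceeds the entire IPv4 address space by more than two orders of magnitude, while barely denting IPv6's 128-bit space.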
So this idea of continuous monitoring, I think, is a very important theme, which you will see repeated over and over again. Continuous monitoring lets you see anomalies that you would not normally see if the sampling rate is too low. Think about the guys who never go to the doctor until they’re sick - really sick. So the doctor’s model of you is that you’re always sick. Because you never see the doctor when you’re healthy, you only see him when you’re sick. So this continuous monitoring could make a big difference. There are also Google self-driving cars. What you’re seeing here is model number 2. But I have a video showing you the first model, that we were testing with one of our blind employees, to see whether we could actually have the car drive our employee to work. So if I’ve done this right, I should be able to get the video to play. Nathaniel: Good morning Steve. Steve: Hey Nathaniel, how are you? Nathaniel: Doing just great. Nathaniel: Go ahead, Steve. Nathaniel: Here we go. Steve: Here we go. Steve: Look Ma, no hands. Nathaniel: No hands anywhere. Steve: No hands, no feet. Nathaniel: No hands, no feet, no nothing. Steve: I love it. Steve: So we’re here at the stop sign? Nathaniel: Yes, car is using the radars and laser to check and make sure there’s nothing coming either way. Steve: I find myself looking. Nathaniel: Old habits die hard. Steve: Hey, they don’t die. Anybody up for a taco? Nathaniel: Yeah, yeah, what do you want to do today Steve? Steve: I’m all for Taco Bell myself. Nathaniel: Alright, well let’s go get a taco at the drive-through. Steve: Now we’re turning into the parking lot? Nathaniel: Yeah. Steve: How neat. Nathaniel: There we go, now we kind of creep along here. Does anybody have any money? Steve: I’ve got money. Nathaniel: No, I’ve got my wallet right here. If you roll down your window and order a burrito. Yeah, push that. Waitress: How are you today? Steve: I’m doing very well, how are you today? Waitress: Good, thank you.
Steve: This is some of the best driving I’ve ever done. Steve: 95% of my vision is gone, I’m well past legally blind. You lose your timing in life, everything takes you much longer. There are some places that you cannot go, there are some things that you really cannot do. Where this would change my life is to give me the independence and the flexibility to go the places I both want to go and need to go when I need to do those things. Steve: You guys get out, I’ve got places now I have to go. Nathaniel: Bye now. (Laughter) Steve: And it’s been nice. It’s been really nice. You can imagine we’re pretty excited about all that, and hoping to keep at it. We have new models, as you can see, under development. There are other things that are happening; drones, for example, are everywhere. It sounds like we are still running the video, doesn’t it? Sorry about that. Oh no, it wants to take me all the way back to the beginning, I don’t want to do that. Well, here we go, fast forward. (Laughter) There we are, drones - you all know about these, they’re all over the place. In the US it’s very exciting. The Federal Aviation Administration has been going a little crazy trying to figure out what the rules are for what may become 27 million drones flying around in the US. I know Jeff Bezos wants to use the drones to deliver things for Amazon. I had dinner once with him. And I had this image of a cartoon that showed a drone hovering in front of somebody’s door. And sending a text message inside saying, I’m here with your delivery. And if you don’t open the door in the next 30 seconds I’m blowing it down. (Laughter) And Jeff laughed and I got nervous that he might actually do that. I hope he doesn’t. So that’s another part of this internet of things. And then there’s Project Loon, because it’s looney. These are balloons that Google has built, that are operating up to about 60,000 feet in the stratosphere.
As they move up and down they’re blown in different directions, depending on the wind. And so we actually steer them by changing the altitude. The idea is that they are doing WiFi or LTE - long-term evolution radio communications - from the stratosphere. And the idea is that they circulate at roughly a given latitude. They look for tail winds to get to the next service point. And then they look for head winds to hover where they are. But they continue all the way around the world. We are in operation now in Sri Lanka. And we’ve been experimenting with this for about 4 years now. So the idea here is to provide access to internet in places that would be very difficult to bring fibre to - up in the Andes Mountains for example, or the Sahara desert, or the middle of the ocean for that matter. And finally, I thought I would show you the Boston Dynamics latest little robot. And this one is called Little Dog. That’s Big Dog behind him. Oh, we get the advertisements here - that’s Google. That’s amazing, the stability of the head is incredible. Look at that, this is really impressive. There is a camera in the mouth, it turns out. You don’t have to build these the way dogs are built. This is also pretty impressive; going upstairs is fairly straightforward for it. By the way, there are 51 videos of these things on YouTube, in case you want to go look. Not just of the dog, but all the other robots these guys make. I have a little bit more that I want to show you. But what I am going to show you now is not a Google project. It’s a project that was started in the Jet Propulsion Laboratory in Pasadena. My colleagues and I began this work in 1998. I’ve been a visiting scientist there since that time. So this is the design and implementation of an interplanetary internet to support manned and robotic space exploration in the remainder of the 21st century. So we met right after the Pathfinder robot landed successfully on Mars in 1997.
And this is after many, many attempts to get to Mars after the 1976 Viking Landers. And we got to speculating about this, because the communications to support the Pathfinder was a direct point-to-point link between Earth and Mars. Of course, that was a pretty limited kind of networking capability. We thought, what would happen if we had a richer networking environment, kind of like the internet? And we actually started out thinking that we could use the TCP/IP protocols - they worked on Earth, they ought to work on Mars. But when we started thinking about this, it didn’t work very well. Those protocols didn’t work very well between the planets. And it should be obvious to you: the speed of light is too slow. Between Earth and Mars, when we’re closest together, it’s a 3½-minute one-way delay. And when we’re farthest apart it’s 20 minutes one way; round-trip time is 40 minutes. The TCP protocol was not designed to deal with a 40-minute round-trip time for flow control. The flow control is really simple. When you run out of room you tell the other guy, stop sending, I’ve run out of room. And if that’s only a few hundred milliseconds, that’s great. But if it takes 20 minutes for that signal to get to the other guy, and he’s blasting stuff at you for 20 minutes full speed, the packets are falling on the ground everywhere. So flow control didn’t work. Then there’s this other problem. The planets are rotating and we don’t know how to stop that. (Laughter) So if you’re talking to something on the surface and the planet rotates, after a while you have to wait till it comes back around again. Or if it’s an orbiter, it’s a problem. So by 2004, we had the 2 Rovers that were sent to Mars, Spirit and Opportunity. The original plan was to transmit data from the surface of Mars the way we had with the Pathfinder. And the radios overheated, and everybody got all worried about that. And they were only rated at 28 kilobits a second.
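The flow-control failure described above is easy to quantify. Even at the rovers' modest 28 kilobits per second, by the time a "stop sending" signal crosses the worst-case 20-minute one-way delay, the sender has already put megabytes in flight:

```python
one_way_seconds = 20 * 60   # worst-case Earth-Mars one-way light time, from the talk
rate_bps = 28_000           # the rovers' 28 kbit/s radio, from the talk

# Data transmitted (and potentially lost) before the stop signal arrives:
in_flight_bytes = rate_bps * one_way_seconds // 8
print(in_flight_bytes)      # 4200000 bytes - about 4.2 MB on the floor
```

At internet-scale link rates the in-flight data would be correspondingly larger, which is why a protocol built around sub-second feedback loops simply does not transfer to interplanetary distances.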
So the scientists were not too happy about that kind of data rate coming from the surface. And then we said, we’re going to have to reduce the duty cycle, because we don’t want the radio to overheat and harm any of the other instruments or itself. So they were all upset. And one of the engineers at JPL said, well, there’s an X-band radio on the Rover. And there’s also an X-band radio on the Orbiters, which we had sent earlier to map the surface of Mars to figure out where the Rover should go. So we reprogrammed the Rovers and the Orbiters, so that when the Orbiter came overhead, the Rover would squirt data up to the Orbiter. And the Orbiter would hang on to the data and transmit it when it got to the right place in its orbit to reach the Deep Space Network, which has big 70-metre dishes at 3 places on the surface of the Earth. That’s store and forward, and so we developed a whole suite of protocols, called the Bundle Protocol, to do that. The prototypes are actually running now, pulling all the data back from Mars. And when we dropped the Phoenix Lander onto the north pole of Mars, there wasn’t any configuration that had a direct path back to Earth. So we used the same set of protocols. And when the Mars Science Laboratory landed, we did it again. So all the data that’s coming back from Mars is going through the prototype Bundle Protocols of the interplanetary system. We’ve uploaded those protocols on to the International Space Station. We’ve had the astronauts using the Bundle Protocols, the Interplanetary Protocols, to control Rovers on the surface of Earth in real time, because the distance is fairly small. They’re using the interplanetary protocols, which work just fine over short delays, just like TCP does. But they also work over these long-delay, variably connected systems. So what we’re hoping, frankly, over time is that as - oh, we’ve standardised the protocols with the Consultative Committee for Space Data Systems.
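The store-and-forward pattern described here can be sketched in a few lines. This is only a toy illustration of the idea - a node takes custody of data while disconnected and forwards it when a contact opens - not the real CCSDS Bundle Protocol:

```python
from collections import deque

class DTNNode:
    """Toy delay-tolerant node: store bundles until a contact is available.

    A sketch of the store-and-forward idea only, not the actual
    Bundle Protocol implementation."""

    def __init__(self, name: str):
        self.name = name
        self.stored = deque()

    def receive(self, bundle: str) -> None:
        self.stored.append(bundle)      # take custody while disconnected

    def contact(self, next_hop: "DTNNode") -> None:
        """Forward everything we hold when a link to next_hop opens."""
        while self.stored:
            next_hop.receive(self.stored.popleft())

# A rover squirts data up to an orbiter; later the orbiter reaches Earth.
rover, orbiter, earth = DTNNode("rover"), DTNNode("orbiter"), DTNNode("earth")
rover.receive("image-001")
rover.receive("image-002")
rover.contact(orbiter)      # orbiter passes overhead
orbiter.contact(earth)      # orbiter reaches the Deep Space Network
print(list(earth.stored))   # ['image-001', 'image-002']
```

The key contrast with TCP is that no end-to-end connection ever exists: each hop holds the data, possibly for hours, until the next contact window, and delivery still succeeds.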
So now anyone can get access to them: They’re fully standardised. The protocols are available. The implementations are available on SourceForge for free for anyone who wants to download them and use them. In fact, there’s a German university that’s implemented these for Android mobile phones as well. So what we’re hoping is that, as new missions get launched by the spacefaring countries of this planet, they will adopt these protocols and use them for their missions. Or even, if they don’t use them for the scientific mission, if they could reprogram the spacecraft after they’ve finished their primary missions, they could become nodes in an interplanetary backbone. So we will literally grow the backbone over time as new missions get launched to the solar system. So that’s the up-to-the-minute story on the interplanetary internet. And that is my last slide. So I’ll finish and thank you all very much for your time.

Heidelberg Lecture: Vinton G. Cerf (ACM A.M. Turing Award 2004) (2016)

The Origins and Evolution of the Internet

Abstract

This talk will explore the motivations for and design of the Internet and the consequences of these designs. The Internet has become a major element of communications infrastructure since its original design in 1973 and its continued spread throughout the societies of our planet. There are many challenges posed by the continued application of the Internet to so many new uses. Safety, Security and Privacy are among these challenges especially as the so-called "Internet of Things" continues to evolve. The increased digitization of our information poses a different challenge: the preservation of digital content for hundreds to thousands of years. Might we be facing a Digital Dark Age?