Campus Communication Strategies
Creating Internet2 Transcript

Internet2 Technical Challenges

Doug Gale
Vice President for Information Systems and Services
George Washington University
dgale@gwis2.circ.gwu.edu

In this section I would like to discuss some of the technical challenges facing the implementation of Internet2. Before considering those challenges, however, we should review some of the engineering objectives of Internet2. First, we want to enable the type of advanced applications described by Ted Hanss in another section of this seminar. Second, we want to strengthen universities' ability to accomplish their research and education missions. Third, we want to pioneer specific technical advances in data networking, such as Quality of Service. Fourth, we need to establish gigaPoPs as effective service points capable of supporting an expanding number of institutions. And finally, we must always keep in mind that applications motivate engineering changes, and those engineering changes in turn enable new applications. This realization may mean that additional engineering objectives will need to be considered.

Our objectives are clear, but we are faced with trade-offs among cost, functionality, and speed of implementation. In this section, we'll consider in more detail four key engineering issues confronting the implementation of Internet2: how to deal with large delay-bandwidth products, how to introduce Quality of Service across a network, how to expand multicast support, and how to balance the advantages and limitations of a new network protocol, IP version 6.

Let's first turn to the issue of large delay-bandwidth products. The delay-bandwidth product is the mathematical product of the delay, or transit time for a packet, and the bandwidth of a network. As the delay-bandwidth product grows, it becomes more difficult to sustain a steady stream of data from end to end. This conclusion might seem counter-intuitive, so I'd like to explain what's happening in a little more depth.

Delay is made up primarily of two components: the distance divided by a propagation speed slightly less than the speed of light, and the delay introduced by the network electronics. Both quantities are typically tens of milliseconds. Assume for a moment that we are sending a large amount of data across the country. What happens if the network bandwidth is increased by a factor of ten, but the delay remains constant? Our intuition tells us more bandwidth is better, but as the graphic points out, when the delay-bandwidth product grows we have a problem.

So what's going on? The problem lies in the way TCP/IP acknowledges to a sender that a packet was received correctly. For transcontinental fiber, there is typically a 50-millisecond round-trip propagation delay for a packet and its acknowledgment. In other words, the sender will keep sending additional packets for 50 milliseconds before it finds out whether the first one got through. Some of those packets will be lost and will have to be retransmitted, and until an acknowledgment arrives the sender must hold on to every packet it has in flight. So if the bandwidth increases by a factor of ten, the number of unacknowledged packets also increases by a factor of ten, and therein lies the problem. At some point the entire network is consumed with the handling of these unacknowledged packets.

Perhaps an analogy might be helpful. If we were to increase the number of runways at our nation's airports and simultaneously increase the number of airplanes by a factor of ten, we probably wouldn't see passenger traffic increase much at all. In fact, because our air traffic control systems would be so overloaded, passenger traffic would probably decrease. It's another case where too much of a good thing isn't necessarily good!
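To put rough numbers on this, here is a minimal back-of-the-envelope sketch. The 50-millisecond round-trip time comes from the discussion above; the link speeds (DS3, OC3, and OC12, which come up again later in this talk) are illustrative round figures rather than Internet2 measurements. The calculation simply multiplies bandwidth by delay to estimate how much data is unacknowledged at any instant.

```python
# Back-of-the-envelope sketch of the delay-bandwidth product: the amount of
# data that can be "in flight" (sent but not yet acknowledged) on a link.
# Link speeds below are illustrative round figures, not Internet2 measurements.

RTT_SECONDS = 0.050  # roughly 50 ms transcontinental round trip

def bytes_in_flight(bandwidth_bps, rtt_seconds=RTT_SECONDS):
    """Delay-bandwidth product in bytes: bandwidth times round-trip delay."""
    return bandwidth_bps * rtt_seconds / 8

for name, bps in [("DS3  (~45 Mbps)", 45e6),
                  ("OC3  (~155 Mbps)", 155e6),
                  ("OC12 (~622 Mbps)", 622e6)]:
    print(f"{name}: about {bytes_in_flight(bps) / 1024:,.0f} KB unacknowledged")

# Ten times the bandwidth at the same delay means ten times as much
# unacknowledged data for senders, receivers, and routers to keep track of.
```

At OC12 speeds this works out to nearly four megabytes that the endpoints must buffer for possible retransmission, which is why simply adding bandwidth does not by itself yield a steady end-to-end stream.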
So we have a problem: as the product of packet delay and bandwidth grows, so does the number of unacknowledged packets, and that growth in turn makes it more difficult to sustain a steady stream of data from end to end. This has several consequences. First, it requires that we increase the number of direct physical paths, which increases network complexity. In other words, the network requires more fiber and more wire. Second, it exacerbates the trade-off between buffering and the variation in delay, or jitter. We know we can reduce jitter by adding buffering. That buffering, however, increases the delay, which in turn increases the number of unacknowledged packets, which in turn increases jitter, and the whole vicious cycle starts again. In order to support applications such as video conferencing for distributed learning, we must address the limitations associated with large delay-bandwidth products.

Introducing Quality of Service within a network creates both technical and administrative challenges. For example, providing Quality of Service in only some portions of the network is not sufficient. Users want end-to-end Quality of Service that spans multiple networks and service providers. Users don't care what it takes to deliver desktop-to-desktop service, as long as what shows up on their screens looks and sounds good. Unfortunately, Quality of Service technology is still in its infancy, and we have a long way to go. Establishing end-to-end Quality of Service may be even tougher than maintaining it at the campus level or between campuses, because individual desktop computers themselves may be inadequate. For example, you can establish Quality of Service right down to the PC, but then you can hit a proprietary operating system that may not support Quality of Service. The greatest loss in Quality of Service may actually take place in the last hundred microns of transmission -- on the processor chip itself.

Introducing Quality of Service may be just as hard at the administrative level. In any situation where you're giving some applications more bandwidth and resources than others, the question of resource allocation naturally arises. On an administrative level, you need to develop some sort of process that says "yes" to some applications and "no" to others. Authentication becomes even more important when you're saying "yes" to some folks and "no" to others: you have to have some way of knowing who's asking for the service. This is one of two or three areas in the Internet2 project where authenticating users will be very important.

Finally, measuring and monitoring network use and performance are important for two reasons. The first is to ensure that the network meets performance requirements. The second is to support differential charging for network services. The Internet2 will be too expensive to be treated as an unlimited free resource.
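As one concrete illustration of what saying "yes" to some traffic and "no" to other traffic can look like, here is a minimal sketch of a token-bucket policer, a standard building block for giving a flow a guaranteed rate. The class name, rate, and burst size are illustrative assumptions, not a description of how Internet2 or any particular gigaPoP implements Quality of Service.

```python
import time

class TokenBucket:
    """Minimal token-bucket policer: admit traffic while tokens remain."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec      # promised sustained rate
        self.capacity = burst_bytes         # how much burst is tolerated
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at the burst size.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True                     # give this packet premium treatment
        return False                        # fall back to best effort (or drop)

# Illustrative numbers: a flow promised 1 Mbps with an 8 KB burst allowance.
bucket = TokenBucket(rate_bytes_per_sec=125_000, burst_bytes=8_192)
print(bucket.allow(1_500))                  # True: a typical packet fits the burst
```

The administrative questions raised above sit on top of a mechanism like this: someone still has to decide which flows get a bucket at all, and authentication is what ties a request for premium treatment to a responsible person or application.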
This graphic shows a sketch of one approach to implementing Quality of Service. It's very non-technical, but it does illustrate a few of the points I'd like to make. The two clouds, labeled A and B, represent campus networks, each implementing its own Quality of Service. The two networks are linked to each other in the center of the graphic. In a typical network there will really be many more clouds on the path between end users, but for simplicity's sake, I'm showing only two. There are three questions to ask in implementing any Quality of Service. First, what level of Quality of Service is needed to support the application? Second, how can that Quality of Service be implemented within the various network segments? There are different ways of providing Quality of Service, and the appropriate technology might differ between segments. For example, cloud A may be an ATM network, and cloud B a massively overbuilt IP network. The last question arises when we glue the different network segments together: will we get the end-to-end Quality of Service needed to support the application?

We also need to improve our support of multicasting. Multicasting is the ability to send the same packets from a single source to multiple recipients. On the current Internet, multicasting is supported by something called the MBone. Right now, however, the number of people using the MBone is relatively small. Many of the advanced higher education applications we envision for Internet2 are naturally multicast. For example, one user broadcasting to many is characteristic of distance education; a few users broadcasting to a few is characteristic of a graduate seminar or a small conference. Unfortunately, it's hard to scale our current multicast technology. In other words, while we can support multicasting for small communities with the MBone, it will be challenging to extend that capability to large communities.

The final engineering issues we must consider are the limitations of the Internet Protocol, or IP. There are currently millions of users on the Internet using IP version 4. The current Internet has some difficulties, however. Although it theoretically has enough addresses to meet current and near-term needs, the way addresses are assigned in blocks means that only a fraction of those theoretically available addresses are actually usable. Also, the routing tables in the current Internet have become unmanageably large. Imagine the problem the telephone companies would have if -- instead of grouping area codes by location -- area codes were assigned alphabetically by a person's last name. Each local telephone exchange would need a directory containing everyone in the country. Finally, from our perspective the current Internet is limited because IPv4 does not support Quality of Service.

A greatly enhanced version of IP, IP version 6, has been developed to address these limitations. IPv6 dramatically increases the number of usable addresses, compacts routing tables, and, when combined with another protocol called RSVP, will probably provide Quality of Service adequate for our needs. Moving from IPv4 to IPv6 will require a great deal of effort, however, because of the size of the Internet. In addition, there are alternative technologies, such as ATM, that address IP's limitations. Because ATM is heavily supported by the telecommunications industry, we need to plan any transition very carefully so that we don't paint ourselves into a corner. IPv6 will help solve some of our current problems, so we believe it should be adopted as quickly as possible. Software supporting IPv6 is now available, and we expect version 6 to become increasingly common.
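To give a feel for the two IPv6 points made above -- the larger address space and the more compact routing tables -- here is a small sketch using Python's standard ipaddress module. The address widths come from the protocol definitions; the prefixes are drawn from the reserved documentation range 2001:db8::/32 and are purely illustrative, not part of any actual Internet2 addressing plan.

```python
import ipaddress

# Address-space comparison: IPv4 addresses are 32 bits, IPv6 addresses are 128.
print(f"IPv4: {2 ** 32:,} addresses")
print(f"IPv6: {2 ** 128:.3e} addresses")

# Hierarchical assignment is what keeps routing tables compact: a provider can
# announce one aggregate prefix instead of a separate route per customer,
# much like grouping telephone area codes by location.
customers = [ipaddress.ip_network(f"2001:db8:{i:x}::/48") for i in range(4)]
aggregate = ipaddress.ip_network("2001:db8::/32")
print(all(net.subnet_of(aggregate) for net in customers))  # True
```

The aggregation shown is the same idea as the area-code analogy: when addresses are handed out in location- or provider-based blocks, one routing-table entry can cover many networks.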
Just where are we in our progress toward these engineering objectives? I am pleased to say that our 1997 aspirations were met. Because they are currently uncongested, our advanced networks are operating well at high speed using best-effort IPv4. DS3 and OC3 links are typical, and there has been some use of OC12. About 15 gigaPoPs are currently operational, and about 50 universities are connected. In addition, 22 universities and Advanced Network and Services (ANS) have deployed "surveyors" at 23 sites in the United States to continuously monitor the performance, health, and reliability of the Internet.

In 1998, our goal is to introduce Quality of Service, improve multicast support, introduce IPv6, and establish gigaPoPs as effective service points. These service points will serve as the nucleus that will ultimately provide services to all of higher education. This final graphic depicts the number of gigaPoPs in early 1998 and is dramatic evidence that the Internet2 is rapidly becoming an operational reality.