Participants:
Judith Boettcher (JB)
Greg Marks (GM)
Doug Gale (DG)
Greg Palmer (GP)
JB: Welcome to the CREN Virtual Seminar Expert Series, Campus Communications Strategies, for Spring, 1998. Whether you are joining us by phone or by Internet audio, you are here because it's time to discuss some of the leading strategies for funding campus networking. This is Judith Boettcher of CREN, one of your hosts for today's session. Today's co-host is Greg Marks, from Merit. Good afternoon, Greg. How's the weather in Ann Arbor today, by the way? GM: Weather? Ordinary, just gray, in the middle 30's, not doing much of anything. JB: I'd rather have that. We've had at least two inches of rain here in Washington, but we're glad that it hasn't turned to snow. We are pleased to say that our guest expert today is Doug Gale, the Associate Vice President for Information Systems and Services at the George Washington University. Doug is nationally known for his work in building and funding national and campus networks, and he is also the seminar leader for the CREN Virtual Seminar on Campus Communication Strategies. Welcome, Doug. Thanks for being here today to discuss some of the leading strategies in funding networks. We're pleased that you could be here today. DG: Thanks, Judith. I'm glad to be able to participate. You know, it's interesting. If we look in the newspaper for the last couple days, we see an ever-growing interest on the part of the telecommunications companies to provide Internet services -- and that's good, because it's going to help us provide connectivity into the home and perhaps will allow us to focus our attentions on rebuilding our campus networks. GM: That would be a real good thing, particularly having the residential service taken care of. Let me take this moment here just to remind everybody who's listening in that there are two ways that they can join this session, one of which is by dialing 734-7673-1533 and joining us live on this conference call. If you make that call, you'll find yourself right here in the midst of us talking.
No one will be reviewing and connecting you after you call. You're just right in there with us. The other option is to go to the link off the CREN homepage at www.cren.net. For those of you joining us -- whichever way -- please also recognize that you can send us e-mail during the session to the address ccs@cren.net. You could include your name and title, if you like, and when that e-mail reaches us, we'll respond to your questions or comments during that session. The process itself: we will go through some questions and answers, but as yours come in, we'll definitely turn to those and try to respond to what those of you out there would like to know from Doug. JB: Okay, very good. And it sounds like we have had some activity on the line. Let me ask right now, is there anyone else who's on the conference call right now? GP: Greg Palmer from Drexel University. JB: Welcome. GP: Thank you. JB: Glad that you are choosing to join us in this way. Do join in when you have a question to ask. GP: All right, thank you very much. JB: Okay, Doug, this is a question now for you: I think as far as networking is concerned, we're always talking about how rapidly technology is changing. Since we've had the Virtual Seminar on Campus Communication Strategies come out on CD, how do you feel the content is holding up? Do we have to re-do it right away? DG: I don't think so, because I think the content has held up very well. The reason, of course, is that we based the seminar on our academic requirements -- what it is we needed to be able to do -- and then we focused on the fundamentals of how we would go about doing it. We didn't get too far into the technical nitty-gritty. For example, the original definition for our requirements came from the Monterey Futures Group in 1995, and that workshop essentially said we wanted to be able to support real-time audio and video to every desktop. 
It didn't specify precisely how that would be done, but it said that's what we want to do, and then aggregated those requirements at the campus and the national level. One of the things that came out of that aggregation is a campus networking strategy based upon a star architecture using switching technology, and that's as true today as it was two years ago. JB: That's kind of exciting, Doug. So in other words, some of the basic principles for all of the planning are remaining constant, although some of the details are changing. But the basic principles and architecture is in fact consistent. DG: That's correct. JB: Okay, great. Now listen, you know, one of the things that everyone is struggling with is the funding of the campus network. What's been happening the last six months? Are there any new creative approaches surfacing to address this problem? DG: Unfortunately, no. We continue to wrestle with the issue of how to fund campus networks. We know, for example, that the data network is far more expensive than we would like to admit, and we see really three strategies that are being employed by various institutions: One strategy is to try to bill the end user for the total cost of the data network connection, and quite frankly, I don't know of any institution that has totally implemented that approach. The other extreme would be to centrally fund the data network connection. The difficulty, of course, is that as a free commodity, the demand tends to grow without bound. And of course, the third alternative is some mixture of those two. That's the approach being taken by most institutions, and they're having a great deal of difficulty in trying to find the balance that's appropriate for the institution. 
What I typically see happening at most institutions is that as they struggle with developing a long-term funding strategy, they make use of ad hoc solutions -- one-time moneys, emergency funds -- and they manage to just get by and push the problem into the next year. JB: And in terms of that kind of a mix of dollar amounts and all, is it growing overall? I mean, as we see so many advances in other technology with computers and all the rest, we see that really the amount of money that we're spending on a fairly high-end, multi-media kind of computer, that capacity is really increasing, but the cost of the computer is staying about the same. Is the same kind of phenomenon happening in networking infrastructure costs? DG: If we focus on traditional IP technologies (the kind of network connections that we have now) what we're seeing is that to a first approximation, the increased user-demand is roughly being offset by the improved performance cost-ratio of the equipment. So to a first approximation, our expenditures and our costs are staying constant. JB: Okay. So the expenditures on a yearly basis, then, are staying fairly constant. GM: You started out that comment, though, Doug, with a qualifier that sounded like: "If you're doing straightforward commodity IP, then everything's okay." What was it that was worrying you as you said that? JB: Good question. DG: Well, I know on my campus, we're looking towards being able to provide video to the desktop sometime later this year. My concern is that the cost of providing these services may actually increase in absolute terms the cost of providing the data connection. That's one reason that I think we need to be looking at the integration of voice, video and data services -- so that we can develop a workable and viable funding model. 
Rather than saying we need to spend $25 to $35 a month for a telephone connection and $50 to $70 a month for a data connection and perhaps $10 a month for a video connection, perhaps we can lump all three together on that high-performance data connection. I'd much rather try to sell that to my campus than to try to sell all three added up separately. GM: You also, in that response, referred to the patchwork of campus funding. Are there any campuses that have found ways that follow any pattern for working out of it -- that is, getting to a really deliberate life-cycle and recurring funding? Or has it really just come down to particular administrators biting the bullet and doing the reallocation within the base budgets of the institution -- when you finally have the administrator who's up to that task? DG: Well, Cornell and MIT have gone a long way towards implementing a strategy based on the user paying for the cost of the service. I would argue that they don't recover all of their costs, but they recover a large percentage of them. I think that's unusual for most campuses, however. I know of one large west coast university that for a number of years has been struggling with the issue of developing a scaleable funding model for the data network. They made a decision several years ago that they would no longer charge for data access, and they saw the number of connections quadruple in two years -- which was the result that they wanted. At that time, though, they realized that they needed to have a better funding model, so the president of the institution formed an ad hoc task force, chaired by himself and made up of the vice presidents, because he realized that this would be a terribly divisive issue. At the end of several years, he retired and left the problem in the envelope for the next president, who in turn has established an ad hoc task force made up of the president and the vice presidents to try to wrestle with this terribly difficult issue.
JB: So this funding strategy is called the President's Envelope Strategy, is that right? DG: Yes. It is a terribly difficult problem because it involves extremely large transfers and reallocation of resources. GM: Often when there's a financial crunch like that, it's at the staffing levels -- the number of support staff who are there to maintain, install, and so forth. Is that what you see? Can you think of any strategies for trying to do more than say that that's unwise? DG: It is one of the things that happens, and I don't have a good strategy for addressing it other than to continually point out the relationship between the quality of services that the staff provides and the productivity of faculty and students. You did raise another point, as we look at these strategies and the funding, and I'll come back to it later. Go ahead. JB: Maybe it's a good time to ask our guest on the line, Greg Palmer -- did you have a question? GP: Yes. JB: Okay, great. GP: We were rather fortunate here at Drexel University in Philadelphia to get a substantial amount of funding to completely upgrade the infrastructure. But this is a one-shot deal, and I'm fairly new to the university environment. I've been in the corporate world for the past 16 years, so I'm finding out that there's a cyclical effect here where money becomes available, things get done, and then it lags for a certain period of time. I'm trying to overcome this sort of thing with such concepts as life-cycle management of the equipment here. We want to deliver the same things that you were talking about. We want to deliver video to the desktop. We have partnered with 3Com Corporation in ways to do that and to bring us up to an infrastructure that will allow us to do those sorts of things. But the funding issue on an on-going basis is a blank slate right now.
Because I've gotten a lump sum of money to improve the infrastructure, that gives me about two years to come up with a way of supporting maintenance costs, supporting life-cycle management, supporting staff that's going to make all of these services available to the university as a whole. So I'm listening very closely on all these different ways of coming up with an on-going means of supporting everything that we put in place. JB: That's a really good question, Greg. Doug, do you have any advice with part A, the windfall: What to do with the windfall, and then how to plan for the on-going? DG: Yes. I think we've all wrestled with that issue, and if any of us had "the answer" it'd make a lot of money, and we would make a lot of people very happy. I don't have the answer. I can tell you some things that people have done. For example, the spreadsheet that is part of the CREN Campus Workshop was actually developed in an attempt to try to convince campus financial officers that a certain amount of money needed to be set aside as on-going money. So there is the approach of trying to explain in rational terms the needs for life-cycle management. That has had marginal success at most institutions. Another approach that's frequently employed -- one that I've employed myself -- is to take that one-time money and use it to lease equipment, with the understanding that when the period of the lease ends, someone is going to have to pick up the maintenance or the network ceases operation. We have managed to convert almost all of our public labs to leased equipment. Our program for putting computers on faculty members' desks is based entirely upon leased equipment, and the small incremental cost that one pays an external leasing agent may have internal political benefits by forcing the university to view these expenses as on-going expenses. So I think a big problem is upgrading equipment on a timely basis, and I think there are some ways that we can address that. 
The other problem that you mentioned is even more difficult, and that's how do we get operating money to hire additional support staff? And I don't have a good solution for that. GP: I have a possible solution to that. I guess Drexel is a mid-sized university, and all the colleges within the university have their own plans as far as what they would like to do for networking. There's a big distance-learning initiative going on here. We have a grant proposal in for vBNS, and it looks fairly good that we're going to get that -- a decent-sized research community here. And if we're going to offer the infrastructure, the various schools will have to help support that. I have gone to the deans and found them very cooperative here. They currently allocate money for their own staff -- networking staff, though these are people that do a multitude of different things. Some of them are willing to relinquish that money to the central computing group here, saying, "Fine, let us focus on what we do best, which might be physics or engineering or chemistry. You take care of the networking problems. We're willing to support some funds towards that effort." It looks like that may work. GM: It would be marvelous if it does work. DG: I think you have an excellent strategy and an excellent idea, and you are obviously doing a very persuasive job. That may be something that the rest of us should try and see if we can emulate. JB: Perhaps we're even at that point in the cycle. Sometimes these things happen so that when the deans have had the problem for a while and are dealing with it, they'd just as soon hand it over if they think that there's a solution in hand. They don't have to deal with it. It certainly is a strategy that's worth trying.
GP: The difficulty that they've experienced is that there's a disjointed effort: you have different colleges, different departments, all going in different directions, and the attempt here is to centralize those services so that everybody's going in the same direction. It tends to slow the advancement of technology down a little bit because everything has to be flexible enough to accommodate 15 different ways of doing things. JB: That's true, and speaking of some consistency, maybe we can ask Doug to address some of the challenges we're facing with the various competing architectures. Doug, you mentioned IP technology, and other places are certainly looking at ATM solutions. Do you have some recommendations as far as those two technologies are concerned? Can they coexist on campus? DG: Yes, they can coexist. The difficulty is when we want to look at providing services to the desktop. For example, if we extend ATM all the way to the desktop but we have other users who are running IP at the desktop, we do have problems in moving information between the two. Now, a lot of campuses are currently using ATM on their backbone, but their basic network infrastructure is still IP. They're simply using ATM as the fat pipe between various routers. That's being done in many campuses now and works very well. That's how we're using the vBNS -- it's essentially an ATM fat pipe between IP routers. There is a problem, however, when we try to move all the way to the desktop, and I don't think at this point in time it's clear whether enhancements to IP (such as IP version 6 with RSVP) or ATM will ultimately prevail. In other words, we don't know whether it's going to be Betamax or VHS. The good news is that as we make our investments in infrastructure, if we're careful our core infrastructure is going to be the same for supporting both technologies. The difference will be in the software and some of the equipment that we put on that infrastructure. 
And since we need to replace that equipment basically every three and a half years, we don't have to necessarily guess right. If we install an infrastructure based on a star architecture and based on switching technology, if we guess wrong tomorrow, we're going to change it out in three and a half years anyway. So I think the good news is that, even though we can't predict which one of those technologies will ultimately predominate, it doesn't make too much difference. GM: You've recently been making comments, Doug, that specific technology even within that framework, largely star architecture to get to the desktop, has changed radically -- specifically fiber to the desktop. Would you comment on that? DG: Yes. I think everyone has for many, many years always dreamed about running fiber all the way to the desktop. There are a number of advantages (and I'll come back to them in a minute) but the initial advantage that we thought about was simply the potential for extremely large amounts of bandwidth. We always rejected the fiber to the desktop because the initial installation cost was extraordinarily high. The fiber itself was a little bit more expensive. The cost of putting connectors on the end was much more expensive. Media adapters were required to translate between the optical signal and an electrical signal that was needed by the computer. When we added all of those costs up, fiber to the desktop was simply impractical. There have been a few locations that have said that if you did an honest job of accounting for the cost of building wiring closets, then even with those limitations, fiber to the desktop was still competitive. Those folks raised, I think, the key issue, and that is the cost of building wiring closets. In traditional copper-based architectures -- for example, Category 5 -- you're limited to a 100 meter run. On my campus, that means that I need to have 140 wiring closets.
Each of those wiring closets requires (if we use high-performance networks) fairly expensive electronics. That means air handling and electrical power. If, on the other hand, I have a fiber to the desktop approach, I can run a signal 2,000 meters -- a little more than a mile. With that distance, I can reduce the number of wiring closets dramatically, so my construction costs go down. The other thing that's happened in the last year has been the introduction of less-expensive fiber and less-expensive connectors on the ends that can be installed in about two minutes by a relatively unskilled technician. In fact, that product was announced generally yesterday at ComNet. It was originally developed by 3M. They're now licensing it to another 20 companies. If people are interested, they can look at the 3M website, which is www.3m.com/volition. We've been installing that on a beta basis for the last year and have found that it's working quite well. When we did the economic analysis for rewiring our entire campus, what we found was that fiber to the desktop was actually less expensive than Category 5 because of the cost of building the wiring closets. Now, other campuses might not have that same experience because the cost of building might be less. They might have steam tunnels that they could use to reduce some other costs. But for us, fiber to the desktop was actually cheaper. The performance advantage is obvious. We included in our cost analysis the cost of building in a media adapter at every outlet with the thought that everyone can accept an electrical signal now, and as users upgraded their network cards, we would simply throw away the media adapter. The other advantage refers to a comment that the gentleman from Drexel made earlier. It has to do with the difficulty of developing a coherent network architecture for an entire campus. Our current design calls for using 11 switch rooms to service the entire campus.
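The run-length arithmetic behind those closet counts can be sketched with a simple coverage model. This is only a back-of-envelope illustration, not Doug's actual analysis: it assumes each closet serves everything within its maximum run length, so the closet count for a fixed campus area scales roughly as the inverse square of the run length. The 100-meter and 2,000-meter limits and the 140-closet figure come from the discussion; everything else is hypothetical.

```python
import math

# Maximum horizontal run lengths, in meters
CAT5_MAX_RUN = 100     # Category 5 copper limit cited in the discussion
FIBER_MAX_RUN = 2000   # centralized-fiber run length cited in the discussion

def closet_ratio(short_run, long_run):
    # A closet covers roughly a disc of radius max_run, so the closet count
    # for a fixed area scales as 1 / max_run**2.
    return (long_run / short_run) ** 2

def closets_needed(copper_closets, short_run=CAT5_MAX_RUN, long_run=FIBER_MAX_RUN):
    # Hypothetical: scale an observed copper closet count down by the coverage
    # ratio; real campuses are constrained by buildings, conduit, and zoning.
    return max(1, math.ceil(copper_closets / closet_ratio(short_run, long_run)))

print(closets_needed(140))  # 140 copper closets -> 1 under this idealized model
```

The idealized answer of a single switch room matches Doug's remark that only zoning constraints on conduit under city streets push the real design up to 11 rooms.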
We would be able to reduce that to one switch room if we did not have some zoning problems in putting conduit under DC city streets. But we are going to be able to reduce everything on our campus to 11 wiring centers. From each of those wiring centers there will be a massive star with hundreds and, in some cases, thousands of pairs of fiber leaving that particular room. That is going to allow us to mix and match. For example, we can provide a faculty member in one office 100 megabit EtherNet; the person in the next office, we could provide 10baseT. The person in the next office, we might provide gigabit service, since all of these fibers come all the way back to our master wiring center. There's also a management advantage in that it allows us to try to provide a coherent service for the entire campus. If an individual college, on the other hand, wants to run their own Local Area Network, we'll simply give them a connection and allow them to provide their own connection. JB: That sounds really exciting for any number of reasons, Doug, both in terms of economics as well as in terms of service and flexibility -- it gives people who really need the big pipes access to it, hopefully at a reasonable price. Maybe we can link what you were just talking about there with the fiber to the desktop strategies with a question that has come in from Eric Granstaff at North Central Michigan College. He wanted to know if you could address the question about the fiber to the desktop strategies, which I think you have, but then also there's the follow-on questions about do we really need that kind of bandwidth at the desktop? DG: The answer to that is that there's two answers: Immediately, do we need the kind of bandwidth that could ultimately be provided by fiber? The answer is no, we don't need it now. The way we designed our network, however, was that we initially are looking at providing each user with their own dedicated 10 megabit EtherNet connection. 
We believe that that will provide almost all of our users all of the bandwidth that they need for probably the next four to five years. There will be a few exceptions, of course. Now, we could do that with Category 5 using traditional technology. It turns out we can do it with fiber optic just as cheaply. In five years, though, we'll have the option of taking out the media adapter, throwing it away, plugging the fiber into a high-performance NIC card on the back of the faculty member's now much-more-powerful PC, and providing them a 100 or 500 or 1,000 megabit connection, as their requirements demand. So in terms of immediate requirements, no, it isn't really necessary. In terms of building in scaleability to allow us to meet future needs, I think it becomes very attractive. GM: In your experience -- you said you've been essentially in a beta mode with this for about a year: Are there any downsides to the installation process, vulnerabilities that were surprises that one ought to think about in the trade-off vis-a-vis the long history of working with copper? DG: We were concerned about that, and we put the fiber through a fairly brutal test. For example, in my office they took the piece of fiber and plugged it into the wall outlet, then simply laid it on the floor so that every morning I would roll the wheels of my desk chair over the fiber. And as you know, Greg, I am not a lightweight. It has survived! In the machine room, as we connected the machines -- you would be amused to see the back of the wiring panel because the guys would take an extra ten feet or so of fiber and they would just go around wrapping it tightly around posts that were sticking out of the back of equipment. The posts would be sticking out and they would go wrap wrap wrap wrap wrap wrap, yank! Pull it over to the next post, wrap wrap wrap wrap wrap wrap, yank!
They'd just wrap and tie that extra ten feet of fiber and just wrap it around something to get it out of the way, and then they'd plug it in the back. So we deliberately tried to abuse it, and we were amazed to find the stuff continued to work. GM: I'm concerned that a campus like yours has a level of sophistication in the installation crew and so on, but as you go to smaller institutions that may contract this out and so forth, it gets to be -- and it's brand new, so history says there will be surprises. It's tempting to say, "Well, let's let some other people try this for two or three years before we jump on the bandwagon." DG: I have to be honest. When we first started installing the connectors on the end, our yield rate was not 100%, and sometimes it took a little bit longer than we had anticipated. However, the company has gone back and redesigned their installation equipment so that now a relatively untrained technician can put the connector on in about two minutes with fairly close to 100% success rate. Many of those bugs did exist when we first started testing it, and our success rate -- our yield rate -- was not anywhere close to 100%, I have to confess. JB: Okay, we had one other question also from Eric Granstaff, Doug, and it referred to Internet2: What should campuses be doing if they want to prepare for the connectivity and the higher bandwidth that will be coming with Internet2? DG: Judith, I think a lot of folks know that I have cautioned many campuses not to rush into joining Internet2, because it is quite expensive. Instead they should focus their efforts on developing their campus networks. It sounds to me like that's exactly what Eric was asking. I would recommend that a campus focus on its campus network, look at how it would provide 10 megabit, switched connectivity to the desktop.
That means moving towards a star architecture using switched technology -- whether it's ATM or switched Ethernet or switched FDDI -- it's not terribly important, but a switched technology running over a star architecture. I also think that campuses need to be working with their leading-edge users. What applications do they have that might make sense in an Internet2 environment? Right now, Internet2 is very much a research and development project. Many of us are very active in it. We're making huge investments, both in terms of money and in terms of our own time. Very honestly, it's not a production network yet, and so I think most campuses would probably be best served by holding back, building their campus infrastructures -- which can be the expensive part -- and then joining Internet2 in a year, a year and a half, when there really is a network to connect to. JB: That same question is also related to what we do with all the bandwidth to the desktop, right? So just in terms of what are we going to do with that, then, you're saying it's really important for people to look at the applications on their campus that in fact are pressing and work at making the network work pretty hard, and then look at Internet2 from the perspective of those kinds of applications? DG: I think some people need to look at Internet2 now. I don't mean to imply that we don't need to put a lot of effort there, because as you know, many of us are putting a great deal of effort there. I think a campus needs to look at the resources that they have available and determine how many things they can do well. A large research campus may be able to mount a major effort in Internet2 and a major campus initiative to upgrade the campus network and a major initiative in administrative computing -- a major initiative using techniques such as broadbanding to upgrade their personnel system. They may have the resources to do multiple things.
I think most campuses have to make a choice as to which areas they will focus on, and I think today, for most campuses -- most smaller campuses and mid-size campuses -- the most return on investment is going to be upgrading the campus infrastructure. JB: Okay. GM: In terms again of the campus infrastructure, I'm thinking about having either one central switched site or even a few. What level of EtherNet switches are you putting in there? You talked before of a three and a half year life cycle. Are you pretty much installing on the assumption that, yep, we're going to do stuff that runs right now -- we can't guess what the technology is going to be. It's going to be radically different in a few years, so we're just going to have to swap it all out? DG: The three and a half years comes from two observations: One is that it seems to be a historical trend that we have to replace our network equipment about every three and a half years. The second is a little more quantitative, and that comes from a study that we did for the Common Solutions Group in 1994 in which we looked at the growth in bandwidth utilization over the preceding ten years. And what we found was that the data was exactly a straight line on a semi-log plot, which means that it was exponential growth and had a doubling time of 12 months. So the historic rate of growth in the network is a factor of two every 12 months, or a factor of 10 about every three and a half years. Now, that data was before the advent of the World Wide Web and the extensive use of graphics, so those numbers may in fact be too conservative for the environment we're in now. But if we assume that the rate of growth will continue to be based on the file transfers and Telnets that we all know and love, then we can look at needing to have a network infrastructure whose capacity requirements will double every 12 months. So that's where the original three and a half years came from.
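Doug's arithmetic here can be checked directly: exponential growth with a 12-month doubling time reaches a factor of 10 in log2(10), about 3.3 years, which is where the "factor of 10 about every three and a half years" rule of thumb comes from. A minimal sketch:

```python
import math

DOUBLING_TIME_YEARS = 1.0  # observed: bandwidth demand doubles every 12 months

def years_to_grow(factor, doubling_time=DOUBLING_TIME_YEARS):
    # Under exponential growth, time to grow by `factor` is
    # doubling_time * log2(factor).
    return doubling_time * math.log2(factor)

print(round(years_to_grow(10), 2))  # -> 3.32, i.e. roughly three and a half years
```

As Doug notes, if web graphics push the doubling time below 12 months, the same formula gives an even shorter replacement cycle.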
I think, quite frankly, we will be lucky if we can get by with stretching our equipment for three and a half years. So what we typically do is to try to buy something that gives us more capacity than we need now, and know that in three and a half years, we're going to have to replace it. It doesn't make good economic sense to install something now that has more than 10 times the power and the utility that we need at this point in time. So it simply isn't economically feasible to overbuild by much more than a factor of 10. JB: And certainly what you're all saying, Doug, just reemphasizes the earlier part of our conversation about the importance of funding the network on a rational basis, and planning for these kinds of campus network upgrades every three and a half years. DG: Yes. JB: Did you want to say anything more about the spreadsheet and some of the data, and perhaps rules of thumb that are in that spreadsheet that might be helpful? DG: Well, the nice thing about the spreadsheet is that the individual user can go back and play with the input parameters. GM: We should probably be clear, for anybody who's listening in, that the spreadsheet is on the www.cren.net website, and you'll see right on the homepage -- I think it's in the upper left-hand corner -- a link to it where you can either download in Excel format or Acrobat PDF so that you can open it up to look at. JB: Thanks for mentioning that, Greg. It is there primarily -- it is going to be out there and available for about another six weeks, so I would encourage folks to go and pick it up. DG: As we set up the spreadsheet, what we did is we took each of the individual components and we identified the cost for that component. The person using the spreadsheet then can enter what they believe to be the useful life-span of that particular item -- whether it would be ten years, 15 years, or three and a half years. The third column projects the monthly cost. 
So for example, the monthly cost of personnel is one month's salary, since there really isn't any amortization there. On the other hand, if we look at the fiber-to-the-desktop infrastructure, we can feel pretty confident that it's going to be good for 10 or 15 years, so the monthly cost goes down. If someone feels that, "Well, we can make our equipment last a little bit longer," they can change the three and a half years for a router to four years or five years -- whatever they think is reasonable. The spreadsheet goes through each component of the campus infrastructure, and the user has the opportunity to provide input. For example, in terms of support costs, the spreadsheet broke out inter-building and intra-building network support technicians. We made the assumption that one network technician could support 600 LAN users within the building. I think everyone would say that's a pretty good technician who can support 600 LAN users, but we still get very large aggregate numbers when you do that calculation. We made a similar assumption that one network technician can support 600 people in terms of the backbone capabilities -- the upgrades to the backbone that would have to be made, the changes to the routers, and so on. But if an individual campus feels that they can be more efficient, then that ratio can be changed. I find that when I show those numbers to the deans and department heads, they look at me and think I'm a little bit strange, because they say, "There's no way one person can support that many people!" I smile and say, "Well, I agree with you." But those numbers are all variables that can be entered into the spreadsheet. When I ran them for my campus, I got a number of approximately $60 per connection per month, forever. The spreadsheet does make the assumption that inflation is at zero percent, and that improvements in the performance-cost ratio of the equipment will exactly offset the increased bandwidth requirements of individual users.
The data suggests that that is probably not a valid assumption: real costs are probably increasing about eight percent a year. JB: Doug, that sounds like a really useful spreadsheet, so I will leave it to folks to go find it. Our time is coming to a close here, and we have two questions that we have yet to address. One is fairly brief. It asks: what is the video standard that you see direct to the desktop? I'll let you answer that before asking the second one. DG: Without digging into my notes, I can't give a concrete answer. JB: All right. Is that a good follow-up question that we can respond to by e-mail? DG: Yes. JB: Okay, good. Either that or maybe the website. The second question -- and it's from Greg from Drexel, who happened to ask it off-line: Do you think Internet2 will become available to K-12 institutions within two years? That's a toughie, huh? DG: If we look at Internet 1, I know the first Internet 1 service was offered in roughly the fall of '86 / spring of '87 time frame. I was involved with hooking up one of the first elementary schools in the nation -- the second, I believe -- in 1989. So three years went by before we were able to start connecting the elementary schools. I think the time frame for deployment of Internet2 will be more rapid. On the other hand, Internet2 is in the same stage of development now as the Internet was in 1985. So two years, I don't know. Three years, I think the answer is yes. JB: Okay, very good. Greg, did you have any other questions at your end? GM: No, I think we've covered the ground quite well. JB: All right. Well, very good. I think what we'll do is -- oh, there is just one other question that we might look at here: Is planning for gigabit networks economical at this point? DG: It's going to be more expensive than what we're spending now.
However, if we look at the projected expenses for gigabit networks compared to what we spend for all of telecommunications, the numbers are not out of line with what we're currently spending. I think that as we look down the road and develop networks capable of providing gigabit service, we'll see that many of the things we're currently doing can be rolled into that environment, which means that our total expenditures may go up slightly. But if we look at our total communications bill -- everything: voice, video and data -- I see that total rising slightly, but it will not be radically different from what it is today. JB: Okay. Well, Doug, I think we've really covered the gamut of funding and cost issues today, and I want to thank you and all of our participants for being here. I also want to remind folks that they can send e-mail to ccs@cren.net with other follow-up questions they might have for you. I would also like to invite everyone to the next session of this series on Wednesday, February 11, at 4:00, which is just about two weeks from today. Our guest expert on February 11 will be David Wasley from the University of California, and he will be focusing on on-campus, off-campus and Internet2 issues. You might want to review the sections on the Virtual Seminar CD on current networking issues before that session. We've been sending out announcement messages about these events to a number of people. If you didn't get your own announcement message, please send an e-mail to cren@cren.net and we will add you to the list for direct announcements. Doug and everyone, thanks for being here, and thanks as well to everyone who helped make this possible today.
That includes the board of CREN, the Corporation for Research and Educational Networking; guest expert Doug Gale; Greg Marks; and our audio and coding engineer, Jason Luke, at UM Online at Michigan. You were here because it's time. Thanks very much, Doug, and thanks, Greg. DG: Good afternoon. GM: Thank you. JB: Good-bye.