The VolNet Project

 

A New Communications Infrastructure

for

The University of Tennessee

 

 

 


Table of Contents

 

Overview

1. Introduction
1.1 VolNet’s History: The Planning Process
1.2 The NetEffect Report
1.3 The National Laboratory for Applied Network Research (NLANR) Report
1.4 Digital Connections, Inc.
1.5 Internal Workshop to Review Plans and Analyze Proposed Networking Designs
1.6 IBM International Networking Design Center Workshop
1.7 Conclusion

2. UT’s Current Network
2.1 Current UT Network Map
2.2 Current Network Description
2.3 Current Network Issues
2.4 Conclusion

3. VolNet: Justification
3.1 Why a Network Upgrade is Crucial
3.2 Some Critical Examples
3.3 VolNet: Designing A Campus Network for the Future
3.4 Conclusion

4. VolNet: Technical Overview
4.1 Introduction
4.2 Network Convergence
4.3 Network Design Criteria
4.4 Evaluation of Proposed Network Designs
4.5 VolNet Network Design and Architecture
4.5.1 VolNet Proposed Logical Layout
4.5.2 Characteristics of Design
4.5.3 From the Core to the Periphery
4.5.4 Key Elements of the Design
4.6 Transition from Legacy Network to New Network
4.7 Wireless Technologies
4.8 Conclusion

5. VolNet: Implementation Overview
5.1 Introduction
5.2 VolNet Project Phases
5.3 Projected Schedule

6. VolNet: Budget
6.1 Phase 1 – Initial Rollout of Core Backbone
6.2 Phase 2 – Final Stages of Building Cut-Over
6.3 Phase 3 – Intra-building Design
6.4 Budget Total, Phases 1-3
6.5 Phase 4 – Intra-building Construction
6.6 Ongoing Funding Model
6.6.1 Network Costs Are Operating Costs, Not Capital Costs

7. VolNet: Ongoing Operations
7.1 Institutional Commitment
7.2 Standards
7.3 Best Practices
7.4 Administrative Responsibility
7.5 Staffing
7.6 Space
7.7 Cost Recovery
7.8 Continuous Improvement

Index of Appendices

Appendix I            An Assessment of Information Technology

Appendix II          Presentation at the Dean’s Retreat, August 1999

Appendix III         Current Network Logical Layout

Appendix IV         NetEffect Report

Appendix V          NetEffect Proposed Design

Appendix VI         DCI Proposed Design

Appendix VII        NLANR Proposed Design

Appendix VIII      CNS Proposed Design

Appendix IX         NLANR/ES Report

Appendix X          DCI Report

Appendix XI         NS Networking Workshop Agenda

Appendix XII        Proposed Network Design

Appendix XIII      IBM International Networking Center – Workshop Agenda

Appendix XIV      IBM Networking Report

Appendix XV       Article from EduCause Quarterly: “Guiding Principles for Designing and Growing a Campus Network for the Future”


Overview

 

This document provides an overview of the VolNet project.  All phases of the project are covered: past, present and future.

 

In the Introduction, background information is supplied on the process by which the new UT network design was determined.  The various reports supplied by the industry experts UT has consulted are briefly discussed.  Workshops held to refine the final design are also detailed.

 

In the section dealing with UT’s Current Network, the overall design of the existing network is presented and its serious shortcomings are discussed.  Critical pieces of UT’s operations that rely heavily on the network backbone, spanning instruction, research, and administration, are presented. Testimonials from key UT network users across campus are included.

 

In VolNet: Technical Overview, the proposed design for the new UT network is presented and discussed.  The decision criteria used to derive the selected design, and the design itself, are explained.

 

The actual implementation of the VolNet project is covered in VolNet Implementation Overview.  Details on specific workgroups, tasks, and timeframes for the VolNet project are presented.

 

VolNet: Budget covers the capital and operating costs required for the new network.

 

The importance of operational support for VolNet, critical to the long-term viability of the new network, is discussed in the final chapter. 

 


1. Introduction

 

 

"Information technology is the force that leads many of the changes we encounter in the world around us. It is changing the way we work, the way we play, and it will change the way we learn.“

 

“Universities on the cutting edge in education, research and service must be leaders in information technology.”

 

                                                            J. Wade Gilley, Inaugural Address

 

 

The University of Tennessee aspires to join the ranks of the top twenty-five public research universities. To achieve this goal, the Office of Research and Information Technology (ORIT) must build and support the underlying infrastructure that is fundamentally necessary to establish leadership in Information Technology. An essential element of achieving that leadership is a well-designed communications infrastructure: a first-rate cable plant (fiber optic, coaxial, and twisted copper pairs), conduits, raceways, and connecting electronics that provide connectivity between end users and systems.  A well-designed core network infrastructure provides an essential foundation for mediated teaching, research, library, and administrative services.  A well-designed network easily integrates advances in technology and ensures a painless path for network growth and renewal.

 

VolNet is a critical step in helping UT achieve its vision. Well-implemented, VolNet will provide UT with the infrastructure to support the University’s mission.  However, we must ensure we do not stop with merely implementing a new network design.  We must examine and understand the systemic processes involved with running an excellent communications infrastructure and implement “best practices” that ensure VolNet will remain viable for many years to come.

 

1.1 VolNet’s History: The Planning Process

 

Efforts have been made since 1994 to implement a comprehensive upgrade to the campus network.  The original impetus for creating a Student Technology Fee in 1996 was to support the cost of this network upgrade.  However, it was recognized that the Knoxville campus had myriad information technology needs, and thus funds from the Technology Fee were allocated to upgrading computing labs, providing support services, improving student access to software, and supporting faculty use of technology in the instructional program, as well as to improvements to the network.

 

In August 1999, Chancellor William T. Snyder challenged the campus community to transform UTK through Information Technology (see Appendix II for the presentation made at the Dean’s Retreat, August 19, 1999). In October 1999, incoming President Gilley commissioned a study by Dwayne McCay, then Vice President of the UT Space Institute, to assess the state of Information Technology in the UT System (see Appendix I).

 

In response to this challenge and report, the VolNet project was initiated, and the planning process has included the following steps.

 

·        A formal assessment of the current network by NetEffect Corp. provided a complete inventory of the backbone equipment and satellite equipment rooms.   In addition, NetEffect conducted a series of thirty-one focus group meetings with campus constituents to document current problems and future needs.

 

·        A formal external review of the NetEffect document was conducted by two nationally recognized networking organizations: The National Laboratory for Applied Network Research (NLANR) and Digital Connections, Inc. (DCI).

 

·        Internal workshops were conducted to review plans and analyze proposed networking designs.

 

·        A two-day workshop was hosted by IBM’s Global Network Design Center to validate the proposed technical design, outline network policy requirements and develop detailed implementation plans and timeframes.

 

 

1.2 The NetEffect Report

 

NetEffect Corporation (www.neteffectcorp.com) of Atlanta is a certified Cisco Professional Services Partner with expertise in communications infrastructure design and measurement.  In late Fall 1999, NetEffect was charged with producing a gap analysis of the Knoxville campus legacy network infrastructure, usage, capacity, and application requirements, as well as making recommendations based on analysis of current inventories, data, and infrastructure.   A comprehensive survey and analysis of the Knoxville campus communications infrastructure was produced.

 

On February 3, 2000, NetEffect delivered a report titled UT VolNet 2010 Network Infrastructure With Design Recommendations (see Appendix IV). The study provided a campus-wide analysis that painted a grim picture of the Knoxville campus legacy network infrastructure.  The study revealed that the legacy backbone network is unable to meet even the existing minimal needs of the university, let alone the upcoming needs the focus groups identified.

 

This situation resulted from ad hoc network growth over the preceding years and insufficient institutional planning and investment in the increasingly critical campus network.  The NetEffect report recommended:

 

·        Accelerated conversion of the campus legacy backbone;

 

·        Elimination of several single points of failure in the current network design;

 

·        Deployment of a hierarchical topology design involving a network core, distribution layer, and building access layer;

 

·        Replacement of ATM technology with Gigabit Ethernet as the core backbone technology;

 

·        Implementation of best-practices policies to maintain the network in compliance with specifications;

 

·        Traffic filtering to isolate non-IP protocols to specific regions of the building access layer;

 

·        Recognition of IT staff as a strategic resource with emphasis on professional development and retention;

 

·        Deployment of DHCP, DDNS, Directory Services (using commercial code);

 

·        Utilization of commercial network management and monitoring software.

 

The NetEffect report was a milestone in initiating a new program to replace Knoxville’s legacy network.

 

 

1.3 The National Laboratory for Applied Network Research (NLANR) Report

 

The National Laboratory for Applied Network Research Engineering Services group (NLANR/ES; www.nlanr.net) is a National Science Foundation funded group whose primary goal is to provide technical, engineering, and traffic analysis support of NSF High Performance Connections sites and HPNSP (high-performance network service providers) such as the NSF/MCI very high performance Backbone Network Service (vBNS). In late February 2000, Mr. Basil Irwin and Mr. Matthew Mathis, network engineers from NLANR, met for two days with personnel from the Division of Information Infrastructure at The University of Tennessee in Knoxville.

 

Mr. Irwin and Mr. Mathis were asked to review the NetEffect Report and to make additional recommendations regarding Knoxville’s networking infrastructure, both present and future.  The results of their review, and their recommendations, are contained in the NLANR/ES report (see Appendix IX).  The NLANR/ES report, while validating most of NetEffect’s recommendations, also produced additional short-term and long-term recommendations for critical areas, and provided an external review and analysis of specific recommendations made in the NetEffect Report.

 

Key recommendations made in the NLANR/ES report include:

 

·        Replacement of the legacy network with a modern cable plant (Layer 1 architecture) as the foundation for future campus networking infrastructure;

 

·        Promotion of a scalable, flexible, and robust packet switch (Layer 2) and routed (Layer 3) network;

 

·        Proposals for improving morale and retention of networking and other technical support personnel;

 

·        Continued development of campus-wide service-delivery standards and security policies via grassroots university-wide advisory bodies composed of technical personnel;

 

·        Establishment of networking as a strategic management and institutional priority;

 

·        Continued deployment of a Directory Services Infrastructure (DSI);

 

·        Continued deployment of DHCP and dynamic DNS;

 

·        Caution regarding NetEffect's emphasis on “buzzword technologies” – technologies that are either just now starting to emerge, or that do not yet exist.

 

 

1.4 Digital Connections, Inc.

 

Digital Connections, Inc. (DCI; www.digitalconnections.com) is a Tennessee-based communications consulting firm with offices in the Southeast. In March 2000, DCI was asked to review the work to date and provide additional insight into UT’s communications environment.  DCI was also asked to begin work on establishing “best practices” within the University that would help in developing a stable network.  Recommendations included moving toward an IP-only environment while continuing to support current distribution locations and protocols, and providing a core-to-distribution design that will meet the demands of expanding networking requirements. Refer to Appendix X for more information on this report.

 

Similar in many respects to the designs proposed by NetEffect and NLANR, DCI’s recommendation suggested a consolidated central network core housed in two locations on campus for the purpose of redundancy and disaster recovery.

 

1.5 Internal Workshop to Review Plans and Analyze Proposed Networking Designs

 

On April 13 and 14, all-day workshops were conducted in Knoxville to:

 

·        Review and identify critical network design criteria,

·        Review proposed network designs offered by all outside consultants (NetEffect, NLANR, and DCI) and those offered by internal staff,

·        Select and refine the network design based on work done to date, and

·        Develop an implementation timeframe.

 

Attendees included DII Network Services staff; Mr. Dewitt Latimer (DII Computing and Network Services); Mr. Steve Keys and Mr. Steve Henderson (DII Telephone Services); Mr. Basil Irwin (NLANR); Mr. Joe Cooper (DCI); and Mr. Joe Efferson and Mr. Jim Priest (IBM Global Services).

 

Refer to Appendix XI and XII for results and diagrams of these sessions.

 

1.6 IBM International Networking Design Center Workshop

 

On April 26 and 27, UT representatives involved with the project attended a two-day workshop at IBM’s International Network Design Facility in Cary, NC.

 

The main goals of this workshop were to:

·        Validate the proposed technical design,

·        Outline network policy requirements, and

·        Develop detailed implementation plans and timeframes.

The IBM workshop was a success on all points and represented an excellent capstone to the six-month design effort.  Refer to Appendix XIV for IBM’s report.

1.7 Conclusion

 

From late 1999 to early May 2000, the UT communications infrastructure was subjected to intense scrutiny.  The experience was positive: it identified past barriers to success and resulted in a much stronger product.  The process was open, iterative, and peer-reviewed to ensure that all the necessary questions were asked and all issues covered.

 

All of the reports concur on the following points and recommendations.

 

·        UT needs to replace its legacy network and lay the foundation for present and future campus networking needs: a modern cable plant with the inherent flexibility to accommodate future technologies, and a move to Gigabit Ethernet as the core backbone technology.

·        UT needs to move towards an IP-only core network and strategically derive a new IP numbering scheme to successfully continue the development of campus-wide service-delivery standards and security policies.

·        UT needs to implement best-practices policies to maintain the network in compliance with specifications.

·        UT needs to recognize and establish the communications infrastructure as a strategic resource and institutional priority.

 

 

The leadership of UT recognizes that the challenge of competing nationally and internationally rests with its ability to capitalize on the growing Internet movement to advance its mission and maintain its credibility as a premier institution.  To do so requires a well-conceived, deliberate approach to network design, implementation, and operational management.  Furthermore, this approach must be executed in a reasonable and disciplined manner. The VolNet project represents such an approach.  VolNet will provide UT with the map to achieving and supporting a networking infrastructure with high reliability and service quality.   

 

This planning process has been a critical milestone in the articulation of UT’s IT strategy to become a top-rated public institution.  When all four phases are completed by the fall of 2003, the VolNet project will provide a highly available, reliable, and scalable communication infrastructure.  It will be the high-performance utility for UT’s future.


2. UT’s Current Network

2.1 Current UT Network Map


 

 


2.2 Current Network Description

 

The current Knoxville campus network is primarily based on two technologies: ATM and FDDI.  The backbone consists of an ATM core built on several OC-12 fiber runs, with OC-3 links to smaller ATM switches, and an FDDI (Fiber Distributed Data Interface) ring.  Links between routers on the FDDI ring are connected with multimode fiber.  Ethernet interfaces on the FDDI ring routers attach to either Ethernet switches or hubs.

 

There are approximately 130 buildings on campus that require network connectivity. Within these buildings, there are roughly 281 wiring closets, referred to as SERs (Satellite Equipment Rooms). Equipment arrangements inside SERs vary across the campus because no “standard” installation practice has been enforced. SERs also vary in suitability to the task, with size, electrical, and cooling capabilities being problematic in many areas. Furthermore, many wiring closets are “shared” with other organizations such as facilities maintenance. For an inventory of the current status of SERs, refer to the NetEffect Report.

 

Within UT’s network, data may flow over several different types of network equipment, including hubs, switches and routers. The edge devices in the campus network are for the most part hubs. These shared devices feed into routers, which are then used to connect buildings within the campus via fiber or, in some cases, leased line copper. Data traffic may also flow through several LAN switches.  Many of these switches reside in remote buildings and serve as connection points to UT’s core network.

 

UT has a diverse network that produces a variety of traffic.  IP, IPX and AppleTalk protocols all coexist on the network.  HTTP, FTP, Telnet and SMTP traffic comprise the largest portion of the IP traffic, while IPX traffic primarily consists of SAP updates. Other types of traffic, such as LAT and NetBIOS, are present to a lesser extent.

 

Knoxville maintains four connections to the outside world: DS-3 service (45 Mbit/sec) to the commodity Internet, DS-3 service to the I2/Abilene research network, six T1 lines (roughly 9.26 Mbit/sec aggregate) to the commodity Internet as backup to the primary DS-3, and an OC-3 ATM connection (155 Mbit/sec) to ORNL provided by MCI’s local public-access SONET ring in greater Knoxville.
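For reference, the quoted link speeds can be tallied directly; a standard T1 carries 1.544 Mbit/sec, so the six-line backup bundle aggregates to roughly 9.26 Mbit/sec. A minimal sketch of the arithmetic:

```python
# Tally of the campus WAN links quoted above (speeds in Mbit/sec).
T1_MBPS = 1.544  # standard T1 line rate

links = {
    "DS-3 to commodity Internet": 45.0,
    "DS-3 to I2/Abilene": 45.0,
    "6 x T1 backup bundle": 6 * T1_MBPS,  # ~9.26 Mbit/sec aggregate
    "OC-3 ATM to ORNL": 155.0,
}

for name, mbps in links.items():
    print(f"{name:30s} {mbps:7.2f} Mbit/sec")
```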

 

Knoxville also leases an OC-12 SONET ring from BellSouth to carry local-loop data/voice traffic from BellSouth’s central office in downtown Knoxville to the campus.  SONET technology is inherently robust and designed for fault tolerance, minimizing local outages between campus and local carriers such as Qwest, MCI, BellSouth, US LEC, and others.

 

 

2.3 Current Network Issues

 

Currently the campus network is a mixture of technologies developed over a fifteen-year period that provides neither high availability nor reliability. The network is not available to all members of the community, has insufficient bandwidth, and is difficult and expensive to maintain.  The UT network has grown in an ad hoc fashion and without the guidance of consensus-based technical policies that apply equally across all segments of the university. Best practices and guidelines have not been established for the effective use of the network.

 

Below are listed a few of the serious issues that affect UT’s current network.

 

1.      Due to the current design of the network, the ATM switches in the backbone incur substantial overhead, as they must translate between FDDI, ATM, and Ethernet in order to carry data to the different types of end stations in the campus network.  This overhead reduces usable bandwidth, producing relatively high latency and low transfer rates for users (a rough illustration of this overhead follows the list).

2.      UT’s current network design contains multiple single points of failure, which raise reliability concerns, particularly at the edge of the network.

 

3.      A multi-tiered security implementation is needed to protect the UT network.  UT’s current network design lacks adequate tools to interpret network traffic analysis or perform intrusion detection.  Reasonable places where intrusion may occur should be identified so that significant traffic can be monitored and filtered with routers or firewalls.  A well-documented diagram of the new network topology is required.  Security policies and procedures need to be revised by a security advisory board to define desirable identification and authentication schemes.

 

4.      There are currently no filters used to prioritize IP traffic across the network.

 

5.      With a small networking staff, it has proven difficult to support the ATM core technology, manage multiple Internet connections, and support a wide range of equipment, all while maintaining a steady knowledge base.

 

6.      ATM is a very complex technology and it is difficult to establish and maintain a technical knowledge base.  Often UT network engineers who become knowledgeable with ATM design/support are lured away by other employers willing to pay a premium for such knowledge.

 

7.      Currently, there are no best-practices policies to keep the network in compliance with specifications.

 

8.      The current network cannot meet the growing demand for services and does not provide a reasonable upgrade path.
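As a rough illustration of the protocol-translation overhead noted in item 1, the following sketch estimates the “cell tax” paid when an IP packet is segmented into 53-byte ATM cells (5-byte header, 48 bytes of payload, plus an 8-byte AAL5 trailer carried in the final cells). This is a simplified model for illustration only; LANE encapsulation on the legacy backbone adds further overhead.

```python
import math

CELL = 53          # ATM cell size in bytes
PAYLOAD = 48       # payload bytes per cell (the other 5 are header)
AAL5_TRAILER = 8   # AAL5 trailer bytes carried inside the payload

def atm_overhead(packet_bytes: int) -> float:
    """Fraction of wire bytes that are not user data for one packet."""
    cells = math.ceil((packet_bytes + AAL5_TRAILER) / PAYLOAD)
    wire_bytes = cells * CELL
    return 1 - packet_bytes / wire_bytes

for size in (64, 576, 1500):  # representative IP packet sizes
    print(f"{size:5d}-byte packet -> {atm_overhead(size):.1%} overhead")
```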

 

For a more comprehensive survey of the issues with the current network, refer to the following appendices: NetEffect Report, NLANR/ES Report, and IBM Networking Report.

 

2.4 Conclusion

 

UT’s current network has evolved during a period of rapid change, including volatility in campus IT leadership; unpredictability in base funding; explosive growth in network usage; rapid advances in technology, resulting in the ever-shortening half-life of equipment; and unprecedentedly low unemployment in the national IT workforce, placing larger burdens on IT salary budgets.  All these factors have combined to give UT a network with high support costs that delivers an unacceptable level of performance.

 

These problems are systemic and not easily corrected.  Simply deploying new technologies without addressing the underlying problems will result in a network in similar shape in only a handful of years.

 

VolNet is not merely a new network design, but a new way of doing business that ensures UT will be provided with a robust, scalable, and highly available network that can be supported with reasonable resources for years to come.


3. VolNet: Justification

3.1 Why a Network Upgrade is Crucial

 

"A robust, stable, high-speed campus network infrastructure is an extremely critical item for the Department of Computer Science.  For example, the new SInRG research program with its over $5M of funding relies upon ultra-high speed intra-building and campus backbone computer communications.  Without such an infrastructure, our research, and even our instructional, programs would quickly die."

 --- Dr. Robert C. Ward, Professor and Chair,
Computer Science

 

“This is an area where I feel we need significant improvement fast.  I see two related areas that are critical to us.  The first is bandwidth. Business schools are having to react to our constituencies who want more customized electronic delivery of needed curricula and training.  At this time we are looking at replicating the Physicians' EMBA type delivery to other graduate programs and executive education.  As we move more toward synchronous contact, both domestic and international, it is envisioned that we will need far greater bandwidth than presently exists on campus.  The second area where we need continuous improvement is in responsiveness to failures.  Here I think we're making good strides as we attempt to have our Consumer Resource Group work closely with DII.  By training our staff with what is needed to keep the network operating, we can provide you a front line response.  However, as we move to a 24/7 environment, we will clearly need more help.”

--- Dr. David Schuman, Associate Dean,
College of Business Administration

 

The current network backbone is unable to meet even the minimal needs of UT, and its unreliability causes frequent disruption of university business.  “Start and stop” network implementation, mismatched and aging technology, poor design and maintenance, lack of standards, policies, enforcement, and documentation, and the loss of senior technical staff have all had a serious impact on the UT network.

 

As stated in all the reports, UT is at a crossroads. Our network needs to be upgraded not only to provide high reliability, availability, and functionality to our constituents (students, faculty, researchers, and staff), but, most importantly, because an upgraded network is the only path to meeting current university needs while scaling gracefully to accommodate future ones.

 

With the Internet revolution, distance learning, security, cost containment, web-enabled administrative services, e-commerce, and on-demand collaborative environments are no longer simply desirable applications; they are demanded and expected of today’s universities.  Unless our network is upgraded, UT will not be able to support these present and future applications. Planned UT initiatives raise the network upgrade to a level of critical importance that UT cannot afford to postpone. Refer to Appendix IV (NetEffect Report) for specifics on application needs from constituents.

 

3.2 Some Critical Examples

 

A new network is required for UT to:

 

·        Provide enhanced network resources to researchers with high-performance computational and networking needs.

 

-          Within UT’s Computer Science department, SInRG (The Scalable Intra-campus Computational Grid) will bring a new set of opportunities and requirements.  Within our Engineering, Math, and Physics departments, the transfer of very large files (on the order of petabytes) is becoming an important issue, and planning for a structure to handle these sizes is a priority for continued research growth.

 

-          Virtual Rounds, an application under development at UT’s College of Veterinary Medicine, proposes the sharing of live clinical caseloads with the colleges of veterinary medicine at Auburn University, The University of Georgia, and North Carolina State University by means of H.323 video conferencing. The success of this application (in terms of the benefits from the interaction and participation with these universities, and the access to the expertise and resources available) requires a reliable, robust, high performance network.

 

-          For UT to continue as an active participant in initiatives such as Internet2 (I2) and the Next-Generation Internet (NGI), UT needs to upgrade its current network to accommodate next-generation technologies and applications, as well as to maintain a leadership role within I2 and among its peer institutions.

 

·        Support the UT/Battelle management of Oak Ridge National Laboratory. 

-          A component of this partnership involves collaborative research with six “core” universities (Duke, Georgia Tech, Florida State, Virginia, Virginia Tech, and North Carolina). Consequently, multi-institutional projects, research, and high-bandwidth applications (including those between UT scientists and ORNL) will clearly necessitate a highly available and advanced network.

 

·        Enable media-rich instruction and provide access to a wide range of information resources.  Only with a new and robust network can UT handle the increase in digital media file types, such as video and audio, which require low latency and guaranteed data integrity.  Such a network will allow for the following:

 

-          Plans are being made to increase the amount of photographic data stored on servers within UT’s Engineering department. They will be working with police departments throughout the US and will require dependable delivery of these larger files;

 

-          To expand access to a wide range of Library information databases and multimedia resources;

 

-          To incorporate streaming video and audio in instruction, accessible from classroom, dorm, and home;

-          To make use of network-reliant communication tools in everyday instruction and peer-to-peer communication, such as chat, whiteboard, desktop video conferencing, audio-tutorials, one-way audio transmissions, and eventually two-way audio and video communication;

 

-          To download, share, and manipulate in real time large data sets, simulations, applications, and graphics and interactive media files; and

 

-          To operate and experiment with remotely located and virtual scientific tools.

 

·        Expand the instructional core of courses to an electronic form. Successful delivery of electronic courses will not take place without a network upgrade.

 

-          UT Knoxville, Health Sciences, Space Institute, and Outreach currently use Blackboard's CourseInfo. At UT Knoxville, as of May 2000, over 550 instructors, spanning every college and 184 departments, requested more than 600 CourseInfo accounts serving more than 7000 students.

 

-          UT is investing in the CourseInfo Enterprise Edition, which will be hosted in Knoxville but serve the statewide system.  A robust network is imperative to ensure fast and transparent exchange of information statewide.

 

·        Support the implementation of administrative projects and online business processes, such as SAP/IRIS, and to accommodate emerging e-commerce trends at UT.

 

3.3 VolNet: Designing A Campus Network for the Future

 

In defining and designing the new network infrastructure for UT, industry experts were consulted and accepted best principles in network planning and design were utilized.  According to Philip E. Long, author of Preparing Your Campus for a Networked Future (see Appendix XV), the following principles should be followed in network planning and design:

 

·        Planning should be ongoing

·        Network design should be based on standard building blocks

·        Network costs are operating costs, not capital costs

·        Networks should be continuously renewed

·        Networks should grow gracefully

·        Network investments should be value based

·        Networks should use open standards

·        Networks require active management

·        Networks need appropriate redundancy

·        Outsourcing should be used judiciously

 

By employing these guidelines in designing the new network for The University of Tennessee, the resulting network should meet the stated goals of being truly Reliable, Accessible and Scalable (RAS).

 

3.4 Conclusion

 

The VolNet project will provide a basis for the university to work toward the goals established by President Gilley: to double federally sponsored research, expand electronic course content to at least twenty-five percent of the instructional program, and attract the best students.  For UT to compete nationally and internationally, advance its mission, and maintain its credibility as a premier institution, the information infrastructure on the main campus must be upgraded. A strong networking foundation will provide an essential element for UT to become a leader in the public university environment.

 

Although metrics will be obtained to document the increase in available bandwidth, number of ports, demand on the physical interface, network availability, and other benchmarks, the true measure of the value of the network upgrade will be the contribution to achieving national recognition for utilization of information technology and the quality of instruction and research at UT.

 


4. VolNet: Technical Overview

4.1 Introduction

 

As indicated in Section 1, VolNet’s planning process has included an open review of our current network by various outside entities, all of which provided reviews of UT’s current networking capabilities and recommendations for a new networking architecture. In April, all-day sessions were conducted in Knoxville to finalize the network design criteria, review proposed network designs offered by all outside consultants (i.e., NetEffect, NLANR, and DCI) and those offered by internal staff, select and refine the network design based on work done to date, and develop an implementation timeframe.

 

4.2 Network Convergence

 

Converged networking is an emerging technology thrust that integrates voice, video, and data traffic on a single network. The market drivers for converged networks are cost reduction; support for sophisticated, highly integrated applications; and the provision of greater network flexibility and functionality.

 

As networking technology becomes pervasive, opportunities arise for using it in new and more creative ways. One example is that of using data networks, rather than the traditional circuit switched networks, to carry voice and video traffic. The generic term for this kind of use is converged networking. Converged networking offers many benefits, including cost savings and the enabling of new, tightly integrated, multimedia applications.

 

Several emerging forces are driving interest in converged networks:

 

 

Organizations will replace their existing voice, data, and video infrastructures with a converged network only if they anticipate substantial savings in both capital expenditure and day-to-day operational costs. At the same time, a converged network must deliver service at least equivalent to that of existing facilities.

 

Achieving the necessary cost/service objective requires the use of emerging and anticipated technologies. Organizations are generally leery of using proprietary technologies when faced with major upgrades to their networks. Consequently, the fundamental technologies of converged networks must be standards-based, and the network deployment must be incremental.

 

Certain applications are difficult to support on existing communication infrastructures. For example, coordinating customer voice calls with database accesses that manipulate customer records currently requires specialized application hardware and software. Over a converged network, packet voice and database access use a common network, allowing software applications to provide this service and eliminating the need for specialized application hardware.

 

Another motivation for converged networks is indirectly related to integrating voice, video, and data on a single network. Many of the characteristics necessary for a converged network, such as robustness, manageability, availability, and so on, are also desirable characteristics for legacy networks. As the features that support these characteristics are developed, organizations without a pressing need for converged networks will be attracted to products containing such features in order to improve their existing legacy networks.

 

Independent of whether UT is ready to deploy applications in the next two to three years that can utilize the inherent advantages of a converged network, it is imperative that we design into the network the elements that will permit such innovation to take place in the future without a major overhaul or expense.  Perhaps the most important element in this consideration is ensuring that the fiber cabling plant is sufficiently robust and adaptable to permit such an endeavor.

 

4.3 Network Design Criteria

 

The network design criteria used are based on the following attributes and UT-specific needs:

 

1.      Availability – A robust network, available 24 x 7, that includes proactive scheduled maintenance periods and experiences minimal down time (a worked downtime budget follows this list).

 

2.      Reliability – A network with excellent response time and the ability to accommodate heavy loads without degradation of performance.

 

3.      Scalability – A network that has the ability to accommodate new technologies and to grow as the university’s needs grow or change.

 

4.      Manageability – While the utilization of the UT network will only increase over time, it is not expected that the staffing levels will be increased proportionally. Thus, it is vital that the network be maintainable with minimal staff and be easily monitored and managed.

 

5.      Standards Compliance – A network that complies with “open” standards and avoids proprietary ones. Standards compliance reduces cost and supports high availability.

 

6.      Security – A secure network that guards against intrusions, virus attacks, and lost revenue.

 

7.      Reduced Institutional Risk – A fully redundant and highly available network that eliminates single points of failure that may cause interruption of the business of the university.

 

8.      Convergence – The physical network must be inherently flexible to permit UT to quickly and inexpensively migrate to a converged network environment when necessary.
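To make the availability criterion in item 1 concrete, the sketch below converts an availability target into an annual downtime budget. The targets shown are illustrative assumptions, not commitments from this report.

```python
# Convert availability targets into yearly downtime budgets.
HOURS_PER_YEAR = 24 * 365

for availability in (0.99, 0.999, 0.9999):  # illustrative targets
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{availability:.2%} available -> "
          f"{downtime_hours:6.1f} hours of downtime per year")
```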

 

4.4 Evaluation of Proposed Network Designs

 

During the UT sessions, four network designs were evaluated by key UT networking personnel and by outside networking design experts (see Section 1.5).

 

The following table details some key elements from each presented design.

 

Network Design

Summary of key elements

 

NetEffect Corp.


(Refer to Appendix IV for the NetEffect Report and Appendix V for the NetEffect proposed design)

 

·        First design to propose a hierarchical topology design involving a Layer 3 Core and Distribution, and a building access layer.

·        Strengths: scalability, redundancy.

·        Weaknesses: cost, manageability.

 

Digital Connections, Inc. (DCI)

 

(Refer to Appendix VI for design and to Appendix X for DCI Report)

 

·        A consolidated central network core housed in two locations on campus for the purposes of redundancy and disaster recovery.

·        Strengths: redundancy at Core, ease of operation.

·        Weaknesses: cost, does not map to physical layout (cabling), availability and flexibility.

 

National Laboratory for Applied Network Research (NLANR)

 

(Refer to Appendix VII for NLANR design and to Appendix IX for NLANR Report)

 

·        A de-coupled design (Layer 2 completely decoupled from Layer 3) that relies heavily on VLANs.

·        Strengths: scalability, flexibility, robustness, high performance Core.

·        Weaknesses: manageability and institutional risk.

 

UT Computing and Network Services (CNS) Design

 

(Refer to Appendix VIII for CNS design).

 

·        A hierarchical topology with a simpler configuration than the NetEffect design: Layer 2 Core and Layer 3 Distribution.

·        Strengths:  High performance Core, manageability, less expensive topology.

·        Weaknesses: does not map to actual physical cabling infrastructure.

 

 

Each of the four designs was thoroughly reviewed, presented to the group by its authors, and discussed at length in relation to the critical design criteria identified up to that point.  The best elements of each design were then identified and incorporated into a final network design that all attendees felt was the optimal solution.  However, even within this consensus design, a few lingering technical issues still needed to be resolved.  See Appendix XI and XII for results of this workshop.

 

On April 26 and 27, UT representatives attended a two-day workshop at IBM’s International Network Design Facility in Cary, NC.  At this point, the consensus network design that emerged from the Knoxville workshop was polished and lingering technical concerns were resolved.  In addition, a transition plan from the legacy network to the new network was developed, a timeframe identified, and responsibilities assigned for key elements of the project.  The workshop agenda is included in Appendix XIII, and IBM’s final work is included in Appendix XIV.

 

4.5 VolNet Network Design and Architecture

 

The VolNet Network Design incorporates the best elements from all of the considered network designs. It builds upon all the strengths of the proposed designs, while utilizing the actual physical cabling infrastructure currently in place. It is a highly available, reliable, scalable, yet simple design that is redundant and that fulfills the network design criteria previously stated and accepted best principles in network planning and design.

 


4.5.1 VolNet Proposed Logical Layout

 

 

4.5.2 Characteristics of Design

 

The VolNet proposed design is a five-level design. The technical design is based on a Gigabit Ethernet backbone (1000 Mbps) with switched 10/100 Mbps service to the desktop.  The cable plant is a single-mode fiber backbone with Category 5/6 copper wire to the desktop, and fiber to the desktop for special applications.  The design utilizes two hardened operations facilities located in Stokely Management Center (SMC) and Humanities to provide the redundancy needed to maximize availability and reliability.

 


4.5.3 From the Core to the Periphery

 

 


 

The core is made up of two Layer 2 switches and two Layer 3 switches, all interconnected. Two hardened operations facilities (SMC and Humanities) constitute the core “locations”. Each core location will be fully redundant; each will have its Layer 2 switches connected to the other location’s Layer 2 switches along alternate pathways so that both locations share the network load.

 

The design is optimized for speed and performance, with on average two hops and no more than three hops point-to-point so that latency across the entire network will be extremely low.

 

Utilizing UT’s existing conduit flexibility, the proposed design maps to the physical cabling infrastructure using a star topology. From the core, point-to-point connections will be made (using single-mode fiber) to the majority of buildings on campus.  The core is composed of extremely fast Layer 2 switches (two at each location) connected to Layer 3 routers that provide distribution to individual buildings. At the buildings, Layer 2 switches will deliver switched 10/100 Mbps to the desktop. High-priority buildings will have fully redundant connections to both core locations (at SMC and Humanities). Servers for DNS/DHCP, LDAP, and WINS will be mirrored at each of the core locations.
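The two-to-three hop claim can be checked against a toy model of this star topology. The sketch below builds a miniature version (two interconnected core locations feeding a few building switches; all building names are hypothetical) and uses breadth-first search to find the worst-case path length.

```python
from collections import deque

# Toy model of the proposed star topology.  "Bldg-R" stands in for a
# high-priority building dual-homed to both core locations.
edges = {
    "SMC-core": ["Humanities-core", "Bldg-A", "Bldg-B", "Bldg-R"],
    "Humanities-core": ["SMC-core", "Bldg-C", "Bldg-D", "Bldg-R"],
    "Bldg-A": ["SMC-core"],
    "Bldg-B": ["SMC-core"],
    "Bldg-C": ["Humanities-core"],
    "Bldg-D": ["Humanities-core"],
    "Bldg-R": ["SMC-core", "Humanities-core"],
}

def hops(src: str, dst: str) -> int:
    """Breadth-first search for the fewest hops between two nodes."""
    seen, queue = {src}, deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for nxt in edges[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    raise ValueError("unreachable node")

worst = max(hops(a, b) for a in edges for b in edges)
print(f"worst-case path in this toy layout: {worst} hops")  # prints 3
```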

 

Wide-area network (WAN) connections will be connected into the core through Layer 3 routers (one in SMC and one in Humanities) with at least DS-3 connectivity to the Internet and to Abilene (Internet2). A second commodity Internet connection will also be established to provide backup redundancy.  WAN traffic is brought into the core locations via a leased BellSouth SONET ring from the main BellSouth central office in downtown Knoxville to SMC and Humanities.

 

Special “scenarios” in which there are specific geographic factors, such as ResNet (the Residential/Dormitory Network) and the Agricultural Campus (Ag Campus) will be treated as pseudo-WAN connections, pulled back to a single point in the core using gigabit Ethernet.

 

4.5.4 Key Elements of the Design

 

Key elements of the proposed design include:

 

·        The network design is for an IP-only core implementation.

 

·        Non-IP protocols, e.g. IPX and AppleTalk, will be supported within the building.

 

·        Collapsing the backbone into two locations creates a network optimized for speed and performance.  Moreover, two locations ease support and upgrade requirements.

 

·        Only Layer 2 distribution switches will be located in buildings.

 

·        Management and support features are incorporated in the original design and not an add-on afterthought.

 

·        The network design has flexibility to incorporate new technologies as they mature.  Converged network applications can be integrated easily and efficiently.

 

4.6 Transition from Legacy Network to New Network

 

Once the core network is established, buildings will be cut over at approximately ten buildings per month.  There are six general categories of buildings on campus:

 

  1. CAT5 cable plant that meets specifications and switched 10/100 electronics, ready for IP-only network protocol egress.
  2. CAT5 cable plant that meets specifications and switched 10/100 electronics, not ready for IP-only network protocol egress.
  3. CAT5 cable plant that meets specifications and shared network electronics, ready for IP only.
  4. CAT5 cable plant that meets specifications with shared electronics, not ready for IP only.
  5. Non-CAT5 cable plant, ready for IP only.
  6. Non-CAT5 cable plant, not ready for IP only.

 

Building types 1 and 3 represent the easiest buildings to cut over.  Types 2 and 4 represent the next hardest to convert, and types 5 and 6 are the most difficult.  To the extent possible, type 2 and 4 building occupants will declare when they can operate in an IP-only core environment.  Finally, type 5 and 6 buildings will be connected as well as possible until their intra-building wiring is addressed in Phase 4.  The sketch below illustrates this triage.
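In the sketch, building categories are ranked by conversion difficulty, a hypothetical inventory is ordered accordingly, and the cut-over duration is estimated at the roughly ten-buildings-per-month rate cited above. The per-category building counts are invented for illustration; only the campus total of about 130 buildings comes from this report.

```python
# Difficulty rank per building category (1 = easiest), per the text above.
difficulty = {1: 1, 3: 1, 2: 2, 4: 2, 5: 3, 6: 3}

# Hypothetical inventory: category -> number of buildings (~130 total).
inventory = {1: 25, 2: 20, 3: 30, 4: 25, 5: 20, 6: 10}

order = sorted(inventory, key=lambda cat: difficulty[cat])
total = sum(inventory.values())
print("cut-over order by category:", order)            # [1, 3, 2, 4, 5, 6]
print(f"{total} buildings at ~10/month -> about {total // 10} months")
```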

 

Some type 5 buildings whose occupants cannot wait the likely 18-24 months before their building will be rewired might be eligible for accelerated wiring outside the scope of phase 4.

 


4.7 Wireless Technologies

 

Wireless technologies represent a rapidly emerging area of growth and importance for UT.  As with any emerging technology, the standards for wireless are not fully mature.

 

There are two basic types of wireless technologies.  What is sometimes referred to as “local-area wireless” operates at 11 Mbit/sec shared throughput and is defined by the IEEE 802.11b standard.  A 54 Mbit/sec implementation of the 802.11 specification is expected to ship around Q1-Q2 2001.  This form of wireless is most commonly used for computer (e.g., laptop) communications within a ± 200-meter range of the access point.  Leading vendors include Cisco/Aironet, Lucent, and Cabletron.  UT has been using 802.11 technologies for approximately two years.  The Division of Information Infrastructure has been evaluating different vendors’ products for much of the Spring 2000 semester and will assume a leadership role in deploying 802.11 wireless technology to ensure transparent end-user “roaming” from one access point to the next.  After extensive testing, DII has selected technologies from Lucent Technologies using the 128-bit encryption standard and authenticating against the central LDAP directory/authentication server.

 

A second form of wireless communications occupies the spectrum of low-bandwidth “wide-area” wireless.  This form of wireless is most commonly used for hand-held PDA devices and web-based cell phone communications within a ± 2-kilometer range.  At this time, this technology has no interoperability standard, even though it represents a rapidly growing segment of the data wireless market.  The most prevalent technology (from a market-share perspective) is CDMA, as developed by Qualcomm.  U.S.-based cellular/voice providers are increasingly using CDMA as their preferred technology.  Europe and Asia have standardized on a rival standard referred to as GSM.

 

CDMA-based wireless technology is roughly two to three orders of magnitude slower than 802.11b and is generally considered low-bandwidth (28 Kbit/sec).  At this time, CDMA-based wireless devices are typically limited to transaction-oriented operations such as checking e-mail, checking stock quotes, etc.  Even though newer CDMA-based technologies under development will lead to higher throughput over the next 18 to 24 months, these technologies are still being developed in the absence of any emerging standard.  Deployment of CDMA-based transmission technology is expensive and reserved for those companies that have been assigned a slice of the cellular spectrum by the FCC.
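For scale, the gap between the two wireless classes quoted above works out to roughly two to three orders of magnitude:

```python
import math

LAN_BPS = 11_000_000  # 802.11b shared throughput quoted above
WAN_BPS = 28_000      # CDMA data rate quoted above

ratio = LAN_BPS / WAN_BPS
print(f"802.11b is ~{ratio:.0f}x faster "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
```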

 

Deployment of wireless will force UT to reconsider its cost model for recovering networking costs.  Currently UT utilizes a “port-based” cost model, meaning that a person/department is charged by the number of activated ports they utilize.  As our user community becomes increasingly more mobile, a “user-based” cost model will need to be considered.  Here, operational costs are recovered based on the number of “users” (vs. “ports”) connected, independent of how he/she might be receiving data/information at any given time.  Thus a user might move from his “wired” desktop in his office where he uses his “IP-based” telephone, to using an “802.11-based wireless” laptop in the Library, to using a “CDMA-based wireless” PDA as he walks between destinations.  The “user-based” cost model would view this individual as one connected user, whereas the “port-based” model would view him as having four connection points.  If wireless technology is to become an integral part of the UT culture, the cost model employed must not represent an illogical impediment to its implementation.
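A minimal sketch of the two cost-recovery models, using invented monthly rates purely for illustration: the port-based model bills the roaming user above once per connection point, while the user-based model bills each person once.

```python
# Hypothetical monthly rates, for illustration only.
PORT_RATE = 20.00  # per activated port or connection point
USER_RATE = 35.00  # per connected user, however they attach

# The roaming user from the text has four connection points.
users = {
    "roaming user": ["wired desktop", "IP phone", "802.11 laptop", "CDMA PDA"],
    "office user": ["wired desktop"],
}

for name, connections in users.items():
    port_bill = PORT_RATE * len(connections)
    print(f"{name}: port-based ${port_bill:.2f}/month "
          f"vs user-based ${USER_RATE:.2f}/month")
```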

4.8 Conclusion

 

The proposed core design is very simple to understand and maintain, yet robust and elegant in its ability to handle large campus environments with equally large bandwidth requirements.  However, one should not underestimate the challenges the campus will face in implementing such an important undertaking.  In summary, these challenges include:

 

 

It should be noted that buildings without the appropriate intra-building CAT5 wiring plant and switched electronics will not benefit from the performance increases that appropriately wired buildings will receive.  Thus it is essential that Phase 2 be closely followed by Phases 3 and 4 to ensure maximum return on investment.

 

Wireless technology will be judiciously deployed on the UT campus as demand dictates.  Since wireless represents one of the fastest-growing segments of the communications area, and therefore has one of the shortest technology half-lives, it would be unwise to deploy it universally until our user community is prepared to utilize such an extensive infrastructure and the technology has had time to mature and decrease in price.  However, the University must take a leadership role in overseeing its deployment to ensure interoperability between wireless “zones” and provide maximum usability and satisfaction to our community.

5. VolNet: Implementation Overview

5.1 Introduction

 

The core backbone implementation will be initiated as soon as funds are released (anticipated in early summer 2000) and will be completed within twelve to eighteen months.  At the completion of this work, all buildings on campus will be connected to the new backbone, and the legacy network comprising ATM and FDDI will no longer be in use.  Building cutover will proceed in parallel with the backbone build-out.  Phase 3 will involve hiring an Architectural and Engineering (A&E) firm that, working with DII staff, will assess intra-building wiring and prepare an RFQ for implementation.  In Phase 4, the same A&E firm will supervise the installation and certify standards compliance of the completed work.  Completion of Phase 4 is anticipated by August 2003.

 

5.2 VolNet Project Phases

 

The VolNet project is divided into four major phases.

 

Phase 1 – Initial audit and gap analysis of the current communications infrastructure

Deliverables: NetEffect Report

Completion date: February 2000

 

 

Phase 2 – Engage multiple external sources to finalize a new backbone design and core technology; complete the inter-building wiring based on the new design; and connect all buildings to the new core

Deliverables: Finalized backbone design; replacement of the existing network core, removing the legacy ATM and FDDI backbone from service; rewiring of the ten highest-priority buildings; connection of all buildings to the new core

Completion date: December 31, 2001

 

 

Phase 3 – Employ an Architectural and Engineering firm with a proven track record in large campus wiring projects to design and oversee intra-building wiring

Deliverables: Design specifications for building satellite equipment rooms (SERs) and the premises cable plant; preparation of bid specifications; evaluation and award of the RFQ

It is proposed that the building designs be completed in three batches: Phase 3A will cover the 30% of buildings most urgently needing upgrade, Phase 3B the next 40%, and Phase 3C the final 30%.  Construction (Phase 4) on each batch will begin as soon as its design is finished and the construction contract awarded.

Completion date: June 2001

 

 

Phase 4 – Intra-building wiring construction

Deliverables: All buildings on campus brought up to wiring specifications, including well-designed and well-located SERs; certification of the cable plant upon project completion

Phase 4A is the construction phase for the 30% of buildings designed in Phase 3A; Phase 4B corresponds to 3B, and Phase 4C to 3C.

Completion date: August 2003

 

 

The order in which buildings are connected to the core backbone will be determined on a purely technical basis (see Section 4.5 for discussion).  Network connectivity is not equal across buildings, and therefore the order in which buildings are added to the new network is, in many cases, predefined.  It is anticipated that the priority for completing the internal wiring of buildings will similarly be based on technical requirements.

 

5.3 Projected Schedule

 

1999

November – NetEffect report started

2000

February 3 – NetEffect report submitted
February 15 – CNS Network Summit
March 20 – DCI report submitted
April 3 – NLANR report submitted
April 13–14 – UT Network Design Workshop: compare reports and create a design
April 26–27 – UT/IBM Network Design Workshop to finalize the design
May 16 – Meet with Dr. McCay to release funds
June 1 – Funds released
June 1 – Equipment ordered
August 15 – Finalize the order in which buildings will be cut over to the new backbone
August 15 – Phase 3 RFP out to bid (acquire A&E firm)
September 1 – Core backbone electronics installed
September 30 – Core backbone testing finished; core certified ready for production
October 15 – Phase 3 contract awarded
October 31 – 1st lot of 10 buildings connected to backbone
November 15 – Design work for Phase 4 starts with first 30% of buildings
November 30 – 2nd lot of 10 buildings connected to backbone
December 31 – 3rd lot of 10 buildings connected to backbone

2001

January 31 – 4th lot of 10 buildings connected to backbone
February 28 – 5th lot of 10 buildings connected to backbone
March 31 – 6th lot of 10 buildings connected to backbone
April 15 – Phase 3A RFQ let for construction work on first 30% of buildings
April 30 – 7th lot of 10 buildings connected to backbone
May 31 – 8th lot of 10 buildings connected to backbone
June 1 – Phase 3A awarded
June 30 – 9th lot of 10 buildings connected to backbone
July 1 – Construction Phase 4A starts on the first batch of buildings
July 31 – 10th lot of 10 buildings connected to backbone
August 1 – Phase 3B RFQ let for construction work on next 40% of buildings
August 31 – 11th lot of 10 buildings connected to backbone
September 15 – Phase 3B awarded
September 30 – Construction Phase 4B starts on the second batch of buildings
September 30 – 12th lot of 10 buildings connected to backbone
October 31 – 13th lot of 10 buildings connected to backbone
December 31 – All buildings cut over to new backbone

2002

January 1 – Phase 3C RFQ let for construction work on last 30% of buildings
February 15 – Phase 3C awarded
March 1 – Construction Phase 4C starts on the final batch of buildings

2003

January 1 – Construction Phase 4A completed
March 1 – Construction Phase 4B completed
August 15 – Construction Phase 4C completed


6. VolNet: Budget

6.1 Phase 1 – Initial Rollout of Core Backbone

 

Initial rollout of the new core plus inaugural buildings, to be completed by December 31, 2000.

 

6.1.1  Core electronics – $717,810

6.1.2  Install fiber from the vault to SMC and Humanities, and build a route between SMC and Humanities – $96,776

6.1.3  Contract labor – $75,000

6.1.4  Installation of an auxiliary generator for the SMC Operations Center – $700,000

6.1.5  Rewire known problematic premises-wiring areas that cannot wait for Phase 3 – $750,000

6.1.6  Secondary DS-3 service for ResNet commodity Internet service and backup service to the UT campus – $240,000

6.1.7  Contingency funds – $425,000

 

6.2 Phase 2 – Final Stages of Building Cutover

Completion of building cutover to the new core by December 31, 2001.

 

6.2.1  Building electronics – $214,000

6.2.2  Balance of fiber work – $154,000

6.2.3  Contract labor – $150,000

 

6.3 Phase 3 – Intra-building Design

Hire an Architectural and Engineering firm to design the intra-building wiring, write bid documents, oversee the Phase 4 construction, and certify the cable plant at completion.

 

6.3.1  Architectural and Engineering services – $1,500,000

 

6.4 Budget Total, Phases 1–3

Total for completion of Phases 1 through 3 – $5,022,586
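As a quick arithmetic check, the line items in Sections 6.1 through 6.3 do sum to the stated total; a minimal sketch:

    # Cross-check of the Phase 1-3 budget line items listed above.
    phase_1 = [717_810, 96_776, 75_000, 700_000, 750_000, 240_000, 425_000]
    phase_2 = [214_000, 154_000, 150_000]
    phase_3 = [1_500_000]

    total = sum(phase_1) + sum(phase_2) + sum(phase_3)
    print(f"${total:,}")  # prints $5,022,586
    assert total == 5_022_586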

 


6.5 Phase 4 – Intra-building Construction

Phase 4 represents the bulk of the project expense, as well as the portion most likely to be funded as a capital project.

6.5.1  Estimated 25,000 circuits plus SER work – $20,000,000

 

6.6 Ongoing Funding Model

 

The funding model is based on using State of Tennessee bond funds for the installation and activation of the entire campus network.  The recurring costs for maintenance, support, equipment replacement, debt retirement, upgrades, and expansion will be funded through a direct port charge.  Funding sources for the port charge include state-allocated funds, tuition funds, and direct charges to users.  A differentiated service and charge model will be implemented.

 

6.6.1 Network Costs Are Operating Costs, Not Capital Costs.

 

Funding the construction of a network is only the initial expense.  Cabling infrastructure is typically amortized over ten years, whereas networking equipment, like PC technology, is typically amortized over three years.  This means that all future university budgets should include funding to replace or upgrade approximately one-third of all routers and switches each year, and to renew the cable plant roughly every decade.  Network costs should therefore be viewed as operating costs.  A well-designed network will never need to be completely replaced; it will grow gracefully and evolve smoothly into a more capable, higher-speed network as more advanced technologies are needed.  The sketch below illustrates the recurring-cost arithmetic.
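A minimal sketch, assuming hypothetical investment totals (the cabling figure echoes the Phase 4 estimate); these are not approved budget numbers:

    # Recurring-cost sketch using the amortization periods from the text.
    EQUIPMENT_INVESTMENT = 1_000_000   # hypothetical total for routers/switches
    CABLING_INVESTMENT = 20_000_000    # hypothetical cable-plant total

    EQUIPMENT_LIFE_YEARS = 3   # replace ~1/3 of the electronics each year
    CABLING_LIFE_YEARS = 10    # cable plant lifetime of roughly a decade

    annual_equipment = EQUIPMENT_INVESTMENT / EQUIPMENT_LIFE_YEARS
    annual_cabling = CABLING_INVESTMENT / CABLING_LIFE_YEARS

    print(f"Annual equipment replacement: ${annual_equipment:,.0f}")  # $333,333
    print(f"Annual cabling amortization:  ${annual_cabling:,.0f}")    # $2,000,000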

 

 


7. VolNet: Ongoing Operations

7.1 Institutional Commitment

 

The campus network is a shared institutional resource that touches all aspects of university life.  To maintain a network of high quality, there must be a commitment from the university leadership to provide the necessary financial resources and to support campus-wide policies for acceptable use.

 

7.2 Standards

 

The network has been designed around currently accepted standards, with the recognition that, as the network expands and develops, there must be a commitment to maintaining those standards.

 

7.3 Best Practices

 

In order for the network to achieve high availability and high reliability, it is essential that a set of “best practices” be developed, implemented, and rigorously adhered to.


7.4 Administrative Responsibility

 

Every effort will be made to involve the campus community in the planning and development of VolNet, as well as in operational issues.  However, it must be clearly established that administrative responsibility for this campus resource will rest centrally with the Division of Information Infrastructure.

7.5 Staffing

 

Acquiring and retaining professional staff to support VolNet is critical to maintaining a high-quality, reliable network.  Appropriate compensation must be offered, and a challenging and supportive work environment must be provided.

 

7.6 Space

 

Appropriate space with environmental controls and electrical service is required for installation of the Satellite Equipment Rooms (SERs) for the campus network.  It is recognized that space is at a premium on campus, and support for the installation of SERs across campus will be needed from the highest levels of the administration.

 

7.7 Cost Recovery

 

To maintain a viable network into the future, an appropriate cost-recovery mechanism must be in place.  Funds will be sought independently for the initial capital outlay, but continuing maintenance, staff support, upgrades, replacement, and related costs must be funded on a fee-for-service basis.

 

7.8 Continuous Improvement

 

Two things must occur for a continuous-improvement environment to be established: the network must be designed so that real-time monitoring and feedback of information are available, and the institution must have the discipline to make continuous improvement part of the daily culture and planning process rather than an afterthought.  One concrete illustration of such monitoring follows.
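As one illustration of the kind of real-time monitoring envisioned here, the following minimal sketch samples an interface traffic counter over SNMP.  It assumes the standard net-snmp command-line tools are installed; the device name and community string are hypothetical.

    # Sample an inbound byte counter twice and derive an average rate.
    # Assumes net-snmp's snmpget is on the PATH; ignores 32-bit counter wrap.
    import subprocess
    import time

    SWITCH_HOST = "core-sw-1.example.utk.edu"  # hypothetical device name
    COMMUNITY = "public"                       # hypothetical read-only community
    OID = "IF-MIB::ifInOctets.1"               # inbound octet counter, interface 1

    def poll_octets() -> int:
        out = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", SWITCH_HOST, OID],
            capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    first = poll_octets()
    time.sleep(10)
    second = poll_octets()

    rate_mbit = (second - first) * 8 / 10 / 1e6  # bits per second, in Mbit/s
    print(f"Average inbound rate: {rate_mbit:.2f} Mbit/s")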

 

 


Index of Appendices

 

Appendix I         An Assessment of Information Technology – Dr. T. Dwayne McCay

Appendix II        Presentation at the Dean’s Retreat, August 1999 – Dr. William T. Snyder

Appendix III       Current Network Logical Layout

Appendix IV       NetEffect Report

Appendix V        NetEffect Proposed Design

Appendix VI       DCI Proposed Design

Appendix VII      NLANR Proposed Design

Appendix VIII     CNS Proposed Design

Appendix IX       NLANR/ES Report

Appendix X        DCI Report

Appendix XI       CNS Networking Workshop Agenda

Appendix XII      Proposed Network Design

Appendix XIII     IBM International Networking Center – Workshop Agenda

Appendix XIV    IBM Networking Report

Appendix XV     Article from EDUCAUSE Quarterly: “Guiding Principles for Designing and Growing a Campus Network for the Future”

