Tutorials

Tutorial 1: Theory and Practice of Configuration Management in Decentralized Systems

Prof. Mark Burgess, Oslo University College, Oslo, Norway

Abstract

What is configuration management? At NOMS we often think only of network management - i.e. the management of network devices such as routers and switches. Host management, on the other hand, has been studied more in the Unix community. Increasingly we are seeing these two worlds converge, as network devices run embedded GNU/Linux or FreeBSD operating systems. So what are the differences? One difference is the file abstraction - host operating systems have files and databases that contain configuration data. What are the technologies for managing these? Should they be centralized?

Autonomy is a central concept in modern computing technology. Increasingly, computers are being managed by their owners rather than by centralized authorities. In the early 1990s the author developed the automation system cfengine for configuring and maintaining Unix-like operating systems, based on an arbitrary model of either centralized or decentralized control. It was based on the idea of voluntary cooperation - a topic which is now centre stage in autonomic and pervasive computing. cfengine was conceived to run on any device, no matter how large or small. Moreover, it started a field of research into configuration management at the USENIX configuration management workshops and served as the proof of principle for several key results. Today cfengine is used on an estimated million computers around the world, in both large and small companies.

Cfengine is a tool for setting up and maintaining a configuration across a network of hosts. It embodies a very high-level declarative language - much higher-level than scripting languages - together with an autonomous, smart agent and machine-learning monitors. The idea behind cfengine is to create a single "policy" or configuration specification that describes the setup of as many or as few hosts in a network as desired, without sacrificing their autonomy. Cfengine runs on each host and makes sure that it is in a policy-conformant state; if necessary, any deviations from policy rules are fixed automatically. Unlike tools such as rdist, cfengine does not require hosts to open themselves to any central authority, nor to subscribe to a fixed image of files. It is a modern tool, supporting state-of-the-art encryption and IPv6 transport, that can handle distribution and customization of system resources in huge networks (tens of thousands of hosts).
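
To make the idea of autonomous, convergent maintenance concrete, here is a minimal sketch in Python (not cfengine's own policy language): a desired state is declared once, and an agent repeatedly compares the actual system against it, repairing only what deviates. The file path and mode are hypothetical examples.

    # Illustrative sketch of convergent configuration: a desired state is
    # declared once, and an autonomous agent compares the actual system
    # against it, repairing only what deviates.  The path and mode below
    # are hypothetical examples, not real policy.
    import os
    import stat

    DESIRED_STATE = [
        # (path, mode) pairs the policy promises to maintain
        ("/tmp/cfengine-demo.conf", 0o644),
    ]

    def check_and_repair(path, mode):
        """Return 'kept', 'repaired' or 'failed' for one policy promise."""
        try:
            if not os.path.exists(path):
                open(path, "a").close()      # create the promised object
                os.chmod(path, mode)
                return "repaired"
            current = stat.S_IMODE(os.stat(path).st_mode)
            if current != mode:
                os.chmod(path, mode)
                return "repaired"
            return "kept"
        except OSError:
            return "failed"

    if __name__ == "__main__":
        for path, mode in DESIRED_STATE:
            print(path, check_and_repair(path, mode))

Running the agent a second time should report every promise as "kept" - the convergence property that distinguishes this style of management from one-shot scripts.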

Outline

The tutorial focuses on the general principles of configuration management and uses cfengine as an example that integrates state-of-the-art research. The topics follow this plan:

We end with a discussion of where cfengine is going, and how it can be extended to encompass configuration management, integrating switches and routers with host configuration in data centres.

Who should attend?

Network and system administrators with minimal knowledge of a scripting language who wish to understand, and perhaps start using, cfengine to automate the maintenance and security of their systems. UNIX administrators will be most at home in this tutorial, but cfengine can also be used on Windows 2000 and above. Network administrators who are interested in the principles of configuration management beyond SNMP will find a frank discussion about the future of the subject and will have the opportunity to participate in the design of cfengine 3 - the next generation of host-device management.

Biography of the Instructor

Mark Burgess is Professor of Network and System Administration at Oslo University College, Norway. He is the author of the configuration management system cfengine and of several books and many papers on the topic. Professor Burgess is a frequent, popular speaker at conferences on system administration.

Tutorial 2: Network Security Policies: Verification, Optimization and Testing

Prof. Ehab Al-Shaer, DePaul University, Chicago, IL, USA

Abstract

The importance of network security has increased significantly in the past few years. However, the growing complexity of managing security policies, particularly in enterprise networks, poses a real challenge for efficient security solutions. Network security devices such as firewalls, IPSec gateways, and intrusion detection and prevention systems operate based on locally configured policies. Yet these policies are not necessarily autonomous: they may interact with each other to construct a global network security policy. Because security policies are configured manually, in a distributed and uncoordinated fashion, rule conflicts and policy inconsistencies arise, causing serious network security vulnerabilities. In addition, enterprise networks continuously grow in size and complexity, which makes policy modification, inspection and evaluation a nightmare. Addressing these issues is a key requirement for obtaining provable security and seamless policy configuration. Moreover, with growth in network speed and size, the need to optimize security policies to cope with traffic rates and attacks is increasing significantly. Finally, the constant evolution of policy syntax and semantics makes functional testing of these devices for penetration vulnerabilities a difficult task.

This tutorial is divided into three parts. In the first part, we present techniques to automatically verify and correct firewall and IPSec/VPN policies in large-scale enterprise networks. In the second part, we discuss techniques to enhance and optimize policy structure and rule ordering in order to reduce packet matching and significantly improve firewall and IPSec performance. In the third part, we present techniques that users, service providers and vendors can use to test their security devices efficiently and accurately.
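
As a small illustration of the intra-policy conflicts analyzed in the first part, the Python sketch below checks for one classic anomaly - shadowing, where an earlier rule matches a superset of a later rule's traffic but takes a different action, so the later rule can never fire. The field names and example rules are assumptions for the demo.

    # Detect "shadowing": rule j can never match because an earlier rule i
    # already matches all of j's traffic with a different action.
    from ipaddress import ip_network

    def covers(i, j):
        """True if rule i matches at least everything rule j matches."""
        return (ip_network(j["src"]).subnet_of(ip_network(i["src"])) and
                ip_network(j["dst"]).subnet_of(ip_network(i["dst"])))

    def shadowing_conflicts(rules):
        conflicts = []
        for x, earlier in enumerate(rules):
            for y in range(x + 1, len(rules)):
                later = rules[y]
                if covers(earlier, later) and earlier["action"] != later["action"]:
                    conflicts.append((x, y))
        return conflicts

    policy = [
        {"src": "10.0.0.0/8",  "dst": "0.0.0.0/0",    "action": "deny"},
        {"src": "10.1.2.0/24", "dst": "192.0.2.0/24", "action": "accept"},  # shadowed
    ]
    print(shadowing_conflicts(policy))   # -> [(0, 1)]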

Outline

Devastating attacks overview
Overview of firewall and IPSec operation and architectures
  1. Security Policy Verification
    1. Classification of intra- and inter-policy conflicts in firewalls
    2. Classification of intra- and inter-policy conflicts in IPSec
    3. Policy modeling and verification using formal methods
    4. Discovery and resolution of security policy conflicts
    5. Automated policy management: editing, distribution, optimization
    6. Policy management of multi-vendor security solutions
    7. Policy translation: from high-to-low level and vice versa
  2. Security Policy Optimization
    1. Performance problems with security policies
    2. Overview of algorithmic-based optimization techniques (see the rule-reordering sketch after this outline)
    3. Overview of statistical-based optimization techniques
    4. Autonomic optimization of security policies
    5. Evaluation
  3. Security Policy Testing
    1. Policy evaluation
    2. Exhaustive vs. Random testing
    3. Intelligent testing
    4. Benchmarking
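
For the optimization part of the outline above, here is a small Python sketch of one intuitive rule-ordering technique: frequently hit rules are bubbled towards the front of the policy, swapping only adjacent rules that cannot match the same packet, so first-match semantics are preserved. The hit counts, rule fields and overlap test are simplified assumptions for the illustration.

    # Reduce average packet-matching cost by moving frequently hit rules
    # towards the front, swapping only adjacent rules that cannot match the
    # same packet (so first-match semantics are preserved).
    def overlap(a, b):
        """Rules overlap if some packet could match both
        (here: same protocol and intersecting port ranges)."""
        return a["proto"] == b["proto"] and not (
            a["ports"][1] < b["ports"][0] or b["ports"][1] < a["ports"][0])

    def expected_cost(rules):
        return sum((pos + 1) * r["hits"] for pos, r in enumerate(rules))

    def reorder(rules):
        rules = list(rules)
        changed = True
        while changed:
            changed = False
            for i in range(len(rules) - 1):
                a, b = rules[i], rules[i + 1]
                if b["hits"] > a["hits"] and not overlap(a, b):
                    rules[i], rules[i + 1] = b, a
                    changed = True
        return rules

    policy = [
        {"proto": "tcp", "ports": (25, 25),  "hits": 10},
        {"proto": "udp", "ports": (53, 53),  "hits": 900},
        {"proto": "tcp", "ports": (80, 443), "hits": 500},
    ]
    print(expected_cost(policy), "->", expected_cost(reorder(policy)))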

Who should attend?

This tutorial will discuss timely and important issues in academic as well as industrial research. Students, academic researchers, industrial researchers and developers, security system architects and practitioners are all part of the target audience and will benefit directly from attending.

Biography of the Instructor

Ehab Al-Shaer is an Associate Professor and the Director of the Multimedia Networking Research Lab (MNLAB) in the School of Computer Science, Telecommunications and Information Systems at DePaul University. His primary research areas are network security, Internet monitoring, and multimedia networks. Prof. Al-Shaer has published many refereed journal and conference papers. He was also a co-editor of a number of books in Management of Multimedia on the Internet and End-to-End Monitoring, and a guest editor for a number of journals. He has served as conference chair, TPC co-chair, invited speaker, panelist, tutorial presenter and TPC member for many IEEE and ACM conferences, including INFOCOM, ICNP, IM/NOMS, ICDCS, CCNC, MMNS and E2EMON. He has been an invited speaker on many academic and industrial panels in the area of network security policy management. His current research is funded by NSF, Cisco Systems, Intel and Sun Microsystems.

Tutorial 3: Managing IT Resources using Web Services: A Tutorial on the Web Services Distributed Management Standard from the Ground up

Ms. Heather M. Kreger, Senior Technical Staff Member, IBM Corporation, Research Triangle Park, NC, USA

Abstract

The industry has been wrestling with the complexity of managing business systems for years. The challenge stems from the variety of application and IT resource providers that enterprises use to build their business systems. A range of management systems co-exist to manage the breadth of resources.

The management industry and customers have an opportunity to take advantage of the industry trend towards using Web services for business integration and moving to service-oriented architectures. It is now possible to garner for management the same advantages already seen in business. Building manageable resources and management systems on a Web services foundation will cause a profound shift in how enterprises and vendors manage their IT resources in the future. Embracing this shift will create more flexible IT infrastructures, better integration of business and IT objectives, and greater end-to-end management of both IT infrastructures and business processes.

This presentation provides a bottom-up tutorial on Web Services Distributed Management (WSDM), the new OASIS standard that provides the first step in solving this classic management integration problem. The session will begin with an overview of the Management Roadmap architecture and WSDM's place in that architecture relative to other industry standards and initiatives. The technical tutorial will begin with an introduction to WSDL and WS-Addressing, specifications on which WSDM depends. The presenter will build on this with an overview of the Web Services Resource Framework (WSRF) and Web Services Notification (WSN) OASIS specifications and discuss how they are used by WSDM. Finally, the session will explore the WSDM components: Management Using Web Services (MUWS) and Management Of Web Services (MOWS). MUWS defines how to represent and access the manageability interfaces of any IT resource as Web services. MOWS defines how to manage Web services as resources and how to describe and access that manageability using MUWS. Concrete customer issues solved by WSDM will also be highlighted, as well as how CIM-modeled resources can be accessed using WSDM.
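
To give a feel for the plumbing the tutorial builds on, the Python sketch below assembles a SOAP 1.2 envelope with WS-Addressing headers - roughly the shape of message a WSDM manageability consumer sends to a manageable resource endpoint. The endpoint URL, Action URI and the property name are placeholders, and the exact WSDM/WSRF element names and namespaces should be taken from the OASIS specifications rather than from this sketch.

    # Build a SOAP 1.2 message with WS-Addressing headers - the kind of
    # envelope WSDM manageability requests travel in.  The addresses, the
    # Action URI and the property name are illustrative placeholders only.
    import xml.etree.ElementTree as ET

    SOAP = "http://www.w3.org/2003/05/soap-envelope"   # SOAP 1.2
    WSA  = "http://www.w3.org/2005/08/addressing"      # WS-Addressing

    def soap_request(to_url, action, body_element):
        env = ET.Element(ET.QName(SOAP, "Envelope"))
        header = ET.SubElement(env, ET.QName(SOAP, "Header"))
        ET.SubElement(header, ET.QName(WSA, "To")).text = to_url
        ET.SubElement(header, ET.QName(WSA, "Action")).text = action
        body = ET.SubElement(env, ET.QName(SOAP, "Body"))
        body.append(body_element)
        return ET.tostring(env, encoding="unicode")

    # Hypothetical request for a manageability property of some resource.
    req = ET.Element("GetResourcePropertyRequest")   # placeholder element name
    req.text = "muws:OperationalStatus"              # placeholder property QName
    print(soap_request("http://example.org/resources/router1",
                       "http://example.org/GetResourceProperty", req))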

Outline

  1. Introduction to WSDM
  2. Positioning in the Industry: The Web Services Management Roadmap
  3. Foundation Specifications:
    1. XML, SOAP, and WSDL
    2. WS-Addressing
    3. WS-Resource Framework
    4. WS-Notification
  4. WSDM Management Using Web Services
    1. Manageability Capabilities
      • Standard (Identity, Description, Correlatable Properties, Metrics, Configuration, Operational Status, State)
      • Custom
      • Relationship
    2. Event Format (WEF)
    3. Resource Discovery
    4. Creating the Manageable Resource example
    5. Using the Manageable Resource example
  5. WSDM Management Of Web Services
    1. Application of standard capabilities
    2. Custom capabilities for Web services
    3. Message status tracking
    4. A manageable Web service example
  6. Outlook
  7. Summary

Who should attend?

This session will appeal to programmers who use Web services and are involved in making those systems manageable, to systems administrators, and to company strategists and architects who are responsible for managing disparate systems in geographically diverse corporations. The session assumes that attendees have a working knowledge of XML, WSDL and Web services concepts.

Biography of the Instructor

Heather Kreger is the IBM lead architect for Web services and management in the Emerging Technologies area. She is currently co-lead of the OASIS Web Services Distributed Management Technical Committee, a member of several related DMTF work groups, and IBM's representative to the W3C Web Services Architecture Working Group. Heather was co-lead of JSR 109, which specifies Web services deployment in J2EE environments, and a contributor to the Java Management Extensions (JMX) specification. She is also the author of numerous articles on Web services and management in the IBM Systems Journal, Communications of the ACM and Web Services Journal; her public technical work includes the "Web Services Conceptual Architecture" and "WS-Manageability" papers and her own book, "Java and JMX: Building Manageable Systems".

Tutorial 4: Beyond Device Management: Route Analytics for Management of Dynamic Routing in IP Networks

Dr. Cengiz Alaettinoglu, Fellow, Packet Design, Inc., Palo Alto, CA, USA

Abstract

Network management has traditionally been carried out using SNMP polling, in some cases augmented by codebook-based correlation. But periodic polling falls far short of capturing the complex and dynamic layer 3 operations of IP networks. In particular, the routing dynamics of IP networks often lead to unpredictable and intermittent behaviors that leave network managers unable to explain what happened or why.

This tutorial introduces an emerging technology called route analytics, which addresses the most difficult management problems in IP networks. Specifically, the tutorial will demonstrate how route analytics can be used to manage routing protocols and the dynamic IP network topology to increase service predictability and availability.
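
The heart of route analytics for link-state domains can be illustrated with a few lines of Python: rebuild the topology from the adjacencies and metrics heard over a passive routing peering, run the same shortest-path computation the routers run, and rerun it against a hypothetical metric change for "what-if" planning. The topology and metrics below are invented for the example.

    # Compute the routing view a link-state domain converges to, then redo
    # the computation with one metric changed - the "what-if" style of
    # analysis described above.  Topology and metrics are made up.
    import heapq

    def shortest_paths(adj, source):
        """Dijkstra over {node: {neighbour: metric}}; returns cost per node."""
        dist = {source: 0}
        heap = [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in adj.get(u, {}).items():
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    topology = {
        "A": {"B": 10, "C": 20},
        "B": {"A": 10, "C": 5, "D": 10},
        "C": {"A": 20, "B": 5, "D": 30},
        "D": {"B": 10, "C": 30},
    }
    print("as built:", shortest_paths(topology, "A"))

    # What-if: raise the cost of link B-C (e.g. to drain traffic off it).
    topology["B"]["C"] = topology["C"]["B"] = 100
    print("what-if :", shortest_paths(topology, "A"))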

Outline

  1. Why network-layer management is needed in IP networks
    1. IP's "cloud" architecture provides resiliency but not visibility and predictability
    2. IP networks are highly dynamic; problems leave no audit trail
    3. Traditional layer 2 management is device-oriented, has no knowledge of routes, cannot detect such layer 3 problems as route flaps, router misconfigurations
  2. How route analytics works: managing logical vs. physical elements
    1. Separation of routing control plane from data forwarding path
    2. Listening to and participating in routing protocol exchanges (passive peering)
    3. Computing a real-time, network-wide routing map
    4. Monitoring/displaying routing topology changes as they happen
    5. Correlating routing events with other information (e.g., performance data) to reveal underlying causes and effects
    6. Recording and analyzing historical routing events and trends
    7. Simulating "what-if" scenarios for network planning
  3. Route analytics for Interior Gateway Protocols
    1. Link-state protocols: OSPF, IS-IS
      • Diagnosing historical problems
      • Metric modeling on as-built networks without touching the "live" network
    2. Distance-vector protocol: Cisco EIGRP
      • Preventing and resolving stuck-in-active issues on EIGRP routers
  4. Route analytics for the BGP protocol
    1. BGP management challenges
      • Most "chatty" of all protocols, BGP can produce millions of routing events after a peering loss
    2. BGP root-cause analysis
      • BGP RIB (routing information base) visualization
      • Dynamic real-time analysis of millions of BGP events
  5. Route analytics for MPLS VPNs
    1. Layer 3 MPLS VPN management challenges
      • Ensuring reachability, privacy when supporting overlapping private customer address spaces
      • Maintaining up-to-date VPN routing information
      • Optimizing backbone and edge router resources
    2. New technology based on IETF RFC 2547bis standard provides VPN infrastructure visibility
      • VPNs are overlaid on layer 3 topology map
      • VPNs viewable on customer-by-customer basis
      • ISPs can monitor connectivity, audit security, ensure SLA compliance for individual VPNs
  6. Practical examples of route analytics in enterprise, educational and service provider networks
  7. Q&A

Who should attend?

Attendees should have a solid understanding of IP networking and routing, including routing protocol functionality. This session will be particularly useful for those who have experience in managing IP routing in a large network.

Biography of the Instructor

Cengiz Alaettinoglu is a fellow at Packet Design, Inc. Currently he is working on scaling and convergence properties of both inter-domain and intra-domain routing protocols. He was previously at the USC Information Sciences Institute, where he worked on the Routing Arbiter project. He co-chaired the IETF Routing Policy System Working Group to define the Routing Policy Specification Language and the protocols to enable a distributed, secure routing policy system.
Alaettinoglu received a B.S. degree in computer engineering in 1988 from the Middle East Technical University, Ankara, Turkey; and M.S. and Ph.D. degrees in computer science in 1991 and 1994 from the University of Maryland at College Park. He was a Research Assistant Professor at the University of Southern California, where he taught graduate and undergraduate classes on operating systems and networking from 1994 to 2000. He has given numerous talks at NANOG, IETF, RIPE and APNIC meetings, as well as at ACM and IEEE conferences and workshops.

Tutorial 5: Efficient Network and Traffic Monitoring

Prof. Danny Raz, The Technion, Haifa, Israel

Abstract

Offering reliable novel services in modern heterogeneous networks is a key challenge and the main prospective source of income for many network operators and providers. Providing reliable future services in a cost-effective, scalable manner requires efficient use of networking and computation resources. This can be done by making the network more self-enabled, i.e. capable of making distributed local decisions regarding the utilization of the available resources. However, such decisions must be coordinated in order to achieve a global overall goal (maximum utilization or maximum profit, for example).

A key building block for all such systems is the ability to monitor the network parameters and the relevant traffic, and to infer from these measurements the information needed at each of the local decision points. Due to the heterogeneous nature of modern networks and the very high traffic volumes, even monitoring a single location introduces significant difficulties. It is much more challenging to decide what type of traffic or network information should be collected at each network segment in order to acquire the needed global information without investing too much effort in the monitoring process or its management. In fact, efficient network and traffic monitoring may become a very significant ingredient in the ability to provide modern network services in a cost-effective way.

This tutorial deals with practical and efficient techniques to retrieve information from modern network devices. We start by examining the SNMP suite and the various methods to collect information from possibly large MIB tables. Then we develop a framework for quantifying resource (bandwidth and CPU) utilization in distributed network management. To demonstrate the practical impact of this framework, advanced techniques for efficient reactive traffic monitoring, efficient QoS parameter monitoring, and multimedia application monitoring will be presented, together with empirical results showing the overhead reduction. The tutorial continues with an example of a reliable, efficiency-aware monitoring system that combines the above techniques with the SNMP framework and, time allowing, a novel technique for efficient statistical monitoring.
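
As a concrete taste of the "large MIB table" problem, the Python sketch below walks one column of the interface table with SNMP GETBULK, which fetches many rows per request/response round trip instead of one. It assumes the pysnmp library and an SNMPv2c agent at the placeholder address shown.

    # Walk one column of the interface table using GETBULK (many rows per
    # round trip) rather than GETNEXT (one row per round trip).  Assumes the
    # pysnmp package; agent address and community string are placeholders.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, bulkCmd)

    def walk_column(host, community, mib, column, max_repetitions=25):
        iterator = bulkCmd(
            SnmpEngine(),
            CommunityData(community),
            UdpTransportTarget((host, 161)),
            ContextData(),
            0, max_repetitions,                      # non-repeaters, repetitions
            ObjectType(ObjectIdentity(mib, column)),
            lexicographicMode=False)                 # stop at the end of the column
        for error_indication, error_status, _, var_binds in iterator:
            if error_indication or error_status:
                raise RuntimeError(error_indication or error_status.prettyPrint())
            for oid, value in var_binds:
                yield str(oid), value

    if __name__ == "__main__":
        for oid, value in walk_column("192.0.2.1", "public", "IF-MIB", "ifInOctets"):
            print(oid, value)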

Outline

  1. Introduction
    1. Overview: The problems, the technologies, possible solutions.
    2. Network Monitoring: from reactive real time to statistical monitoring.
    3. Overview of the tutorial: Timing and keywords.
  2. The SNMP framework
    1. A short review.
    2. Accessing large tables.
    3. Hierarchical structures and the M2M MIB.
    4. IETF Distributed Management and the Script MIB.
    5. Possible drawbacks.
  3. Network Monitoring and Control
    1. Why do we need monitoring?
    2. The cost of monitoring.
    3. Centralized or distributed?
    4. Event driven vs. polling.
    5. Reactive monitoring.
    6. Statistical monitoring.
    7. Controlling network behavior.
  4. Retrieving information from a large set of SNMP enabled network devices
    1. To SNMP or not to SNMP?
    2. Efficient MIB table retrieving.
    3. TCP vs. UDP.
    4. Using TCP retrieval in the current SNMP framework.
    5. Algorithmic aspects of mass data retrieval.
  5. Efficient Reactive Traffic Monitoring (see the sketch after this outline)
    1. An abstract model.
    2. Monitoring cost.
    3. Rigorous definition of the monitoring problem.
    4. The existence of optimal monitoring algorithms.
    5. The practical problem of monitoring.
    6. Different types of monitoring algorithms for different types of monitored data.
    7. Experimental results.
  6. Monitoring of QoS parameters in the DiffServ framework
    1. Background: Differentiated services in the IP framework.
    2. The Bandwidth Broker and the need for QoS monitoring.
    3. Comparing reactive and passive monitoring techniques.
    4. Polling vs. probing.
    5. Optimal reactive monitoring.
  7. Monitoring Multimedia applications
    1. Multimedia formats.
    2. What is so special about multimedia applications?
    3. Multicasting scalability.
  8. Building a light-weight reliable efficient monitoring system
    1. The three tier architecture.
    2. Using group communication.
    3. Achieving reliability.
    4. Experimental results.
  9. Statistical monitoring
    1. Why do we need statistical monitoring?
    2. Properties and requirements from statistical monitoring.
    3. Efficient statistical monitoring and the traveling miser problem.
  10. Summary and conclusions
    1. Resources.
    2. Lessons learned.
    3. Open problems.
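
Returning to the reactive traffic monitoring part of the outline above, the sketch below illustrates one classic way to trade accuracy for monitoring traffic: a global threshold over n nodes is split into local thresholds, each node stays silent while it is below its local share, and the manager performs a network-wide poll only when some node trips its local threshold. The numbers are simulated, and real schemes also redistribute unused slack between nodes.

    # Global-constraint reactive monitoring: the manager wants to know when
    # the SUM of a variable across all nodes exceeds T, without polling
    # continuously.  Each node gets a local threshold T/n and reports only
    # when its own value crosses that share; only then does the manager poll
    # everybody and check the real sum.  Values below are simulated.
    import random

    NODES, T = 10, 1000
    local_threshold = T / NODES

    def node_values(step):
        """Simulated per-node load that slowly ramps up."""
        return [random.uniform(0, 40 + 12 * step) for _ in range(NODES)]

    polls = 0
    for step in range(20):
        values = node_values(step)
        triggered = [v for v in values if v > local_threshold]  # local checks only
        if triggered:
            polls += 1                       # one network-wide poll, not one per step
            total = sum(values)
            if total > T:
                print(f"step {step}: global threshold exceeded (sum={total:.0f})")
                break
    print("network-wide polls used:", polls)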

Who should attend?

R&D personnel interested in improving the efficiency and reducing the overhead of network monitoring solutions, as well as researchers and academics interested in challenging yet practical problems related to the efficient utilization of network resources in network monitoring.

Biography of the Instructor

Prof. Raz received his doctoral degree from the Weizmann Institute of Science, Israel, in 1996. From September 1995 until September 1997 he was a postdoctoral fellow at the International Computer Science Institute (ICSI), Berkeley, CA, and a visiting lecturer at the University of California, Berkeley. From October 1997 until October 2001 he was with the Networking Research Laboratory at Bell Labs, Lucent Technologies. In October 2000 Danny Raz joined the faculty of the Computer Science Department at the Technion, Israel.
His primary research interest is the theory and application of management-related problems in IP networks. Prof. Raz has been engaged in network management research for the last seven years. His main contributions are in the field of efficient network management and the use of active and programmable networks in network management. Prof. Raz has given talks and tutorials on this subject at many international conferences. He was the general chair of OpenArch 2000 and a program committee member of many of the leading conferences in the general field of networking (INFOCOM 2002, 2003), network management (IM and NOMS 2001-2006, DSOM 2003-2005), and active and programmable networks (IWAN, OpenArch). He is an editor of the Journal for Communication Networks (JCS) and edited a special issue of JSAC.

Tutorial 6: Autonomic Systems and Networks - Theory and Practice

Dr. John Strassner, Fellow, Motorola Research Labs, Schaumburg, IL USA
Dr. Jeffrey O. Kephart, Research Staff Member, IBM T.J. Watson Research Center, Yorktown Heights, NY USA

Abstract

The increasing complexity of computing systems is beginning to overwhelm the capabilities of software developers and system administrators to design, evaluate, integrate, and manage these systems. Major software and system vendors such as IBM, HP and Microsoft have concluded that the only viable long-term solution is to create computer systems that manage themselves - a vision that is often referred to as autonomic computing.

In the last few years, interest in autonomic computing has burgeoned within academia and industry. In 2005, there were at least 15 conferences and workshops devoted to the subject, and new ones are being established for 2006. Many companies such as IBM, Motorola, Intel, HP and Microsoft and several start-ups are actively pursuing research and development efforts in autonomic computing. Such widespread interest is fortunate, because autonomic computing is a broad topic, one that requires contributions from many people in a broad array of fields over a long period of time to reach full fruition.

Naturally, systems and network management is one important domain that lies within the purview of autonomic computing. This tutorial, an outline for which appears below, represents an effort to reach out to the community served by the NOMS conference, and give NOMS attendees a reasonably deep understanding of the motivation for autonomic computing, what it is, and how it is likely to affect systems and network management over the course of the foreseeable future. Participants will emerge with a good understanding of the architectural principles and technologies that contribute to autonomic computing, as well as a sense of the role that emerging standards will play. They will learn about how state-of-the-art AI technologies are being applied to and developed for future autonomic systems and networks. One of the most important elements of the tutorial will be the use cases and scenarios that are used for illustration throughout. Finally, participants will hear about research challenges and some early progress towards them by researchers in industry and academia.
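
As a taste of the architectural principles covered in part 2 of the outline below, here is a minimal Python sketch of a monitor-analyze-plan-execute control loop operating over shared knowledge - the loop pattern commonly associated with autonomic managers. The managed "web tier", its latency model and the scaling policy are all invented for the illustration.

    # A toy monitor-analyze-plan-execute (MAPE) loop over shared knowledge.
    # The "managed resource" is a fake web tier; numbers and the scaling
    # policy are invented for the illustration.
    import random

    knowledge = {"target_latency_ms": 200, "servers": 2, "history": []}

    def monitor():
        # pretend latency rises as load per server rises
        load = random.uniform(50, 150)
        return {"latency_ms": load * 4 / knowledge["servers"], "load": load}

    def analyze(symptoms):
        knowledge["history"].append(symptoms["latency_ms"])
        return symptoms["latency_ms"] > knowledge["target_latency_ms"]

    def plan(problem_detected):
        if problem_detected:
            return {"action": "add_server"}
        if knowledge["servers"] > 1 and knowledge["history"][-1] < 100:
            return {"action": "remove_server"}
        return {"action": "none"}

    def execute(change):
        if change["action"] == "add_server":
            knowledge["servers"] += 1
        elif change["action"] == "remove_server":
            knowledge["servers"] -= 1

    for step in range(10):
        symptoms = monitor()
        change = plan(analyze(symptoms))
        execute(change)
        print(step, round(symptoms["latency_ms"]), knowledge["servers"], change["action"])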

Outline

  1. Introduction and Motivation
    1. The Looming Complexity Crisis
    2. What is Autonomic Computing, and How Can it Help?
    3. Illustrative Use Cases
      • Data Center Hosting Multiple Customer Applications with Service Level Agreements
      • Mapping Business Processes to IT Infrastructure: Deployment and Operation
      • Managing a VPN Multi-Service Network
  2. Autonomic Computing Primer
    1. Basic Architectural Principles
    2. Broad Look at Relevant Technologies
    3. Broad Look at Relevant Standards
  3. Autonomic Networking Primer
  4. A Closer Look at AC Architecture
    1. The Role of Service-Oriented Architecture
    2. Agent-Oriented Architecture
    3. Additional Requirements for AC Systems and Networks
    4. Autonomic Knowledge Management Architecture
      • Model-driven architecture and deployment
  5. A Closer Look at AC Technology
    1. Artificial Intelligence and Agents Technology
      • Machine learning, modeling and optimization
      • Knowledge-based reasoning
    2. Policy-based Management
    3. Knowledge Management (addresses harmonization of knowledge)
    4. Change Management
      • Accommodating change in users, environmental conditions, business policies, etc.
  6. Detailed Scenarios
    1. Autonomic system scenarios
    2. Autonomic network scenarios
  7. The Future of Autonomic Computing
    1. Research challenges
    2. How AC architecture, technology, and standards might evolve
    3. Future applications of AC
  8. Summary and General Discussion
  9. Useful References

Who should attend?

Anyone who has heard of autonomic computing, and is curious to learn more about its theoretical and practical aspects. No special expertise is required, beyond that expected of typical NOMS attendees.

Biography of the Instructors

John Strassner is a Fellow and the Director of Autonomic Computing at Motorola Research Labs, where he is responsible for directing Motorola's efforts in autonomic computing and for forging partnerships (especially with academia). Previously, John was the Chief Strategy Officer for Intelliden and, before that, a Cisco Fellow. John invented DEN (Directory Enabled Networks) and DEN-ng as a new paradigm for managing and provisioning networks and networked applications. Currently, he is the chair of the TMF's NGOSS metamodel and policy working groups and a co-chair of the TMF Shared Information and Data modeling work group, and is also active in the ITU, OMG, and OASIS. He has authored two books (Directory Enabled Networks and Policy Based Network Management).

Jeffrey O. Kephart manages the Agents and Emergent Phenomena group at the IBM Thomas J. Watson Research Center, and shares responsibility for IBM's Autonomic Computing research strategy and academic outreach. He and his group focus on the application of analogies from biology and economics to massively distributed computing systems, particularly in the domains of autonomic computing, e-commerce, antivirus, and anti-spam technology. Kephart's research efforts on digital immune systems and economic software agents have been publicized in publications such as The Wall Street Journal, The New York Times, Forbes, Wired, Harvard Business Review, IEEE Spectrum, and Scientific American. In 2004, he co-founded the International Conference on Autonomic Computing. Kephart received a BS from Princeton University and a PhD from Stanford University, both in electrical engineering.

Tutorial 7: Traffic Engineering and QoS Management for IP-based NGNs

Prof. George Pavlou, Centre for Communication Systems Research, University of Surrey, UK

Content

Next Generation IP-based Networks will offer Quality of Service (QoS) guarantees by deploying technologies such as Differentiated Services (DiffServ) and Multi-Protocol Label Switching (MPLS) for traffic engineering and network-wide resource management. Despite the progress already made, a number of issues still exist regarding edge-to-edge intra-domain and inter-domain QoS provisioning and management. This tutorial will start by providing background on technologies such as DiffServ and MPLS and their potential combination for QoS support. It will subsequently introduce trends in Service Level Agreements (SLAs) and Service Level Specifications (SLSs) for subscription to QoS-based services.

It will then move on to examine architectures and frameworks for the management and control of QoS-enabled networks, including the following aspects: approaches and algorithms for off-line traffic engineering and provisioning through explicit MPLS paths or through hop-by-hop IP routing; approaches for dynamic resource management to deal with traffic fluctuations outside the predicted envelope; a service management framework supporting a "resource provisioning cycle"; the derivation of expected traffic demand from subscribed SLSs and approaches for SLS invocation admission control; a monitoring architecture for scalable information collection supporting traffic engineering and service management; and realization issues given the current state of the art of management protocols and monitoring support.

The tutorial will also cover emerging work towards inter-domain QoS provisioning, including aspects such as: an inter-domain business model; customer and peer-provider SLSs; an architecture for the management and control of inter-domain services; inter-domain off-line traffic engineering; and QoS extensions to BGP for dynamic traffic engineering. Relevant industrial activities such as IPsphere will also be covered. In all these areas, recent research work will be presented, with pointers to the bibliography and a specially tailored Web page with additional resources.
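
As one concrete flavour of the off-line traffic engineering problem mentioned above, the short Python sketch below chooses, for each traffic demand, one of a few candidate explicit paths so that the worst-case link utilization is minimized. Brute force over tiny candidate sets stands in here for the LP and heuristic formulations discussed in the tutorial; the topology, capacities and demands are invented.

    # Off-line traffic engineering toy: choose one explicit path per demand
    # so that the maximum link utilization is as low as possible.
    from itertools import product

    capacity = {("A", "B"): 10, ("B", "C"): 10, ("A", "D"): 10, ("D", "C"): 10}

    def links(path):
        return list(zip(path, path[1:]))

    demands = [
        # (volume, candidate explicit paths)
        (6, [["A", "B", "C"], ["A", "D", "C"]]),
        (6, [["A", "B", "C"], ["A", "D", "C"]]),
    ]

    def max_utilization(choice):
        load = {link: 0.0 for link in capacity}
        for (volume, paths), idx in zip(demands, choice):
            for link in links(paths[idx]):
                load[link] += volume
        return max(load[l] / capacity[l] for l in capacity)

    best = min(product(*[range(len(p)) for _, p in demands]), key=max_utilization)
    print("chosen paths:", [paths[i] for (_, paths), i in zip(demands, best)])
    print("max link utilization:", max_utilization(best))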

Who should attend?

People who will benefit from this tutorial are network managers, development engineers and researchers involved in operational aspects, development and research towards IP-based Next Generation Networks (NGNs). Such networks will be not only the next generation of ISP-operated terrestrial networks but also the core part of 3rd-generation and beyond all-IP mobile networks.

Biography of the Instructor

Prof. George Pavlou holds the Chair of Communication and Information Systems at the Centre for Communication Systems Research, Dept. of Electronics Engineering, University of Surrey, UK, where he leads the activities of the Networks Research Group. He received a Diploma in Engineering from the National Technical University of Athens, Greece, and MSc and PhD degrees in Computer Science from University College London, UK. His research interests encompass network and service management, network planning and dimensioning, traffic engineering, quality of service, mobile ad hoc networks, service engineering, multimedia service control and management, code mobility, programmable networks and communications middleware. He is the author or co-author of over 120 papers in fully refereed international conferences and journals and has contributed to four books. He has also contributed to standardization activities in ISO, ITU-T, TMF and IETF. He was the technical program co-chair of IEEE/IFIP Integrated Management 2001 and is co-editor of the bi-annual IEEE Communications Network and Service Management series. See http://www.ee.surrey.ac.uk/Personal/G.Pavlou/ for additional information and his publications in PDF.

Tutorial 8: Introduction to NGN Functional Architecture

Mr. Naotaka Morita, Senior Research Engineer, NTT Service Integration Laboratories, Japan

Content

The Next Generation Network (NGN), a term that has been overused as a commercial catch phrase for almost any new technology, is now of real importance to major network operators and service providers, both to replace existing telephone networks and to introduce a new revenue-creating converged service platform spanning fixed and mobile business. Triggered by major carriers in Europe, the NGN study accelerated in 2003. The International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) answered the demand for new standards and established a special task force, the Focus Group on NGN (FGNGN). The FGNGN is expected to finalize a series of foundational specifications by the end of 2005. The series contains the scope of the first release, expected services, network capabilities, and the functional architectures that characterize the NGN.

According to the general reference model already specified in ITU-T Recommendations Y.2001 and Y.2011, which assumes the decoupling of services and transport, the NGN can be represented by multiple functional groups. One of the key implementations for session-based services, utilizing an IP Multimedia Subsystem (IMS), is introduced with enhanced features to meet both fixed and mobile network requirements. Another key component in the NGN is the Resource and Admission Control Functions (RACF), which provide end-to-end QoS. Along with these key components, the generic functional architecture shows the overall structure of the NGN and gives a clear guideline for designing the associated signalling protocols as well as operation and management mechanisms.

The proposed tutorial session, offered by Mr. Morita, one of the technical leaders of the architecture working group in FGNGN, begins by describing the target NGN services, whose main focus is session-based telephony and multimedia communication. It then moves on to the high-level architecture, which is divided into several functional entities. These include session-related control functional entities that provide a roaming feature over the fixed network. On top of them, multiple application platforms are expected to provide a wide variety of services ranging from emulation of legacy IN services to new third-party applications. At the transport stratum, multiple gateway functions are identified to interwork with existing networks as well as to protect the NGN itself. Following these functional-level explanations, typical interactions between the functional entities are shown. Network configuration examples are also discussed; session border controllers and multiple access network configurations are candidate examples. These examples help bridge the abstract functional descriptions in the ITU-T Recommendations to actual network configurations and equipment.

This comprehensive talk, based on the latest documents from FGNGN, will give the audience a realistic picture of the NGN and encourage detailed design of operation and management functions, which need the wider interest of contributors in order to accelerate the deployment of the NGN and facilitate its management.
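
Purely as a generic illustration of the admission decision that a resource and admission control function makes (not the actual ITU-T interfaces or information model), the Python sketch below tracks reserved bandwidth per transport link and admits a new session only if every link on its path still has headroom.

    # Generic resource-admission sketch: admit a session only if the
    # requested bandwidth fits on every link of its transport path, and
    # record the reservation so later requests see less headroom.
    # Links, capacities and sessions are invented for the example.
    capacity_mbps = {"access-1": 100, "aggregation": 1000, "core": 10000}
    reserved = {link: 0 for link in capacity_mbps}

    def admit(session_id, path, bandwidth_mbps):
        if any(reserved[l] + bandwidth_mbps > capacity_mbps[l] for l in path):
            return False                      # some link would be oversubscribed
        for l in path:
            reserved[l] += bandwidth_mbps     # commit the reservation
        return True

    requests = [("voip-1", ["access-1", "aggregation", "core"], 0.1),
                ("iptv-1", ["access-1", "aggregation", "core"], 8),
                ("bulk-1", ["access-1", "aggregation", "core"], 95)]

    for sid, path, bw in requests:
        print(sid, "admitted" if admit(sid, path, bw) else "rejected")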

Who should attend?

This is an introductory tutorial suitable for academia, service providers, network operators, and manufacturers.

Biography of the Instructor

Naotaka Morita received his B.E. and M.E. degrees from Nagoya University, Aichi, Japan, in 1985 and 1987, respectively. In 1987 he joined the Research and Development Center of NTT Corporation, where he engaged in research on ATM systems. Since 2000 he has been studying VoIP and interactive multimedia technology. Since October 2004 he has been a Vice Chair of SG13 in the ITU-T. He is a co-leader of Working Group 2, the Functional Architecture and Mobility Group, in FGNGN.