What is configuration management? At NOMS we often think only of network management - i.e. the management of network devices such as routers and switches. Host management, on the other hand, has been studied mainly in the Unix community. Increasingly these two worlds are converging, as network devices run embedded GNU/Linux or FreeBSD operating systems. So what are the differences? One difference is the file abstraction - host operating systems have files and databases that contain configuration data. What are the technologies for managing these? Should they be centralized?
Autonomy is a central concept in modern computing technology. Increasingly, computers are being managed by their owners rather than by centralized authorities. In the early 1990s the author developed the automation system cfengine for configuring and maintaining Unix-like operating systems, based on a model allowing either centralized or decentralized control. It was based on the idea of voluntary cooperation - a topic which is now centre stage in autonomic and pervasive computing. cfengine was conceived to run on any device, no matter how large or small. Moreover, it started a field of research into configuration management at the USENIX configuration management workshops and was the proof-of-principle for several key results. Today cfengine is used on an estimated million computers around the world, in both large and small companies.
Cfengine is a tool for setting up and maintaining a configuration across a network of hosts. It embodies a declarative language at a much higher level than scripting languages, together with an autonomous, smart agent and machine-learning monitors. The idea behind cfengine is to create a single "policy" or configuration specification that describes the setup of as many or as few hosts in a network as desired, without sacrificing their autonomy. Cfengine runs on each host and makes sure that it is in a policy-conformant state; if necessary, any deviations from policy rules are fixed automatically. Unlike tools such as rdist, cfengine does not require hosts to open themselves to any central authority, nor to subscribe to a fixed image of files. It is a modern tool, supporting state-of-the-art encryption and IPv6 transport, that can handle distribution and customization of system resources in huge networks (tens of thousands of hosts).
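The core idea - a declarative policy plus an agent that repairs deviations on each run - can be illustrated with a minimal sketch. This is not cfengine's own language or implementation; the policy entries and state dictionaries below are hypothetical stand-ins for real system resources.

```python
# A minimal sketch of the convergent-maintenance idea: a policy declares
# the desired state, and an autonomous agent repeatedly compares actual
# state against it, repairing only the deviations it finds.

DESIRED = {                      # declarative policy: what should be true
    "/etc/passwd": {"mode": 0o644, "owner": "root"},
    "/etc/shadow": {"mode": 0o600, "owner": "root"},
}

def converge(actual):
    """One agent pass: fix any attribute that deviates from policy."""
    repairs = []
    for path, want in DESIRED.items():
        have = actual.setdefault(path, {})
        for attr, value in want.items():
            if have.get(attr) != value:
                have[attr] = value           # repair the deviation
                repairs.append((path, attr))
    return repairs

state = {"/etc/passwd": {"mode": 0o666, "owner": "root"}}
print(converge(state))   # first pass repairs the deviations it finds
print(converge(state))   # second pass: already convergent, returns []
```

Note that running the agent twice changes nothing the second time: the operation is idempotent, which is what lets each host enforce policy on its own schedule without central coordination.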
The tutorial focuses on the general principles of configuration management and uses cfengine as an example that integrates state-of-the-art research. The topics follow this plan:
We end with a discussion of where cfengine is going, and how it can be extended to encompass configuration management, integrating switches and routers with host configuration in data centres.
Network and system administrators with minimal knowledge of a scripting language, who wish to understand, and perhaps start using, cfengine to automate the maintenance and security of their systems. UNIX administrators will be most at home in this tutorial, but cfengine can also be used on Windows 2000 and above. Network administrators interested in the principles of configuration management beyond SNMP will find a frank discussion of the future of the subject and will have the opportunity to participate in the design of cfengine 3 - the next generation of host-device management.
Mark Burgess is Professor of Network and System Administration at Oslo University College, Norway. He is the author of the configuration management system cfengine and of several books and many papers on the topic. Professor Burgess is a frequent, popular speaker at conferences on system administration.
The importance of network security has increased significantly in the past few years. However, the growing complexity of managing security policies, particularly in enterprise networks, poses a real challenge for efficient security solutions. Network security devices such as firewalls, IPSec gateways, and intrusion detection and prevention systems operate based on locally configured policies. Yet these policies are not necessarily autonomous: they may interact with one another to construct a global network security policy. Because security policies are configured manually, in a distributed and uncoordinated fashion, rule conflicts and policy inconsistencies arise, causing serious network security vulnerabilities. In addition, enterprise networks continuously grow in size and complexity, which makes policy modification, inspection and evaluation a nightmare. Addressing these issues is a key requirement for obtaining provable security and seamless policy configuration. Moreover, with growth in network speed and size, the need to optimize security policies to cope with traffic rates and attacks is increasing significantly. The constant evolution of policy syntax and semantics makes the functional testing of these devices for vulnerability penetration a difficult task.
This tutorial is divided into three parts. In the first part, we present techniques to automatically verify and correct firewall and IPSec/VPN policies in large-scale enterprise networks. In the second part, we discuss techniques to enhance and optimize policy structure and rule ordering in order to reduce packet-matching cost and significantly improve firewall and IPSec performance. In the third part, we present techniques that users, service providers and vendors can use to test their security devices efficiently and accurately.
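One concrete example of the policy conflicts addressed in the first part is rule shadowing: a rule that can never fire because an earlier rule matches a superset of its packets with a different action. The sketch below detects fully shadowed rules in a toy policy; real policies also match ports and protocols, and the ranges and actions here are illustrative assumptions.

```python
# Detect "shadowed" firewall rules in an ordered policy. A rule is
# shadowed when some earlier rule matches every packet it matches and
# takes a different action, so the later rule is unreachable.
# Rules are simplified to (src-range, dst-range, action) tuples.

def covers(a, b):
    """True if range a = (lo, hi) contains range b."""
    return a[0] <= b[0] and b[1] <= a[1]

def shadowed_rules(rules):
    """Return indices of rules fully shadowed by an earlier rule."""
    out = []
    for j, (src_j, dst_j, act_j) in enumerate(rules):
        for src_i, dst_i, act_i in rules[:j]:
            if covers(src_i, src_j) and covers(dst_i, dst_j) and act_i != act_j:
                out.append(j)
                break
    return out

policy = [
    ((10, 20), (0, 255), "deny"),    # rule 0
    ((12, 15), (0, 255), "accept"),  # rule 1: shadowed by rule 0
    ((30, 40), (0, 255), "accept"),  # rule 2: reachable
]
print(shadowed_rules(policy))  # → [1]
```

Verification tools of the kind covered in the tutorial classify further anomaly types (generalization, correlation, redundancy) by comparing rule match spaces pairwise in this way.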
This tutorial discusses timely and important issues in both academic and industrial research. Students, academic and industrial researchers, developers, security system architects and practitioners are all part of the target audience and will benefit directly from attending.
Ehab Al-Shaer is an Associate Professor and the Director of the Multimedia Networking Research Lab (MNLAB) in the School of Computer Science, Telecommunications and Information Systems at DePaul University. His primary research areas are network security, Internet monitoring, and multimedia networks. Prof. Al-Shaer has published many refereed journal and conference papers. He was also a co-editor of a number of books on management of multimedia on the Internet and on end-to-end monitoring, and a guest editor for a number of journals. He has served as conference chair, TPC co-chair, invited speaker, panelist, tutorial presenter and TPC member for many IEEE and ACM conferences, including INFOCOM, ICNP, IM/NOMS, ICDCS, CCNC, MMNS and E2EMON. He has been an invited speaker on many academic and industrial panels in the area of network security policy management. His current research is funded by NSF, Cisco Systems, Intel and Sun Microsystems.
The industry has been wrestling with the complexity of managing business systems for years. The challenge stems from the variety of application and IT resource providers that enterprises use to build their business systems. A range of management systems co-exist to manage the breadth of resources.
The management industry and customers have an opportunity to take advantage of the industry trend toward using Web services for business integration and moving to service-oriented architectures. It is now possible to garner for management the same advantages seen in business. Building manageable resources and management systems on a Web services foundation will cause a profound shift in how enterprises and vendors manage their IT resources in the future. Embracing this shift will create more flexible IT infrastructures, better integration of business and IT objectives, and greater end-to-end management of both IT infrastructures and business processes.
This presentation provides a bottom-up tutorial of Web Services Distributed Management (WSDM), the new OASIS Standard that provides the first step in solving this classic management integration problem. The session will begin with an overview of the Management Roadmap architecture and WSDM's place in that architecture relative to other industry standards and initiatives. The technical tutorial will begin with an introduction to WSDL and WS-Addressing, specifications on which WSDM depends. The presenters will build on this with an overview of the Web Services Resource Framework (WSRF) and Web Services Notification (WSN) OASIS specifications and discuss how they are used by WSDM. Finally, the session will explore the WSDM components, Management Using Web Services (MUWS) and Management of Web Services (MOWS). MUWS defines how to represent and access the manageability interfaces of any IT resource as Web services. MOWS defines how to manage Web services as resources and how to describe and access that manageability using MUWS. Concrete customer issues solved by WSDM will also be highlighted, as well as how CIM-modeled resources can be accessed using WSDM.
This session will appeal to programmers who use Web services and are involved in making those systems manageable, to systems administrators, and to company strategists and architects responsible for managing disparate systems in geographically diverse corporations. The session assumes that attendees have a working knowledge of XML, WSDL and Web services concepts.
Heather Kreger is the IBM lead architect for Web Services and Management in the Emerging Technologies area. She is currently co-lead of the OASIS Web Services Distributed Management Technical Committee, a member of several related DMTF work groups, and IBM's representative to the W3C Web Services Architecture Working Group. Heather was co-lead of JSR 109, which specifies Web services deployment in J2EE environments, and a contributor to the Java Management Extensions (JMX) specification. She is also the author of numerous articles on Web services and management in the IBM Systems Journal, Communications of the ACM, and Web Services Journal; her public technical work includes the "Web Services Conceptual Architecture" and "WS-Manageability"; and she wrote the book "Java and JMX, Building Manageable Systems".
Network management has traditionally been carried out using SNMP polling, in some cases augmented by codebook-based correlation. But periodic polling falls far short of capturing the complex and dynamic layer 3 operations of IP networks. In particular, the routing dynamics of IP networks often lead to unpredictable and intermittent behaviors that leave network managers unable to explain what happened or why.
This tutorial introduces an emerging technology called route analytics, which addresses the most difficult management problems in IP networks. Specifically, the tutorial will demonstrate how route analytics can be used to manage routing protocols and the dynamic IP network topology to increase service predictability and availability.
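The reason route analytics can explain routing behavior is that a passive listener which collects the same link-state database the routers flood can rerun the routers' own path computation. The sketch below runs that core SPF (shortest-path-first) computation over a hypothetical link-state database; the topology and costs are illustrative, not from the tutorial.

```python
# Recompute a router's shortest-path view from a (hypothetical)
# link-state database, the way a route-analytics appliance does after
# passively collecting the IGP's flooded link-state advertisements.

import heapq

# hypothetical LSDB: directed links with IGP costs
LSDB = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
    "D": {},
}

def spf(root):
    """Dijkstra SPF: cost of reaching every node from `root`."""
    dist = {root: 0}
    heap = [(0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale heap entry
        for v, w in LSDB[u].items():
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

print(spf("A"))  # → {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

Comparing successive SPF results as link-state updates arrive is what lets such a system pinpoint when and why forwarding paths changed, something periodic SNMP polling cannot reconstruct.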
Attendees should have a solid understanding of IP networking and routing, including routing protocol functionality. This session will be particularly useful for those who have experience in managing IP routing in a large network.
Cengiz Alaettinoglu is a fellow at Packet Design, Inc. Currently he is working on scaling and convergence properties of both inter-domain and intra-domain routing protocols. He was previously at the USC Information Sciences Institute, where he worked on the Routing Arbiter project. He co-chaired the IETF Routing Policy System Working Group to define the Routing Policy Specification Language and the protocols to enable a distributed, secure routing policy system.
Alaettinoglu received a B.S. degree in computer engineering in 1988 from the Middle East Technical University, Ankara, Turkey; and M.S. and Ph.D. degrees in computer science in 1991 and 1994 from the University of Maryland at College Park. He was a Research Assistant Professor at the University of Southern California, where he taught graduate and undergraduate classes on operating systems and networking from 1994 to 2000. He has given numerous talks at NANOG, IETF, RIPE and APNIC meetings, as well as at ACM and IEEE conferences and workshops.
Offering reliable novel services in modern heterogeneous networks is a key challenge and the main prospective income source for many network operators and providers. Providing reliable future services in a cost-effective, scalable manner requires efficient use of networking and computation resources. This can be done by making the network more self-enabled, i.e. capable of making distributed local decisions regarding the utilization of the available resources. However, such decisions must be coordinated in order to achieve an overall global goal (maximum utilization or maximum profit, for example).
A key building block for all such systems is the ability to monitor the network parameters and the relevant traffic, and to infer from these measurements the information needed at each of the local decision points. Due to the heterogeneous nature of modern networks and the very high traffic volumes, even monitoring a single location introduces significant difficulties. It is much more challenging to decide what type of traffic or network information should be collected in each network segment in order to acquire the needed global information without investing too much effort in the monitoring process or its management. In fact, efficient network and traffic monitoring may become a very significant ingredient in the ability to provide modern network services cost-effectively.
This tutorial deals with practical and efficient techniques for retrieving information from modern network devices. We start by examining the SNMP suite and the various methods for collecting information from possibly large MIB tables. Then we develop a framework for quantifying resource (bandwidth and CPU) utilization in distributed network management. To demonstrate the practical impact of this framework, advanced techniques for efficient reactive traffic monitoring, efficient QoS parameter monitoring, and multimedia application monitoring will be presented, together with empirical results showing the overhead reduction. The tutorial continues with an example of a reliable, efficiency-aware monitoring system that combines the above techniques with the SNMP framework and, time allowing, a novel technique for efficient statistical monitoring.
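A simple instance of the kind of overhead accounting such a framework makes precise is the cost of walking a MIB table. With SNMP GetNext each retrieved row costs a request/response exchange, while GetBulk returns up to `max-repetitions` rows per exchange. The back-of-the-envelope sketch below compares exchange counts; the table size and repetition value are illustrative assumptions, and real overhead also depends on PDU sizes and agent CPU cost.

```python
# Compare the number of request/response exchanges needed to walk an
# SNMP table of `rows` rows with GetNext versus GetBulk. Both methods
# need one extra retrieval to detect the end of the table.

import math

def getnext_pdus(rows):
    # one GetNext exchange per row, plus one that walks off the table
    return rows + 1

def getbulk_pdus(rows, max_repetitions):
    # each GetBulk exchange returns up to max_repetitions variables
    return math.ceil((rows + 1) / max_repetitions)

rows = 1000
print(getnext_pdus(rows))       # → 1001 exchanges
print(getbulk_pdus(rows, 50))   # → 21 exchanges
```

The roughly 50x reduction in exchanges is exactly the sort of bandwidth/CPU trade-off the tutorial's quantitative framework is designed to capture.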
R&D personnel interested in improving the efficiency and reducing the overhead of network monitoring solutions, and researchers and academics interested in challenging yet practical problems related to the efficient utilization of network resources for monitoring.
Prof. Raz received his doctoral degree from the Weizmann Institute of Science, Israel, in 1996. From September 1995 until September 1997 he was a postdoctoral fellow at the International Computer Science Institute (ICSI), Berkeley, CA, and a visiting lecturer at the University of California, Berkeley. From October 1997 until October 2001 he was with the Networking Research Laboratory at Bell Labs, Lucent Technologies. In October 2000 Danny Raz joined the faculty of the Computer Science Department at the Technion in Israel.
His primary research interest is the theory and application of management-related problems in IP networks. Prof. Raz has been engaged in network management research for the last seven years. His main contributions are in the field of efficient network management and the use of active and programmable networks for network management. Prof. Raz has given talks and tutorials on this subject at many international conferences. He was the general chair of OpenArch 2000 and a program committee member for many of the leading conferences in the general field of networking (INFOCOM 2002, 2003), network management (IM and NOMS 2001-2006, DSOM 2003-2005), and active and programmable networks (IWAN, OpenArch). He is an editor of the Journal for Communication Networks (JCS) and has edited a special issue of JSAC.
The increasing complexity of computing systems is beginning to overwhelm the capabilities of software developers and system administrators to design, evaluate, integrate, and manage these systems. Major software and system vendors such as IBM, HP and Microsoft have concluded that the only viable long-term solution is to create computer systems that manage themselves - a vision that is often referred to as autonomic computing.
In the last few years, interest in autonomic computing has burgeoned within academia and industry. In 2005, there were at least 15 conferences and workshops devoted to the subject, and new ones are being established for 2006. Many companies such as IBM, Motorola, Intel, HP and Microsoft and several start-ups are actively pursuing research and development efforts in autonomic computing. Such widespread interest is fortunate, because autonomic computing is a broad topic, one that requires contributions from many people in a broad array of fields over a long period of time to reach full fruition.
Naturally, systems and network management is one important domain that lies within the purview of autonomic computing. This tutorial, an outline for which appears below, represents an effort to reach out to the community served by the NOMS conference, and give NOMS attendees a reasonably deep understanding of the motivation for autonomic computing, what it is, and how it is likely to affect systems and network management over the course of the foreseeable future. Participants will emerge with a good understanding of the architectural principles and technologies that contribute to autonomic computing, as well as a sense of the role that emerging standards will play. They will learn about how state-of-the-art AI technologies are being applied to and developed for future autonomic systems and networks. One of the most important elements of the tutorial will be the use cases and scenarios that are used for illustration throughout. Finally, participants will hear about research challenges and some early progress towards them by researchers in industry and academia.
Anyone who has heard of autonomic computing, and is curious to learn more about its theoretical and practical aspects. No special expertise is required, beyond that expected of typical NOMS attendees.
John Strassner is Fellow and Director of Autonomic Computing at Motorola Research Labs where he is responsible for directing Motorola's efforts in autonomic computing, and in forging partnerships (especially with academia). Previously, John was the Chief Strategy Officer for Intelliden and a former Cisco Fellow. John invented DEN (Directory Enabled Networks) and DEN-ng as a new paradigm for managing and provisioning networks and networked applications. Currently, he is the chair of the TMF's NGOSS metamodel and policy working groups, and a co-chair of the TMF Shared Information and Data modeling work group, as well as being active in the ITU, OMG, and OASIS. He has also authored two books (Directory Enabled Networks and Policy Based Network Management).
Jeffrey O. Kephart manages the Agents and Emergent Phenomena group at the IBM Thomas J. Watson Research Center, and shares responsibility for IBM's Autonomic Computing research strategy and academic outreach. He and his group focus on the application of analogies from biology and economics to massively distributed computing systems, particularly in the domains of autonomic computing, e-commerce, antivirus, and anti-spam technology. Kephart's research efforts on digital immune systems and economic software agents have been publicized in publications such as The Wall Street Journal, The New York Times, Forbes, Wired, Harvard Business Review, IEEE Spectrum, and Scientific American. In 2004, he co-founded the International Conference on Autonomic Computing. Kephart received a BS from Princeton University and a PhD from Stanford University, both in electrical engineering.
Next Generation IP-based Networks will offer Quality of Service (QoS) guarantees by deploying technologies such as Differentiated Services (DiffServ) and Multi-Protocol Label Switching (MPLS) for traffic engineering and network-wide resource management. Despite the progress already made, a number of issues still exist regarding edge-to-edge intra-domain and inter-domain QoS provisioning and management. This tutorial will start by providing background on technologies such as DiffServ and MPLS and their potential combination for QoS support. It will subsequently introduce trends in Service Level Agreements (SLAs) and Service Level Specifications (SLSs) for subscription to QoS-based services. It will then move on to examine architectures and frameworks for the management and control of QoS-enabled networks, including the following aspects: approaches and algorithms for off-line traffic engineering and provisioning through explicit MPLS paths or through hop-by-hop IP routing; approaches for dynamic resource management to deal with traffic fluctuations outside the predicted envelope; a service management framework supporting a "resource provisioning cycle"; the derivation of expected traffic demand from subscribed SLSs and approaches for SLS invocation admission control; a monitoring architecture for scalable information collection supporting traffic engineering and service management; and realization issues given the current state of the art in management protocols and monitoring support. The tutorial will also cover emerging work towards inter-domain QoS provisioning, including aspects such as: an inter-domain business model; customer and peer-provider SLSs; an architecture for the management and control of inter-domain services; inter-domain off-line traffic engineering; and QoS extensions to BGP for dynamic traffic engineering. Relevant industrial activities such as IPsphere will also be covered.
In all these areas, recent research work will be presented, with pointers to bibliography and a specially tailored Web page with additional resources.
This tutorial will benefit network managers, development engineers and researchers involved in the operation, development and research of IP-based Next Generation Networks (NGNs). Such networks will be the next generation of ISP-operated terrestrial networks, as well as the core part of 3rd-generation-and-beyond all-IP mobile networks.
Prof. George Pavlou holds the Chair of Communication and Information Systems at the Center for Communication Systems Research, Dept. of Electronics Engineering, University of Surrey, UK, where he leads the activities of the Networks Research Group. He received a Diploma in Engineering from the National Technical University of Athens, Greece and MSc and PhD degrees in Computer Science from University College London, UK. His research interests encompass network and service management, network planning and dimensioning, traffic engineering, quality of service, mobile ad hoc networks, service engineering, multimedia service control and management, code mobility, programmable networks and communications middleware. He is the author or co-author of over 120 papers in fully refereed international conferences and journals and has contributed to 4 books. He has also contributed to standardization activities in ISO, ITU-T, TMF and IETF. He was the technical program co-chair of IEEE/IFIP Integrated Management 2001 and he is co-editor of the bi-annual IEEE Communications Network and Service Management series. See also http://www.ee.surrey.ac.uk/Personal/G.Pavlou/ for additional information and his publications in PDF.
The Next Generation Network (NGN), a term long overused as a commercial catch phrase for any new technology, is now showing real importance for major network operators and service providers, both as a replacement for existing telephone networks and as a new revenue-creating converged service platform spanning fixed and mobile business. Triggered by major carriers in Europe, NGN study accelerated in 2003. The International Telecommunication Union - Telecommunication Standardization Sector (ITU-T) answered the demand for new standards and established a special task force, the Focus Group on NGN (FGNGN). The FGNGN is going to finalize a series of foundational specifications by the end of 2005. The series covers the scope of the first release, expected services, network capabilities, and the functional architectures that characterize the NGN.
According to the general reference model already specified in ITU-T Recommendations Y.2001 and Y.2011, which assumes decoupling of services and transport, the NGN can be represented by multiple functional groups. One of the key implementations for session-based services, utilizing an IP Multimedia Subsystem (IMS), is introduced with enhanced features to meet both fixed and mobile network requirements. Another key component of the NGN is the Resource and Admission Control Functions (RACF), which provide end-to-end QoS. Along with these key components, the generic functional architecture shows the overall structure of the NGN and gives a clear guideline for designing the associated signalling protocols as well as operation and management mechanisms.
This tutorial session, offered by Mr. Morita, one of the technical leaders of the architecture working group in FGNGN, begins by describing the target NGN services, whose main focus is session-based telephony and multimedia communication. It then moves on to the high-level architecture, which is divided into several functional entities, including session-related control functional entities that provide a roaming feature over the fixed network. On top of them, multiple application platforms are expected to provide a wide variety of services, ranging from emulation of legacy IN services to new third-party applications. At the transport stratum, multiple gateway functions are identified to interwork with existing networks as well as to protect the NGN itself. Following these functional-level explanations, typical interactions between the functional entities are shown. Network configuration examples are also presented; session border controllers and multiple access network configurations are candidate examples. These examples will help bridge the abstract functional descriptions in the ITU-T Recommendations to actual network configurations and equipment.
This comprehensive talk, based on the latest documents from FGNGN, will give the audience a realistic picture of the NGN and encourage detailed design of operation and management functions, an area that needs wider interest from contributors to accelerate NGN deployment and facilitate its management.
Introductory; suitable for academia, service providers, network operators, and manufacturers.
Naotaka Morita received his B.E. and M.E. degrees from Nagoya University, Aichi, Japan, in 1985 and 1987, respectively. In 1987, he joined the Research and Development Center of NTT Corporation, where he engaged in research on ATM systems. Since 2000, he has been studying VoIP and interactive multimedia technology. Since October 2004, he has been a Vice Chair of SG13 in the ITU-T. He is a co-leader of Working Group 2, the Functional Architecture and Mobility Group, in FGNGN.