
Managing Network Virtualization

InterComms talks to James Pullen, Aria Networks’ Director of Product Management & Marketing, about how service providers are changing the way they use network virtualization to realise the benefits of Cloud and Software Defined Networks today

James Pullen,
Director of Product Management & Marketing, Aria Networks

Aria Networks offers solutions for capacity management and network planning for mobile and fixed-line service providers.

With the ability to analyse current and future service profiles and forecast customer demand, Aria Networks is able to identify how networks can be more efficient, profitable and reliable.

Tier-1 telcos are using Aria to support long-term evolution plans, then zooming in to solve day-to-day capital efficiency, performance and reliability challenges.

Q: What do you mean by network virtualization?
A: In its broadest sense, network virtualization means taking network capacity, typically supported by multiple technologies and vendors, and managing it as an abstract resource for the flexible, efficient delivery of services. Virtualization hides the complexity of the physical network and of specific protocols from the process of configuring the logical network for service delivery.

The Cloud is the ultimate realization of a virtualized network. To the customer, Cloud services offer the resources and capacities they understand they need: CPU cycles, storage, transport bandwidth, and service SLAs. But it hides the underlying physical layer. The customer doesn’t care if they are buying capacity that uses Intel or AMD CPUs, whether they are connecting over a DWDM or SDH core network, or whether the service resilience is achieved at the optical switching layer or by using disjoint IP/MPLS paths.

Q: What are the benefits to the service provider from network virtualization?
A: Network virtualization is often promoted for its cost-saving benefits. Flexible assignment of resources to services means more efficient networks, as a single resource can support multiple services and capacity can shift very quickly between services as demand ebbs and flows.

Aria Networks’ software has helped customers better manage their IP, Ethernet and optical networks by providing an abstraction layer on top of the physical capacity, and this has typically resulted in a 20% reduction in annual equipment costs through more efficient allocation and upgrade of that capacity. This isn’t the whole story though.

Virtualization also delivers centralised management of resource allocation policies and service definitions. If you can intelligently analyse network capacity you have a solution to the challenges of generating revenue from the network. A virtualised network makes service design and delivery much quicker as it’s possible to see, in simple terms, what the network can support, where a service could be sold today, and how much it might cost to roll-out the service elsewhere.

This is in stark contrast to the way operators have run their networks over the last 20 years, relying on inventory-based fulfilment Operations Support Systems (OSS). These systems represent a service as a fixed allocation of capacity from a specific set of network technologies. Once a service was delivered, its allocated resources rarely, if ever, changed. It was easy to understand what the network was doing at any given time, but actual capacity utilisation was extremely inefficient.

The benefits of network virtualization to the service provider are passed on to their customers in the form of more flexible and resilient services that can be delivered and re-graded far quicker than traditional telco service provisioning processes allow.

Q: What are the key components needed to enable network virtualization?
A: The first thing you need is an open mind and a fresh perspective on the management of your network. I believe it is the design, build and operational processes that need to change to enable network virtualization, not necessarily the network equipment or protocols being used.

Hardware vendors and university research groups have been introducing new equipment interfaces like OpenFlow to support Software Defined Networks (SDN) and virtualization. These are useful technologies that give the service provider greater flexibility to configure the network. But new types of IP router operating systems alone do not deliver network virtualization. Indeed, to assume so is to totally miss the point: Network virtualization can only benefit service providers if it encompasses existing IP and optical equipment as well. Network capacity and physical resources need to be managed across all layers and all equipment to maximise the revenue potential of the current network assets.

It is the role of OSS to deliver network virtualization as these are the systems that are responsible for modelling the network configuration and designing services. Between the network model and the service policies is the virtualization layer.

Aria Networks addresses this need with a capacity management solution that uniquely models detailed network resource dependencies but then describes their capacity in a vendor-neutral ‘language’. Bandwidth, ports, CPUs, storage, even rack space and power supplies can all be modelled as a generic capacity. Service policies and design rules are defined to consume this virtualized capacity by placing demand on capacity rather than using vendor-specific configuration rules.

A service can therefore be defined in terms of the capacity it requires from the network and data centre, the quality of service expected, and the level of end-to-end reliability required. It is then up to the capacity management process to determine how this can be delivered today in the most cost-effective way and maintained in the future as service demand grows.
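
To make this idea concrete, here is a minimal sketch of what such a vendor-neutral capacity model might look like. The Python below uses invented names (Capacity, Service, place) purely for illustration; it is not Aria’s actual data model or description language.

# Illustrative sketch only: a vendor-neutral capacity model in which any
# resource (bandwidth, ports, CPUs, storage, rack space, power) is a named
# pool of generic capacity, and a service consumes it by placing demand.
from dataclasses import dataclass, field

@dataclass
class Capacity:
    name: str          # e.g. "10GE port", "CPU core", "kW of power"
    total: float       # installed capacity, in the resource's own units
    used: float = 0.0

    def free(self) -> float:
        return self.total - self.used

@dataclass
class Service:
    name: str
    demands: dict = field(default_factory=dict)  # capacity name -> amount required

    def place(self, pools: dict) -> bool:
        """Consume capacity if every demand can be met; otherwise leave pools untouched."""
        if any(pools[res].free() < amount for res, amount in self.demands.items()):
            return False
        for res, amount in self.demands.items():
            pools[res].used += amount
        return True

# Usage: the same abstraction covers transport bandwidth and data-centre resources.
pools = {
    "core-bandwidth-gbps": Capacity("core-bandwidth-gbps", total=100.0),
    "vm-cpu-cores":        Capacity("vm-cpu-cores", total=64.0),
    "storage-tb":          Capacity("storage-tb", total=20.0),
}
video_service = Service("hd-video", demands={"core-bandwidth-gbps": 8.0,
                                             "vm-cpu-cores": 4.0,
                                             "storage-tb": 2.0})
print(video_service.place(pools))   # True if the virtualized capacity can support it

The point is that bandwidth, CPU cores and storage are all handled by the same demand-against-capacity logic, so a service can be checked for feasibility without reference to any vendor-specific configuration.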

Q: How do Software Defined Networks (SDN) and OpenFlow support network virtualization?
A: SDN and OpenFlow are the hardware vendors’ contributions to network virtualization.

SDN is an important component of virtualized networks, but by no means the whole story. SDN takes the traffic routing logic out of individual network devices and centralises this control. This makes the network ‘programmable’. SDN requires network devices, routers and switches, to open their packet forwarding capability to programming by an external system. The devices still forward packets, which is what they are good at, but they are told how to forward them by a centralised brain rather than by their own internal rules defined by protocols like OSPF.
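
As a rough picture of that split, the sketch below shows a hypothetical central controller computing forwarding rules and pushing them into switches that hold nothing but a flow table. The class and method names are invented for illustration and do not correspond to OpenFlow or any real controller API.

# Sketch of the SDN split: a central controller owns the routing logic and
# programs simple match/action rules into devices, which only forward packets.
class Switch:
    def __init__(self, name):
        self.name = name
        self.flow_table = {}                     # (src, dst) -> out_port

    def install_rule(self, match, out_port):
        self.flow_table[match] = out_port        # rule pushed from the controller

    def forward(self, src, dst):
        return self.flow_table.get((src, dst))   # no local routing decision at all

class Controller:
    """Centralised 'brain': computes paths and programs every switch on the path."""
    def __init__(self, switches):
        self.switches = switches

    def program_path(self, src, dst, hops):
        # hops: list of (switch_name, out_port) decided by the controller's own logic
        for switch_name, out_port in hops:
            self.switches[switch_name].install_rule((src, dst), out_port)

switches = {"edge1": Switch("edge1"), "core1": Switch("core1")}
ctrl = Controller(switches)
ctrl.program_path("10.0.0.1", "10.0.9.9", [("edge1", 3), ("core1", 7)])
print(switches["core1"].forward("10.0.0.1", "10.0.9.9"))   # -> 7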

OpenFlow is one emerging standard for this open, programmable interface. OpenFlow is being used in enterprise and academic networks but has yet to be rigorously tested in carrier-grade networks. It will first find its way into the data centres of communication service providers.

A concern among many service providers is that the cost of equipment is not reducing in proportion to the increase in demand for capacity. Part of the OpenFlow philosophy is that you won’t need proprietary IP routers and switches if the bulk of the logic is centralised. This could result in relatively cheap commodity ASIC-based or even x86-based IP equipment. Furthermore, if the implementation of the service and content layers of the network can be migrated on to commodity hardware in data centres, then further savings can be made compared with buying and running proprietary core and head-end equipment.

Q: Does this imply that adoption of SDN principles by communication service providers is some years away?
A: SDN principles pre-date OpenFlow by a long way. Arguably, MPLS traffic engineering (TE) is a protocol to implement Software Defined Networks. MPLS-TE enables far more control of traffic than plain IP IGP allows, and the underlying IP capacity can be offered as virtualised, over-booked and quality-managed services. A good IP/MPLS capacity management solution can give you centralised control of LSPs, visibility of IP traffic flows and will be aware of the protection and quality of service properties of the underlying physical network.
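
The centralised LSP placement described here is essentially a constrained shortest-path computation: prune links that lack the headroom for the requested bandwidth (after any over-booking factor), then find the cheapest path on what remains. The sketch below is a generic CSPF-style illustration in Python, not Aria’s algorithm.

# Illustrative bandwidth-constrained shortest path, as used when a central tool
# places an MPLS-TE LSP: ignore links whose remaining capacity (scaled by an
# over-booking factor) cannot carry the request, then run Dijkstra on the rest.
import heapq

def cspf(links, src, dst, demand, overbook=1.0):
    """links: list of (a, b, cost, free_bw). Returns the cheapest feasible path or None."""
    graph = {}
    for a, b, cost, free_bw in links:
        if free_bw * overbook >= demand:          # capacity constraint first
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))

    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
    return None

links = [("PE1", "P1", 10, 6.0), ("P1", "PE2", 10, 6.0), ("PE1", "PE2", 15, 2.0)]
print(cspf(links, "PE1", "PE2", demand=4.0))      # direct link is too full -> via P1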

MPLS traffic engineering can add complexity to the planning process and in some cases this complexity has resulted in the benefits of MPLS not being fully realised. Aria has worked with service providers who need to manage large numbers of resilient P2P and P2MP LSPs, often with requirements to create, modify and re-grade them in minutes. Before Aria this would take many hours or days using traditional planning tools.
If you have the OSS software and capacity management process to run a virtualized network using existing protocols then SDN becomes possible without relying on OpenFlow-like interfaces.

Q: What is the impact of SDN on current planning processes and OSS systems?
A: SDN and OpenFlow introduce new options for configuration and optimisation of the network. This flexibility has to be harnessed by the service providers’ capacity management processes. If it is not, either the potential benefits will be wasted or the result will be an unmanageable mess that makes it even harder to understand how services are being delivered to customers.

Aria’s approach to network planning is to know what, where and when to make a change (add more capacity, deploy a new technology, reconfigure a link, etc) rather than just helping you make a pre-determined change. So having the option to use OpenFlow to change the flow of packets across the network gives the service provider one more option for optimising the efficiency of their network assets or for creating new types of service.

Aria is able to model multiple virtual networks while still maintaining the mapping to the physical equipment and links, which is essential for building and operating SDN networks with end-to-end service assurance.

Q: What specific benefits has Aria delivered to service providers adopting capacity management processes?
A: Aria’s capacity management solutions enable service providers to radically improve network efficiency, reliability and profitability. Uniquely, Aria can be applied to any technology, any vendor and any service, thanks to our capacity and service description language. This means we are increasingly being asked to deliver solutions that design new IP/MPLS, content-delivery, data centre and Cloud services.

A couple of recent examples:

In IP/MPLS networks, Aria customers have been implementing automated LSP design along with ‘IGP tweaking’ to ensure IP and MPLS traffic on a multi-vendor network offers high levels of fault tolerance without unnecessary overbuilding of capacity. Frequent, automated analysis of traffic means IP link utilisation rates can be safely increased up to 80% without introducing any single points of failure.
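
Behind a target like 80% utilisation sits a failure analysis of roughly the following shape: for every single link failure, re-route the affected demands and confirm that no surviving link exceeds the target. The toy Python below illustrates the idea with an invented three-node topology and pre-computed backup paths; it is not Aria’s planning engine.

# Sketch of the check behind a "safe" utilisation target: simulate each single
# link failure, move demands whose primary path is hit onto their backup path,
# and report the worst utilisation seen on any surviving link.
def worst_case_utilisation(link_capacity, primary, backup, demands):
    """link_capacity: {link: Gbps}; primary/backup: {demand: [links]}; demands: {demand: Gbps}."""
    worst = 0.0
    for failed in link_capacity:
        load = {link: 0.0 for link in link_capacity}
        for d, gbps in demands.items():
            path = primary[d] if failed not in primary[d] else backup[d]
            for link in path:
                if link != failed:
                    load[link] += gbps
        for link, gbps in load.items():
            if link != failed:
                worst = max(worst, gbps / link_capacity[link])
    return worst

capacity = {"A-B": 10.0, "B-C": 10.0, "A-C": 10.0}
primary  = {"d1": ["A-B", "B-C"], "d2": ["A-C"]}
backup   = {"d1": ["A-C"],        "d2": ["A-B", "B-C"]}
demands  = {"d1": 4.0, "d2": 4.0}

print(worst_case_utilisation(capacity, primary, backup, demands))  # 0.8 -> at the limit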

Content delivery networks (CDN) must be configured to balance the cost of storing content close to customer access points against the impact on network traffic caused by moving the content to these cache points. This is not a one-off task as availability of content and demand for it varies greatly over time. Aria has helped operators analyse how best to operate CDNs to minimise the costs associated with video traffic in the network while ensuring they are able to offer a good quality of service to their customers and content providers.
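
The underlying trade-off can be pictured with a deliberately simplified per-title calculation: cache a piece of content at an edge site only when the storage cost is lower than the cost of repeatedly hauling it from the origin. All the figures and the cost model below are invented for illustration.

# Toy illustration of the CDN trade-off: local storage cost versus the transport
# cost of serving every request from the origin. Real models also account for
# cache hit rates, link congestion and quality of service, which are omitted here.
def should_cache(requests_per_month, title_size_gb,
                 storage_cost_per_gb_month, transit_cost_per_gb):
    cache_cost   = title_size_gb * storage_cost_per_gb_month
    transit_cost = requests_per_month * title_size_gb * transit_cost_per_gb
    return transit_cost > cache_cost

edge_sites = {
    "london":    {"requests_per_month": 500, "title_size_gb": 4.0},
    "inverness": {"requests_per_month": 3,   "title_size_gb": 4.0},
}

for site, stats in edge_sites.items():
    decision = should_cache(storage_cost_per_gb_month=0.10, transit_cost_per_gb=0.02, **stats)
    print(site, "cache locally" if decision else "serve from origin")

Because demand for each title changes over time, the same comparison has to be re-run regularly, which is why this is an ongoing capacity management task rather than a one-off design exercise.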

Aria Networks’ solutions are being used by some of the world’s most innovative tier-1 service providers for strategic design of their network virtualization, Cloud, and CDN architectures, before turning their findings into practical capacity management policies that are used to automate daily build planning and operational activities.

For more information visit:
www.aria-networks.com

 
