Internet Engineering Task Force                                T. Narten
Internet-Draft                                                       IBM
Intended status: Informational                          October 14, 2011
Expires: April 16, 2012


                       Problem Statement for ARMD
                 draft-narten-armd-problem-statement-01

Abstract

   This document examines issues related to the massive scaling of data
   centers.  Our initial scope is relatively narrow.  Specifically, we
   focus on address resolution (ARP and ND) within the data center.

Status of this Memo

   This Internet-Draft is submitted in full conformance with the
   provisions of BCP 78 and BCP 79.

   Internet-Drafts are working documents of the Internet Engineering
   Task Force (IETF).  Note that other groups may also distribute
   working documents as Internet-Drafts.  The list of current
   Internet-Drafts is at http://datatracker.ietf.org/drafts/current/.

   Internet-Drafts are draft documents valid for a maximum of six
   months and may be updated, replaced, or obsoleted by other documents
   at any time.  It is inappropriate to use Internet-Drafts as
   reference material or to cite them other than as "work in progress."

   This Internet-Draft will expire on April 16, 2012.

Copyright Notice

   Copyright (c) 2011 IETF Trust and the persons identified as the
   document authors.  All rights reserved.

   This document is subject to BCP 78 and the IETF Trust's Legal
   Provisions Relating to IETF Documents
   (http://trustee.ietf.org/license-info) in effect on the date of
   publication of this document.  Please review these documents
   carefully, as they describe your rights and restrictions with
   respect to this document.  Code Components extracted from this
   document must include Simplified BSD License text as described in
   Section 4.e of the Trust Legal Provisions and are provided without
   warranty as described in the Simplified BSD License.

Table of Contents

   1.  Introduction
   2.  Terminology
   3.  Background
   4.  Representative Data Center Designs
       4.1.  Scenario 1: L3 Terminates at the Access Link
       4.2.  Scenario 2: L3 Terminates at the Aggregation Switch
   5.  Address Resolution
   6.  Problem Itemization
       6.1.  ARP Processing on Routers
       6.2.  MAC Address Table Size Limitations in Switches
   7.  Summary
   8.  Acknowledgments
   9.  IANA Considerations
   10.  Security Considerations
   Author's Address

1.  Introduction

   This document examines issues related to the massive scaling of data
   centers.  Our initial scope is relatively narrow.  Specifically, we
   focus on address resolution (ARP in IPv4 and Neighbor Discovery in
   IPv6) within a single L2 broadcast domain, in which all nodes are
   within the same physical data center.  From an IP perspective, the
   entire L2 network comprises one IP subnet or IPv6 "link".  Data
   centers in which a single L2 network spans multiple geographic
   locations are out of scope.  This document is intended to help the
   ARMD WG identify potential future work areas.
   The scope of this document intentionally starts out narrow,
   mirroring the ARMD WG charter.  Expanding the scope requires careful
   thought, as the topic of scaling data centers generally has an
   almost unbounded potential scope.  It is important that the WG
   restrict itself to considering problems that are widespread and that
   it has the ability to solve.

2.  Terminology

   Application:  A service that runs on either a physical or virtual
      machine (e.g., a web server or database server).

   Host (or server):  A physical machine on which a hypervisor and one
      or more VMs run.  Hosts typically have one IP address assigned to
      them, for communication with the machine (e.g., the hypervisor)
      itself.  Each individual VM will have one (or more) addresses
      assigned to it as well.

   Hypervisor:  Software running on a host that allows multiple VMs to
      run on the same host.

   L2 domain:  An IEEE 802.1Q bridged domain, supporting up to 4094
      VLANs.

   Virtual machine (VM):  A software implementation of a physical
      machine that runs programs as if they were executing on a bare
      machine.  Applications do not know they are running on a VM as
      opposed to running on a "bare" host or server.

3.  Background

   Large, flat L2 networks have long been known to have scaling
   problems.  As the size of an L2 network increases, so does the level
   of broadcast traffic from protocols like ARP.  Large amounts of
   broadcast traffic pose a particular burden because every device
   (switch, host, and router) must process and possibly act on such
   traffic.  In addition, large L2 networks can be subject to
   "broadcast storms".

   The conventional wisdom for addressing such problems has been to say
   "don't do that".  That is, split large L2 networks into smaller L2
   networks, each operating as its own L3/IP subnet.  Numerous data
   center networks have been designed according to this principle,
   e.g., with each rack forming its own L3/IP subnet.  By doing so, the
   broadcast domain (and address resolution) is confined to one
   Top-of-Rack switch, which works well from a scaling perspective.
   Unfortunately, this conflicts in some ways with the current trend
   towards dynamic workload shifting and increased virtualization in
   data centers, as discussed below.

   Server virtualization is fast becoming the norm in data centers.
   With server virtualization, each physical server supports multiple
   virtual servers, each running its own operating system, middleware,
   and applications.  Virtualization is a key enabler of workload
   agility, i.e., allowing any server to host any application and
   providing the flexibility to add, shrink, or move services within
   the physical infrastructure.  Server virtualization provides
   numerous benefits, including higher utilization, increased data
   security, reduced user downtime, and even significant power
   conservation, along with the promise of a more flexible and dynamic
   computing environment.

   The greatest flexibility in VM management occurs when it is possible
   to place a VM anywhere in the data center, regardless of what IP
   addresses the VM uses and how the physical network is laid out.  In
   practice, movement of VMs within a data center is easiest when VM
   placement and movement do not conflict with the IP subnet boundaries
   of the data center's network, so that the VM's IP address need not
   be changed to reflect its actual point of attachment on the network
   from an L3/IP perspective.
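   As a minimal illustration of this constraint (the addresses and
   subnets below are hypothetical, and Python's standard ipaddress
   module is used purely as convenient notation), a VM can retain its
   address across a move only if the subnet configured at the
   destination attachment point contains that address:

      import ipaddress

      vm_addr = ipaddress.ip_address("10.1.1.23")          # hypothetical VM
      local_subnet = ipaddress.ip_network("10.1.1.0/24")   # current rack
      remote_subnet = ipaddress.ip_network("10.1.2.0/24")  # another rack

      # Moving within the same subnet: no renumbering needed.
      print(vm_addr in local_subnet)    # True
      # Moving across a subnet boundary: the VM must be renumbered, or
      # extra steps are needed at the IP routing level.
      print(vm_addr in remote_subnet)   # False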
   In contrast, if a VM moves to a new IP subnet, its address must
   change, and clients will need to be made aware of that change.  From
   a VM management perspective, operations are simplified if all
   servers are on a single large L2 network.

   With virtualization, a single physical server can host 10 (or more)
   VMs, each having its own IP (and MAC) addresses.  Consequently, the
   number of addresses per machine (and hence per subnet) is
   increasing, even when the number of physical machines stays
   constant.  Today, it is not uncommon to support 10 VMs per physical
   server.  In a few years, the number will likely reach 100 VMs per
   physical server.

   In the past, services were static in the sense that they tended to
   stay in one physical place.  A service installed on a machine would
   stay on that machine because the cost of moving a service elsewhere
   was generally high.  Moreover, services would tend to be placed in
   such a way as to facilitate communication locality.  That is,
   servers would be physically located near the services they accessed
   most heavily.  The network traffic patterns in such environments
   could thus be optimized, in some cases keeping significant traffic
   local to one network segment.  In these more static and carefully
   managed environments, it was possible to build networks that
   approached scaling limitations without actually crossing the
   threshold.

   Today, with the proliferation of VMs, traffic patterns are becoming
   more diverse and less predictable.  In particular, there can easily
   be less locality of network traffic as services are moved to reduce
   overall power usage (by consolidating VMs and powering off idle
   machines) or to place a virtual service on a physical server with
   more capacity or a lower load.  In such changing environments, it is
   becoming more difficult to engineer networks, because traffic
   patterns continually shift as VMs move around.

   In summary, both the size and density of L2 networks are increasing,
   with the increased deployment of VMs putting pressure on creating
   ever larger L2 networks.  Today, there are already data centers with
   120,000 physical machines.  That number will only increase going
   forward.  In addition, traffic patterns within a data center are
   changing.

4.  Representative Data Center Designs

   This section outlines some general data center designs and how they
   impact address resolution.  These designs may only approximate what
   happens in real data centers, but it is hoped that they can serve as
   a useful vehicle for describing the pain points being experienced in
   data centers today.

   Many data centers build their L2 networks using a two-tier approach
   consisting of access and aggregation switches.  Servers connect to
   access switches (e.g., top-of-rack switches), and access switches in
   turn are interconnected via aggregation switches.  In the following,
   we describe two common layouts.

4.1.  Scenario 1: L3 Terminates at the Access Link

   In Scenario 1, the L3 network extends all the way to the access
   switches, with the L2 broadcast domain terminated at the access
   switch.  All servers attached to an access switch are part of the
   same L2 broadcast domain and the same IP subnet.  Each access switch
   terminates its own L2 broadcast domain, and machines connected to
   different access switches are numbered out of different IP subnets.
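   The addressing pattern of Scenario 1 can be sketched as follows.
   The prefix lengths and rack count are purely illustrative
   assumptions, not recommendations, and Python's ipaddress module is
   again used only as notation; the point is simply that each access
   switch (rack) is numbered out of its own subnet carved from a larger
   data center block, so the broadcast domain never grows beyond one
   rack:

      import ipaddress

      # Hypothetical data center block, carved into one /24 per rack
      # (i.e., per access switch), so each rack is its own L2
      # broadcast domain and IP subnet.
      dc_block = ipaddress.ip_network("10.128.0.0/16")
      racks = 4   # illustrative; real deployments have far more racks

      for rack_id, subnet in zip(range(racks),
                                 dc_block.subnets(new_prefix=24)):
          gateway = next(subnet.hosts())  # first address, for the ToR/router
          print(f"rack {rack_id}: {subnet}, gateway {gateway}, "
                f"{subnet.num_addresses - 2} usable host addresses")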
   This approach works well from an address resolution perspective
   because the overall number of machines (physical and virtual) in a
   single L2 domain is relatively small, e.g., in the low hundreds.
   The main disadvantage of this scenario is that VMs cannot easily be
   moved from a server attached to one access switch to a server on a
   different access switch, as doing so requires either changing the
   VM's IP address or taking additional steps at the IP routing level
   to ensure that traffic continues to reach the VM at its new
   location, even though its IP address no longer matches the subnet
   configuration of the physical network.

4.2.  Scenario 2: L3 Terminates at the Aggregation Switch

   In Scenario 2, the L3 network extends only to the aggregation
   switches (or perhaps to routers that connect to the aggregation
   switches).  The aggregation switches (or the routers that connect to
   multiple aggregation switches) could terminate multiple distinct IP
   subnets (e.g., one per VLAN) or one large IP subnet.  In order to
   allow hosts belonging to different IP subnets to be placed under any
   access switch, access switches must carry multiple VLANs, and
   aggregation switches must extend some VLANs (or subnets) across many
   physical ports.  This configuration removes the confinement of each
   VLAN's broadcast domain and, in effect, makes all of the access
   switches part of the same L2 broadcast domain (and IP subnet).
   Thus, this configuration allows VMs to be moved to servers connected
   to other access switches, but it increases the size of the L2
   broadcast domain, which can lead to the difficulties outlined below.

5.  Address Resolution

   In IPv4, ARP performs address resolution.  To determine the
   link-layer address corresponding to a given IP address, a node
   broadcasts an ARP Request.  The request is flooded to all portions
   of the L2 network, and the node with the requested IP address
   replies with an ARP Response.  ARP is an old protocol and, by
   current standards, is sparsely documented.  For example, there are
   no clear requirements for retransmitting ARP Requests in the absence
   of replies.  Consequently, implementations vary in the details of
   what they actually implement.

   From a scaling perspective, there are two main problems with ARP.
   First, it uses broadcast, so any network with a large number of
   attached hosts will see a correspondingly large amount of broadcast
   ARP traffic.  The second problem is that it is not feasible to
   change host implementations of ARP: current implementations are too
   widely entrenched, and any changes to host implementations of ARP
   would take years to become sufficiently deployed to matter.  That
   said, it may be possible to change ARP implementations in
   hypervisors, L2/L3 boundary routers, and/or ToR access switches, to
   leverage techniques such as Proxy ARP and/or OpenFlow-based
   directory-assistance approaches.

6.  Problem Itemization

   This section articulates some specific problems, or "pain points",
   related to large data centers.  It is a future activity to determine
   which of these areas can or will be addressed by ARMD or some other
   IETF WG.

6.1.  ARP Processing on Routers

   One pain point with large L2 broadcast domains is that the routers
   connected to the L2 domain need to process all of the ARP traffic on
   that domain.  Even though the vast majority of ARP traffic may well
   not be for that router, the router still has to process enough of
   each ARP Request to determine that it can safely be ignored.
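   A rough back-of-envelope estimate illustrates the scale of the
   problem.  The address counts and per-address ARP rates below are
   illustrative assumptions, not measurements from any particular
   deployment:

      # Rough estimate of the broadcast ARP load seen by a router
      # attached to one large L2 domain.  All numbers are assumptions.
      addresses_per_server = 20       # one hypervisor plus ~19 VM addresses
      servers = 5000                  # servers sharing the broadcast domain
      arp_per_address_per_sec = 0.05  # assumed steady-state request rate

      total_addresses = addresses_per_server * servers    # 100,000
      arp_pps_at_router = total_addresses * arp_per_address_per_sec

      print(f"{total_addresses} addresses -> roughly "
            f"{arp_pps_at_router:.0f} broadcast ARP requests/second "
            f"arriving at every node, including the router")

   Even modest per-address rates can thus aggregate into thousands of
   ARP packets per second at the router, which is significant given the
   processing limits described next.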
   A common router architecture handles ARP in a "slow path" software
   processor, rather than directly in the hardware ASICs used to
   forward packets, significantly limiting the rate at which ARP
   traffic can be processed.  Current implementations can typically
   support ARP processing rates only in the low thousands of packets
   per second.

   Although address resolution traffic remains local to one L2 network,
   some data center designs terminate L2 subnets at individual
   aggregation routers (i.e., Scenario 2).  Such routers can be
   connected to a large number of interfaces (e.g., 100).  While the
   address resolution traffic on any one interface may be manageable,
   the aggregate address resolution traffic across all interfaces can
   become problematic.

   Another variant of Scenario 2 has individual routers servicing a
   relatively small number of interfaces, with the individual
   interfaces themselves serving very large subnets.  Once again, it is
   the aggregate quantity of ARP traffic seen across all of the
   router's interfaces that can be problematic.  This pain point is
   essentially the same as the one discussed above, the only difference
   being whether a given number of hosts is spread across a few large
   subnets or many smaller ones.

6.2.  MAC Address Table Size Limitations in Switches

   L2 switches maintain MAC address forwarding tables for all sources
   and destinations traversing the switch.  These tables are populated
   through learning and are used to forward L2 frames to their correct
   destinations.  The larger the L2 domain, the larger the tables have
   to be.  While in theory a switch only needs to keep track of
   addresses it is actively using, switches flood broadcast frames
   (e.g., from ARP), multicast frames (e.g., from Neighbor Discovery),
   and unicast frames to unknown destinations.  Switches add entries
   for the source addresses of such flooded frames to their forwarding
   tables.  Consequently, MAC address table size can become a problem
   as the size of the L2 domain increases.  The table size problem is
   made worse with VMs, where a single physical machine now hosts ten
   (or more) VMs, since each VM has its own MAC address that is visible
   to switches.

   In Scenario 1, the size of MAC address tables in switches is not
   generally a problem.  In Scenario 2, however, MAC address table size
   limitations can be a real issue.  [xxx: do we have numbers?  For
   what size L2 broadcast domains do we start seeing problems?]

7.  Summary

   This document has outlined a number of problems, or "pain points",
   related to address resolution in large data centers.

8.  Acknowledgments

9.  IANA Considerations

10.  Security Considerations

Author's Address

   Thomas Narten
   IBM

   Email: narten@us.ibm.com