A site dedicated to Cloud and Datacenter Management

Book Review: Microsoft System Center Network Virtualization and Cloud Computing

Recently, I finished reading the Microsoft System Center Network Virtualization and Cloud Computing eBook.

There are only two chapters, and both were very useful.

I’ve decided to share my highlights from reading this specific publication, in case the points I found of note/interest are of some benefit to someone else. So, here are my highlights (by chapter). Note that not every chapter will have highlights (depending on the content and the main focus of my work).


Chapter 01: Hyper-V Network Virtualization Internals

  • Network virtualization should allow a virtual network, including all of its IP addresses, routes, network appliances, and so on, to appear to be running directly on the physical network.
  • Without network virtualization, a key feature of server virtualization is much less flexible (i.e., you can move VMs only to hosts on the same physical subnet) and less automated (i.e., you might need to reconfigure the network before a VM can be migrated).
  • For HNV, a centralized control plane is used to distribute policies to the endpoints needed to properly encapsulate and de-encapsulate the packets. This allows for a centralized policy with a global view of the virtual network, while the actual encapsulation and de-encapsulation based on this policy happens at each end host.
  • There can be many VM networks on a single physical network.
  • On a single host there can be a mixture of IPv4 and IPv6 customer addresses if they are in different VM networks.
  • Within a single VM network, IP and MAC addresses cannot overlap, just like in a physical network. Across multiple VM networks, on the other hand, each VM network can contain the same IP and MAC addresses, even when those VM networks are on the same physical network.
  • Only VMs can be joined to a virtual network. Although Windows allows the host operating system to connect through the Hyper-V virtual switch via a host virtual network adapter, that adapter cannot be attached to a VM network.
  • Currently a single instance of VMM manages a particular VM network. This limits the size of the VM network to the number of VMs supported by a single instance of VMM. In the R2 release, VMM allows a maximum of 8,000 VMs and 4,000 VM networks.
  • HNV uses a built-in router that is part of every host to form a distributed router for the virtual network. This means that every host, in particular the Hyper-V virtual switch, acts as the default gateway for all traffic that is going between virtual subnets that are part of the same VM network.
  • The role of the HNV Gateway is to provide a bridge between a particular VM network and either the physical network or other VM networks.
  • The IP address in the VM network must be routable on the physical network.
  • A second major feature of the gateway is that a single gateway VM can be the gateway for multiple VM networks. This is enabled by the Windows networking stack becoming multi-tenant aware with the ability to compartmentalize multiple routing domains from each other.
  • The gateway must be in its own virtual subnet.
  • HNV uses a particular format of GRE, called Network Virtualization using Generic Routing Encapsulation (NVGRE), for the encapsulation protocol.
  • In Windows Server 2012 R2, the HNV NDIS lightweight filter (LWF) no longer has to be bound to network adapters.
  • After you attach a network adapter to the virtual switch, you can enable HNV simply by assigning a virtual subnet ID to a particular virtual network adapter.
  • When two communicating VMs are on the same host, there is no NVGRE encapsulation.
  • NOTE: You can set a VLAN on a provider address (PA) to associate the PA with that VLAN. You might want to do this if, for instance, you want all HNV traffic to be isolated on the same VLAN.
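
The per-adapter enablement described above can be sketched in PowerShell. The VM name, switch name, addresses, and virtual subnet ID below are illustrative assumptions, not values from the book; the cmdlets themselves (`Connect-VMNetworkAdapter`, `Set-VMNetworkAdapter`, `New-NetVirtualizationLookupRecord`) ship with Windows Server 2012 R2:

```powershell
# Assumed names: "Tenant-VM1" and "HNVSwitch" are placeholders for your environment.

# Attach the VM's network adapter to the Hyper-V virtual switch...
Connect-VMNetworkAdapter -VMName "Tenant-VM1" -SwitchName "HNVSwitch"

# ...then enable HNV by assigning a virtual subnet ID to that virtual network adapter.
Set-VMNetworkAdapter -VMName "Tenant-VM1" -VirtualSubnetId 5001

# HNV policy: map the VM's customer address (CA) to the host's provider address (PA),
# so the host knows how to encapsulate traffic for this VM with NVGRE.
New-NetVirtualizationLookupRecord -CustomerAddress "10.0.0.5" `
    -ProviderAddress "192.168.1.10" -VirtualSubnetID 5001 `
    -MACAddress "00155D010203" -Rule "TranslationMethodEncap"
```

In a VMM-managed deployment you would not run these by hand; VMM acts as the centralized control plane and distributes the equivalent policy to each host.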


Chapter 02: Implementing Cloud Computing with Network Virtualization

  • With the new multi-tenant TCP/IP stack in Windows Server 2012 R2, a single VM can be used to route packets in a compartmentalized manner exclusively between tenants’ interfaces.
  • Routing compartments virtualize the TCP/IP stack to enable multiple routing entities with overlapping IP addresses to co-exist on the same gateway VM. Packets and network entities, including interfaces, IP addresses, route tables, and ARP entries, of each tenant are isolated by routing compartments. Interfaces and packets in one compartment cannot be seen in another compartment.
  • You can use Windows PowerShell to configure multi-tenancy
  • To ensure that all Network Virtualization using Generic Routing Encapsulation (NVGRE) packets in the Contoso routing domain are transmitted to the Contoso interface IfCi1, the following configuration step is required on the host:
    • Add-VmNetworkAdapterRoutingDomainMapping -VMName GW-VM -VMNetworkAdapterName NIC1
      -RoutingDomainId “{12345678-1000-2000-3000-123456780001}” -RoutingDomainName “Contoso”
      -IsolationId 6001 -IsolationName “IfCi1”
  • Compartments also enable routing between virtual and physical networks using the forwarding gateway feature in Windows Server 2012 R2.
  • It is therefore recommended that you rate-limit traffic on each S2S tunnel. The following cmdlet is an example where bandwidth is limited to 1,024 kilobits per second (kbps) in each direction:
    • Set-VpnS2SInterface -RoutingDomain Contoso -Name If2 -TxBandwidthKbps 1024
      -RxBandwidthKbps 1024
    • By default, all connections are created with a 5,120 kbps limit in each direction so that no single tunnel hogs the CPU.
  • Adding and removing routes based on tunnel states like this requires manual intervention and results in sub-optimal routing that can lead to connectivity loss. One way to solve this problem is via route metrics.
  • BGP uses the concept of autonomous system (AS) and AS number (ASN) for making routing decisions. This example assumes that each site is identified by a unique ASN in the private range 64512 to 65535.
  • Isolation of routing information is maintained by storing routes using Windows Route Table Manager v2 (RTMv2). This enables the configuration of multiple virtual BGP routers, one per routing domain.
  • For a walkthrough of how you can build a test lab for implementing HNV, see the blog post “Software Defined Networking: Hybrid Clouds using Hyper-V Network Virtualization (Part 2)”.
  • For an example of how a cloud hosting provider could offer Disaster Recovery as a Service (DRaaS) using HNV, see the blog post “Software Defined Networking: Hybrid Clouds using Hyper-V Network Virtualization (Part 3)”.
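
The "one virtual BGP router per routing domain" idea above can be sketched with the BGP cmdlets from the RemoteAccess module in Windows Server 2012 R2. The routing domain name, BGP identifier, peer addresses, and ASNs below are illustrative assumptions (the ASNs are drawn from the private range 64512 to 65535 mentioned earlier):

```powershell
# Create a virtual BGP router scoped to the Contoso routing domain
# (routes for each domain are kept isolated via RTMv2).
Add-BgpRouter -RoutingDomain "Contoso" -BgpIdentifier 10.0.0.1 -LocalASN 64512

# Peer with the BGP router at the tenant's remote site over the S2S tunnel,
# so routes are learned and withdrawn automatically as tunnel state changes.
Add-BgpPeer -RoutingDomain "Contoso" -Name "ContosoSite1" `
    -LocalIPAddress 10.0.0.1 -PeerIPAddress 10.1.0.1 -PeerASN 64513
```

Because each virtual BGP router lives in its own routing domain, a second tenant (say, Fabrikam) could reuse the same IP addresses and ASN without conflict.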
