GENI Architecture

The GENI network architecture is designed to allow:

  • Users to set up Layer 2 network topologies best suited for their experiments,
  • Experimentation with non-IP protocol stacks and experimenter specified packet forwarding algorithms,
  • Multiple concurrent experiments, each of which may use different protocol stacks and packet forwarding algorithms.


The diagram below shows the GENI network architecture.  It consists of:

  1. The control plane.  The control plane is used to discover, reserve, access, program, and manage GENI compute and communication resources.  It is represented by the blue links in the diagram.  The control plane runs over the Internet.
  2. The data plane.  The data plane is set up on demand for each individual experiment.  It is set up according to the experimenter’s specification of the desired network topology, compute resources that connect to the network, bandwidth of individual links, and programmable switches and controllers needed to implement custom packet forwarding algorithms. The data plane is represented by the orange links in the diagram.  It runs over the GENI backbone network including Internet2, regional R&E networks and GENI rack backplanes.

[Figure: GENI network architecture diagram]

Slicing the network.  GENI network links are sliced by Ethernet VLANs, i.e., multiple experiments sharing the same physical link are given different VLANs on that link.  Slicing by VLAN guarantees traffic isolation among experiments (one slice in GENI cannot see packets in another slice) and also provides best-effort performance isolation.
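As an illustration of how VLAN tags distinguish slices on a shared link, the sketch below parses the 802.1Q tag of a raw Ethernet frame to recover the VLAN ID.  This is illustrative code, not part of GENI itself; the frame layout follows the IEEE 802.1Q tag format.

```python
import struct

TPID_8021Q = 0x8100  # EtherType value that marks an 802.1Q-tagged frame

def vlan_id(frame):
    """Return the 802.1Q VLAN ID of an Ethernet frame, or None if untagged."""
    if len(frame) < 18:
        return None
    # The EtherType/TPID field follows the 6-byte destination and source MACs.
    (ethertype,) = struct.unpack_from("!H", frame, 12)
    if ethertype != TPID_8021Q:
        return None
    # Tag Control Information: 3 bits priority, 1 bit DEI, 12 bits VLAN ID.
    (tci,) = struct.unpack_from("!H", frame, 14)
    return tci & 0x0FFF

# Two experiments sharing the same physical link, separated only by VLAN ID:
frame_a = b"\xff" * 12 + struct.pack("!HH", TPID_8021Q, 100) + b"\x08\x00" + b"\x00" * 46
frame_b = b"\xff" * 12 + struct.pack("!HH", TPID_8021Q, 200) + b"\x08\x00" + b"\x00" * 46
```

A switch on the shared link can use exactly this 12-bit field to keep the two slices' traffic apart.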

Deep programmability.  GENI allows programmers to control how packets are forwarded within their experiment.

  • GENI has compute and storage resources at some of the R&E networks that provide GENI data plane connectivity.  Experimenters can use these resources to instantiate software switches and routers and program them to forward packets as desired.  Examples of R&E networks with compute and storage resources include MAX, SOX, CENIC, MOXI and Starlight.
  • GENI has programmable hardware switches on every rack and within some of the R&E networks that provide data plane connectivity.  Most notably, the Internet2 network that provides national data plane connectivity makes its programmable switches available to GENI experimenters.  Programmable switches are OpenFlow capable; experimenters can write controllers to implement custom packet forwarding algorithms.
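To make the match-action model concrete, here is a toy sketch of the flow table an OpenFlow controller programs into a switch: the controller installs prioritized match-to-action rules, and the switch applies the highest-priority rule matching each packet.  The class and method names are hypothetical and greatly simplified; this is not the real OpenFlow API.

```python
class FlowTable:
    """Toy model of an OpenFlow-style flow table (illustrative only)."""

    def __init__(self):
        self.rules = []  # list of (priority, match dict, action) tuples

    def install(self, priority, match, action):
        """Controller side: add a rule, e.g. match={'dst': '10.0.0.2'}."""
        self.rules.append((priority, match, action))
        self.rules.sort(key=lambda r: -r[0])  # highest priority first

    def forward(self, packet):
        """Switch side: apply the first matching rule, else drop."""
        for _, match, action in self.rules:
            if all(packet.get(k) == v for k, v in match.items()):
                return action
        return "drop"

table = FlowTable()
table.install(10, {"dst": "10.0.0.2"}, "output:port2")
table.install(1, {}, "controller")  # table-miss rule: punt to the controller
```

An experimenter's controller implements a custom forwarding algorithm by deciding which rules to install, including how to handle table-miss packets sent up to it.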

Network federation and stitching.  GENI is a federated testbed, i.e., different organizations host GENI resources and make them available to GENI experimenters.  These include network resources that are owned by regional and national R&E network providers and by campuses hosting GENI racks or wireless base stations.  Setting up the data plane for a GENI experiment may therefore require coordination among multiple network resource providers.  This is done by a process called GENI Stitching.

The figure below illustrates the coordination needed to stitch a Layer 2 link from an experimenter’s compute resource on one rack (Rack A) to a compute resource on another rack (Rack B).  This stitching requires provisioning of VLANs at the regional networks that Racks A and B connect to and at the national backbone network.  Each of these networks needs to allocate and provision a VLAN and they all need to ensure traffic can be exchanged between the VLAN they allocated and the VLAN allocated by their neighboring network.  For details on GENI stitching see the Stitching page on the GENI wiki.
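The coordination described above can be sketched as follows: each network along the path allocates a VLAN from its own pool, and each pair of neighboring networks records the translation between the VLAN IDs they chose so traffic can cross the boundary.  All names and data structures here are hypothetical, not the actual stitching protocol.

```python
class Network:
    """One network along a stitched path, with its own pool of free VLAN IDs."""

    def __init__(self, name, vlan_pool):
        self.name = name
        self.pool = set(vlan_pool)

    def allocate(self):
        vid = min(self.pool)  # pick the lowest available VLAN ID
        self.pool.remove(vid)
        return vid

def stitch_path(networks):
    """Allocate one VLAN per network and the translation at each boundary."""
    vlans = {net.name: net.allocate() for net in networks}
    translations = [
        (a.name, vlans[a.name], b.name, vlans[b.name])
        for a, b in zip(networks, networks[1:])
    ]
    return vlans, translations

# Rack A's regional network, the national backbone, and Rack B's regional network:
path = [Network("regional-A", {100, 101}),
        Network("backbone", {2000, 2001}),
        Network("regional-B", {300})]
vlans, translations = stitch_path(path)
```

Each tuple in `translations` records that, at one network boundary, frames arriving on the first network's VLAN must be retagged into the second network's VLAN.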

[Figure: stitching a Layer 2 link between two racks across regional networks and the national backbone]

GENI wireless networks.  GENI wireless base stations have backhaul connections to a local GENI rack and connect through the rack to the rest of the GENI network.  This is shown in the GENI network architecture diagram at the top of this page.

Connecting campus resources to GENI.  The best way to connect campus resources such as a scientific instrument or a research lab to GENI is through the dataplane switch on a GENI rack.  This is illustrated in the figure at the top of this page.

GENI is composed of a broad set of heterogeneous resources, each owned and operated by different entities (called aggregate providers). For example, a campus hosting a GENI rack is an aggregate provider. Likewise, an R&E network that provides network connectivity for the GENI data plane network (see Network Architecture) is also an aggregate provider.

By joining the GENI federation, these aggregate providers make their resources available to GENI experimenters while still maintaining a degree of control and trust that these resources will be used in a responsible and secure manner.

Similarly, GENI experimenters trust aggregate providers to provide the resources promised to them and to enforce any resource isolation guaranteed by the aggregate.

There are simply too many experimenters and aggregate providers to allow everyone to know everyone and approve every resource-related transaction. A scalable trust architecture is therefore needed to ensure that the interests of the aggregate providers and the experimenters are protected.

What is needed is a trusted third party that can vouch for the proper operation of resources (for the experimenters) and for the credentials of the experimenters (for the aggregate providers). This trusted third party is the GENI Federation. It establishes common notions of identity, authentication, authorization and accountability to allow all participants in the GENI federation to enter into resource-related transactions in a trusted manner.
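The vouching role of the trusted third party can be sketched with a toy example: a federation authority signs an experimenter credential, and any aggregate that trusts the authority can verify the credential without knowing the experimenter directly.  GENI in practice uses X.509 certificates and public-key signatures; the symmetric HMAC below merely stands in for them, and all names are hypothetical.

```python
import hashlib
import hmac

# Secret held by the trusted third party (the federation authority) and
# shared with aggregates that trust it.  A real deployment would use
# public-key signatures instead of a shared secret.
FEDERATION_KEY = b"federation-secret"

def issue_credential(username):
    """Federation side: issue a credential vouching for an experimenter."""
    tag = hmac.new(FEDERATION_KEY, username.encode(), hashlib.sha256).hexdigest()
    return (username, tag)

def aggregate_accepts(credential):
    """Aggregate side: accept only credentials vouched for by the federation."""
    username, tag = credential
    expected = hmac.new(FEDERATION_KEY, username.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)
```

The aggregate never needs a prior relationship with the experimenter; its trust in the federation's key is enough, which is what makes the scheme scale.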

Resource owners, experimenters, and federations are real people or organizations; GENI establishes software services to represent their interests in these transactions. The following figure shows these real-world entities and their virtual representatives in the GENI Federation Architecture.