This article draws on our experience running Flex Metal, our private cloud built on OpenStack and Ceph. In this guide, we walk through the essentials of OpenStack networking architecture, services, and security.
OpenStack Networking is a standalone service that typically deploys several processes across multiple nodes. The main process is neutron-server, a Python daemon that exposes the OpenStack Networking API and passes tenant requests to a suite of plug-ins for additional processing.
The core OpenStack Networking components are:
- Neutron Server – This service runs on the network node to serve the Networking API and its extensions.
- Plugin Agent – Runs on each compute node to manage local virtual switch (vswitch) configuration.
- DHCP agent – Provides DHCP services to tenant networks.
- L3 agent – Provides L3/NAT forwarding for external network access of VMs on tenant networks.
- Network Provider Services (SDN server/services) – Provides additional networking services to tenant networks.
A standard OpenStack Networking setup has up to four distinct physical data center networks:
- Management Network – Used for internal communication between OpenStack Components.
- Guest Network – Used for VM data communication within the cloud deployment.
- External Network – Used to provide VMs with Internet access in some deployment scenarios.
- API Network – Exposes all OpenStack APIs, including the OpenStack Networking API, to tenants.
During the initial architectural phase, ensure that expertise in physical networking infrastructure design is available, so that the proper security controls and auditing mechanisms can be identified.
OpenStack Networking Service Essentials
L2 isolation using VLANs and tunneling
OpenStack Networking can employ two different mechanisms for traffic segregation on a per tenant/network combination: VLANs (IEEE 802.1Q tagging) or L2 tunnels using GRE encapsulation.
VLAN configuration complexity depends on your OpenStack design requirements. To allow OpenStack Networking to use VLANs efficiently, you must allocate a range of VLAN IDs (one VLAN per tenant network) and configure each compute node's physical switch port as a VLAN trunk port.
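As a concrete illustration, the VLAN range allocation described above might look like the following ML2 plug-in configuration sketch. The physical network name `physnet1` and the VLAN ID range are assumptions chosen for this example:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
[ml2]
type_drivers = flat,vlan
tenant_network_types = vlan

[ml2_type_vlan]
# Map the provider network name to the VLAN ID range reserved
# for tenant networks on the trunked switch ports.
network_vlan_ranges = physnet1:100:199
```

Each tenant network created by Neutron is then assigned one VLAN ID from the 100–199 range on `physnet1`.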
Network tunneling encapsulates each tenant/network combination with a unique “tunnel-id” that is used to identify the network traffic belonging to that combination. Tunneling adds a layer of obfuscation to network data traffic, reducing the visibility of individual tenant traffic from a monitoring point of view.
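When GRE tunneling is used instead, the tenant network types and the pool of tunnel IDs are set in the same plug-in configuration. A minimal sketch, with an assumed ID range:

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (sketch)
[ml2]
tenant_network_types = gre

[ml2_type_gre]
# Pool of tunnel IDs available for tenant networks; each
# tenant/network combination receives one unique tunnel-id.
tunnel_id_ranges = 1:1000
```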
The choice of tenant network isolation affects how the network security and control boundary is implemented for tenant services.
Access control lists
OpenStack Compute supports tenant network traffic access controls directly when deployed with the legacy nova-network service, or may defer access control to the OpenStack Networking service.
Security groups allow administrators and tenants to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. A security group's rules are stateful L2-L4 traffic filters.
When using the Networking service, we recommend that you enable security groups in that service and disable them in the Compute service.
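To make the rule model concrete, here is a hypothetical, heavily simplified sketch of how a security group's rules match traffic. It only shows the per-rule match on direction, protocol, port range, and remote CIDR with a default-deny fallback; the class and function names are invented for this example, and real backends (iptables, Open vSwitch) are additionally stateful, automatically allowing reply packets of established connections:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    direction: str    # "ingress" or "egress"
    protocol: str     # e.g. "tcp", "udp", "icmp"
    port_min: int
    port_max: int
    remote_cidr: str  # source (ingress) or destination (egress) CIDR

def allowed(rules, direction, protocol, port, remote_ip):
    """Return True if any rule in the security group permits this packet."""
    for r in rules:
        if (r.direction == direction
                and r.protocol == protocol
                and r.port_min <= port <= r.port_max
                and ip_address(remote_ip) in ip_network(r.remote_cidr)):
            return True
    return False  # default deny: traffic matching no rule is dropped

# Example: allow inbound SSH only from one management subnet.
group = [Rule("ingress", "tcp", 22, 22, "203.0.113.0/24")]
```

With this group, an SSH connection from 203.0.113.10 is permitted, while the same connection from any other subnet, or any other port, is dropped.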
Networking services security best practices
To secure OpenStack Networking, you must understand how the tenant instance creation workflow maps to security domains.
Several services interact with OpenStack Networking. In a typical OpenStack deployment, they map to the following security domains:
- OpenStack dashboard: Public and management
- OpenStack Identity: Management
- OpenStack compute node: Management and guest
- OpenStack network node: Management, guest, and possibly public, depending upon the neutron plug-in in use.
- SDN services node: Management, guest, and possibly public, depending upon the product used.
Securing OpenStack Networking Services
The OpenStack Networking service provides security group functionality using a mechanism that is more flexible and powerful than the security group capabilities built into OpenStack Compute.
Thus, when using OpenStack Networking, nova.conf should always disable the built-in security groups and proxy all security group calls to the OpenStack Networking API. Failing to do so results in conflicting security policies being applied simultaneously by both services.
To proxy security groups to OpenStack Networking, use the following configuration values:
- firewall_driver must be set to nova.virt.firewall.NoopFirewallDriver so that nova-compute does not perform iptables-based filtering itself.
- security_group_api must be set to neutron so that all security group requests are proxied to the OpenStack Networking service.
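In nova.conf, those two settings look like the following sketch (section placement can vary by release):

```ini
# /etc/nova/nova.conf (sketch)
[DEFAULT]
# Disable nova-compute's own iptables-based filtering.
firewall_driver = nova.virt.firewall.NoopFirewallDriver
# Proxy all security group requests to OpenStack Networking.
security_group_api = neutron
```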
Security groups and their rules allow administrators and projects to specify the type of traffic and direction (ingress/egress) that is allowed to pass through a virtual interface port. When a virtual interface port is created in OpenStack Networking, it is associated with a security group.