GRE(4)                      Device Drivers Manual                      GRE(4)
NAME
gre, mgre, egre, nvgre — Generic Routing Encapsulation network device
SYNOPSIS
pseudo-device gre
DESCRIPTION
The gre pseudo-device provides interfaces for tunnelling protocols across IPv4 and IPv6 networks using the Generic Routing Encapsulation (GRE) encapsulation protocol.
GRE datagrams (IP protocol number 47) consist of a GRE header and an outer IP header for encapsulating another protocol's datagram. The GRE header specifies the type of the encapsulated datagram, allowing for the tunnelling of multiple protocols.
Different tunnels between the same endpoints may be distinguished by an optional Key field in the GRE header. The Key field may be partitioned to carry flow information about the encapsulated traffic to allow better use of multipath links.
This pseudo driver provides the following clonable network interfaces:

gre    point-to-point Generic Routing Encapsulation (GRE) tunnel interfaces
mgre   point-to-multipoint GRE tunnel interfaces
egre   Ethernet over GRE (EoGRE) tunnel interfaces
nvgre  Network Virtualization using GRE (NVGRE) overlay interfaces
eoip   MikroTik Ethernet over IP (EoIP) tunnel interfaces
See eoip(4) for information regarding MikroTik Ethernet over IP interfaces.
All GRE packet processing in the system is allowed or denied by setting the net.inet.gre.allow sysctl(8) variable. To allow GRE packet processing, set net.inet.gre.allow to 1.
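For example, to allow GRE processing on the running system, and to make the setting persist across reboots via sysctl.conf(5):

# sysctl net.inet.gre.allow=1
# echo 'net.inet.gre.allow=1' >> /etc/sysctl.conf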
gre, mgre, egre, and nvgre interfaces can be created at runtime using the ifconfig ifaceN create command or by setting up a hostname.if(5) configuration file for netstart(8).
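As a sketch, a hostname.if(5) file such as /etc/hostname.gre0 could bring a point-to-point tunnel up at boot; all addresses below are placeholders:

# /etc/hostname.gre0 (example addresses)
inet 10.11.12.1 255.255.255.252 10.11.12.2
tunnel 192.0.2.1 203.0.113.2
up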
For correct operation, encapsulated traffic must not be routed over the interface itself. This can be achieved by adding a distinct or more specific route to the tunnel destination than the hosts or networks routed via the tunnel interface. Alternatively, the tunnel traffic may be configured in a different routing table from the encapsulated traffic.
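For example, assuming a tunnel destination of 203.0.113.2 reachable via a physical gateway 192.0.2.254 (both placeholders), a host route keeps the encapsulating packets off the tunnel itself; alternatively, the tunnel traffic can be moved into routing table 1 with the tunneldomain option:

# route add -host 203.0.113.2 192.0.2.254
# ifconfig gre0 tunneldomain 1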
A gre tunnel can encapsulate IPv4, IPv6, and MPLS packets. The MTU is set to 1476 by default to match the value used by Cisco routers.
gre supports sending keepalive packets to the remote endpoint, which allows tunnel failure to be detected. To return keepalives, the remote host must be configured to forward IP packets received from inside the tunnel back to the address of the local tunnel endpoint.
gre interfaces may be configured to receive IPv4 packets in Web Cache Communication Protocol (WCCP) encapsulation by setting the link0 flag on the interface. WCCP reception may be enabled globally by setting the net.inet.gre.wccp sysctl value to 1. Some magic with the packet filter configuration and a caching proxy like squid are needed to do anything useful with these packets. This sysctl requires net.inet.gre.allow to also be set.
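For example, to enable WCCP reception globally and mark a hypothetical gre0 interface for WCCP decapsulation:

# sysctl net.inet.gre.allow=1
# sysctl net.inet.gre.wccp=1
# ifconfig gre0 link0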
mgre interfaces can encapsulate IPv4, IPv6, and MPLS packets. Unlike a point-to-point interface, mgre interfaces are configured with an address on an IP subnet. Peers on that subnet are mapped to the addresses of multiple tunnel endpoints.
The MTU is set to 1476 by default to match the value used by Cisco routers.
An egre tunnel interface carries Ethernet over GRE (EoGRE). Ethernet traffic is encapsulated using Transparent Ethernet (0x6558) as the protocol identifier in the GRE header, as per RFC 1701. The MTU is set to 1500 by default.
nvgre interfaces allow construction of virtual overlay Ethernet networks on top of an IPv4 or IPv6 underlay network as per RFC 7637. Ethernet traffic is encapsulated using Transparent Ethernet (0x6558) as the protocol identifier in the GRE header, a 24-bit Virtual Subnet ID (VSID), and an 8-bit FlowID.
By default the MTU of an nvgre interface is set to 1500, and the Don't Fragment flag is set. The MTU on the network interfaces carrying underlay network traffic must be raised to accommodate this and the overhead of the NVGRE encapsulation, or the nvgre interface must be reconfigured for less capable underlays.
The underlay network parameters on an nvgre interface are a unicast tunnel source address, a multicast tunnel destination address, and a parent network interface. The unicast source address is used as the NVE Provider Address (PA) on the underlay network. The parent interface is used to identify which interface the multicast group should be joined to.
The multicast group is used to transport broadcast and multicast traffic from the overlay to other participating NVGRE endpoints. It is also used to flood unicast traffic to Ethernet addresses in the overlay with an unknown association to an NVGRE endpoint. Traffic received from other NVGRE endpoints, either to the Provider Address or via the multicast group, is used to learn associations between Ethernet addresses in the overlay network and the Provider Addresses of NVGRE endpoints in the underlay.
IOCTLS
gre, mgre, egre, and nvgre interfaces support the following ioctl(2) calls for configuring tunnel options:
SIOCSLIFPHYADDR struct if_laddrreq *
	Set the addresses of the tunnel endpoints.  gre and egre interfaces support configuration of unicast IP addresses as the tunnel endpoints.  mgre interfaces support configuration of a unicast local IP address, and require an AF_UNSPEC destination address.  nvgre interfaces support configuration of a unicast IP address as the local endpoint and a multicast group address as the destination address.
SIOCGLIFPHYADDR struct if_laddrreq *
	Get the addresses of the tunnel endpoints.
SIOCDIFPHYADDR struct ifreq *
	Clear the addresses of the tunnel endpoints.
SIOCSVNETID struct ifreq *
	Set the virtual network identifier.  gre, mgre, and egre interfaces configured with a virtual network identifier will enable the use of the GRE Key header.  The Key is a 32-bit value by default, or a 24-bit value when the virtual network flow identifier is enabled.  nvgre interfaces carry the virtual network identifier in the 24-bit Virtual Subnet Identifier (VSID), aka Tenant Network Identifier (TNI), field of the GRE Key header.
SIOCGVNETID struct ifreq *
	Get the virtual network identifier.
SIOCDVNETID struct ifreq *
	Disable the virtual network identifier.  Disabling the virtual network identifier on gre, mgre, and egre interfaces also disables the use of the GRE Key header.  nvgre interfaces do not support this ioctl as a Virtual Subnet Identifier is required by the protocol.
SIOCSLIFPHYRTABLE struct ifreq *
	Set the routing table the tunnel traffic operates in.
SIOCGLIFPHYRTABLE struct ifreq *
	Get the routing table the tunnel traffic operates in.
SIOCSLIFPHYTTL struct ifreq *
	Set the Time-to-Live (TTL) value used in the encapsulating IP headers.  gre and mgre interfaces configured with a TTL of -1 will copy the TTL in and out of the encapsulated protocol headers.
SIOCGLIFPHYTTL struct ifreq *
	Get the TTL value used in the encapsulating IP headers.
SIOCSLIFPHYDF struct ifreq *
	Configure whether the Don't Fragment flag is set in the encapsulating IP headers.
SIOCGLIFPHYDF struct ifreq *
	Get whether the Don't Fragment flag is set in the encapsulating IP headers.
SIOCSTXHPRIO struct ifreq *
	Set the priority value used in the encapsulating protocol headers.  It may be set to IF_HDRPRIO_PACKET to specify that the current priority of a packet should be used.  gre and mgre interfaces configured with a value of IF_HDRPRIO_PAYLOAD will copy the priority from encapsulated protocol headers.
SIOCGTXHPRIO struct ifreq *
	Get the priority value used in the encapsulating protocol headers.

gre, mgre, and egre interfaces support the following ioctl(2) calls:
SIOCSVNETFLOWID struct ifreq *
	Enable the use of a flow identifier in the GRE Key header.  The interface must already be configured with a virtual network identifier before enabling flow identifiers in the GRE Key header.  The configured virtual network identifier must also fit into 24 bits.
SIOCDVNETFLOWID struct ifreq *
	Disable the use of a flow identifier in the GRE Key header.

gre interfaces support the following ioctl(2) calls:
SIOCSETKALIVE struct ifkalivereq *
	Configure keepalive packets on the tunnel.  Setting the keepalive period or count to 0 disables keepalives on the tunnel.
SIOCGETKALIVE struct ifkalivereq *
	Get the keepalive period and count configured on the tunnel.

nvgre interfaces support the following ioctl(2) calls:

SIOCSIFPARENT struct if_parent *
	Set the parent network interface, which is used to identify which interface the multicast group should be joined to.
SIOCGIFPARENT struct if_parent *
	Get the name of the parent network interface.
SIOCDIFPARENT struct ifreq *
	Remove the parent network interface configuration.
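These ioctls are normally issued via ifconfig(8) rather than called directly. As a sketch, the following commands exercise several of the options above on a hypothetical gre0 interface; the parenthesised names are the corresponding ioctls:

# ifconfig gre0 vnetid 1234       (SIOCSVNETID)
# ifconfig gre0 vnetflowid        (SIOCSVNETFLOWID)
# ifconfig gre0 tunneldomain 1    (SIOCSLIFPHYRTABLE)
# ifconfig gre0 tunnelttl 64      (SIOCSLIFPHYTTL)
# ifconfig gre0 -tunneldf         (SIOCSLIFPHYDF)
# ifconfig gre0 txprio payload    (SIOCSTXHPRIO)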
SECURITY CONSIDERATIONS
The GRE protocol in all its flavours does not provide any integrated security features. GRE should only be deployed on trusted private networks, or protected with IPsec to add authentication and encryption for confidentiality. IPsec is especially recommended when transporting GRE over the public internet.
The Packet Filter pf(4) can be used to filter tunnel traffic with endpoint policies in pf.conf(5).
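As a sketch, a pf.conf(5) policy that only accepts GRE from a known remote endpoint might look like this (203.0.113.2 is a placeholder address; pf's last-matching-rule semantics make the pass rule override the block for the trusted peer):

block in on egress proto gre
pass in on egress proto gre from 203.0.113.2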
The Time-to-Live (TTL) value of a tunnel can be set to 1 or a low value to restrict the traffic to the local network:
# ifconfig gre0 tunnelttl 1
EXAMPLES

Host X ---- Host A ------------ tunnel ------------ Cisco D ---- Host E
                 \                                      /
                  \                                    /
                   +------ Host B ------ Host C ------+
On Host A (OpenBSD):
# route add default B
# ifconfig greN create
# ifconfig greN A D netmask 0xffffffff up
# ifconfig greN tunnel A D
# route add E D
On Host D (Cisco):
Interface TunnelX
 ip unnumbered D          ! e.g. address from Ethernet interface
 tunnel source D          ! e.g. address from Ethernet interface
 tunnel destination A
ip route C <some interface and mask>
ip route A mask C
ip route X mask tunnelX
OR
On Host D (OpenBSD):
# route add default C
# ifconfig greN create
# ifconfig greN D A
# ifconfig greN tunnel D A
To reach Host A over the tunnel (from Host D), there has to be an alias on Host A for the Ethernet interface:
# ifconfig <etherif> alias Y
and on the Cisco:
ip route Y mask tunnelX
gre keepalive packets may be enabled with ifconfig(8) like this:
# ifconfig greN keepalive period count
This will send a keepalive packet every period seconds. If no response is received in count * period seconds, the link is considered down. To return keepalives, the remote host must be configured to forward packets:
# sysctl net.inet.ip.forwarding=1
If pf(4) is enabled then it is necessary to add a pass rule specific for the keepalive packets. The rule must use no state because the keepalive packet is entering the network stack multiple times. In most cases the following should work:
pass quick on gre proto gre no state
mgre can be used to build a point-to-multipoint tunnel network to several hosts using a single mgre interface.
In this example the host A has an outer IP of 198.51.100.12, host B has 203.0.113.27, and host C has 203.0.113.254.
Addressing within the tunnel is done using 192.0.2.0/24:
                         +--- Host B
                        /
                       /
Host A --- tunnel ---+
                       \
                        \
                         +--- Host C
On Host A:
# ifconfig mgreN create
# ifconfig mgreN tunneladdr 198.51.100.12
# ifconfig mgreN inet 192.0.2.1 netmask 0xffffff00 up
On Host B:
# ifconfig mgreN create
# ifconfig mgreN tunneladdr 203.0.113.27
# ifconfig mgreN inet 192.0.2.2 netmask 0xffffff00 up
On Host C:
# ifconfig mgreN create
# ifconfig mgreN tunneladdr 203.0.113.254
# ifconfig mgreN inet 192.0.2.3 netmask 0xffffff00 up
To reach Host B over the tunnel (from Host A), there has to be a route on Host A specifying the next-hop:
# route add -host 192.0.2.2 203.0.113.27 -iface -ifp mgreN
Similarly, to reach Host A over the tunnel from Host B, a route must be present on B with A's outer IP as next-hop:
# route add -host 192.0.2.1 198.51.100.12 -iface -ifp mgreN
The same tunnel interface can then be used between host B and C by adding the appropriate routes, making the network any-to-any instead of hub-and-spoke:
On Host B:
# route add -host 192.0.2.3 203.0.113.254 -iface -ifp mgreN
On Host C:
# route add -host 192.0.2.2 203.0.113.27 -iface -ifp mgreN
egre can be used to carry Ethernet traffic between two endpoints over an IP network, including the public internet. This can also be achieved using etherip(4), but egre offers the ability to carry different Ethernet networks between the same endpoints by using virtual network identifiers to distinguish between them.
For example, a pair of routers separated by the internet could bridge several Ethernet networks using egre and bridge(4).
In this example the first router has a public IP of 192.0.2.1, and the second router has 203.0.113.2. They are connecting the Ethernet networks on two vlan(4) interfaces over the internet. A separate egre tunnel is created for each VLAN and given different virtual network identifiers so the routers can tell which network the encapsulated traffic is for. The egre interfaces are explicitly configured to provide the same MTU as the vlan(4) interfaces (1500 bytes) with fragmentation enabled so they can be carried over the internet, which has the same or lower MTU.
At the first site:
# ifconfig vlan0 vnetid 100
# ifconfig egre0 create
# ifconfig egre0 tunnel 192.0.2.1 203.0.113.2
# ifconfig egre0 vnetid 100
# ifconfig egre0 mtu 1500 -tunneldf
# ifconfig egre0 up
# ifconfig bridge0 add vlan0 add egre0 up
# ifconfig vlan1 vnetid 200
# ifconfig egre1 create
# ifconfig egre1 tunnel 192.0.2.1 203.0.113.2
# ifconfig egre1 vnetid 200
# ifconfig egre1 mtu 1500 -tunneldf
# ifconfig egre1 up
# ifconfig bridge1 add vlan1 add egre1 up
At the second site:
# ifconfig vlan0 vnetid 100
# ifconfig egre0 create
# ifconfig egre0 tunnel 203.0.113.2 192.0.2.1
# ifconfig egre0 vnetid 100
# ifconfig egre0 mtu 1500 -tunneldf
# ifconfig egre0 up
# ifconfig bridge0 add vlan0 add egre0 up
# ifconfig vlan1 vnetid 200
# ifconfig egre1 create
# ifconfig egre1 tunnel 203.0.113.2 192.0.2.1
# ifconfig egre1 vnetid 200
# ifconfig egre1 mtu 1500 -tunneldf
# ifconfig egre1 up
# ifconfig bridge1 add vlan1 add egre1 up
NVGRE can be used to build a distinct logical Ethernet network on top of another network. nvgre is therefore like a vlan(4) interface configured on top of a physical Ethernet interface, except it can sit on any IP network capable of multicast.
The following shows a basic nvgre configuration and an equivalent vlan(4) configuration. In the examples, 192.168.0.1/24 will be the network configured on the relevant virtual interfaces. The NVGRE underlay network will be configured on 100.64.10.0/24, and will use 239.1.1.100 as the multicast group address.
The vlan(4) interface only relies on Ethernet; it does not rely on IP configuration on the parent interface:
# ifconfig em0 up
# ifconfig vlan0 create
# ifconfig vlan0 parent em0
# ifconfig vlan0 vnetid 10
# ifconfig vlan0 inet 192.168.0.1/24
# ifconfig vlan0 up
nvgre relies on IP configuration on the parent interface, and an MTU large enough to carry the encapsulated traffic:
# ifconfig em0 mtu 1600
# ifconfig em0 inet 100.64.10.1/24
# ifconfig em0 up
# ifconfig nvgre0 create
# ifconfig nvgre0 parent em0 tunnel 100.64.10.1 239.1.1.100
# ifconfig nvgre0 vnetid 10010
# ifconfig nvgre0 inet 192.168.0.1/24
# ifconfig nvgre0 up
NVGRE is intended for use in a multitenant datacentre environment to provide each customer with distinct Ethernet networks as needed, but without running into the limit on the number of VLAN tags, and without requiring reconfiguration of the underlying Ethernet infrastructure. Another way to look at it is that NVGRE can be used to construct multipoint Ethernet VPNs across an IP core.
For example, if a customer has multiple virtual machines running in vmm(4) on distinct physical hosts, nvgre and bridge(4) can be used to provide network connectivity between the tap(4) interfaces connected to the virtual machines. If there are 3 virtual machines, all using tap0 on each host, and those hosts are connected to the same network described above, an nvgre interface with a distinct virtual network identifier and multicast group can be created for them. The following assumes nvgre1 and bridge1 have already been created on each host, and em0 has had the MTU raised:
On physical host 1:
# ifconfig em0 inet 100.64.10.10/24
# ifconfig nvgre1 parent em0 tunnel 100.64.10.10 239.1.1.111
# ifconfig nvgre1 vnetid 10011
# ifconfig bridge1 add nvgre1 add tap0 up
On physical host 2:
# ifconfig em0 inet 100.64.10.11/24
# ifconfig nvgre1 parent em0 tunnel 100.64.10.11 239.1.1.111
# ifconfig nvgre1 vnetid 10011
# ifconfig bridge1 add nvgre1 add tap0 up
On physical host 3:
# ifconfig em0 inet 100.64.10.12/24
# ifconfig nvgre1 parent em0 tunnel 100.64.10.12 239.1.1.111
# ifconfig nvgre1 vnetid 10011
# ifconfig bridge1 add nvgre1 add tap0 up
Being able to carry working multicast and jumbo frames over the public internet is unlikely, which makes it difficult to use NVGRE to extend Ethernet VPNs between different sites. nvgre and egre can be bridged together to provide such connectivity. See the egre section above for an example.
SEE ALSO
eoip(4), inet(4), ip(4), netintro(4), options(4), hostname.if(5), protocols(5), ifconfig(8), netstart(8), sysctl(8)
STANDARDS
S. Hanks, T. Li, D. Farinacci, and P. Traina, Generic Routing Encapsulation (GRE), RFC 1701, October 1994.
S. Hanks, T. Li, D. Farinacci, and P. Traina, Generic Routing Encapsulation over IPv4 networks, RFC 1702, October 1994.
D. Farinacci, T. Li, S. Hanks, D. Meyer, and P. Traina, Generic Routing Encapsulation (GRE), RFC 2784, March 2000.
G. Dommety, Key and Sequence Number Extensions to GRE, RFC 2890, September 2000.
T. Worster, Y. Rekhter, and E. Rosen, Encapsulating MPLS in IP or Generic Routing Encapsulation (GRE), RFC 4023, March 2005.
P. Garg and Y. Wang, NVGRE: Network Virtualization Using Generic Routing Encapsulation, RFC 7637, September 2015.
Web Cache Coordination Protocol V1.0, https://tools.ietf.org/html/draft-ietf-wrec-web-pro-00.txt.
Web Cache Coordination Protocol V2.0, https://tools.ietf.org/html/draft-wilson-wrec-wccp-v2-00.txt.
AUTHORS
Heiko W. Rupp <hwr@pilhuhn.de>
CAVEATS
RFC 1701 and RFC 2890 describe a variety of optional GRE header fields in the protocol that are not implemented in the gre and egre interface drivers. The only optional field the drivers implement support for is the Key header.
gre interfaces skip the redirect header in WCCPv2 GRE encapsulated packets.
The NVGRE RFC specifies VSIDs 0 (0x0) to 4095 (0xfff) as reserved for future use, and VSID 16777215 (0xffffff) for use for vendor-specific endpoint communication. The NVGRE RFC also explicitly states encapsulated Ethernet packets must not contain IEEE 802.1Q (VLAN) tags. The nvgre driver does not restrict the use of these VSIDs, and does not prevent the configuration of child vlan(4) interfaces or the bridging of VLAN tagged traffic across the tunnel. These non-restrictions allow non-compliant tunnels to be configured which may not interoperate with other vendors.
OpenBSD-current                January 8, 2021                         GRE(4)