VPP BIER
BIER (Bit Index Explicit Replication) is a multicast transport technology described in RFC 8279 (https://tools.ietf.org/html/rfc8279).
There are three actions to consider when programming BIER: imposition, mid-point and disposition. Imposition is the act of entering a BIER domain; IP multicast traffic that is to be transported over a BIER domain therefore undergoes imposition (the act of prepending a BIER header). Conversely, disposition is the act of leaving a BIER domain and hence stripping the BIER header. Mid-point forwarding uses the BIER header, and the bits therein, to forward packets to BIER neighbours.
A BIER table is a data structure that holds entries corresponding to devices that have a bit-position. A lookup in the table (or FIB) is based on the destination bit-position retrieved from the packet to be forwarded. There is one BIER table per set, per sub-domain and per bit-string length (BSL); a BIER table's ID is therefore composed of the set, sub-domain and BSL it represents. This is the ID that API clients see. Internally, VPP also constructs tables for ECMP, so the extended internal key includes an ECMP ID.
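The two-level keying described above can be sketched as follows. This is an illustrative model only; the type and field names are assumptions, not VPP's actual structs.

```python
from dataclasses import dataclass

# Illustrative sketch of BIER table identification. API clients name a
# table by (sub-domain, set, BSL); internally VPP extends that key with
# an ECMP ID, where ~0 marks the 'main' API-visible table.
@dataclass(frozen=True)
class BierTableId:
    sub_domain: int
    set_id: int
    bsl: int            # bit-string length in bits, e.g. 64..4096

ECMP_NONE = 0xFFFF      # ~0 in a 16-bit field

@dataclass(frozen=True)
class BierTableKeyInternal:
    table_id: BierTableId
    ecmp_id: int = ECMP_NONE

main = BierTableKeyInternal(BierTableId(sub_domain=1, set_id=2, bsl=256))
print(main.ecmp_id)  # 65535, matching the ecmp:65535 line in 'sh bier fib'
```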
As a transport protocol BIER can run over MPLS or non-MPLS networks (https://tools.ietf.org/html/rfc8296). In MPLS networks the packet's local label maps to a BIER table. To add a BIER table with such an associated label:
# mpls table add 0
# bier table add sd 1 set 2 bsl 256 mpls 56
The result is:
# sh bier fib
[@0] bier-table:[sub-domain:1 set:2 ecmp:65535 bsl:256 local-label:56]
[@1] bier-table:[sub-domain:1 set:2 ecmp:0 bsl:256 local-label:1048576]
[@2] bier-table:[sub-domain:1 set:2 ecmp:1 bsl:256 local-label:1048576]
...
This output shows the 'main' table (the one the API/CLI call created), with an ECMP ID of ~0 and a defined local label, plus the tables VPP created internally for ECMP, which have valid ECMP IDs.
To create a table that does not use an MPLS label, just omit the label from the CLI.
Imposition is modelled by a BIER-imposition object; this represents the BIER header to impose, the bit-position of the node from which the BIER packet originates (i.e. this node), and the table in which the resulting BIER packet will be forwarded:
# bier imp table sd 1 set 2 bsl 256 header <BIT-STRING> source <BIT-POSITION>
N.B. the CLI is under construction, but the API is available.
The value returned from the CLI/API is the ID of the imposition object, which can be used to describe a path in the mFIB.
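Constructing the <BIT-STRING> for imposition amounts to setting one bit per destination. Per RFC 8279, bit-position 1 is the least-significant bit of the bit-string. A sketch, with a hypothetical helper name:

```python
# Hypothetical helper: build a bit-string with the given destination
# bit-positions set. Per RFC 8279, bit-position 1 is the
# least-significant bit of the bit-string.
def make_bitstring(bsl_bits: int, bit_positions: list[int]) -> bytes:
    value = 0
    for bp in bit_positions:
        if not 1 <= bp <= bsl_bits:
            raise ValueError(f"bit-position {bp} outside BSL {bsl_bits}")
        value |= 1 << (bp - 1)
    return value.to_bytes(bsl_bits // 8, "big")

# BSL 256, destinations at bit-positions 1 and 3: 32 bytes, last byte 0x05
print(make_bitstring(256, [1, 3]).hex())
```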
When a BIER packet leaves a BIER domain it is forwarded based on 1) where it originated and 2) the next header in the stack. To forward based on the originator, a BIER-disposition table contains entries for each originator.
# bier disp table add X
where X is the user's choice of table-ID. Entries are added to the disposition table for each originator from which packets should be accepted.
# bier disp entry add table X source Y payload-proto [v4|v6|mpls] via ...
where X is the table-ID previously created, Y is the originator to accept from, and "via ..." is a description of the 'path', i.e. the result the packet should take. The usual descriptions used by IP and MPLS are valid here, e.g. "via 192.168.1.1 GigEthernet0/0/0" or "via mpls-lookup 5".
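The disposition lookup described above can be sketched as a map keyed by the originator and the payload protocol. This is an illustrative model, not VPP's implementation; the names and table layout are assumptions.

```python
# Sketch of a BIER disposition lookup: entries are keyed by the
# originator (source Y in the CLI above) and the payload protocol,
# and resolve to a path description. Layout is illustrative.
disp_table = {
    # (source, payload-proto) -> path
    (5, "v4"): "via 192.168.1.1 GigEthernet0/0/0",
    (5, "mpls"): "via mpls-lookup 5",
}

def dispose(source: int, payload_proto: str) -> str:
    path = disp_table.get((source, payload_proto))
    if path is None:
        # no entry for this originator/protocol: the packet is dropped
        raise LookupError("no disposition entry: drop")
    return path

print(dispose(5, "v4"))
```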
In a similar manner to IP and MPLS, routes are added to the BIER FIB to describe reachability to each destination bit-position. BIER routes can also be ECMP.
# bier route sd X set Y bsl Z bp K via ...
adds the route to the table whose ID is [sd:X, set:Y, BSL:Z] for the destination bit-position K. Again, "via ..." describes the peer to which to send packets matching the route. In order to perform disposition, i.e. to state that 'bp K' is this node's bit-position, the path is "via bier-disp-table Q".
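Mid-point forwarding with these routes follows the replication procedure of RFC 8279 section 6.5: for each set destination bit, a copy is sent to that bit's neighbour with the bit-string masked by the neighbour's forwarding bit-mask (F-BM), and the bits just covered are cleared so shared neighbours get only one copy. A sketch, with an illustrative table layout:

```python
# Sketch of BIER mid-point forwarding (RFC 8279 sec. 6.5). 'bift' maps a
# bit-position to (F-BM, neighbour); the F-BM covers every bit-position
# reachable via that neighbour. Returns (neighbour, masked bit-string)
# pairs, one per replicated copy.
def forward(bitstring: int, bift: dict[int, tuple[int, str]]) -> list[tuple[str, int]]:
    copies = []
    remaining = bitstring
    while remaining:
        bp = (remaining & -remaining).bit_length()   # lowest set bit-position
        fbm, neighbour = bift[bp]
        copies.append((neighbour, remaining & fbm))  # copy carries only this nbr's bits
        remaining &= ~fbm                            # don't replicate to it again
    return copies

# Bit-positions 1 and 2 share a neighbour, so three destinations
# produce only two copies.
bift = {1: (0b0011, "nbr-A"), 2: (0b0011, "nbr-A"), 3: (0b0100, "nbr-B")}
print(forward(0b0111, bift))  # [('nbr-A', 3), ('nbr-B', 4)]
```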
Counter support is still under development. Counters are currently collected per-neighbour, i.e. per "via ..." in the route; how the control plane maps this information to its own data structures is TBD. Counters could also be collected per-imposition object and/or per-disposition entry relatively cheaply. Due to the mechanics of BIER forwarding (see the RFC), collecting counters per destination bit-position (per-route), while possible, would be expensive, since it would involve another pass through the header counting bits. It is not currently done, but could be an additional knob to enable.
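To illustrate the cost mentioned above: attributing one packet to each destination bit-position means enumerating the set bits of the bit-string, an extra per-packet pass proportional to the BSL. A minimal sketch:

```python
# Why per-bit-position counters need an extra pass: each packet must be
# attributed to every destination, i.e. every set bit must be enumerated.
def bit_positions(bitstring: int) -> list[int]:
    positions = []
    while bitstring:
        positions.append((bitstring & -bitstring).bit_length())
        bitstring &= bitstring - 1   # clear the lowest set bit
    return positions

print(bit_positions(0b10110))  # [2, 3, 5]
```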