We offer a standard "University VPN Service" which gives all users an address from a single pool, but institutions can pay for a "Managed VPN Service" which is limited to a subset of users of their own choice (typically the ones in their institution) and has a dedicated pool of client addresses. The institution can then permit this range access through firewalls and into servers. The addresses are all routed to the VPN server across a single link in our server network.
We provide some institutions with a private internal network using MPLS L3 VPN. However, the VPN server itself doesn't have VRFs (and we don't really want to configure them there), so we would like to be able to use the source select feature to put their pool of addresses into their VPN.
The server router is a Nexus 7010 with NX-OS 7.2(1)D1(1). We're running 7.2 to get MPLS Inter-AS Option B routing working, but I don't think this is needed for the source select feature.
A bit of Googling and searching Cisco's website didn't turn up a direct VRF source select equivalent, but you can roll your own very simply with inter-VRF routes and some Policy Based Routing (PBR). Cisco's website documents the pieces but doesn't give a complete example.
The VPN server
In real life, our VPN server is a Linux box running StrongSWAN and acting as a router (with a link subnet and the client addresses routed to it over that). However, I'm simulating it using another VDC on the same Nexus 7010. Here's the uplink subnet (to router R1) and the default route:
interface Ethernet2/5
description to-r1
ip address 1.19.0.9/24
no shutdown
!
ip route 0.0.0.0/0 Ethernet2/5 1.19.0.1
We simulate client addresses in the global and customer VRFs with a pair of loopback interfaces:
interface loopback19
description global
ip address 1.0.9.1/24
!
interface loopback109
description cust
ip address 100.0.9.1/24
Link and default VRF on the router
The upstream router has a link to the VPN server with the client address range in the default VRF routed across it:
interface Ethernet2/6
description to-v1
ip address 1.19.0.1/24
no shutdown
!
ip route 1.0.9.0/24 Ethernet2/6 1.19.0.9
Clients in the default VRF are now reachable across the network (assuming static routes are redistributed appropriately; a sketch of that follows).
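In our simulation that redistribution might look like this in the default VRF (a minimal sketch, assuming the same BGP AS 1 and permit-everything route-map we use for the cust VRF below):
route-map permit_rtmap permit 10
!
router bgp 1
address-family ipv4 unicast
redistribute static route-map permit_rtmap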
The VPN client address in the default VRF can now be pinged:
route-dcr-r1# ping 1.0.9.1
PING 1.0.9.1 (1.0.9.1): 56 data bytes
64 bytes from 1.0.9.1: icmp_seq=0 ttl=254 time=1.506 ms
64 bytes from 1.0.9.1: icmp_seq=1 ttl=254 time=1.36 ms
64 bytes from 1.0.9.1: icmp_seq=2 ttl=254 time=1.339 ms
64 bytes from 1.0.9.1: icmp_seq=3 ttl=254 time=1.325 ms
64 bytes from 1.0.9.1: icmp_seq=4 ttl=254 time=1.371 ms
--- 1.0.9.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.325/1.38/1.506 ms
Routing traffic out from the VRF
To route traffic from inside the VRF to the VPN server in the default VRF, an inter-VRF static route can easily be created:
vrf context cust
ip route 100.0.9.0/24 Ethernet2/6 1.19.0.9 vrf default
... this says that traffic in the cust VRF destined for 100.0.9.0/24 is to be routed via 1.19.0.9 (the VPN server) on Ethernet2/6, with the next hop resolved in VRF default.
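It's worth checking the route has landed in the cust VRF's routing table before moving on to the return direction:
show ip route 100.0.9.0/24 vrf cust
It should appear as a static route with next hop 1.19.0.9 via Ethernet2/6.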
The route needs to be redistributed as per any normal route in the VRF. In our case, this is redistributed as a static route (not as part of an aggregate), along with the direct (NX-OS parlance for "connected") route used on the link subnet:
route-map permit_rtmap permit 10
!
router bgp 1
vrf cust
address-family ipv4 unicast
redistribute direct route-map permit_rtmap
redistribute static route-map permit_rtmap
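To check the pool is actually being exported, look for 100.0.9.0/24 in the VRF's BGP table:
show ip bgp vrf cust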
Selecting VRF based on source IP address
Before we can use Policy Based Routing (PBR), we need to enable it as a feature:
feature pbr
First, we create an access list to match the traffic that should jump into a different VRF:
ip access-list vpn-cust-addrs
10 permit ip 100.0.9.0/24 any
Then we create a route-map to change the VRF:
route-map vpn-in_rtmap permit 10
match ip address vpn-cust-addrs
set vrf cust
... the set statement changes the VRF of the received traffic: the next hop and output interface are derived by looking at the routing table in the cust VRF.
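Since each managed institution has its own dedicated pool, the same route-map can fan traffic out to several VRFs by adding an access list and a stanza per customer. A sketch for a hypothetical second customer (the cust2 name and 100.0.10.0/24 pool are invented for illustration):
ip access-list vpn-cust2-addrs
10 permit ip 100.0.10.0/24 any
!
route-map vpn-in_rtmap permit 20
match ip address vpn-cust2-addrs
set vrf cust2
This would need a matching inter-VRF static route in the cust2 VRF for the outbound direction, as per the earlier step.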
Next, we apply the policy routing to the interface linking to the VPN server:
interface Ethernet2/6
ip policy route-map vpn-in_rtmap
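Before testing, we can confirm the route-map contents and that the policy is attached to the interface:
show route-map vpn-in_rtmap
show running-config interface Ethernet2/6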
A ping to the client addresses from inside the VRF now works from R1:
route-dcr-r1# ping 100.0.9.1 vrf cust
PING 100.0.9.1 (100.0.9.1): 56 data bytes
64 bytes from 100.0.9.1: icmp_seq=0 ttl=254 time=1.552 ms
64 bytes from 100.0.9.1: icmp_seq=1 ttl=254 time=1.291 ms
64 bytes from 100.0.9.1: icmp_seq=2 ttl=254 time=1.3 ms
64 bytes from 100.0.9.1: icmp_seq=3 ttl=254 time=1.444 ms
64 bytes from 100.0.9.1: icmp_seq=4 ttl=254 time=1.307 ms
--- 100.0.9.1 ping statistics ---
5 packets transmitted, 5 packets received, 0.00% packet loss
round-trip min/avg/max = 1.291/1.378/1.552 ms