I was playing around with the idea that IPv6 anycast addresses could be used to replace first-hop redundancy protocols, so I decided to give it a try and see where it led. My plan was to port well-known VPN design practices (router and site redundancy) to IPv6 and weigh the pros and cons.
So I built a small topology with one spoke and several hubs to bring up a basic network:
In this case, R6 (left) is the client and R9 (top) is a standalone hub, while R10 and R11 (bottom) form an HA hub pair. I then went on to configure basic EIGRPv6 for the WAN domain and enabled it on all the relevant interfaces. Here is the meaningful configuration for R5; it is more or less the same on R8 and R6:
interface GigabitEthernet1/0
no ip address
negotiation auto
ipv6 address 2001:DB8::5/64
ipv6 enable
ipv6 eigrp 6
!
interface Serial2/0
no ip address
ipv6 address 1234::/64 anycast
ipv6 address 2001:DB5::1/64
ipv6 enable
ipv6 eigrp 6
!
ipv6 router eigrp 6
passive-interface Serial2/0
eigrp router-id 5.5.5.5
The important part here is the "ipv6 address 1234::/64 anycast" command on the hub-facing interface. The anycast keyword disables Duplicate Address Detection (DAD) for that address, which means we are allowing duplicate IPv6 addresses on the Serial2/0 segment. That is exactly what makes router redundancy possible: several routers can legitimately share the same address on the segment. The same thing can be achieved by doing:
interface Serial2/0
ipv6 address 1234::/64
ipv6 nd dad attempts 0
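Since the configuration on R8 is stated to be more or less the same as on R5, here is a sketch of what it could look like. This is an assumption on my part: the global unicast addresses are inferred from the traceroute outputs further down (hop 2001:DB8::8 on the spoke LAN, and the 2001:DB9::/64 hub segment behind R8).

interface GigabitEthernet1/0
no ip address
ipv6 address 2001:DB8::8/64
ipv6 enable
ipv6 eigrp 6
!
interface Serial2/0
no ip address
ipv6 address 1234::/64 anycast
ipv6 address 2001:DB9::8/64
ipv6 enable
ipv6 eigrp 6
!
ipv6 router eigrp 6
passive-interface Serial2/0
eigrp router-id 8.8.8.8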
After that, the trick is to advertise your anycast subnet (1234::/64 in our case) into EIGRPv6 by adding "ipv6 eigrp 6" at the interface level. What does this achieve? The anycast subnet is injected into our core network, so the spoke learns the anycast prefix through EIGRPv6. The point is that the anycast address is advertised by both R5 and R8, so R6 (the spoke) can decide, based on the EIGRPv6 metrics, which is the best path to reach 1234::1:
R6#sh ipv6 eigrp topology
EIGRP-IPv6 Topology Table for AS(6)/ID(6.6.6.6)
Codes: P - Passive, A - Active, U - Update, Q - Query, R - Reply,
r - reply Status, s - sia Status
P 2001:DB9::/64, 1 successors, FD is 28416
via FE80::C809:57FF:FED0:1C (28416/28160), GigabitEthernet1/0
P 1234::/64, 1 successors, FD is 28416
via FE80::C809:57FF:FED0:1C (28416/28160), GigabitEthernet1/0
P 2001:DB8::/64, 1 successors, FD is 2816
via Connected, GigabitEthernet1/0
P 2001:DB6::/64, 1 successors, FD is 128256
via Connected, Loopback6
P 2001:DB5::/64, 1 successors, FD is 2170112
via FE80::C805:57FF:FECE:1C (2170112/2169856), GigabitEthernet1/0
This provides network redundancy: if one link goes down, all you need to do is wait for the routing protocol to converge and you will reach another hub (or, if you tune the link metrics, the EIGRPv6 feasible successor will kick in straight away).
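As a sketch of that metric tuning: EIGRP derives its metric from the interface bandwidth and delay, so making one edge router's path deliberately worse keeps the other path as the pre-computed feasible successor. The interface and values below are illustrative, not taken from the lab:

! On R5: make the path through R5 less attractive, either by
! lowering the configured bandwidth or by raising the delay
interface Serial2/0
bandwidth 64
! or, alternatively:
! delay 2000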
Let's check what this looks like on R6 when both the R5 and R8 edge routers are up and running:
R6#traceroute 1234::1
Type escape sequence to abort.
Tracing the route to 1234::1
1 2001:DB8::8 36 msec 20 msec 16 msec
2 2001:DB9::10 52 msec 36 msec 36 msec
So we can see that we are reaching the hub R10, via R8. We go that way because the metric is lower on the path through R8 (I artificially lowered the bandwidth on R5's WAN-facing interface, which raises the EIGRP metric of that path). Now let's shut down the WAN-facing interface of the R8 edge router and see the result:
*Dec 15 01:16:29.961: %DUAL-5-NBRCHANGE: EIGRP-IPv6 6: Neighbor FE80::C809:57FF:FED0:1C (GigabitEthernet1/0) is down: interface down
R6#sh ipv6 eigrp topology
EIGRP-IPv6 Topology Table for AS(6)/ID(6.6.6.6)
Codes: P - Passive, A - Active, U - Update, Q - Query, R - Reply,
r - reply Status, s - sia Status
P 1234::/64, 1 successors, FD is 2170112
via FE80::C805:57FF:FECE:1C (2170112/2169856), GigabitEthernet1/0
P 2001:DB8::/64, 1 successors, FD is 2816
via Connected, GigabitEthernet1/0
P 2001:DB6::/64, 1 successors, FD is 128256
via Connected, Loopback6
P 2001:DB5::/64, 1 successors, FD is 2170112
via FE80::C805:57FF:FECE:1C (2170112/2169856), GigabitEthernet1/0
R6#traceroute 1234::1
Type escape sequence to abort.
Tracing the route to 1234::1
1 2001:DB8::5 48 msec 12 msec 16 msec
2 2001:DB5::9 80 msec 12 msec 40 msec
We are taking a different network path, so we have achieved network redundancy with very little effort.
Now, what about redundancy at the router level (between R10 and R11)? It works in exactly the same way. Let me "no shut" the WAN-facing interface of R8 to bring the primary link back up:
*Dec 15 01:21:05.325: %DUAL-5-NBRCHANGE: EIGRP-IPv6 6: Neighbor FE80::C809:57FF:FED0:1C (GigabitEthernet1/0) is up: new adjacency
R6# traceroute 1234::1
Type escape sequence to abort.
Tracing the route to 1234::1
1 2001:DB8::8 216 msec 100 msec 100 msec
2 2001:DB9::10 144 msec 84 msec 24 msec
and "shut" the interface of R10 to simulate a failure of that hub. I will keep a ping running from the client to see how long the downtime is:
R6#ping 1234::1 repeat 1000
Type escape sequence to abort.
Sending 1000, 100-byte ICMP Echos to 1234::1, timeout is 2 seconds:
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!...!!!!!!!!!!!!!!!!!!!!!!!!!!
R6#traceroute 1234::1
Type escape sequence to abort.
Tracing the route to 1234::1
1 2001:DB8::8 72 msec 104 msec 96 msec
2 2001:DB9::11 512 msec 8 msec 28 msec
We can see that redundancy at the router level (between R10 and R11) kicks in after three lost pings (roughly six seconds with the default two-second timeout): traffic now reaches R11 instead of R10.
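For reference, router-level redundancy here simply means that R10 and R11 both own the anycast address on the shared hub segment. A sketch of what that could look like; the interface name is an assumption, and the unicast addresses are inferred from the traceroute outputs:

! On R10:
interface GigabitEthernet1/0
no ip address
ipv6 address 2001:DB9::10/64
ipv6 address 1234::1/64 anycast
ipv6 enable
!
! R11 would be identical, except for its unicast
! address (2001:DB9::11/64)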
I think IPv6 anycast addresses have real potential for achieving HA for stateless protocols at the network layer. I no longer think, as I did initially, that they are a good replacement for first-hop redundancy protocols such as HSRP or VRRP. Those have the advantage of converging faster and of tracking their state over a separate channel, and they can also help achieve stateful redundancy for crypto.
There are still a few grey areas I am unsure about:
- When several routers share the same anycast address on a LAN segment, how does a neighbor choose which one to talk to? For R10 and R11, for example, which one answers? Ideally we would be able to influence the choice of primary and secondary hub.
- Can we use anycast to achieve load balancing by making the link metrics equal? I suspect this is a bad idea, but it might be worth testing.
The next step is to add redundancy at the tunnel level. I used plain GRE tunnels, but the principle would be the same for IPsec. On R6, use the anycast address as the GRE destination of your tunnel; that way, if one hub fails, the tunnel will fail over to another hub:
R6#sh run int tun0
Building configuration...
Current configuration : 178 bytes
!
interface Tunnel0
no ip address
ipv6 address 2001::6/64
ipv6 enable
keepalive 10 3
tunnel source GigabitEthernet1/0
tunnel mode gre ipv6
tunnel destination 1234::1
end
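The hub side of the tunnel is not shown in this post. A possible counterpart on a hub such as R9 could source the tunnel from the anycast address, so that GRE packets sent by the spoke to 1234::1 terminate on whichever hub currently answers. Everything here is a hypothetical sketch: the tunnel address, the spoke's underlay address (2001:DB8::6), and sourcing a tunnel from an anycast address would all need to be verified in the lab.

interface Tunnel0
no ip address
ipv6 address 2001::9/64
ipv6 enable
keepalive 10 3
tunnel source 1234::1
tunnel mode gre ipv6
tunnel destination 2001:DB8::6
end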
I think DMVPN would fit perfectly into this setup, but Cisco does not yet support DMVPN over an IPv6 NBMA (transport) network, so the story stops here for now...