Issues with Layer 2 across DCs
- Ideally, data centers do not share fate. But extending L2 creates a single broadcast domain spanning both data centers, so now they are sharing fate
- Traffic patterns become sub-optimal
- Where does the default-gateway live? In the local DC? Or remote?
- Traffic flows to the load balancer and then to a pool member, but that pool member may now live in the remote DC
- And that’s just L3 considerations. What about L2? How is spanning tree handled between DCs?
- Dealing with these issues has spawned whole new protocols, including Cisco's Overlay Transport Virtualization (OTV), which contains protocols like STP, HSRP, and VTP within each site in an attempt to limit failure domains and avoid FHRP-related traffic tromboning
- Redundant links between DCs become even more critical, because applications assume the network is always available
- Latency tends to be high and bandwidth tends to be constrained on the DCI, which makes it hard to vMotion anything successfully; a vMotion must copy a large, constantly changing memory dataset across that link
- Problems you’d have anyway that are bandwidth problems of their own…
- How is storage being synced between sites?
- What about database stores?
- Site awareness
- How do you abstract the current site of a workload so that the compute knows where it’s running?
- Syncing storage across sites in active/active mode
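As a rough illustration of how OTV is configured to contain site-local protocols, here is a minimal, hypothetical NX-OS sketch; the interface name, site identifier, multicast control/data groups, and VLAN range are all assumptions, not values from the original post:

```
! Hypothetical NX-OS OTV sketch -- all names and numbers are illustrative
feature otv

! VLAN used for OTV adjacency within this site, plus a per-site ID
otv site-vlan 999
otv site-identifier 0x1

interface Overlay1
  ! Physical uplink toward the DCI
  otv join-interface Ethernet1/1
  ! Multicast groups carrying OTV control and data traffic
  otv control-group 239.1.1.1
  otv data-group 232.1.1.0/28
  ! Only these VLANs are stretched between sites; STP BPDUs
  ! are not forwarded across the overlay, keeping STP per-site
  otv extend-vlan 100-110
  no shutdown
```

The key design point is the last comment: the overlay extends the VLANs themselves while deliberately stopping STP, HSRP, and VTP at the site edge, so a spanning-tree event in one DC cannot ripple into the other.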
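The vMotion point above can be made concrete with some back-of-envelope arithmetic: live migration's pre-copy phase only converges if the link drains memory pages faster than the guest dirties them, and total copy time grows sharply as the dirty rate approaches link speed. The figures below are illustrative assumptions, not measurements:

```python
# Back-of-envelope check: will a vMotion pre-copy converge over a DCI link?
# All numbers here are illustrative assumptions.

def precopy_converges(link_gbps: float, dirty_rate_gbps: float) -> bool:
    """Pre-copy converges only if the link drains pages faster
    than the VM dirties them."""
    return link_gbps > dirty_rate_gbps

def transfer_seconds(mem_gb: float, link_gbps: float,
                     dirty_rate_gbps: float) -> float:
    """Approximate total pre-copy time. Each pass re-sends pages
    dirtied during the previous pass, a geometric series that sums
    to mem / (link - dirty_rate)."""
    if not precopy_converges(link_gbps, dirty_rate_gbps):
        return float("inf")
    return (mem_gb * 8) / (link_gbps - dirty_rate_gbps)  # GB -> gigabits

# A 64 GB VM with the guest dirtying memory at ~1.5 Gbps:
lan = transfer_seconds(64, 10.0, 1.5)  # 10 Gbps intra-DC: ~60 seconds
dci = transfer_seconds(64, 1.0, 1.5)   # 1 Gbps DCI: never converges
```

The same arithmetic applies to the storage and database sync questions: if the change rate exceeds the replication bandwidth, no amount of waiting catches up.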