GCP: Configuring Policy-Based Routing (PBR)

Policy-Based Routing (PBR) is an advanced networking technique that gives you granular control over traffic flow. Instead of routing traffic based solely on the destination IP address, PBR allows you to make routing decisions based on other criteria, such as the source IP address, protocol, or a VM's network tag. This is essential for directing specific types of traffic through security appliances (NVAs), different VPN tunnels, or other specialized next-hops.

1. The "Classic" Method: Policy via Tags and Priority

Before the dedicated PBR feature was introduced, policy-based routing was achieved by combining two features of custom static routes: Network Tags and Priority.

Diagram: Classic PBR with Tags and Priority

        graph TD
            subgraph "GCP VPC"
                VM_Tagged["VM with 'proxy-traffic' tag"]
                VM_Untagged["VM with no special tag"]
                NVA["Firewall NVA"]
                TaggedRoute["Route to 0.0.0.0/0<br>Source Tag: 'proxy-traffic'<br>Priority: 900"]
                DefaultRoute["Default Internet Route<br>Destination: 0.0.0.0/0<br>Priority: 1000"]
            end
            Internet([Internet])
            subgraph "Traffic Flow"
                VM_Tagged -- Egress packet --> TaggedRoute
                TaggedRoute -- Next hop is NVA --> NVA
                NVA -- Forwards to --> Internet
                VM_Untagged -- Egress packet --> DefaultRoute
                DefaultRoute -- Next hop is IGW --> Internet
            end
            style VM_Tagged fill:#fce4ec,stroke:#ad1457,stroke-width:2px;
            style TaggedRoute fill:#fce4ec,stroke:#ad1457,stroke-width:2px;
This method is simple and effective for policies that can be expressed with a single tag. However, it lacks granularity (you can't specify source IP ranges or protocols) and tightly couples routing policy to VM instance configuration.
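As a sketch, the classic pattern comes down to a single custom static route whose priority beats the default route but which only applies to tagged VMs. The network, instance, and tag names below are illustrative; this assumes an existing NVA instance named `firewall-nva` in `us-central1-a`:

```shell
# Higher-priority route (lower number wins) that applies ONLY to VMs
# carrying the 'proxy-traffic' network tag.
gcloud compute routes create route-via-nva \
    --network=pbr-vpc \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=firewall-nva \
    --next-hop-instance-zone=us-central1-a \
    --tags=proxy-traffic \
    --priority=900

# Untagged VMs fall through to the auto-created default internet route
# (priority 1000, next hop: default internet gateway); no extra command needed.
```

Because the tag is part of the VM's configuration, changing which traffic is inspected means re-tagging instances, which is exactly the coupling the modern feature removes.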

2. The "Modern" Method: Policy-Based Routes (Preview)

GCP now has a dedicated Policy-Based Routes feature. This is a purpose-built, more flexible, and powerful way to implement PBR. It decouples the routing logic from the individual VMs.

Feature Status: Policy-Based Routes is currently in Preview. It is not recommended for production workloads until it becomes Generally Available (GA).

Key Features

- Matches traffic on source IP range, destination IP range, and protocol, not just a VM tag.
- The next hop is typically an Internal Load Balancer fronting a fleet of NVAs, giving high availability.
- Policies are standalone network resources, decoupled from individual VM configuration.
- Policy-based routes are evaluated before the VPC's regular routing table; unmatched traffic falls through to normal routing.

Diagram: Modern PBR with a Dedicated Policy Resource

        graph TD
            subgraph "GCP VPC"
                Client_VM_1["Client VM<br>Source IP: 10.10.0.10"]
                Client_VM_2["Client VM<br>Source IP: 10.10.0.20"]
                subgraph "Firewall Appliance Farm"
                    ILB(Internal Load Balancer)
                    NVA1(Firewall NVA 1)
                    NVA2(Firewall NVA 2)
                    ILB --> NVA1 & NVA2
                end
                PolicyRoute["Policy-Based Route Resource<br>Rule 1:<br>- Source: 10.10.0.0/24<br>- Protocol: TCP<br>- Next Hop: ILB<br>Default Rule:<br>- Next Hop: Internet Gateway"]
            end
            Internet([Internet])
            subgraph "Traffic Flow Decision"
                Client_VM_1 -- TCP Packet --> PolicyRoute
                PolicyRoute -- Matches Rule 1 --> ILB --> NVA1 --> Internet
                Client_VM_2 -- UDP Packet --> PolicyRoute
                PolicyRoute -- No Match, Use Default --> Internet
            end
            style PolicyRoute fill:#e1f5fe,stroke:#0277bd,stroke-width:2px;

3. gcloud Commands: Implementing Modern Policy-Based Routing

Here, we will configure a policy-based route that forwards all TCP traffic from a specific source range (`10.10.0.0/24`) to a highly available fleet of firewall appliances behind an Internal Load Balancer. All other traffic will go to the internet normally.


# 1. Enable required APIs (do this once per project)
# (Policy-based routes are part of the Network Connectivity API)
gcloud services enable networkconnectivity.googleapis.com
gcloud services enable compute.googleapis.com

# 2. Setup the Network and Subnet
gcloud compute networks create pbr-vpc --subnet-mode=custom
gcloud compute networks subnets create pbr-subnet \
    --network=pbr-vpc --region=us-central1 --range=10.10.0.0/20

# 3. Create the highly available NVA (Firewall) fleet behind an ILB
# (This follows the standard "ILB as next hop" pattern)

# 3a. NVA Instance Template with NAT config
gcloud compute instance-templates create firewall-template-pbr \
    --region=us-central1 --machine-type=e2-medium --can-ip-forward \
    --network-interface=network=pbr-vpc,subnet=pbr-subnet \
    --tags=allow-health-check \
    --image-family=debian-11 --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
    sudo sysctl -w net.ipv4.ip_forward=1
    sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE'

# 3b. Managed Instance Group (MIG)
gcloud compute instance-groups managed create firewall-mig-pbr \
    --template=firewall-template-pbr --size=2 --zone=us-central1-a

# 3c. Health Check and Firewall Rule for it
gcloud compute health-checks create tcp pbr-health-check --region=us-central1 --port=80
gcloud compute firewall-rules create allow-pbr-health-checks \
    --network=pbr-vpc --allow=tcp:80 --source-ranges=35.191.0.0/16,130.211.0.0/22 --target-tags=allow-health-check

# 3d. ILB Backend Service
gcloud compute backend-services create pbr-backend-service \
    --load-balancing-scheme=INTERNAL --protocol=TCP --region=us-central1 \
    --health-checks=pbr-health-check --health-checks-region=us-central1
gcloud compute backend-services add-backend pbr-backend-service \
    --instance-group=firewall-mig-pbr --instance-group-zone=us-central1-a --region=us-central1

# 3e. ILB Forwarding Rule (this is the next hop)
gcloud compute forwarding-rules create pbr-forwarding-rule \
    --region=us-central1 --load-balancing-scheme=INTERNAL \
    --network=pbr-vpc --subnet=pbr-subnet \
    --backend-service=pbr-backend-service --ip-protocol=TCP --ports=ALL

# 4. Create the Policy-Based Route
# This is the core PBR configuration.
gcloud network-connectivity policy-based-routes create my-tcp-inspection-policy \
    --network="projects/your-project-id/global/networks/pbr-vpc" \
    --source-range="10.10.0.0/24" \
    --protocol="TCP" \
    --next-hop-ilb-ip="10.10.0.100"  # Replace with the actual IP of your pbr-forwarding-rule

# NOTE: You need to get the IP address of the forwarding rule created in step 3e and use it here.
# You can get it with: gcloud compute forwarding-rules describe pbr-forwarding-rule --region=us-central1
    
After this setup is complete, any VM with a source IP in the `10.10.0.0/24` range sending TCP traffic to any destination will have its traffic redirected to the ILB, and thus through one of the firewall appliances. All other traffic (e.g., UDP, ICMP, or from other source ranges) will bypass this policy and use the VPC's standard routing table (e.g., the default route to the internet).
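To sanity-check the setup, you can look up the ILB address the policy must point at and then inspect the policy resource itself. This assumes the `network-connectivity` command group for policy-based routes; resource names match the steps above:

```shell
# Print only the IP address assigned to the ILB forwarding rule (the PBR next hop)
gcloud compute forwarding-rules describe pbr-forwarding-rule \
    --region=us-central1 --format="value(IPAddress)"

# Confirm the policy-based route's match criteria and next hop
gcloud network-connectivity policy-based-routes describe my-tcp-inspection-policy
```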

4. Comparison: Classic vs. Modern PBR

| Feature | Classic (Tags & Priority) | Modern (Policy-Based Routes) |
| --- | --- | --- |
| Source identification | Network tag on a VM | Source IP CIDR range |
| Matching granularity | Low (only source tag) | High (source range, destination range, protocol) |
| Next hop | Single VM instance, VPN tunnel, etc. | Internal Load Balancer IP address (recommended) or internet gateway |
| Configuration model | Tightly coupled; policy is tied to the VM's tag | Decoupled; policy is a central network resource |
| Management | Can be complex to manage tags across many VMs | Centralized; easier to manage and audit |
| Status | Generally Available | Preview |

Conclusion

Policy-Based Routing is a powerful tool for controlling network traffic beyond simple destination-based routing.