I’m a big fan of Fortigate firewalls. They have a nice interface to work with, they’re stable, and they’re like a Swiss army knife in our network. Many times I’ve thought that surely the Fortigate can’t do something, and time and time again it has proved me wrong.

So when I read about PorterLB and decided to try it, the choice was easy enough, as I run a Fortigate firewall and it has BGP support!

I want to thank Christian Karlsson at NetCat IT for joining this session. He’s a great network admin and knows BGP like the back of his hand!

All of the files/examples mentioned in this article are available here.

What is PorterLB?

It’s a way for Kubernetes administrators to expose services to the outside world without using node ports. Instead of exposing your service on a random high port, you announce its existence to your network using BGP.

Installation

Easy, just follow the steps here to install it. Rancher users might want to remove the default nginx ingress controller before doing this, as it conflicted with PorterLB in my case at least. You can do this in the cluster YAML file here:

rancher_kubernetes_engine_config:
  ingress:
    provider: none
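
For reference, the install itself boils down to applying the PorterLB manifest and waiting for the manager pod to come up. A minimal sketch, assuming the manifest URL and the porter-system namespace from the PorterLB docs at the time of writing (double-check them against the install guide):

kubectl apply -f https://raw.githubusercontent.com/kubesphere/porterlb/master/deploy/porter.yaml
kubectl get pods -n porter-system -w    # wait until the porter-manager pod is Running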

Kubernetes Config

PorterLB BGP Listener

Ok, so here’s one thing the Fortigate could not do: connect via BGP on a custom port. This could be a problem if you’re using Calico, since Calico already uses the standard BGP port 179 and PorterLB would then have to listen on a different one. Luckily, Calico was not enabled on my cluster, so it worked fine.

The following YAML file tells PorterLB that it should listen for BGP traffic on port 179 and that its local AS number is 64512. You can choose any private AS number between 64512 and 65534.

apiVersion: network.kubesphere.io/v1alpha2
kind: BgpConf
metadata:
  name: default
spec:
  as: 64512
  listenPort: 179
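
Apply it like any other manifest and read it back to make sure it was accepted; BgpConf is a PorterLB CRD, so kubectl can list it once PorterLB is installed (the file name below is just an example):

kubectl apply -f porter-bgp-conf.yaml
kubectl get bgpconf default -o yaml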

The peer configuration

The following configuration tells PorterLB that it should connect to 192.168.1.1 to announce BGP changes. It also tells PorterLB that the peer’s AS number is 64513. If you already have an AS number at your site you should use that instead. I don’t have one in my lab, so I simply picked 64513.

apiVersion: network.kubesphere.io/v1alpha2
kind: BgpPeer
metadata:
  name: fortigate-01
spec:
  conf:
    peerAs: 64513
    neighborAddress: 192.168.1.1
  afiSafis:
    - config:
        family:
          afi: AFI_IP
          safi: SAFI_UNICAST
        enabled: true
      addPaths:
        config:
          sendMax: 20
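
Same procedure here: apply the peer and read it back to confirm the AS number and neighbor address (again, the file name is just an example):

kubectl apply -f porter-bgp-peer.yaml
kubectl get bgppeer fortigate-01 -o yaml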

Please note that the default installation of PorterLB runs a single replica on one of the nodes using a Deployment. To get routes announced from all nodes, I recommend replacing the Deployment with a DaemonSet. You can convert the Deployment yourself, or use the one I have converted. I suggest you do this right away. More on this topic here.
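
If you do the conversion yourself, the essential change is switching kind: Deployment to kind: DaemonSet and dropping the replicas field while keeping the pod template as it is. Afterwards there should be one manager pod per node; a quick sanity check, assuming PorterLB runs in the porter-system namespace:

kubectl get pods -n porter-system -o wide    # expect one porter-manager pod per node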

That’s it for the PorterLB configuration, let’s move on to the Fortigate configuration.

Fortigate configuration

This took a little bit more time, but as a whole the process went really smoothly. Before you start you need to enable the Advanced Routing feature. To do this, go to System -> Feature Visibility and click the toggle.

After this it’s time to do some configuration. You can do most things in the WebUI, but some things need to be done in the CLI. The code which follows this list essentially does the following:

  1. Set the local AS number to 64513 (the same as the peerAs number in the PorterLB BgpPeer configuration)
  2. Set the router ID to 192.168.1.1
  3. Enable multipath so that the Fortigate can load balance between multiple routes
  4. Configure a neighbor-group with the AS number from the PorterLB local configuration above (64512, not the peerAs number)
  5. Define a neighbor range, which makes the Fortigate accept BGP peers from any IP in that range. Since nodes come and go in Kubernetes this is handy, as it lets you add a new node without touching the Fortigate configuration.

Here’s the full configuration. Open up a CLI, change the needed details (i.e. IP addresses and AS numbers) and paste it.

config router bgp
    set as 64513
    set router-id 192.168.1.1
    set ebgp-multipath enable
    config neighbor-group
        edit "PorterLB"
            set remote-as 64512
        next
    end
    config neighbor-range
        edit 1
            set prefix 192.168.1.0 255.255.255.0
            set max-neighbor-num 100
            set neighbor-group "PorterLB"
        next
    end
end
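
Before moving on you can review what was actually stored with:

fortigate # show router bgp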

Configure an IP range which can be announced

The next step is to prepare a range which we will announce to the outside world. In my case I used the example from PorterLB’s guide:

apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  namespace: istio-system
  name: porter-bgp-eip
spec:
  address: 172.22.0.2-172.22.0.10
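
Apply the Eip and check that PorterLB registered the address pool (Eip is another PorterLB CRD; the file name below is just an example):

kubectl apply -f porter-bgp-eip.yaml
kubectl get eip porter-bgp-eip -n istio-system -o yaml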

When you define a service of type LoadBalancer, PorterLB will pick an IP from the range above and announce it to the Fortigate, telling it that it can route traffic to this IP via the Kubernetes nodes. While the IP range above is internal, you could easily do the same thing with a public IP range.


Then I took an existing service (istio-ingressgateway), added the necessary PorterLB annotations, changed the type from NodePort to LoadBalancer and removed the old nodePort configuration. See comments below:

apiVersion: v1
kind: Service
metadata:
  annotations:
    lb.kubesphere.io/v1alpha1: porter                   # Added
    protocol.porter.kubesphere.io/v1alpha1: bgp         # Added
    eip.porter.kubesphere.io/v1alpha2: porter-bgp-eip   # Added
  labels:
    app: istio-ingressgateway
    install.operator.istio.io/owning-resource: unknown
    install.operator.istio.io/owning-resource-namespace: istio-system
    istio: ingressgateway
    istio.io/rev: default
    operator.istio.io/component: IngressGateways
    operator.istio.io/managed: Reconcile
    operator.istio.io/version: 1.8.3
    release: istio
  name: istio-ingressgateway-porter
  namespace: istio-system
spec:
  externalTrafficPolicy: Cluster
  ports:
  - name: status-port
    port: 15021    # Removed all nodePort definitions from this list
    protocol: TCP
    targetPort: 15021
  - name: http2
    port: 80
    protocol: TCP
    targetPort: 8080
  - name: https
    port: 443
    protocol: TCP
    targetPort: 8443
  - name: tcp
    port: 31400
    protocol: TCP
    targetPort: 31400
  - name: tls
    port: 15443
    protocol: TCP
    targetPort: 15443
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
  sessionAffinity: None
  type: LoadBalancer   # Changed from NodePort to LoadBalancer
status:
  loadBalancer: {}

Verify that the service has been deployed properly by checking that it got an EXTERNAL-IP from the range we defined earlier:

❯ kubectl get svc -n istio-system istio-ingressgateway-porter
NAME                          TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)                                                                      AGE
istio-ingressgateway-porter   LoadBalancer   10.43.154.22   172.22.0.2    15021:31644/TCP,80:32282/TCP,443:31786/TCP,31400:31723/TCP,15443:30042/TCP   30s

Validation

First, let’s check whether the BGP sessions between PorterLB and the Fortigate came up:

fortigate # get router info bgp summary 
BGP router identifier 192.168.1.1, local AS number 64513
BGP table version is 14
1 BGP AS-PATH entries
0 BGP community entries
Next peer check timer due in 39 seconds
 
Neighbor        V         AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
192.168.1.33   4      64512     687     772       14    0    0 05:35:26        1
192.168.1.39   4      64512     670     756       14    0    0 05:28:52        1
192.168.1.45   4      64512     689     771        0    0    0 05:37:00        1
 
Total number of neighbors 3

Sweet, we have sessions from all nodes (if you only get one here, backtrack to the note about the replica count earlier).

Next, check if you have any routes announced from the nodes to the Fortigate:

fortigate # get router info routing-table bgp
Routing table for VRF=0
B       172.22.0.2/32 [20/0] via 192.168.1.42, internal, 00:00:24
                      [20/0] via 192.168.1.39, internal, 00:00:24
                      [20/0] via 192.168.1.45, internal, 00:00:24

You can also see the routes in the Routing Monitor in the WebUI.

Nice! We have a route! Finally, I pointed DNS to 172.22.0.2 and voilà, applications behind Istio are loading and we have BGP load balancing!
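
If you want to test an announced IP before (or instead of) pointing DNS at it, a quick curl with a Host header works just as well; the hostname below is only a placeholder for whatever your Istio gateway serves:

curl -I http://172.22.0.2/ -H "Host: app.example.com"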

This started out as a lab, but considering the small footprint and smooth management I think I will keep it.
