I've been implementing Argo in my lab cluster and ran into some headaches. There are a bunch of ingress configurations documented in the installation guide, but Istio is not one of them, so I figured I'd document it here.

I'm basing this article on these:
https://github.com/argoproj/argo-cd/issues/2784
https://gist.github.com/janeczku/b16154194f7f03f772645303af8e9f80

In order to make it work you're going to have to rename some of the service ports and start the API server with the --insecure flag. It sounds scary, but since you have the Istio Envoy sidecar in your pod the traffic will be encrypted anyway, so it does not matter.

See something that is wrong or that can be improved? Please leave a comment and I'll update the instructions!

Installing Argo

Prepare the namespace

Create the namespace and label it in order to enable automatic injection.

kubectl create namespace argocd
kubectl label namespace argocd istio-injection=enabled --overwrite
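
If you want to double-check that the label took, here's an optional sanity check:

# The output should include istio-injection=enabled
kubectl get namespace argocd --show-labels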

Download the YAML files

I've prepared some YAML files based on https://github.com/argoproj/argo-cd/issues/2784, but with the modification that we're running argocd-server with the --insecure parameter and that we're removing port 443 from the service. Basically, the patch file renames ports and adds metadata and labels to your Argo installation in order to make it more Istio friendly (so you get proper stats).

mkdir argo-install
cd argo-install

# Download the official Argo installation files
curl https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml -o ./argocd_install.yaml

# Download the kustomize and patches
curl https://raw.githubusercontent.com/epacke/argo-istio/master/istio_patches.yaml -o ./istio_patches.yaml
curl https://raw.githubusercontent.com/epacke/argo-istio/master/kustomization.yaml -o ./kustomization.yaml

# Istio declarations
curl https://raw.githubusercontent.com/epacke/argo-istio/master/VirtualService.yaml -o ./VirtualService.yaml
curl https://raw.githubusercontent.com/epacke/argo-istio/master/Gateway.yaml -o ./Gateway.yaml
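
To give a rough idea of what these files do (the downloaded files above are the authoritative versions, so treat this as a sketch where details like port names may differ), the kustomization pulls in the official install manifest and applies strategic merge patches along these lines:

# kustomization.yaml (sketch)
resources:
  - argocd_install.yaml
patchesStrategicMerge:
  - istio_patches.yaml

# istio_patches.yaml (sketch): rename the service port so Istio can detect the
# protocol, and start argocd-server with --insecure so that the Envoy sidecar
# handles the encryption instead of the API server itself. The real patch also
# drops the 443 port, which I'm glossing over here.
apiVersion: v1
kind: Service
metadata:
  name: argocd-server
  namespace: argocd
spec:
  ports:
    - name: http-argocd-server   # Istio detects the protocol from the name (http or http-<suffix>)
      port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: argocd-server
  namespace: argocd
spec:
  template:
    spec:
      containers:
        - name: argocd-server
          command:               # the real patch keeps the original arguments and adds --insecure
            - argocd-server
            - --insecure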

Configure your domain

Edit both VirtualService.yaml and Gateway.yaml to use your domain instead of argocd.xip.io. Only two places need to be modified.
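
For reference, the two places are the hosts entries: one in Gateway.yaml and one in VirtualService.yaml. A sketch of what to look for (resource names and TLS settings here are illustrative; keep whatever the downloaded files contain and only change the host):

# Gateway.yaml: first place to change
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: argocd-gateway
  namespace: argocd
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 443
        name: https
        protocol: HTTPS
      tls:
        mode: SIMPLE
        credentialName: argocd-cert   # illustrative; keep the downloaded file's TLS settings
      hosts:
        - argocd.xip.io               # <-- replace with your domain
---
# VirtualService.yaml: second place to change
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: argocd
  namespace: argocd
spec:
  hosts:
    - argocd.xip.io                   # <-- replace with your domain
  gateways:
    - argocd-gateway
  http:
    - route:
        - destination:
            host: argocd-server
            port:
              number: 80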

Install Argo

Make sure you're in the argo-install directory, then run kubectl apply -k ./ to install Argo.
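
In other words, something along these lines; the get pods at the end is just a sanity check:

# Run from inside the argo-install directory
kubectl apply -k ./

# All pods should eventually be Running with 2/2 containers (the app plus the injected Istio sidecar)
kubectl get pods -n argocd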

Test

Now you can surf to your Istio ingress port and test it out. The user is admin and the password is the name of your argocd-server pod. The commands below will show you the password and look up the Istio ingress gateway node port. Just remember to replace argocd.xip.io with your domain.

echo "Argo admin password is \"$(kubectl get pods -n argocd -l app.kubernetes.io/name=argocd-server -o name | cut -d'/' -f 2)\""
export SECURE_INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
argocd --grpc-web login argocd.xip.io:$SECURE_INGRESS_PORT

You should also be able to surf to https://<your domain>:$SECURE_INGRESS_PORT using your browser.
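
If you prefer a quick smoke test from the command line before opening the browser, a plain curl against the ingress gateway should get an answer (replace argocd.xip.io with your domain; -k skips certificate verification in case you're using a self-signed certificate):

curl -kI https://argocd.xip.io:$SECURE_INGRESS_PORT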

Adding Rancher cluster credentials

This section is applicable if you, like me, are running Rancher. In that case, adding the cluster credentials won't work like it usually does. Instead, you need to do it manually.

Creating the user and assigning permissions

First we need to create a user. Go to Global -> Security -> Users.

Click on Add and then proceed to create a Standard user with a name of your choosing. For the purpose of this guide we'll use argo-service. Assign a password and click on Create.

Next, navigate to the User Cluster you want Argo to be able to access.

Click on Members.

Then click on Add Member. Search for argo-service, assign the role “Member” and click on Create.

Creating an API key

  1. Log in as argo-service and choose API & Keys from the top right corner
  2. Click on Add Key
  3. Add a description if you want to and leave everything as default
  4. Click on Create
  5. Copy all the information on the page into a password storage of some kind and click on Close

Creating the Argo Cluster secret

Before you create this secret you need to determine the URL of your user cluster. You can get this by clicking on Cluster in the top menu in Rancher and then clicking on Kubeconfig File.

The URL you're after is the server address listed under clusters in the Kubeconfig file, in the format https://rancher.xip.io/k8s/clusters/c-aabb12.

Next, create the following YAML file. Note that the server property is the URL from above and the bearerToken is the bearer token you got when creating the API key earlier. This configuration assumes that you have a legitimate certificate for your Rancher cluster. If you don't, I have a guide for setting that up here.

apiVersion: v1
kind: Secret
metadata:
  name: rancherprod-cluster-secret
  labels:
    argocd.argoproj.io/secret-type: cluster
  namespace: argocd
type: Opaque
stringData:
  name: rancher-prod
  server: https://rancher.xip.io/k8s/clusters/c-aabb12
  config: |
    {
      "bearerToken": "token-123aa:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa",
      "tlsClientConfig": {
        "insecure": false
      }
    }

Run kubectl apply -f <file name> to create the secret. Argo should automatically pick this up and have access to your user cluster.
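
For example, assuming you saved the file as rancher-prod-cluster.yaml (the file name is up to you) and that you're still logged in with the argocd CLI from the test step earlier:

kubectl apply -f rancher-prod-cluster.yaml

# After a little while the Rancher cluster should show up next to the default in-cluster entry
argocd --grpc-web cluster list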

Troubleshooting

When do things go as they should in IT? More or less never. Argo has a great guide on how to troubleshoot adding cluster credentials. You can find it here.

The only things I'd like to add that were not super clear to me are the following:

  1. Make sure to look at the right version of the docs. The command to generate the kubeconfig was argocd-util kubeconfig in my version, not argocd-util cluster kubeconfig as it is in other versions.
  2. The api-server-url is the Rancher user cluster URL, i.e. https://rancher.xip.io/k8s/clusters/c-aabb12
