Brain dump for at least semi-good ideas

Logstash pipeline tester

At work we do log parsing and shipping using Logstash. Logstash has been working great and has been stable for us, but testing the pipelines has been a bit of a hurdle, especially for people who are not so well versed in Linux.

To solve this issue I decided to try to develop a tool for testing pipelines. For the project to be successful, the following criteria had to be met.

  • The tool had to support any client platform, be it Windows, Linux or Macintosh
  • It had to be easy to use without knowing Linux commands
  • The interface should give direct results from the Logstash output section

The result ended up being a combination of a web frontend, Node.js and Logstash.

How to start

  1. If needed, install docker on your machine
  2. Clone the repository on GitHub.
  3. Copy your pipeline folder to logstash/logstash-config/pipeline
  4. Enter the repository directory
  5. Run docker-compose build
  6. Run docker-compose up
  7. Open up http://localhost:8080 in your browser to access the interface

IMPORTANT – Depending on your Docker host you might be asked whether you want to share your drive with the Docker service. If you get this message, share your drive, stop the containers and run docker-compose up again.

Adding pipelines

There are two ways: either modify the existing pipelines or follow these steps.

  1. Modify logstash-config/pipelines.yml
# Example covers the creation of a pipeline called mypipeline
- mypipeline
  path.config: "/usr/share/logstash/pipeline/mypipeline"
  2. Then create a directory in logstash-config/pipeline called mypipeline
  3. Copy your logstash config to logstash-config/pipeline/mypipeline
  4. Last but not least, restart the containers using docker-compose up

Modifying pipelines

Since the Logstash pipeline directory is mounted in the containers, Logstash should detect file changes and reload the pipeline. If this does not happen for some reason you need to restart the containers.


Check the docker-compose window for any exceptions thrown.

Reporting issues

First, please check whether there are any existing issues that match your problem. If not, please feel free to submit an issue here at GitHub.

I have very limited time so I won’t be able to act fast on any issue, but it’s always good to have it logged and, who knows, maybe someone else will pick it up and make a PR.

Application flow diagram

Screenshot from the application

Code available here

Writing a custom integration for the Google assistant


We’re using Google assistant at home for controlling the lights, playing the news while having breakfast, getting weather forecasts and sometimes playing music for the kids.

The Minis are relatively cheap and, if you disregard the fact that you’re letting Google even further into your private space, they’re a great companion at home for helping with some of the mundane daily stuff.

Now to the “problem”. We live in an area where we can commute using either bus or boat and asking the assistant would only give information about the next bus, never the bus after that. The response is also a bit too chatty and takes quite a while to complete.

This caused me to look into writing a custom integration. At first I was guided to IFTTT (IF This Then That), which is a great service for when you want the assistant to perform actions with pre-integrated things, like turning on a lamp and then replying with a static sentence.


  • User says: “Ok google, movie time”
  • Assistant turns off the lights.
  • Assistant closes the curtains.
  • Assistant responds: “Done, enjoy the movie.”

But I needed to get a response that was dependent on the result of an external API call and I could not find any way to do this with IFTTT. However, it looked like Google DialogFlow could.

Google DialogFlow

DialogFlow is Google’s platform for creating applications that interact with humans using a conversational structure. You can do many advanced things with it such as follow-up questions, changing the conversation depending on context and machine learning. The goal in my case was quite simple:

  • Ask the assistant when the bus, or boat is
  • Have the assistant collect information about the bus and boat table using a REST API call
  • Respond to the user with content from the REST API call response

Getting an account

In order to use DialogFlow you need an account, so start by creating one, or link your existing account. After that we’re going to create our first so-called Agent.

Configure the name of the agent, the language and a time zone and click on Create.

Configuring an intent

Next up we’re going to start with configuring an Intent which is the phrase that will be used to help Google assistant understand that you want to trigger this particular action. Let’s start, shall we?

  1. Click on Intents
  2. Delete the existing intents as we don’t need them for this scenario
  3. Click on “Create Intent” at the top right corner
  4. Click on Events and add “Google Assistant Welcome” and “Welcome”
  5. Click on Add training phrases. These are examples of what you want the Google Assistant to react to later on. In my case the training phrases were “When is the next bus?“, “When is the bus?“, “When is the boat?” and “When is the next boat?”. The more training phrases you add, the more likely it is that the Google Assistant’s machine learning will understand when users want to use your service. Neat, huh?
  6. When you have added all the training phrases, click on “Enable Fulfillment“.
  7. Then click on “Enable webhook call for this intent“.
  8. Click on “Save“.

Let’s recap what we have done so far. First we created a new Agent, then we gave the Agent an Intent that reacts to a few different Training Phrases. Finally, we told the Agent that we want the Intent to be fulfilled by a Webhook.

There’s other cool stuff we could do on this page such as parsing parameters from user input, but in our case we have all we need so let’s keep it simple.

Next it is time to configure the “Fulfillment” by telling DialogFlow which URL to use for the Webhook. In my case I turned to Google’s Cloud Platform for hosting it. So let’s take a break from DialogFlow and head on over to the web service creation.

Signing up for a GCP account

The intent here is to create a web service, so you can use any platform for this, including your own private server or AWS. In my case I went with Google, which has a great startup package where you get $300 of credit when you create an account on their cloud platform. Using their Cloud Functions is free for quite a large number of calls, which makes it an excellent choice for our needs.

Creating a cloud function

Cloud Functions are pieces of code that are executed on demand. The server layer is abstracted away (also called serverless) so you can focus 100% on your code. You can run Node.js, Python or Go in Cloud Functions. In this example I will use Python.

  1. Click on the “Hamburger” menu button and go to “Cloud Functions”
  2. Click on “Create Function”
  3. Give your function a name
  4. Assign an appropriate amount of memory. In my case 128MB was more than enough.
  5. Set the trigger to “HTTP”
  6. Check “Allow unauthenticated invocations”
  7. In the source code section, choose “Inline editor”
  8. Write/Paste your script into the text area. Look below for an example.
  9. As Runtime, choose Python 3.7
  10. In “Function to execute”, choose main or whatever function you want to use to initiate the cloud function
  11. Click on “Environment variables, networking, timeouts and more”
  12. Choose the region and then click on “Create”
  13. Note the cloud function URL that is generated.

Example script

The script below uses the SL (Stockholm commuting) API to fetch information about the coming bus and boat trips near my station. I’ve redacted the API key and station IDs for privacy purposes.

import requests
import json
import datetime
import math
from flask import escape
import pytz

slApiBase = 'xxx'  # API endpoint URL, redacted like the values below
slApiKey = 'xxx'
busDepartureStationId = 'xxx'
busDestinationStationId = 'xxx'
boatDepartureStationId = 'xxx'
boatDestinationStationId = 'xxx'

def minutes_until(d):
    # Since I live in Stockholm we set the tz to Stockholm
    tz = pytz.timezone('Europe/Stockholm')
    now =

    # Remove the time zone information since we can't do a datediff if it is still there
    # The datestamp will remain the same
    now = now.replace(tzinfo=None)

    # Return the difference in minutes and round downwards
    d = datetime.datetime.strptime(d, "%Y-%m-%d %H:%M:%S")
    return math.trunc(abs((now - d).total_seconds()/60))

def getTrips(departure, destination):
    response = requests.get(f'{slApiBase}?key={slApiKey}&originId={departure}&destId={destination}')
    data = json.loads(response.content)
    return data

def get_trip(departures, trip_number):
    departure = departures['Trip'][trip_number]['LegList']['Leg'][0]['Origin']
    departure_date = departure['date']
    departure_time = departure['time']
    return [departure_date, departure_time]

def convert_time_stamp(ts):
    # Drop the seconds part of a HH:MM:SS time stamp
    return ':'.join(ts.split(':')[:-1])

def main(request):
    buses = getTrips(busDepartureStationId, busDestinationStationId)
    bus1 = get_trip(buses, 0)
    bus2 = get_trip(buses, 1)

    boats = getTrips(boatDepartureStationId, boatDestinationStationId)
    boat1 = get_trip(boats, 0)
    boat2 = get_trip(boats, 1)

    if minutes_until(' '.join(boat1)) > 180:
      boatResponse = 'There are no boats departing to the city within 3 hours'
    else:
      boatResponse = f'There\'s also a boat leaving at {convert_time_stamp(boat1[1])} and then another at {convert_time_stamp(boat2[1])}.'

    return json.dumps({
      "payload": {
        "google": {
          "expectUserResponse": False,
          "richResponse": {
            "items": [
              {
                "simpleResponse": {
                  "textToSpeech": f"The next bus leaves at {convert_time_stamp(bus1[1])}, and the one after that at {convert_time_stamp(bus2[1])}. {boatResponse}"
                }
              }
            ]
          }
        }
      }
    })

Configuring the Web hook in DialogFlow

  1. In DialogFlow, click on “Fulfillment”
  2. Toggle Webhook to “Enabled”
  3. Paste the URL that was generated for you when you created the Cloud Function (or the URL to your service).
  4. Click on Save.

Testing the app

On the right side of your screen there is a text that says “See how it works in Google Assistant”.

Click on it to navigate to the DialogFlow Simulator. Here you can see how the interaction will work with your test app by entering text in the input section. You can also try talking to your home assistant by saying “Ok Google, talk to my test app“. This should trigger the assistant to repeat the response given by your API call. You can also try to trigger your app by using the training phrases from before, but I’ve found this to be a bit hit or miss depending on the uniqueness of the training phrase.

No luck? See the troubleshooting section below.


When using the simulator from the previous section of the guide you have the option to deploy your app by creating a release. If you aren’t going to spread it to a larger crowd you might want to create alpha or beta releases for a smaller crowd. Either way, I’d start with an Alpha or Beta release.

Since Google has done a good job explaining each field of the deployment forms I won’t get into the details of this part. The only thing I can say is that they do review both your code and your descriptions, so it is worth adding some extra effort into being as verbose as you possibly can. Think about what you would like to see when looking through the actions library and populate the forms accordingly.

That’s the end of the guide. Please do leave a comment if you tried it and what you did with it. It’s always nice to be inspired!

Troubleshooting DialogFlow

Hopefully you don’t need to go through this part of the guide, but in case you run into trouble, here’s a few pointers.

Inspecting the machine to machine communications

In the DialogFlow simulator there is a top menu with buttons named “Request” and “Response”. If you click them you can inspect what the call to your REST API looks like and what the REST API response was. There’s also quite a lot of information in the “Debug” section, but I found the former to be less noisy and more helpful.

Cloud function is timing out

The assistant has a fairly short timeout, and if your cloud function takes more than 5 seconds to respond it will fail. You can see if this happens by looking at the response.

If this happens often you can consider a few of these things:

  • Rewriting your function to be faster, perhaps by caching data
  • Moving your cloud function to a region that is closer to your API source
  • Do some pre-warming of the cloud function, as there might be a bit of additional delay when spawning the process if the Cloud Function has not been used in a long time. Be aware though that pre-warming comes with a cost in terms of the number of executions, so it might also be an alternative to move the code to a dedicated public server.
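The first option is easy to sketch. Here is a minimal module-level cache with a time-to-live; the `fetch` callable stands in for the slow timetable API call, and the names and TTL are illustrative, not from the original script:

```python
import time

_cache = {}       # key -> (timestamp, value)
CACHE_TTL = 60    # seconds; tune to how fresh the timetable needs to be

def cached(key, fetch, ttl=CACHE_TTL):
    """Return a cached value if it is younger than ttl, otherwise refetch."""
    now = time.time()
    if key in _cache and now - _cache[key][0] < ttl:
        return _cache[key][1]
    value = fetch()
    _cache[key] = (now, value)
    return value

# Usage: wrap the slow timetable call so repeated invocations within the
# TTL are served from memory instead of hitting the API again, e.g.
# buses = cached('bus', lambda: getTrips(busDepartureStationId, busDestinationStationId))
```

Since a cloud function instance may be recycled between invocations, the cache only helps while the instance stays warm, but that is exactly the hot path where the timeout bites.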

Cloud function has a malformed response

You can do a lot of nice stuff such as follow-up questions, but if things are not working, go back to basics. Make your cloud function return a static response to rule out that the response format is the problem.

Here’s an example of a static response body that works:

{
  "payload": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Wow, this app works with static responses!"
            }
          }
        ]
      }
    }
  }
}
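One quick way to catch a malformed body is to validate it locally before deploying. A small sketch using Python’s json module, with the static response above as the body:

```python
import json

body = """
{
  "payload": {
    "google": {
      "expectUserResponse": false,
      "richResponse": {
        "items": [
          {
            "simpleResponse": {
              "textToSpeech": "Wow, this app works with static responses!"
            }
          }
        ]
      }
    }
  }
}
"""

# json.loads raises an error on any missing brace or comma,
# which is exactly the kind of mistake that breaks the Assistant.
parsed = json.loads(body)
speech = parsed["payload"]["google"]["richResponse"]["items"][0]["simpleResponse"]["textToSpeech"]
```

If the load succeeds and the `textToSpeech` path resolves, the structure at least matches what the Assistant expects.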

The action is not triggered when using the training phrases

This section covers the case where the phrase “Talk to my test app” triggers your action but the training phrases don’t. I must confess that this part has puzzled me too. Sometimes it has worked directly, sometimes it has worked after a while.

However, doing these things has worked for me:

  • Adding more training phrases. Think about the different ways you can ask the assistant to do what you need it to do. “When is the next bus” could also be phrased more sloppily as “When is the bus” or “When is the bus leaving”.
  • Try a totally different, arbitrary command, e.g. “Where did all my smurfs go?” to see if your previous phrases simply do not want to play ball with the search giant.
  • Test, test, test. Go through the simulator a few rounds with both text input and speech input.

Fortigate API – FortiOS 6.2

Recently I changed my firewall from Sophos UTM to a Fortigate. Since I have a decent lab setup at home with a bunch of services I decided to try out the Fortigate API. However, to my surprise there was no API documentation openly available online. To get hold of it one had to be a part of the Fortinet Developer Network, which requires endorsement from two Fortinet employees. Personally I’m not a big fan of keeping these things behind closed doors. I think it benefits neither the company nor the customer.

So, in case someone else is in the same situation I was, I thought I’d write a short intro on how to use the API with an admin account, using PowerShell.


The first step is to send a POST against /logincheck using form data:

# Authentication against the box
$PostParameters = @{
    "username" = $FortigateSettings.user;
    "secretkey" = $FortigateSettings.password;
}

$Result = Invoke-WebRequest -Method POST "" -Body $PostParameters -SessionVariable FortigateSession

The code above also saves the cookies from the response into a session variable called FortigateSession. From this variable we will also extract the ccsrftoken cookie value, which is required when you want to change things on the device.

$CSRFTOKEN = ($FortigateSession.Cookies.GetCookies("") | Where-Object { $_.name -eq "ccsrftoken" }).value.replace("`"", "")

Now we’re set to run commands against the Fortigate API by using the session variable.

John Heyer sent in a tip (see the comments below) that you can also create a token-based admin via System -> Administrators -> Create New -> REST API Admin, then add “?access_token=XXXX” to the API calls.
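As a sketch of that token variant, here is how such a call could be built. The host, path and token below are placeholder values, and the `/api/v2/cmdb/` prefix is an assumption based on the FortiOS configuration endpoints:

```python
from urllib.parse import urlencode

def token_url(host, path, token):
    """Build a FortiOS configuration API call authenticated with ?access_token=."""
    return f"{host}/api/v2/cmdb/{path}?" + urlencode({"access_token": token})

# The resulting URL can be passed straight to Invoke-WebRequest or curl, e.g.
# token_url("https://fortigate.example.local", "firewall/address", "XXXX")
```

The advantage over the /logincheck flow is that no cookie jar or CSRF token handling is needed, which makes scripted one-off calls much simpler.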


# Get the DHCP configuration
Invoke-WebRequest "" -WebSession $FortigateSession

# Get a list of the DNS databases
Invoke-WebRequest "" -WebSession $FortigateSession -Method "GET"

# Get a list of the address objects
Invoke-WebRequest "" -WebSession $FortigateSession

# Add an address object
$SHost = @{
    "name" = "CloudFlare-1";
    "subnet" = "";
} | ConvertTo-Json -Compress

Invoke-WebRequest "" -Headers @{"Content-Type" = "application/json"; "X-CSRFTOKEN" = $CSRFTOKEN} -WebSession $FortigateSession -Method "POST" -Body $SHost -ErrorAction SilentlyContinue

Please note that while these examples cover authentication using a normal admin account, Fortigate devices also have support for dedicated REST accounts using tokens. For frequent/production integrations you’d want to look there instead.

The script I used to migrate from Sophos to Fortigate is available here.

BigIp Report – 2019 Survey

There were some great suggestions in the previous survey. Some of them have been transferred to this year’s survey in order to let people vote for the ones they would like to see.

Make your voice heard and cast your vote using the link below!

Gather SSL cipher statistics from your F5 device

With the new PCI DSS requirements around the corner it might be interesting to gather some SSL cipher statistics from your F5s. If you have a syslog server this is a piece of cake using the HSL function in iRules.

To use the iRule below, first create a pool called syslog-514_pool, or simply replace the name with a pool containing your syslog server(s). Then, for each virtual server attach the following iRule:


when HTTP_REQUEST {
    if { [info exists logged] && $logged == 1 }{
        # Do nothing. Already logged for this session
    } else {
        set hsl [HSL::open -proto UDP -pool syslog-514_pool]
        set host [HTTP::host]
        set useragent [HTTP::header "User-Agent"]
        set vs [virtual name]
        set logged 1

        HSL::send $hsl [string map [list "\t \t" "\t-\t"] "\
        [info hostname]\t\
        [clock format [clock seconds] -format "%d/%m/%Y %H:%M:%S %z"]\t\
        [SSL::cipher name]\t\
        [SSL::cipher version]\t\
        [SSL::cipher bits]\t\
        $host\t\
        $useragent\t\
        $vs"]
    }
}

Essentially, what it does is send a syslog message for every new SSL session established. This data could easily be indexed by Splunk or Elasticsearch to generate a report.
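On the collector side, each record is just a tab-separated line, so splitting it back into named fields before indexing is trivial. A minimal sketch in Python; the field names are assumptions based on what the iRule collects, and all values below are made-up sample data:

```python
# Assumed field order: hostname, timestamp, cipher details, then HTTP context
FIELDS = ["hostname", "timestamp", "cipher_name", "cipher_version",
          "cipher_bits", "host", "user_agent", "virtual_server"]

def parse_record(line):
    """Split one tab-separated HSL record into a dict of named fields."""
    return dict(zip(FIELDS, line.rstrip("\n").split("\t")))

record = parse_record(
    "lb01\t01/01/2019 12:00:00 +0100\tECDHE-RSA-AES128-GCM-SHA256\t"
    "TLSv1.2\t128\texample.com\tMozilla/5.0\t/Common/vs_example"
)
```

From there, counting sessions per cipher is a one-line aggregation in whatever indexer or script you prefer.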

PS. If you have a firewall between your load balancer and your syslog server you might want to verify that the port is open first.

Protecting BigIP Report behind an APM – By Shannon Poole

A fellow Devcentral member named Shannon Poole graciously shared this guide on how to protect BigIP Report behind the APM. This would actually be the first “guest post” on the blog too. If you want to get into contact with Shannon you can connect with him via LinkedIn or send a message via Devcentral.

Thank you very much for sharing this Shannon!


Here is a simple configuration that I came up with to regulate access to my BIGIP Report and utilize the APM module.  I am, by no means, an expert with APM but this policy may be simple enough to deploy to anything you want.

The author would like to thank David Allshouse, Senior Systems Engineer for constructive criticism of the manuscript.

Configure an Active Directory AAA server

Navigate to Access Policy -> AAA Servers -> Active Directory and use the configuration below.  It is necessary to give a name, domain name, and IP address of the domain controller. Also, choose Direct rather than Use Pool.

Note:  A better configuration may be to use the Pool should a DC become unresponsive but that is something which can be configured later.

Creating a New Access Profile

Navigate to Access Policy -> Access Profiles List and hit the create button.  Provide a name, such as MyAccessProfile, and set the profile type to “ALL.” This could probably be set to “LTM-APM” if you want to be precise but that is not necessary.  Next, remove the check for “Secure” in “Cookie Options” as it is not required due to no SSO. Finally, add “English (en)” as a language is required and click Finish.

Note: Since I am not using multiple domains or SSO configurations for this setup, creating an access profile was fairly simple.  

Configure Your Access Policy

Once you have configured your Access Profile, you should now see your policy in the Access Profile List and should be able to click on the policy name, which brings you to the screen below:

Click on the Access Policy tab and now when you click on Edit Access Policy for Profile “My Access Policy”, you should see the following screen:

This brings you to the basic configuration of your policy, which is configured with a “deny-by-default” method, similar to most things with F5.

Configure a Macro

With this policy, it was important to configure it in a way as to limit access via Active Directory security groups.  In order to do this, you need to add a macro to handle the logon page, authentication, and AD query processes. This can be done by clicking on “Add New Macro” and then selecting “AD auth query and resources” for the “Select macro template” drop-down.  Provide a name, such as “MyADAuth” and it should look like the template below:

Once you click “Save”, the Macro has been created and added to the policy:

The next step is to remove the “Resource Assign” and “AD Logging” items by clicking on the “X” and selecting delete.  These are not required for this policy. The end result should be this:

Now you are ready to configure the policy.  Start with the Logon Page: write some simple text in the “Form Header Text” box and change the “Logon Button” to “Submit”.  Everything else can be left as the defaults.

For the “AD Auth” configuration, only select the AAA server that you created earlier in the “Server” drop-down:

The AD Query is where you will configure your AD groups.  Like the previous screen shot, you need to select your AAA server from the “Server” drop-down:

Now it’s time to move on to the “Branch Rules” tab.  The first thing to do is remove the “Primary Group ID is 100” branch rule so you can create your own.  Once that is removed, you are free to select “Add Branch Rule.” It should look like this:

Next, rename the Branch Rule to “MyBranchRule” and select “change” which gives the ability to add an expression:

Next, click “Add Expression” and select the items that you see below while also adding your AD memberof attribute string for the group you want to use:

Once you click “Add Expression”, you should see your policy look like this:

Now you are ready to indicate which action determines a failure or a success within your macro.  You can do this by simply clicking on “Failure”, selecting the radio button for Successful, and clicking save:

The final step for the Access Policy configuration is to add your macro, MyADAuth, to the policy by selecting the plus sign between “Start” and “Deny” and navigating to the “Macrocalls” tab:

Now when you select the macro and click “Add Item”, it adds the macro to the policy:

Since both rules are set to deny, you need to change the Successful branch to an allow by clicking on “Deny” and selecting allow.

Save your changes and add the Access Policy to your Virtual Server.  To save your changes, you can simply click on the “Apply Access Policy” in the header above.  Then add the policy to your virtual server by navigating to your virtual server and adding it in the Access Policy section:

Scheduled BigIPReport CSV exports via mail

Today I got a feature request over at DevCentral from a BigIPReport admin to add the possibility of scheduled exports of BigIPReport via mail. While it does not really fit into the project itself, doing it is actually simpler than you might think!

Using a mix of PowerShell and .NET we can download the JSON files, parse them and generate a CSV file that can be sent to anyone in the organisation.

Please note that, as usual, there are a thousand ways to skin a cat (funny expression right there) and this script could be improved quite a bit. Some potential examples:

  • Creating the attachment from memory instead of a temporary file
  • Changing the mail format to HTML and adding some useful statistics like virtual server count, pool count, node count etc.
  • Adding a database, or using a flat file, could also give you trends.

If anyone is up to the task and wants to share the result I’d be happy to post it here along with your name. 🙂

Anyways, here’s the script!

$BigIPReportURL = "https://bigipreport.domain.local"
#SMTP Configuration
$User = "user"
$Password = "password"
$SmtpServer = ""
$SmtpServerPort = "2525" 
$From = ""
$Recipients = @("", "")
#Full path to where you want to store the csv temporary csv file
$CSVFile = "C:\Users\Patrik\Documents\t.csv"
If(Test-Path $CSVFile){
 Write-Host "CSV file exists, exiting script in order not to overwrite it"
 exit
}
#Create new webclient object
$WebClient = New-Object System.Net.WebClient
#Enable integrated authentication
$WebClient.UseDefaultCredentials = $true
#Get the json objects
$Virtualservers = ($WebClient.DownloadString("$BigIPReportURL/json/virtualservers.json")) | ConvertFrom-Json
$Pools = ($WebClient.DownloadString("$BigIPReportURL/json/pools.json")) | ConvertFrom-Json
$CSVHeader = "name;description;ip;port;sslprofile;compressionprofile;persistenceprofile;availability;enabled;currentconnections;cpuavg5sec;cpuavg1min;cpuavg5min;defaultpool;associated-pools;loadbalancer"
Function Get-PoolDetails {
 Param([array]$VSPools, [string]$Loadbalancer)
 $ReturnData = @()
 Foreach($Pool in $VSPools){
  $ObjPool = $Pools | Where-Object { $_.name -eq $Pool -and $_.loadbalancer -eq $Loadbalancer }
  $ReturnData += ($ObjPool.members | ForEach-Object { $_.name + " (" + $_.ip + ")" }) -Join ", "
 }
 $ReturnData -Join "|"
}
$CSVHeader | Out-File $CSVFile
Foreach($VS in $VirtualServers){
 $PoolDetails = Get-PoolDetails -VSPools $VS.pools -Loadbalancer $VS.loadbalancer
 @($, $Vs.description, $Vs.ip, $Vs.port, $vs.sslprofile, $vs.compressionprofile, $vs.persistenceprofile, $vs.availability, $vs.enabled, $vs.currentconnections, $vs.cpuavg5sec, $vs.cpuavg1min, $vs.cpuavg5min, $vs.defaultpool, $PoolDetails, $vs.loadbalancer) -Join ";" | Out-File -Append $CSVFile
}
$MailDate = $(Get-Date -format d)
$Email = New-Object System.Net.Mail.MailMessage
$Email.From = $From
Foreach($Recipient in $Recipients){
 $Email.To.Add($Recipient)
}
$Email.Subject = "$MailDate F5 CSV"
$Email.Body = "Here's the monthly CSV export"
$Attachment = New-Object System.Net.Mail.Attachment($CSVFile, 'text/plain')
$Email.Attachments.Add($Attachment)
$SMTPClient = New-Object System.Net.Mail.SmtpClient( $SmtpServer , $SmtpServerPort )
$SMTPClient.EnableSsl = $True
$SMTPClient.Credentials = New-Object System.Net.NetworkCredential( $User , $Password )
$SMTPClient.Send($Email)
$Attachment.Dispose()
Remove-Item $CSVFile


F5 case creation tweaks


F5 has recently updated their support portal and it is a great leap forward compared to the old one. Kudos on that!

Here’s a few functions that could be further improved:

  • Being able to log cases from a company perspective. When I log a case I want all my colleagues with access to the F5 support to be able to see the case, not just me.
  • I want F5 to give me a drop-down of the serial numbers my company owns instead of me having to find them myself.
  • The modules should be filtered based on what I have activated. This might require some call home function to be enabled on the devices, but the choice would be nice.
  • Give me an option to chat with a support representative. Checkpoint has this and it’s really good.

While waiting for these things to happen I’ve written a script that does some of those things today.


Only show the activated modules

Only show the versions you have installed

You can still click on “Show all modules” to unhide them again.

Choose the load balancer from the drop-down

Get the serial number auto populated and verified. The drop-down is dynamically populated based on your BigIP Report data.


Other tweaks

  • Configure default case severity
  • Configure default choice for “Was this working before?”
  • Configure default choice for “Is the problem related to a virtual server?”
  • Configure a default preferred method of contact
  • Configure a default time zone


  • BigIP Report – See more here.
  • Tampermonkey – See more here.

How to use

  1. Install BigIP Report if you haven’t already done so.
  2. Install TamperMonkey.
  3. Click on the new script button:
  4. Replace everything in the script content with the content of “Casecreation.js”:
  5. Configure the script. The only mandatory configurations are the connect option in the TamperMonkey script metadata and the URL to the loadbalancers.json file of BigIP Report. Example if BigIPReport was hosted on linuxworker.j.local:

  6. Done!


BigIP Report just got an upgrade

BigIP Report delivers information to colleagues in a format that gives a good overview. It saves administrators time by avoiding questions about where things are hosted and the status of pools and members, and it even helps people looking for things themselves across the whole environment.

I’ve been working hard the last couple of weeks to improve the tool and figured the results warranted a post about the recent feature additions.

New style

I’ve been considering this for a long time but just never got around to it. Until now, that is. The new report has a brighter theme and, even more important, a consistent one. Where there were previously different looks, you’ll find that most, if not all, of the report sections have been updated to use the same style.


For those who want updated member states more often, there’s now an option to configure polling of member states. This ensures that the states of the members are up to date.

The console

Device overview

Devices break down, serial numbers change upon replacement and people forget to update. When logging a case with F5 you’ll sometimes have to log in to the device and check the serial number. If you have many devices you’ll know what I’m talking about.

This overview gives you a dynamically updated table of your devices, so when a device is replaced the new one will automatically appear here, along with version, model and more. Check out the picture below to see an example.

Defined iRules

This part used to be available in the main report section but has now been moved to the console. All iRules can be shared if you choose to do so. But in case you want to only share some, here’s where you do it.


This part gives you an overview of all your certificates. Checking if there are any certificates expiring soon is as easy as sorting by expiration dates in the table.


Does something look strange, or is the polling failing or disabled? Checking the logs section of the console might give you an idea of what’s wrong.


Contains tips and tricks on things that users might not be aware of.

Improved sharing

The new version has a more modern way of letting users share what they’re seeing. Using the hash URI instead of query strings makes it possible to simply copy the URL in the browser. It’s now possible to share iRules, Data Group Lists, Virtual server details and every piece of the new shiny console.

Export to CSV

A bunch of people asked for the ability to export searches to CSV. If you enable it in the report configuration, a button will be added to the main view where you can export the current view to CSV.

Want to try it out? Installation instructions are available here:

BigIP Report

BigIP Report feedback requested

Want to speak your mind, share some feedback?

The report has been evolving towards being more user friendly lately. Icons have been added, along with a column toggle, preferences and reset search.

But truth be told, I more or less have no idea who uses the tool and I’ve got no statistics whatsoever, except for the feedback I get in the insanely big comment thread on DevCentral.

To make it easier for me to make better decisions/priorities about future features, or even to get ideas from you guys and girls, I’d love if you could answer this short poll (no registration is required):

While the poll is anonymous and the questions are not targeted at you personally, it’d be nice with an introduction in the last free text question, if you feel like it. 🙂

Any feedback (good or bad) is appreciated, as it always has been.

