Load Balancing a Splunk Search Head Cluster

A Splunk Search Head (SH) enables an analyst to query a Splunk Indexer for data in a distributed configuration. A group of Search Heads that shares knowledge objects and settings is known collectively as a Search Head Cluster (SHC). Deploying an SHC provides high availability (HA) and many other benefits to your users. However, as you scale out your Search Heads, updating DNS records and handing your users a list of available servers is not ideal. You need a way to provide a single landing point while abstracting those details away from your users.

One solution is to deploy a load balancer (LB) in front of the SHC. This LB should be in a constrained network segment and have access limited to the users or organizations that have a legitimate need.
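
If the load balancer host runs firewalld, for example, you could scope access down to an analyst subnet before exposing the service. The subnet below is only a placeholder; substitute your own:

$ sudo firewall-cmd --permanent --zone=public \
      --add-rich-rule='rule family="ipv4" source address="10.15.200.0/24" port port="443" protocol="tcp" accept'
$ sudo firewall-cmd --reload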

A search head cluster needs at least three instances to form a quorum; however, an SHC can incorporate many more search heads. In the image below, notice that the load balancer brokers user traffic between the search heads.

You can use HAProxy, Traefik, or many other tools as a load balancer; however, this post shows how I use Ansible to automate the deployment and configuration of Nginx as a load balancer.

Install OS or Provision VM

The first thing we need to do is set up a server that will perform the load-balancing function. Once the OS is configured, you will need to set up an automation user and enable SSH public key authentication.
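
For example, on a fresh host you might create the user and install your public key as shown below. The username and IP address match the inventory example later in this post; adjust them to your environment:

# On the load balancer host: create the automation user
$ sudo useradd -m naruto
$ sudo passwd naruto

# From your Ansible control node: install your SSH public key for that user
$ ssh-copy-id naruto@192.168.28.77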

Check Out the Playbook Repository

$ git clone https://github.com/tankmek/shc-nginx-lb.git
$ cd shc-nginx-lb
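
The pieces referenced in this post live roughly as shown below (file names are approximate; the tasks and templates directories are assumed from the standard Ansible role layout rather than taken from the repository):

shc-nginx-lb/
├── inventory
├── site.yml
├── group_vars/
│   └── lb/
│       └── vars.yml
└── roles/
    └── nginx-lb/
        ├── tasks/
        ├── templates/
        └── vars/
            └── main.yml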

Modify the inventory file to point to the IP address of your server and edit the name of your automation user (ansible_user).

[all:vars]
ansible_user=naruto

[lb]
loadbalancer ansible_host=192.168.28.77
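
To confirm that Ansible parses the inventory the way you expect, run the following from the repository root; with the inventory above, the output should look something like this:

$ ansible-inventory --graph
@all:
  |--@lb:
  |  |--loadbalancer
  |--@ungrouped: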

Next, edit the group variables in group_vars/lb/vars.yml:

---
lb:
  user: nginx
  web_port: 80
  ssl_port: 443
  tls_dir: /etc/nginx/ssl
  tls_certificate: lb-cert.crt
  tls_csr: lb-csr.req
  tls_privkey: lb-priv.key
  # With ip-hash, the client’s IP address is used as a hashing
  # key to determine what server in a server group should be
  # selected for the client’s requests. This method ensures that
  # the requests from the same client will always be directed to
  # the same server except when this server is unavailable. 
  load_balance_discipline: ip_hash

The role variables will also need to be modified, in roles/nginx-lb/vars/main.yml:

---
# vars file for nginx-lb
vhosts:
  - name: siem
    srv_ip0: 10.15.100.50
    srv_ip1: 10.15.100.51
    srv_ip2: 10.15.100.52
    srv_port: 8000
    srv_fqdn: siem.fakelabs.io
    ssl_port: 443
    listen_port: 80
    tls_certificate_path: /etc/nginx/ssl/lb-cert.crt
    tls_certprivkey_path: /etc/nginx/ssl/lb-priv.key
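
To make the moving parts concrete, here is a rough sketch of the kind of Nginx configuration the role's template would render from the variables above. This is an illustration rather than the role's exact output; it assumes Splunk Web is serving plain HTTP on port 8000, and the HTTP-to-HTTPS redirect is just one plausible use of listen_port 80:

upstream siem {
    ip_hash;                        # pin each client IP to one search head
    server 10.15.100.50:8000;
    server 10.15.100.51:8000;
    server 10.15.100.52:8000;
}

server {
    listen 443 ssl;
    server_name siem.fakelabs.io;

    ssl_certificate     /etc/nginx/ssl/lb-cert.crt;
    ssl_certificate_key /etc/nginx/ssl/lb-priv.key;

    location / {
        proxy_pass http://siem;
    }
}

server {
    listen 80;
    server_name siem.fakelabs.io;
    return 301 https://$host$request_uri;
}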

Once you have all the variables configured, I recommend testing connectivity before running the main playbook.

$ ansible all -m ping

You should get the success output below.

loadbalancer | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}

If you did not get a success response, double-check your SSH keys and confirm that your automation user has sudo access without a password (see the sudoers example below). Once that is corrected, we are ready to run the main playbook.
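
If sudo is the sticking point, a drop-in file like the one below is a common way to grant passwordless sudo. The username is the one from the inventory example; adjust it to your automation user:

$ echo 'naruto ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/naruto
$ sudo chmod 0440 /etc/sudoers.d/naruto
$ sudo visudo -cf /etc/sudoers.d/naruto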

$ ansible-playbook site.yml
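
If you would like to preview the changes first, ansible-playbook supports a dry run; keep in mind that tasks which depend on the results of earlier tasks may not evaluate fully in check mode:

$ ansible-playbook site.yml --check --diff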

Once the tasks complete with no errors, you can browse to your load balancer's IP address over HTTPS and Nginx will take care of the rest, proxying your session to one of the search heads.
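
You can also sanity-check it from the command line. With a self-signed certificate you will need -k to skip verification; the IP address matches the inventory example:

$ curl -kI https://192.168.28.77

You should get an HTTP response back from whichever search head ip_hash selected for your address.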

Thanks for reading.
