Steps to migrate any Load Balancer configuration to RELIANOID

Overview #

This article describes a structured and safe methodology to migrate services from any third-party load balancer vendor (hardware or software) to RELIANOID Load Balancer.

The procedure minimizes risk by separating analysis, networking, service translation, and final activation, while allowing coexistence during the migration window.

The migration is divided into the following phases:

  1. Inventory and assessment of existing services
  2. Target architecture and network design in RELIANOID
  3. Preparing RELIANOID for migration (API access & isolation)
  4. Applying network configuration using noid-cli
  5. Translating and creating services (L4, HTTP/HTTPS, GSLB)
  6. Final service hardening and activation via Web UI

Step 1: Inventory and Assessment of Existing Load Balancer Services #

Before touching RELIANOID, document everything from the source load balancer.

Service Inventory Table #

Create an inventory including inbound (VIP) and outbound (backends or real servers) configuration:

| Service Name | Type (L4, HTTP/S, GSLB) | Protocol (TCP, UDP, SCTP, ALL) | VIP | VIP Ports | Persistence | Backends (IP:Port) | Advanced Configuration |
|---|---|---|---|---|---|---|---|
| web-prod-ssl | HTTPS | TCP | 192.0.2.10 | 443 | Cookie | 10.0.1.10:80, 10.0.1.11:80, 10.0.1.12:80 | WAF enabled, using SSL cert mycert.pem |
| api-l4 | L4 | TCP | 192.0.2.20 | 8443 | Source IP | 10.0.2.5, 10.0.2.6 | |
| dns-gslb | GSLB | UDP | 192.0.2.40 | 53 | Priority | 23.3.3.3, 53.3.3.3 | |

Key Elements to Identify #

For each service, identify:

  • VIPs and floating IPs
  • Listening ports and protocols
  • Backend servers and health checks
  • Persistence method (cookie, source IP, header)
  • SSL certificates
  • Advanced features (WAF, rate limiting, header rewrite)
  • DNS dependencies (especially for GSLB)

This inventory will be directly mapped to RELIANOID objects.
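
To make the later translation step repeatable, the same inventory can also be kept in a machine-readable form. Below is a minimal illustrative sketch of a CSV layout (the file name and column names are this guide's own convention, not a RELIANOID format); it is reused in Step 6 to generate noid-cli commands:

# inventory.csv - illustrative layout only
service,type,protocol,vip,vport,persistence,backends,notes
web-prod-ssl,https,tcp,192.0.2.10,443,cookie,10.0.1.10:80 10.0.1.11:80 10.0.1.12:80,WAF enabled; cert mycert.pem
api-l4,l4,tcp,192.0.2.20,8443,srcip,10.0.2.5 10.0.2.6,
dns-gslb,gslb,udp,192.0.2.40,53,priority,23.3.3.3 53.3.3.3,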

Step 2: Designing the Target RELIANOID Network Architecture #

RELIANOID enforces clear separation between networking and services, which simplifies migrations.

Network Interfaces Design #

Based on your inventory:

  • Frontend network: where VIPs will be exposed
  • Backend network(s): where real servers live
  • Optional management network

Example:

| Interface | Purpose | IP / Network |
|---|---|---|
| eth0 | Management | 192.168.100.10 |
| eth1 | Frontend (VIPs) | 192.0.2.0/24 |
| eth2 | Backend | 10.0.0.0/16 |

Routing & High Availability Considerations #

Ensure routing symmetry between RELIANOID and backend servers.

For HA setups, confirm the following (a quick check sketch follows this list):

  • Virtual IP failover behavior
  • Gratuitous ARP allowed
  • Firewall rules aligned with new MAC/IPs
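
A quick way to sanity-check VIP reachability and ARP behavior from a client on the frontend network is shown below (a minimal sketch using standard Linux tools; arping may need to be installed, replace eth0 with the client's own interface, and note that the VIP only answers once it has been created and started in Step 4):

ping -c 3 192.0.2.10              # VIP reachability from the client
arping -c 3 -I eth0 192.0.2.10    # which MAC currently owns the VIP (should change after a failover)
ip neigh show 192.0.2.10          # ARP/neighbour entry cached on the client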

Step 3: Preparing RELIANOID for Migration #

During the migration, enable and configure an API key for temporary access. API access allows automation and repeatability.

From the Web UI:

  1. Navigate to: System > Users settings > API
  2. Click on Enable API permission
  3. Set an API key or click on Generate random key
  4. Store the key securely

(Screenshot: configuring the user API key in the Web UI)

This key will be used by noid-cli.

Authenticate noid-cli #

Then, connect through the console or SSH and authenticate noid-cli with the API key.

root@noid3-82-1:~# noid-cli
Load balancer API key: <Insert here the API Key>

This is only required the first time that noid-cli is launched.

(Screenshot: noid-cli prompt)

Step 4: Configure Network Interfaces #

Networking should always be configured before service creation.

noid-cli (localhost): network-nic set eth0 -ip 192.168.100.10 -netmask 255.255.255.0 -gateway 192.168.100.1
noid-cli (localhost): network-nic start eth0
noid-cli (localhost): network-nic set eth1 -ip 192.0.2.5 -netmask 255.255.255.0
noid-cli (localhost): network-nic start eth1
noid-cli (localhost): network-nic set eth2 -ip 10.0.0.5 -netmask 255.255.255.0
noid-cli (localhost): network-nic start eth2

Confirm via web UI that the configuration has been applied in Network > NIC.

Test connectivity #

Test connectivity with the ping command against the gateway of each network interface:

ping 192.168.100.1
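
If several networks are involved, the same check can be looped over each gateway and a sample backend. A minimal sketch follows; only 192.168.100.1 appears in this guide, the other addresses are placeholders to be replaced with your actual gateways and backends:

for host in 192.168.100.1 192.0.2.1 10.0.0.1 10.0.1.10; do
    ping -c 2 "$host" >/dev/null && echo "$host reachable" || echo "$host NOT reachable"
done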

Add Virtual IPs (VIPs) #

These Virtual IP addresses will be used for load balancing services and will move between load balancer cluster nodes.

noid-cli (localhost): network-virtual create -name eth1:web0 -ip 192.0.2.10
noid-cli (localhost): network-virtual start eth1:web0
noid-cli (localhost): network-virtual create -name eth1:web1 -ip 192.0.2.11
noid-cli (localhost): network-virtual start eth1:web1
noid-cli (localhost): network-virtual create -name eth1:web2 -ip 192.0.2.12
noid-cli (localhost): network-virtual start eth1:web2
noid-cli (localhost): network-virtual create -name eth1:web3 -ip 192.0.2.13
noid-cli (localhost): network-virtual start eth1:web3

Confirm via web UI that the configuration has been applied in Network > Virtual Interfaces.
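
The virtual IPs should also be visible as additional addresses on the parent NIC, and a started VIP should answer ping from a host on the frontend network (standard Linux checks, assuming shell access to the appliance):

ip -4 addr show eth1    # on the load balancer: 192.0.2.10-192.0.2.13 should be listed
ping -c 2 192.0.2.10    # from a client on the frontend network: the started VIP should reply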

Step 5: Uploading farm SSL Certificates #

In the Web UI section LSLB > SSL Certificates, upload the PEM-format SSL certificates to be used by the HTTPS farms listed in the service inventory from Step 1.
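
Before uploading, it is worth validating the certificates exported from the legacy load balancer. The sketch below uses standard openssl commands with example file names; PEM files for the load balancer typically bundle the private key, the certificate and any intermediate CAs in a single file, so concatenate them if they were exported separately (check the RELIANOID documentation for the exact expected layout):

openssl x509 -in mycert.crt -noout -subject -issuer -dates   # inspect the certificate
openssl rsa -in mycert.key -noout -check                     # verify the private key
cat mycert.key mycert.crt intermediate.crt > mycert.pem      # build a single PEM bundle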

Step 6: Translating Services to RELIANOID #

Semi-automated CLI commands can be generated using the following templates, making service creation straightforward.

At any stage, the same configuration performed via noid-cli can also be applied via the Web UI.
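
As an illustration of the semi-automated approach, a small shell script can turn inventory rows into noid-cli commands. The sketch below is this guide's own helper (not a RELIANOID tool); it only handles plain L4 entries, reads the illustrative inventory.csv from Step 1, and simply prints the commands so they can be reviewed before being pasted into the noid-cli prompt:

# generate-l4.sh - print noid-cli commands for L4 services; review the output before applying it
while IFS=, read -r name type proto vip vport persistence backends notes; do
    [ "$type" = "l4" ] || continue
    echo "farm create -farmname $name -profile l4xnat -vip $vip -vport $vport"
    echo "farm set $name -protocol $proto -nattype nat -persistence $persistence"
    for b in $backends; do
        echo "farm-service-backend add $name default_service -ip ${b%%:*}"
    done
done < inventory.csv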

HTTP / HTTPS Services #

  1. Create the new HTTP/S farm.
  2. Set the listener (HTTP or HTTPS).
  3. Add the SSL certificate in the farm.
  4. Create a new service (ex. default) in the farm.
  5. Set cookie-based persistence for the service and indicate that the backends deliver the application over plain HTTP.
  6. Then, add the backends.
noid-cli (localhost): farm create -farmname web-prod -profile http -vip 192.0.2.10 -vport 443
noid-cli (localhost): farm set web-prod -listener https
noid-cli (localhost): farm-certificate add web-prod -file example.pem
noid-cli (localhost): farm-service add web-prod -id default
noid-cli (localhost): farm-service set web-prod default -persistence COOKIE -sessionid ASP.SessionId -httpsb false
noid-cli (localhost): farm-service-backend add web-prod default -ip 10.0.1.10 -port 80
noid-cli (localhost): farm-service-backend add web-prod default -ip 10.0.1.11 -port 80
noid-cli (localhost): farm-service-backend add web-prod default -ip 10.0.1.12 -port 80

Finally, service details such as Farm Guardian health checks and WAF policies can be configured via the Web UI to complete the setup.
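
Before any DNS change, the new HTTPS farm can be tested directly against the VIP (standard curl options; www.example.com stands for your real production hostname):

curl -vk https://192.0.2.10/ -H 'Host: www.example.com'                      # -k skips name/CA validation against the raw VIP
curl -v https://www.example.com/ --resolve www.example.com:443:192.0.2.10    # test the real name without touching DNS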

Layer 4 (TCP/UDP) Services #

  1. Create the new L4 farm.
  2. Set the protocol (TCP, UDP…), NAT mode (without transparency) and persistence by source IP.
  3. Then, add the backends.
noid-cli (localhost): farm create -farmname api-l4 -profile l4xnat -vip 192.0.2.20 -vport 8443
noid-cli (localhost): farm set api-l4 -protocol tcp -nattype nat -persistence srcip
noid-cli (localhost): farm-service-backend add api-l4 default_service -ip 10.0.2.5
noid-cli (localhost): farm-service-backend add api-l4 default_service -ip 10.0.2.6

Finally, confirm and complete the configuration in the Web UI if required, for example by adding a Farm Guardian health check or security policies.
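
A quick connection test against the L4 VIP and port helps confirm the farm is answering before cutover (standard client tools; use whichever applies to your service):

nc -vz 192.0.2.20 8443                       # plain TCP connect test
openssl s_client -connect 192.0.2.20:8443    # if the service on 8443 speaks TLS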

Step 7: Configure the cluster service #

At this stage, it is a good time to configure the Clustering Service between two load balancer nodes if it has not been created yet. All configuration related to Virtual Interfaces and Farms will then be replicated to the secondary node automatically.

Step 8: Final Service Configuration via Web UI #

Some advanced or sensitive configurations are intentionally finalized in the Web UI.

Advanced HTTP Features #

Configure and revisit the following options:

  • Header rewrites
  • Redirects
  • WAF rules
  • Custom health checks

Validation Before Cutover #

Test services using (see the example after this list):

  • Temporary DNS entries
  • Hosts file overrides
  • Alternate ports
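
For example, a hosts file override on a test client lets the production hostname point at the RELIANOID VIP without changing DNS (www.example.com is a placeholder; remove the entry after testing):

# /etc/hosts on the test client (drivers\etc\hosts on Windows)
192.0.2.10    www.example.com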

Monitor:

  • Backend health
  • Logs
  • Session persistence

Step 9: Cutover and Decommissioning #

Once validated:

  1. Move production traffic (DNS or routing change; see the verification below)
  2. Monitor for at least one business cycle
  3. Revoke temporary API key
  4. Decommission legacy load balancer
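
After the DNS change, confirm that clients actually resolve the service names to the RELIANOID VIPs before the legacy appliance is powered off (dig run from a client network; www.example.com is a placeholder name):

dig +short www.example.com                # should now return 192.0.2.10
dig +short www.example.com @192.0.2.40    # for GSLB records, query the new DNS VIP directly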
