Farm status after Zevenet CE 5 to RELIANOID CE 7.1 upgrade

  • #52503 Reply
    s.zamboni
    Member

      Hi there

      I moved from Zevenet (the last available CE release, fully up to date) to RELIANOID 7.1 using the migration script.

      Everything seems OK except the farm status, which is always “down” even though everything is working.
      Is there any log I can dive into to understand why?

      TIA
      Stefano

      #52504 Reply
      nevola
      Moderator

        Ciao Stefano, which kind of farms are you using?

        The farm status is controlled by a PID file created under the /var/run path. You can check the logs in /var/log/syslog, searching for errors.
        Also, feel free to generate a support save via System > Support Save and send it to support@relianoid.com
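
        For reference, a quick way to cross-check this from the shell (a sketch based on the paths above; FARM_NAME is a placeholder and the *_proxy.pid naming is assumed from the default proxy setup):

        # List the reverse proxy PID files that should exist for each running farm
        ls -l /var/run/*_proxy.pid

        # Look for recent log entries related to a given farm
        grep -i FARM_NAME /var/log/syslog | tail -n 50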

        Kind Regards.

        #52505 Reply
        s.zamboni
        Member

          Hi, thank you for your reply

          I’m using HTTP farms (it’s quite a simple setup): 2 virtual IPs, 2 certificates, about 10 services.

          Will check the logs and report back

          Thank you

          #52577 Reply
          s.zamboni
          Member

            here I am, again

            I created a new VM from the RELIANOID CE ISO and restored the backup I took on the production machine.

            I’m experiencing the same issue:
            in the web interface I see the farms in Critical status, but there are no errors in syslog and everything is working as expected.

            I dug everywhere and there are no errors at all in any log file on the machine.

            Can anyone give me a hint to understand why?

            Also, may I suggest making this easier to understand? I mean, if I see a critical state on a service, I’d like to know exactly what to check and/or why I have such a state.

            Thank you

            #52578 Reply
            nevola
            Moderator

              Ciao Stefano,

              Please refer to the documentation explaining the color codes for both HTTP and L4 farms.

              LSLB | Farms | Update | HTTP Profile

              LSLB | Farms | Update | L4xNAT Profile

              The “critical” status means there are no backends available to deliver the traffic. You could temporarily disable the farm guardian advanced checks just to confirm the health scripts are not affecting the status of the backends.

              Hope that helps,
              Regards.

              #52584 Reply
              s.zamboni
              Member

                Hi, the status I see is
                Black: Indicates a CRITICAL damage. The farm is UP but there is no backend available or they are in maintenance mode.

                But all farms are working as expected, the backends are up and running, and even farmguardian is telling me everything is OK:

                2024-02-27T15:41:26.055263+01:00 svlinproxy farmguardian[243748]: (INFO) Farm Filasolutions8443 - Service pss - timetocheck 15 - portadmin /tmp/Filasolutions8443_proxy.socket - command check_ping -H HOST -w 2,100
                2024-02-27T15:41:26.084935+01:00 svlinproxy farmguardian[243757]: (INFO) Farm FilasolutionsSSL - Service helpdesk - timetocheck 15 - portadmin /tmp/FilasolutionsSSL_proxy.socket - command check_ping -H HOST -w 2,100
                2024-02-27T15:41:26.220633+01:00 svlinproxy farmguardian[243756]: (INFO) Farm FilasolutionsSSL - Service vault - timetocheck 15 - portadmin /tmp/FilasolutionsSSL_proxy.socket - command check_ping -H HOST -w 2,100
                2024-02-27T15:41:26.243783+01:00 svlinproxy farmguardian[243754]: (INFO) Farm FilasolutionsSSL - Service zucchetti - timetocheck 15 - portadmin /tmp/FilasolutionsSSL_proxy.socket - command check_ping -H HOST -w 2,100
                2024-02-27T15:41:27.059237+01:00 svlinproxy farmguardian[243748]: (INFO) Farm Filasolutions8443 - Service pss - server[0] 192.168.0.63:8443 - status active - timedout 0 - errorcode 0
                2024-02-27T15:41:27.089533+01:00 svlinproxy farmguardian[243757]: (INFO) Farm FilasolutionsSSL - Service helpdesk - server[0] 192.168.0.26:443 - status active - timedout 0 - errorcode 0
                2024-02-27T15:41:27.224939+01:00 svlinproxy farmguardian[243756]: (INFO) Farm FilasolutionsSSL - Service vault - server[0] 192.168.0.26:443 - status active - timedout 0 - errorcode 0
                2024-02-27T15:41:27.246284+01:00 svlinproxy farmguardian[243754]: (INFO) Farm FilasolutionsSSL - Service zucchetti - server[0] 192.168.0.53:443 - status active - timedout 0 - errorcode 0

                I tried disabling farmguardian for 10 minutes and nothing changed.

                #52588 Reply
                nevola
                Moderator

                  If farmguardian is not detecting the backends as down, then it is probably the reverse proxy. Please check the status of the backends with the command:

                  root@noid-ce:~# /usr/local/relianoid/app/pound/sbin/poundctl -c /tmp/<FARM_NAME>_proxy.socket
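
                  If you have several farms, a loop like this checks every control socket that exists (a sketch assuming the default /tmp/<FARM_NAME>_proxy.socket naming):

                  # Query the control socket of every HTTP(S) farm, if present
                  for s in /tmp/*_proxy.socket; do
                      [ -e "$s" ] || continue   # skip if no socket files exist
                      echo "== $s =="
                      /usr/local/relianoid/app/pound/sbin/poundctl -c "$s"
                  done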

                  Cheers.

                  #52591 Reply
                  s.zamboni
                  Member

                    Are you suggesting I execute that command?

                    I have no _proxy.socket file in /tmp/

                    drwxr-xr-x 18 root root 4096 Feb 27 11:32 ..
                    -rw-r----- 1 root root 257 Feb 27 15:41 cgisess_8acf41d0126e16025b8e9a4e1e7b65ed
                    drwx------ 2 root root 4096 Feb 13 14:45 cherokee.XXXXXB3dCwQ
                    drwx------ 2 root root 4096 Feb 13 14:45 cherokee.XXXXXeCrs2m
                    drwx------ 2 root root 4096 Feb 13 14:45 cherokee.XXXXXiGNzgR
                    drwx------ 2 root root 4096 Feb 13 14:45 cherokee.XXXXXlo8cwj
                    drwx------ 2 root root 4096 Feb 13 14:45 cherokee.XXXXXOhLVuo
                    drwx------ 2 root root 4096 Feb 13 14:45 cherokee.XXXXXYQraxU
                    -rw-r--r-- 1 root root 0 Feb 27 15:55 err.log
                    -rw-r--r-- 1 root root 0 Feb 27 15:30 Filasolutions8443.lock
                    -rw-r--r-- 1 root root 0 Feb 13 14:45 Filasolutions.lock
                    -rw-r--r-- 1 root root 0 Feb 27 15:30 FilasolutionsSSL.lock
                    drwxrwxrwt 2 root root 4096 Feb 13 14:45 .font-unix
                    drwxrwxrwt 2 root root 4096 Feb 13 14:45 .ICE-unix
                    drwx------ 3 root root 4096 Feb 13 14:45 systemd-private-1934d9d6cd3240bdb4bb58b5145b9b06-systemd-logind.service-wsM0ZT
                    drwx------ 3 root root 4096 Feb 13 14:45 systemd-private-1934d9d6cd3240bdb4bb58b5145b9b06-systemd-timesyncd.service-Iuq6QT
                    drwxrwxrwt 2 root root 4096 Feb 13 14:45 .X11-unix
                    drwxrwxrwt 2 root root 4096 Feb 13 14:45 .XIM-unix

                    #52594 Reply
                    nevola
                    Moderator

                      If it is an HTTP farm, you should have a “pound” process running that opens a control socket defined by the “Control” directive in the farm configuration file “/usr/local/relianoid/config/FARM-NAME_proxy.cfg”. Then you can execute the ctl command over the socket defined in the farm configuration.

                      root@noid-ce:~# /usr/local/relianoid/app/pound/sbin/poundctl -c /tmp/<FARM_NAME>_proxy.socket

                      If the socket is defined but doesn’t exist, that could be the cause of the status problem you are facing. Restarting the farm should regenerate the socket file.
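
                      A quick way to confirm whether the directive and the sockets are actually there (a sketch based on the paths above):

                      # Show the Control line of every farm configuration, if any
                      grep -i '^Control' /usr/local/relianoid/config/*_proxy.cfg

                      # List the control sockets that pound has actually opened
                      ls -l /tmp/*_proxy.socket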

                      Cheers.

                      #52596 Reply
                      s.zamboni
                      Member

                        root@svlinproxy:/usr/local/relianoid/config# ls -la *_proxy.cfg
                        -rw-r--r-- 1 root root 1863 Feb 27 16:26 Filasolutions8443_proxy.cfg
                        -rw-r--r-- 1 root root 1878 Feb 13 14:45 Filasolutions_proxy.cfg
                        -rw-r--r-- 1 root root 2586 Feb 27 15:30 FilasolutionsSSL_proxy.cfg

                        no Control directive in my _proxy.cfg files

                        root@svlinproxy:/usr/local/relianoid/config# grep -i control *_proxy.cfg
                        root@svlinproxy:/usr/local/relianoid/config#

                        root@svlinproxy:/usr/local/relianoid/config# ps aux | grep pound
                        root 901 0.0 0.0 61548 2180 ? Ss Feb13 0:00 /usr/local/relianoid/app/pound/sbin/pound -f /usr/local/relianoid/config/Filasolutions_proxy.cfg -p /var/run/Filasolutions_proxy.pid
                        root 902 0.0 0.0 193140 3420 ? Sl Feb13 0:29 /usr/local/relianoid/app/pound/sbin/pound -f /usr/local/relianoid/config/Filasolutions_proxy.cfg -p /var/run/Filasolutions_proxy.pid
                        root 243330 0.0 0.0 61672 2380 ? Ss 15:30 0:00 /usr/local/relianoid/app/pound/sbin/pound -f /usr/local/relianoid/config/FilasolutionsSSL_proxy.cfg -p /var/run/FilasolutionsSSL_proxy.pid
                        root 243331 0.0 0.2 1049524 9632 ? Sl 15:30 0:01 /usr/local/relianoid/app/pound/sbin/pound -f /usr/local/relianoid/config/FilasolutionsSSL_proxy.cfg -p /var/run/FilasolutionsSSL_proxy.pid
                        root 246138 0.0 0.0 61672 2364 ? Ss 16:26 0:00 /usr/local/relianoid/app/pound/sbin/pound -f /usr/local/relianoid/config/Filasolutions8443_proxy.cfg -p /var/run/Filasolutions8443_proxy.pid
                        root 246139 0.0 0.1 127728 6480 ? Sl 16:26 0:00 /usr/local/relianoid/app/pound/sbin/pound -f /usr/local/relianoid/config/Filasolutions8443_proxy.cfg -p /var/run/Filasolutions8443_proxy.pid
                        root 246758 0.0 0.0 6332 2132 pts/0 S+ 16:40 0:00 grep pound
                        root@svlinproxy:/usr/local/relianoid/config#

                        root@svlinproxy:/usr/local/relianoid/config# netstat -napt | grep pound
                        tcp 0 0 10.10.10.2:443 0.0.0.0:* LISTEN 243330/pound
                        tcp 0 0 10.10.10.2:8443 0.0.0.0:* LISTEN 246138/pound
                        tcp 0 0 10.10.10.2:80 0.0.0.0:* LISTEN 901/pound

                        Thank you

                        I have restarted all the farms many, many times.

                        #52598 Reply
                        s.zamboni
                        Member

                          ok, maybe I found the issue

                          When I upgraded my Zevenet to RELIANOID, the _proxy.cfg files were not recreated.

                          I can see the Control directive in my templates

                          root@svlinproxy:/usr/local/relianoid/share# grep -i control *.cfg
                          poundtpl.cfg:Control "/tmp/[DESC]_proxy.socket"
                          proxytpl.cfg:Control "/tmp/[DESC]_proxy.socket"

                          but there is nothing in the imported/restored/migrated farm configurations.

                          How can I regenerate my cfg files without restarting from scratch?

                          Thank you

                          #52601 Reply
                          nevola
                          Moderator

                            The template for the proxy configuration is under /usr/local/relianoid/share/poundtpl.cfg and should include that directive. How did you create those farms? Did you import a backup?

                            Thanks,

                            #52602 Reply
                            nevola
                            Moderator

                              You could edit the farm configuration file of every proxy farm and add the Control directive, in the form:

                              Control "/tmp/FARMNAME_proxy.socket"

                              just before the ListenHTTP(S) directive. Then restart the farms and they should create the control socket.
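
                              If you prefer to script it, something like this would patch every migrated configuration in one pass (a sketch assuming GNU sed and a single top-level ListenHTTP(S) block per file; back up the files first):

                              cd /usr/local/relianoid/config
                              for cfg in *_proxy.cfg; do
                                  farm="${cfg%_proxy.cfg}"
                                  # skip farms that already define a control socket
                                  grep -qi '^Control' "$cfg" && continue
                                  cp "$cfg" "$cfg.bak"
                                  # insert the Control line just before the ListenHTTP(S) block
                                  sed -i "/^ListenHTTP/i Control \"/tmp/${farm}_proxy.socket\"" "$cfg"
                              done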

                              Cheers.

                              #52603 Reply
                              s.zamboni
                              Member

                                ok, let’s recap the history

                                I had a Zevenet 5 CE install and migrated it to RELIANOID 7 with your script.

                                Everything went smoothly, except for the critical farm status.

                                I then installed, for testing purposes, a new VM directly from the RELIANOID 7 CE ISO and restored a backup taken from the production machine.

                                In both of them I see the critical status, even though on the production one everything is, as said, apparently working OK.

                                How can I regenerate/migrate my config files?
                                It seems something is missing in the migration from Zevenet.

                                Thank you

                                #52604 Reply
                                nevola
                                Moderator

                                  Did you follow this guide?

                                  Migrating from Zevenet CE to RELIANOID ADC Load Balancer Community Edition

                                  Or which script did you use?

                                  Thank you.
