HA Cluster Installation (multi-machine)

General

When deploying XPLG in busy environments, it is recommended to deploy several XPLG instances as a cluster, for high availability and load balancing.

The XPLG cluster is composed of several instances that use common storage in order to share the system's task load and users' activity.

Some of the instances function as processor nodes, taking care of back-end tasks (indexing, analysis, monitoring, and more), while the rest of the instances function as UI nodes.

This architecture enables easy scaling of XPLG in heavily loaded environments without affecting the users' front-end experience. A load balancer can be used if more than one UI node is deployed.

It is highly recommended to consult with XPLG support prior to setting up the clustered environment.

Review the System Architecture diagrams that explain the XPLG Center architecture; step-by-step cluster deployment instructions follow below.

XPLG Cluster Deployment Instructions

The following are instructions for installing XPLG in a clustered environment, with two UI nodes and a single processor node. 

Preparations

1. Decide whether XPLG will be installed in a Windows or Linux environment. If there is a need to analyze log data from Windows machines, XPLG must be installed on Windows machines.
2. Prepare machines (physical or virtual) for the UI node(s) and for the processor node(s), based on the XPLG hardware requirements. It is very important that all cluster servers are clock-synchronized (either verify manually or use an NTP server to ensure synchronization; see the example check after this list).

3. Prepare a shared storage device that can be accessed by all XPLG nodes, based on the XPLG hardware requirements. It is mandatory that ALL XPLG instances in the cluster have full (READ/WRITE) permissions on the allocated shared storage.

Note: XPLG performs heavy read/write operations. It is highly recommended that the fastest storage connectivity is allocated to the UI node.
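
For example, on Linux nodes the clock synchronization mentioned in step 2 can be verified with standard tooling. The check below is only a sketch and assumes systemd-based distributions; use the equivalent command for your operating system:

    # Check whether the system clock is synchronized to an NTP source
    timedatectl status | grep -i "synchronized"
    # Or, if classic ntpd is in use, list the configured peers
    ntpq -p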


Installation
1. Download the XPLG installer from the XPLG website at http://www.xplg.com
2. Run the installer on each node machine (see the installation instructions for more details).
3. Once completed, open a web browser directly to each node at: http://[NODE_HOST_NAME]:30303 to verify that XPLG was installed successfully.
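
Optionally, the same check can be scripted from a terminal. The command below is only a sketch; it assumes curl is available and that the default port 30303 is used, and [NODE_HOST_NAME] is the same placeholder as above:

    # Print the HTTP status code returned by the node; 200 (or a redirect such as 302) indicates the node is up
    curl -s -o /dev/null -w "%{http_code}\n" http://[NODE_HOST_NAME]:30303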

Configuration
1. Create a folder that will store XPLG's data on the shared storage device (referred to as EXTERNAL_CONFIGURATION_DIRECTORY).
2. Open a web browser to each node at http://[NODE_HOST_NAME]:30303, go to XPLG Manager > Left Navigation Panel > Settings > General, and do the following:
    a. Select the Use external configuration directory checkbox.
    b. Enter the full path to the EXTERNAL_CONFIGURATION_DIRECTORY in the ‘Configuration full path’ field.
    c. Select the Cluster Mode checkbox.
    d. Click Save.
    e. Wait until you receive a message that the configuration was saved successfully and a restart is requested, but do not restart XPLG yet.
    f. Under the Mail tab, specify the SMTP details and system administrator email address. XPLG will send an alert if the active processor node changes.
3. On each node (starting with the processor node), go to XPOLOG_CENTER_INSTALLATION_DIRECTORY, edit the lax file (XpoLog.lax on a Windows installation; XpoLog.sh.lax on a Linux installation), and make the following changes to the line that starts with lax.nl.java.option.additional= (an example of the resulting line follows this step):
    a. By default, XPLG is allocated 1024 MB of memory. It is recommended to increase this value to about 75% of the machine's memory. To do so, replace -Xmx1024m with -XmxNEW_VALUE.
    b. In a clustered environment, each node should be assigned a unique name so that it can be identified in the system. To do so, append the following to the end of the line: -Dxpolog.uid.structure=[NODE_NAME]
        Example node names: PROCESSOR1, PROCESSOR2, UI01, UI02, etc.
    c. Save the file.
    d. Restart XPLG (on a Windows installation, restart the XpoLogCenter service; on a Linux installation, run the script XPOLOG_CENTER_INSTALLATION_DIRECTORY/runXpoLog.sh restart).
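
For reference, after the two edits in step 3 the modified line in a processor node's lax file might look roughly like the following. This is only an example: the memory value assumes a 16 GB machine, the node name is arbitrary, and real installations typically have additional options on the same line that must be kept:

        lax.nl.java.option.additional=-Xmx12288m -Dxpolog.uid.structure=PROCESSOR1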
4. In a clustered environment, some configuration properties should be tuned. To do so, open a web browser to the processor node at http://[PROCESSOR_NODE_HOST_NAME]:30303/logeye/support and select ‘Advanced Settings’ in the select box.
For each of the following properties, enter the property name in the text box of the ‘Name’ column, right-click the row, click ‘Edit’, enter the custom value, and click Save. A worked example for this two-UI-node, single-processor deployment follows the list:

  • Property name: cluster.allowedMasters
    Purpose: used to determine the name of the processor node.
    Should be customized: always
    Custom value: [NODE_NAME_1],[NODE_NAME_2]...,[NODE_NAME_N] (of the processor nodes)

  • Property name: htmlUtil.BaseUrl
    Purpose: used by the processor node when exporting information on the server side
    Should be customized: always
    Custom value: http://[PROCESSOR_NODE_HOST_NAME]:[PROCESSOR_NODE_PORT]/logeye/

  • Property name: htmlUtil.ui.BaseUrl
    Purpose: used by the UI node when exporting information on the server side
    Should be customized: always
    Custom value: http://[UI_NODE_HOST_NAME]:[UI_NODE_PORT]/logeye/

  • Property name: mail.link.baseUrl
    Purpose: used in links that point back to XPLG from an email
    Should be customized: always
    Custom value: http://[UI_NODE_HOST_NAME]:[UI_NODE_PORT]/logeye/ (in environments with multiple UI nodes, consider pointing the link to a load balancer, if one exists)
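
As an illustration only, in the two-UI-node, single-processor deployment described above, with a processor node named PROCESSOR1, UI nodes named UI01 and UI02, and hypothetical host names and the default port, the four properties might be set as follows:

    cluster.allowedMasters=PROCESSOR1
    htmlUtil.BaseUrl=http://processor1.example.com:30303/logeye/
    htmlUtil.ui.BaseUrl=http://ui01.example.com:30303/logeye/
    mail.link.baseUrl=http://loadbalancer.example.com/logeye/

If no load balancer is used, mail.link.baseUrl would instead point at one of the UI nodes, for example http://ui01.example.com:30303/logeye/.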

Note: It is highly recommended to consult XPLG support before editing any of the following properties:

  • Property name: cluster.shouldCheckForMaster
    Purpose: used to indicate whether the UI nodes should take over the processor node activity in case the processor node is down
    Should be customized: only if UI nodes should never take over the processor activity
    Custom value: false

  • Property name: cluster.takeOverAttempts.notAllowedMaster
    Purpose: used to indicate the number of minutes that should pass before a UI node attempts to take over the processor activity in case the processor node is down
    Should be customized: only when there’s a need to allow the processor node to be down for more than 5 minutes without a UI node taking over its activity
    Custom value: numeric value larger than 5

Note: if there is an XPLG instance that is dedicated to being a listener, the following has to be done for that specific instance:

  • Allocate 2 GB of memory (there is no need for more; see the example line after this note)

  • Set this specific instance to run in agent mode:

    • Open a browser to the instance directly

    • Go to XPLG Manager > Left Navigation Panel > Settings > General

    • Check the 'Agent Mode' checkbox and save

  • Set this specific instance not to be recycled: 

    • Edit the file LISTENER_INSTALL_DIR/xpologInit.prop

    • Add the line (empty recycle cron expression):

      recycleCronExpression=

    • Restart the Listener instance

    • Go to the Listener Account (XPLG Manager > Left Navigation Panel > Data > Listen To Data):

      • Stop the listener account

      • Edit the listener account

      • Set the Listening node to be the Listener instance

      • Set the Indexing node to be the MASTER or one of the PROCESSORS

      • Save the listener account

      • Start the listener account
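
The 2 GB memory allocation for the dedicated listener instance can be made in the same way as in configuration step 3a, via the -Xmx value in the instance's lax file. As an example only (the node name and the rest of the line are hypothetical):

      lax.nl.java.option.additional=-Xmx2048m -Dxpolog.uid.structure=LISTENER1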

Verification

1. Open a web browser to each node at http://[NODE_HOST_NAME]:30303, go to XPLG Manager > Left Navigation Panel > Settings > General, and verify that the external configuration directory and cluster mode are active.
2. On the shared storage device, go to EXTERNAL_CONFIGURATION_DIRECTORY/conf/general/cluster and verify that the file with suffix .masterNode is called [PROCESSOR_NODE_NAME].masterNode.

In case the file is named differently, wait two minutes and check again. If the file still does not exist or has a different name, verify configuration steps 3b and 4a once again.
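
For example, on a Linux node this check can be run from a terminal (EXTERNAL_CONFIGURATION_DIRECTORY is the placeholder path used above):

    # List the cluster marker files; expect a file named [PROCESSOR_NODE_NAME].masterNode
    ls EXTERNAL_CONFIGURATION_DIRECTORY/conf/general/cluster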

Please review the system architecture overview for additional information: XPLG-Architecture
