
XpoLog Architecture


Download the complete architecture diagrams from: XpoLog Center - Architecture

General

In busy environments, it is recommended to deploy several XpoLog Center instances as a cluster, for high availability and load balancing.

An XpoLog Center cluster is composed of several instances that use a common storage to share the system task load and user activity. Some of the instances function as processor nodes, handling back-end tasks (indexing, analysis, monitoring, etc.), while the rest function as UI nodes. This architecture enables easy scaling of XpoLog Center in heavily loaded environments without affecting the users' front-end experience. A load balancer can be used if more than one UI node is deployed.

It is highly recommended to consult XpoLog support prior to setting up the clustered environment. Attached is a diagram explaining the XpoLog Center cluster architecture, along with step-by-step cluster deployment instructions.
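For orientation, the topology described above can be sketched in plain text as follows (an illustrative simplification; the attached diagram remains the authoritative reference):

                        +---------------+
    Users ------------> | Load balancer |
                        +---------------+
                         /             \
                +-----------+     +-----------+
                | UI node 1 |     | UI node 2 |
                +-----------+     +-----------+
                         \             /
                    +----------------+      +-----------------+
                    | Shared storage |<---->| Processor node  |
                    +----------------+      | (indexing,      |
                                            |  analysis,      |
                                            |  monitoring)    |
                                            +-----------------+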


XpoLog Center Cluster Deployment Instructions
Following are instructions for installing XpoLog Center in a clustered environment, with two UI nodes and a single processor node.

Preparations
1. Decide whether XpoLog Center will be installed in a Windows or Linux environment. If log data from Windows machines must be analyzed, XpoLog Center must be installed on Windows machines.
2. Prepare three machines (physical or virtual), two for the UI nodes and one for the processor node, based on the XpoLog Center hardware requirements.
3. Prepare a shared storage device that can be accessed by all XpoLog Center nodes, based on the XpoLog Center hardware requirements.
Note that XpoLog Center performs heavy read/write operations, mostly from the processor node. If necessary, the storage can be located on the processor node itself (see the sketch below).
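As an illustration only: on Linux, shared storage is often exported over NFS. A minimal sketch, assuming a hypothetical storage host named storage-host exporting /exports/xpolog (adapt to your environment; on Windows, a UNC share would be used instead):

    # Run on each XpoLog Center node (hypothetical host and path names):
    mkdir -p /mnt/xpolog-shared
    mount -t nfs storage-host:/exports/xpolog /mnt/xpolog-shared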

Installation
1. Download the XpoLog Center installer from the XpoLog website at http://www.xpolog.com
2. Run the installer on each node machine.
3. Open a web browser to each node at http://[NODE_HOST_NAME]:30303 to verify that XpoLog Center was installed successfully (this check can also be scripted, as sketched below).
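A minimal sketch for scripting that check, assuming hypothetical node host names and a shell on any machine with network access to the nodes:

    # Replace with your actual node host names.
    for node in processor-node ui-node-01 ui-node-02; do
      curl -s -o /dev/null -w "$node: HTTP %{http_code}\n" "http://$node:30303"
    done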

Configuration
1. Create a folder that will store XpoLog Center’s data on the shared storage device (referred to as EXTERNAL_CONFIGURATION_DIRECTORY).
2. Open a web browser to each node at http://[NODE_HOST_NAME]:30303 and go to XpoLog > Settings > General:
    a. Check the ‘Use external configuration directory’ box
    b. Enter the full path to the EXTERNAL_CONFIGURATION_DIRECTORY in the ‘Configuration full path’ field
    c. Check the ‘Cluster Mode’ box
    d. Click ‘Save’
    e. Wait until receiving a message that the configuration was saved successfully.
    f. Don’t restart XpoLog Center yet.
    g. Under the ‘Mail’ tab, specify the SMTP details and system administrator email address. XpoLog Center will send an alert in case the active processor node changes.
3. On each node (starting with the processor node), go to XPOLOG_CENTER_INSTALLATION_DIRECTORY and edit the lax file (XpoLog.lax on Windows installations; XpoLog.sh.lax on Linux installations), making the following changes to the line that starts with lax.nl.java.option.additional= (see the example after this step):
    a. By default, XpoLog Center is allocated 1024 MB of memory. It is recommended to increase this value to about 75% of the machine's memory. To do so, replace -Xmx1024m with -XmxNEW_VALUE.
    b. In a clustered environment, each node should be assigned a unique name so it can be identified in the system. To do so, append the following to the end of the line: -Dxpolog.uid.structure=[NODE_NAME]
        Example node names: PROCESSOR, UI01, UI02, etc.
    c. Save the file
    d. Restart XpoLog Center (on a Windows installation, restart the XpoLogCenter service; on a Linux installation, run the script XPOLOG_CENTER_INSTALLATION_DIRECTORY/runXpoLog.sh restart).
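For example, on a machine with 16 GB of RAM (75% is roughly 12 GB) that is designated as the processor node and named PROCESSOR, the line would change as follows. This is an illustrative before/after only; the actual line in your installation may contain additional options, which should be left untouched:

    Before:
    lax.nl.java.option.additional=-Xmx1024m

    After:
    lax.nl.java.option.additional=-Xmx12288m -Dxpolog.uid.structure=PROCESSOR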
4. In a clustered environment, some configuration properties should be tuned. To do so, open a web browser to the processor node at http://[PROCESSOR_NODE_HOST_NAME]:30303/logeye/support and select 'Advanced Settings' in the select box. For each of the following properties, enter the property name in the text box of the 'Name' column, right-click the row, click 'Edit', enter the custom value, and click 'Save' (illustrative final values are sketched after this list):
    a. Property name: cluster.allowedMasters
       Purpose: determines the name of the processor node
       Should be customized: always
       Custom value: [NODE_NAME] (of the processor node)

    Note that it is highly recommended to consult XpoLog support before editing either of the following properties:

    b. Property name: cluster.shouldCheckForMaster
       Purpose: indicates whether the UI nodes should take over the processor node's activity in case the processor node is down
       Should be customized: only if UI nodes should never take over the processor activity
       Custom value: false

    c. Property name: cluster.takeOverAttempts.notAllowedMaster
       Purpose: indicates the number of minutes that should pass before a UI node attempts to take over the processor activity in case the processor node is down
       Should be customized: only when the processor node should be allowed to be down for more than 5 minutes without a UI node taking over its activity
       Custom value: a numeric value larger than 5
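As an illustration, assuming the processor node was named PROCESSOR, UI nodes are allowed to take over, and a takeover delay of 10 minutes is desired, the resulting values would be:

    cluster.allowedMasters = PROCESSOR
    cluster.takeOverAttempts.notAllowedMaster = 10
    (cluster.shouldCheckForMaster is left at its default, since UI takeover is allowed)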

Verification
1. Open a web browser to each node at http://[NODE_HOST_NAME]:30303, go to XpoLog > Settings > General, and verify that the external configuration directory and cluster mode are active.
2. On the shared storage device, go to EXTERNAL_CONFIGURATION_DIRECTORY/conf/general/cluster and verify that the file with the suffix .masterNode is named [PROCESSOR_NODE_NAME].masterNode. If the file is named differently, wait two minutes and check again. If the file still doesn't exist or has a different name, verify configuration steps 3b and 4a once again (a shell sketch of this check follows).
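A minimal shell sketch of that check, assuming the processor node was named PROCESSOR and the shared storage is mounted at the hypothetical path /mnt/xpolog-shared:

    ls /mnt/xpolog-shared/conf/general/cluster
    # Expected output once the cluster has stabilized:
    # PROCESSOR.masterNode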