HPCC Configuration Manager
Boca Raton Documentation Team
We welcome your comments and feedback about this document via
email to docfeedback@hpccsystems.com
Please include Documentation
Feedback in the subject line and reference the document name,
page numbers, and current Version Number in the text of the
message.
LexisNexis and the Knowledge Burst logo are registered trademarks
of Reed Elsevier Properties Inc., used under license.
HPCC Systems® is a registered trademark
of LexisNexis Risk Data Management Inc.
Other products, logos, and services may be trademarks or
registered trademarks of their respective companies.
All names and example data used in this manual are fictitious. Any
similarity to actual persons, living or dead, is purely
coincidental.
HPCC Systems®
Using Configuration Manager
Configuration Manager is the utility with which we configure the
HPCC platform. The HPCC platform's configuration is stored in an XML file
named environment.xml. When you install a
package, a default single-node environment.xml is generated. After that,
you can use the Configuration Manager to modify it, add nodes, and
configure components.
The Configuration Manager Wizard creates a similar file, but after
it is generated, you must rename it and put it into place on each
node.
Configuration Manager also offers an Advanced
View which allows you to add instances of components or change
the default settings for components. Even if you plan to use Advanced
View, it is a good idea to start with a wizard-generated configuration and
then use Advanced View to finish it.
This document will guide you through configuring an HPCC environment
using the Configuration Manager.
Running the Configuration Manager
The HPCC package should already be installed on ALL nodes;
you can use any tool or shell script you choose to perform the installation.
SSH to a node in your environment and log in as a user with
sudo privileges. We suggest using the first node, and that it be
a support node, but this is at your discretion.
Start the Configuration Manager service on the node (again, we
suggest a support node, and that you use the same node to start
the Configuration Manager every time, but this is entirely up to
you).
sudo /opt/HPCCSystems/sbin/configmgr
Using a Web browser, go to the Configuration Manager's
interface:
http://<ip of installed system>:8015
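For example, if the node where you started Configuration Manager has the IP address 10.239.219.1 (one of the addresses used in the wizard example later in this document), the URL would be:
http://10.239.219.1:8015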
The Configuration Manager startup wizard displays.
There are different ways to configure your HPCC system. You can
use the Generate environment wizard and
use the resulting environment as is, or experienced users can refine it
further using the Advanced View.
There is also the option of using Create blank
environment to generate an empty environment to which you
then add only the components you want.
Environment Wizard
To use the wizard select the Generate
new environment using wizard button.
Provide a name for the environment file.
This will then be the name of the configuration XML file.
For example, we will name our environment
NewEnvironment and this will produce a
configuration XML file named
NewEnvironment.xml that we will
use.
Press the Next button.
Next you will need to define the IP addresses that your HPCC
system will be using.
Enter the IP addresses or hostname(s).
IP Addresses can be specified individually using semi-colon
delimiters. You can also specify a range of IPs using a hyphen
(for example, nnn.nnn.nnn.x-y). In the image below, we specified
the IP addresses 10.239.219.1 through 10.239.219.100 using the
range syntax, and also a single IP 10.239.219.111. Alternatively,
you can enter the hostnames.
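For example, following the range and semi-colon syntax described above, those addresses could be entered as:
10.239.219.1-100;10.239.219.111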
Press the Next button.
Now you will define how many nodes to use for the Roxie and
Thor clusters.
Enter the appropriate values as indicated.
Number of support nodes:
Specify the number of nodes to use for support
components. The default is 1.
Number of nodes for Roxie cluster:
Specify the number of nodes to use for your Roxie
cluster. Enter zero (0) if you do not want a Roxie
cluster.
Number of slave nodes for Thor cluster
Specify the number of slave nodes to use in your Thor
cluster. A Thor master node will be added automatically.
Enter zero (0) if you do not want any Thor slaves.
Number of Thor slaves per node (default 1)
Specify the number of Thor slave processes to
instantiate on each slave node. Enter zero (0) if you do not
want a Thor cluster.
Enable Roxie on demand
Specify whether or not to allow queries to be run
immediately on Roxie. This must be enabled to run the
debugger. (Default is true)
Press the Next button.
The wizard displays the configuration parameters.
Press the Finish button to
accept these values or press the Advanced
View button to edit in advanced mode.
You will now be notified that you have completed the
wizard.
At this point, you have created a file named NewEnvironment.xml
in the /etc/HPCCSystems/source
directory.
Keep in mind that your HPCC configuration may be
different depending on your needs. For example, you may not
need a Roxie or you may need several smaller Roxie clusters.
In addition, in a production [Thor] system, you would ensure
that Thor and Roxie nodes are dedicated and have no other
processes running on them. This document is intended to show
you how to use the configuration tools. Capacity planning and
system design is covered in a training module.
Distribute the Configuration
Stop the HPCC system.
If it is running, stop the HPCC system (on every node)
using a command such as this:
sudo /etc/init.d/hpcc-init stop
Note:
You may have a multi-node system and a custom script
such as the one illustrated in the Appendix of the Installing and Running the HPCC
Platform document to start and stop your
system. If that is the case, please use the appropriate
command for stopping your system on every node.
Be sure HPCC is stopped before attempting to
copy the environment.xml file.
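You can confirm that all HPCC processes have stopped using the status command (the same command shown later in the Advanced Mode section):
sudo /etc/init.d/hpcc-init status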
Back up the original environment.xml file.
# For example
sudo -u hpcc cp /etc/HPCCSystems/environment.xml /etc/HPCCSystems/source/environment-date.xml
Note:
The live environment.xml file is located in your
/etc/HPCCSystems/
directory. ConfigManager works on files in the /etc/HPCCSystems/source directory.
You must copy a file from that directory to make an
environment.xml file active.
You can also choose to give the environment file a more
descriptive name, to help differentiate between versions.
Having environment files under source control is a good
way to archive your environment settings.
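For example, a date-stamped copy (the file name here is only an illustration) makes it easy to tell archived versions apart, and the source directory can then be checked into a version control system of your choice:
sudo -u hpcc cp /etc/HPCCSystems/environment.xml /etc/HPCCSystems/source/environment-$(date +%Y%m%d).xml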
Copy the new .xml file from the source directory to
/etc/HPCCSystems and rename the file to
environment.xml
# for example
sudo -u hpcc cp /etc/HPCCSystems/source/NewEnvironment.xml /etc/HPCCSystems/environment.xml
Copy /etc/HPCCSystems/environment.xml to the
/etc/HPCCSystems/ directory on
every node.
You may want to use a script to push out the XML file to
all nodes. See the Example Scripts section
in the Appendix of the Installing and
Running the HPCC Platform document. You can use the
scripts as a model to create your own script to copy the
environment.xml file out to all your nodes.
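A minimal sketch of such a push script is shown below. It assumes a file named ips.txt listing one node IP address per line and that the account running it can write to /etc/HPCCSystems on each node; both the file name and the use of scp are illustrative assumptions, not part of the platform.
# push the active environment.xml to every node listed in ips.txt (hypothetical node list file)
for ip in $(cat ips.txt); do
  scp /etc/HPCCSystems/environment.xml $ip:/etc/HPCCSystems/environment.xml
done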
Restart the HPCC platform on all nodes.
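For example, run the standard init script on each node:
sudo /etc/init.d/hpcc-init start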
Configuration Manager Advanced View
For the advanced user, the Advanced View provides the means to add
additional instances of components and to change configuration settings for
individual components.
Using ConfigMgr in Advanced Mode
This section shows some of the configuration options available in Advanced
Mode. There are a few different ways to configure your system. If you
are not an experienced user, you can use the Generate environment wizard
discussed in the previous section. The following steps detail the
Advanced setup.
SSH to the first box in your environment and login as a user
with sudo privileges.
If it is running, stop the HPCC system using this command on
every node:
sudo /etc/init.d/hpcc-init stop
Note:
If you have a large system with many nodes, you may want
to use a script to perform this step. See the
Example Scripts section in the Appendix
of the Installing and Running the
HPCC Platform document.
You can use this command to confirm HPCC processes
are stopped: sudo /etc/init.d/hpcc-init status
Start the Configuration Manager service on one node (usually
the first node is considered the head node and is used for this
task, but this is up to you).
sudo /opt/HPCCSystems/sbin/configmgr
Using a Web browser, go to the Configuration Manager's
interface:
http://<ip of installed system>:8015
The Configuration Manager startup wizard displays.
Select Advanced View, then
press the Next button.
Select an XML file from the drop list.
This list is populated from the environment XML
files in your server's /etc/HPCCSystems/source/ directory.
The system checks the currently active environment file, and if a
match is found in this list, that file is highlighted in
blue.
Press the Next button.
The Configuration Manager interface displays.
Default access is read-only. Many options are only
available when write-access is enabled. Gain write
access by checking the Write
Access checkbox. Unchecking this
box returns the environment to read-only mode. All menu
items are disabled in read-only mode. Closing
the web page automatically removes any write-access
locks.
Check the Write Access
box.
The Save
button validates and saves the environment.
The Save Environment As
button validates and lets you specify the
environment filename to save.
The Validate Environment
button just validates the current environment
including any changes that have not yet been saved.
The Open Environment button
allows you to open a new environment file to work on.
The Wizard button
brings up the Configuration Manager chooser form, which
allows you to create or view an environment file and also
launch the configuration wizard.
These buttons are only enabled in Write Access mode.
XML View
In the advanced view of Configuration Manager, you can
optionally choose to work with the XML View.
To see the configuration in XML View, click on the
Environment heading in the Navigator panel on the left side.
You can access all
attributes through the XML view.
If you wish to add an attribute that does not exist, right-click
on one of the components and choose to add an
attribute.
Hardware Section
This section allows you to define your list of servers. When
defining instances of components, you will choose from servers in this
list.
Select Hardware in the
Navigator panel on the left side.
Select the Computers
tab.
Right-click on one of the computers listed, then select New
Range.
Specify the following:
Name Prefix - any name that will help you to identify the
node or range
Start IP Address
Stop IP Address
The IP addresses can be specified as a range if all your
host IP addresses are consecutively numbered. If the IP
addresses are not sequential, add each one individually by
entering the same address in both the start and stop IP address
fields, and repeat the process for each node (see the example
below).
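For example, to define the range used in the wizard example earlier (the prefix "node" is just an illustration), you would enter:
Name Prefix: node
Start IP Address: 10.239.219.1
Stop IP Address: 10.239.219.100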
Press the OK button.
The list of nodes now displays with the nodes that you just
added.
Next, edit each System Server component instance and set it to
a newly defined node.
Click the
disk icon to save
Expand the Software section,
if necessary, in the Navigator panel on the left side, by clicking
on the expand button.
Software Section
Use the software components section to configure software
components of the HPCC platform. Most software components are actual
running processes; however, some are just definitions used by the
system. These definitions are used by the configuration
generator.
Items that appear in red indicate
optional values. They are only written to the environment if you add to
or change that value. If untouched, they will not appear in the
environment XML file.
Backupnode
Backupnode allows you to back up Thor clusters at regular
intervals. The Backupnode component is a way to allow administrators
to manage the backupnode process without using a cron job.
To configure scheduled Thor node backups, add the backupnode
component, choose the hardware instance to run it on and then add Thor
groups to it.
Right-click on the Software component in the Navigator panel (on the left side),
choose New Components, then select
backupnode from the drop
list.
From the tabs on the right side, select the Instances tab.
Right-click on the computer column and choose Add Instances...
Select the computer for the backupnode component, or press
Add Hardware to add a new
computer instance. You should always run backupnode on
the Thor master of the cluster.
Select the Thor Node
Groups tab.
Right-click on the Interval column and choose the interval
and/or Thor group to back up.
Click the
disk icon to save
The default backup locations are:
/var/lib/HPCCSystems/hpcc-data/backupnode/<thorname>/last_backup
The interval attribute of the backupnode component determines
the number of hours between backups.
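For example, you could list the most recent backup for a Thor cluster named mythor (the default cluster name used elsewhere in this document):
ls /var/lib/HPCCSystems/hpcc-data/backupnode/mythor/last_backup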
Dali
Instances
Select Dali Server in
the Navigator panel on the left side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Click the disk icon to save
DaliServer attributes
This section describes the DaliServer attributes.
DaliServer store
This section describes the attributes that configure how Dali
handles the system data store.
DaliServer LDAP options
This section describes the DaliServer LDAP tab.
DaliServer Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
DaliServerPlugin
DaliServerPlugin allows you to add plugin functionality to a
Dali server.
DaliServerPlugin attributes
This section describes the DaliServerPlugin attributes.
DaliServerPlugin Options
This section describes the DaliServerPlugin options.
These options are available for the DaliServerPlugin when
configuring a Cassandra server. See the System Administrator's Guide
for more details about configuring a Cassandra server as a system
datastore.
randomWuidSuffix
An integer value indicating how many randomized
digits to append to workunit ids. Set this if you need to
create workunits at a high rate, to reduce the risk of
collisions (which would slow down the process of creating
a new unique workunit id).
traceLevel
An integer value indicating how much tracing to
output from Cassandra workunit operations. Set to zero or
do not set in normal usage.
partitions
An integer value indicating how many ways to
partition the data on a Cassandra cluster. The default is
2. The value only takes effect when a new Cassandra
workunit repository is created. Larger values permit
scaling to a more distributed store but at the expense of
some overhead on smaller stores where the scaling is not
needed.
prefixsize
An integer value specifying the minimum number of
characters that must be provided when wildcard searching
in the repository. Larger values will be more efficient
but also more restrictive on users. The default is 2. As
with partitions, this value only takes effect when a new
Cassandra workunit repository is created.
keyspace
The name of the Cassandra keyspace to use for the
HPCC data store. The default is
hpcc.
user
The username to use if the Cassandra server is
configured to require credentials.
password
The password to use if the Cassandra server is
configured to require credentials.
Dafilesrv Process
Dafilesrv Instances
Dafilesrv is a helper process that every node needs.
Select Dafilesrv in the Navigator panel on the left
side.
Select the Instances tab.
Right-click on a computer in the computer column, and
select Add Instance.
Select all computers in the list by checking the Select All box, then press the OK button.
Click the
disk icon to save
Dafilesrv attributes
This section describes the Dafilesrv attributes.
DFU Server
DfuServer Instances
Select DFU Server in the
Navigator panel on the left side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Click the
disk icon to save
DfuServer Attributes Tab
This section describes the DfuServer attributes.
DfuServer SSH Options
This section describes the DfuServer SSH Options.
DfuServer Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
Directories
The Directories component is a global definition used by other
components to determine the directories they will use for various
functions.
Name      Directory                        Description
log       /var/log/[NAME]/[INST]           Location for Log files
temp      /var/lib/[NAME]/[INST]/temp      Location for temp files
data                                       Base Location for data files
data2                                      Base Location for 2nd copy of Roxie data
data3                                      Reserved for future use
mirror                                     Base Location for mirror data files
query                                      Base Location for Queries
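As an illustration of how the bracketed tokens expand, assuming the default name HPCCSystems and a Dali instance named mydali (both assumed defaults, shown here only as an example):
# [NAME] = HPCCSystems, [INST] = mydali  (assumed defaults)
# log  -> /var/log/HPCCSystems/mydali
# temp -> /var/lib/HPCCSystems/mydali/temp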
Drop Zone
A Drop Zone (or landing zone) is a location where files can be
transferred to or from your HPCC system. The drop zone is a logical
combination of a path and one or more servers.
Multiple drop zones allow you to configure different top level
folders for one or more servers. Multiple servers for a single drop
zone provides a logical grouping of distinct locations. Multiple drop
zones are useful to allow different permissions for users or
groups.
To add a drop zone:
Right-click on the Navigator panel on the left side and
choose New Components, then
select Drop Zone.
Drop Zone Attributes
You can change the configuration of your drop zone using the
attributes tab. If you have multiple drop zones, select the drop
zone to configure from the Navigator panel on the left side.
To change the drop zone attributes:
On the Attributes tab,
select the Attribute to modify.
Double-click on the value on the right side of the
attribute table for the attribute you wish to modify.
For example, select the name attribute, double-click on the
value column, and provide the
drop zone with a more meaningful name.
Click the disk icon to save.
Drop Zone Server List
This tab allows you to add any servers that you wish to
configure as a part of the selected drop zone.
To add a server to the current drop zone:
Select the Drop Zone to
configure from the Navigator panel on the left side.
Select the Server List
tab, right-click on the Server Address field and choose
Add.
Enter the hostname or IP address of the server.
Click the disk icon to save.
Drop Zone Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
ECL Agent
Instances
Select ECL Agent in the Navigator panel on the left
side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Click the
disk icon to save
EclAgent Attributes Tab
This section describes the EclAgent Attributes tab.
EclAgent Options Tab
This section describes the EclAgent Options tab.
EclAgent Process Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
ECL CC Server Process
Ecl CC Server Instances
Select Ecl CC Server - myeclccserver in the Navigator
panel on the left side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Click the disk icon to save
Ecl CC Server Attributes Tab
This section describes the Ecl CC Server Attributes
tab.
EclCC Server Process Options
To add a custom option, right-click and select add. These
options are passed to the eclcc compiler.
See the ECL Compiler chapter in the Client Tools manual for details.
EclCC Server Process Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
ECL Scheduler
Instances
Select ECL Scheduler in
the Navigator panel on the left side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Click the
disk icon to save
EclScheduler Attributes Tab
This section describes the EclScheduler Attributes tab.
EclScheduler Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
ESP Server
Esp Process Instances
Select ESP - myesp in the
Navigator panel on the left side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Click the
disk icon to save
Esp - myesp Attributes Tab
This section describes the Esp - myesp Attributes tab.
Esp - myesp Service Bindings Tab
This section describes the Esp - myesp Service Bindings tab.
This tab requires additional steps to configure the service
bindings.
You must first add the service bindings in the first table
(right-click, then select Add). You can then configure the attributes in the
other tables on that tab. The next table describes the URL Authentication table,
and the tables that follow describe the ESPProcess Service Bindings
and Feature Authentications.
Esp - myesp Authentication Tab
This section describes the Esp - myesp Service Authentication
tab.
Additional information about the available Authentication
methods:
none
uses no authentication
local
uses the local credentials for the server running
the ESP
ldap
uses Lightweight Directory Access Protocol for
authentication
ldaps
similar to LDAP but uses a more secure (TLS)
protocol
secmgrPlugin
uses the security manager plug-in
Esp - myesp HTTPS Tab
This section describes the Esp - myesp HTTPS tab.
The cipherList attribute
allows you to set the ordered list of available ciphers for use by
openssl. See the documentation at openssl.org for more information
about ciphers.
EspProcess Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
ESP Services
ESP Services provide a means to add functionality to an ESP
Server.
ECL Watch Service
Ecl Watch allows you to configure options for the ECL Watch
utility.
ECL Watch Attribute definitions.
ECL Watch Monitoring attributes.
WsECL Service
The WsECL service allows you to configure options for the
WsECL utility.
The Ws ECL configuration attributes.
Ws ECL VIPS option attributes.
Ws ECL Target Restrictions table.
FTSlave Process
FTSlave is a helper process that every node needs. This section
depicts an FTSlave installation.
Instances
Select FTSlave in the Navigator panel on the left
side.
Select the Instances tab.
Right-click on a computer in the computer column, and
select Add Instance.
Select all computers in the list, then press the OK button.
Click the
disk icon to save
FtSlave attributes
This section describes an FTSlaveProcess attributes
tab.
FtSlave Process Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
LDAP Server Process
This section describes the configuration attributes of an
LDAPServer installation in ConfigManager. For a complete description
of how to add LDAP Authentication, see the Using LDAP
Authentication section in the Installing and Running The HPCC Platform
document.
LDAP Server Process Instances
This tab allows you to add instances to your LDAP
configuration. To add instances, you must have previously
added the LDAP computers in the Hardware section. For a complete
description of how to add LDAP Authentication, see the Using
LDAP Authentication section in the Installing and Running The HPCC Platform
document.
On the Instances tab,
right-click on the table on the right hand side, choose
Add Instances...
Select the computer to use by checking the box next to
it.
This is the computer you added in the Hardware / Add New
Computers portion earlier.
LDAP Server Process Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
Sasha Server
Instances
Select Sasha Server in the menu on the left side.
Select the Instances tab.
In the computer column, choose a node from the drop list
as shown below:
Sasha Server Attributes
This section describes the SashaServerProcess Attributes tab values.
SashaServer Process Archiver
This section describes the SashaServer Process Archiver
tab.
SashaServer Process Coalescer
This section describes the SashaServer Process Coalescer
tab.
SashaServer Process DfuXRef
This section describes the SashaServer Process DfuXref
tab.
SashaServer Process DfuExpiry
This section describes the SashaServer Process DfuExpiry
tab.
SashaServer Process ThorQMon
This section describes the SashaServer Process ThorQMon
tab.
SashaServer Process DaFileSrvMonitor
This section describes the SashaServer Process
DaFileSrvMonitor tab.
SashaServer Process Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
Thor
This section details how to define a Data Refinery (Thor)
cluster. Before you begin, you should decide the width of the cluster
(i.e., how many slave nodes will you have).
Select Thor Cluster -
mythor in the Navigator panel on the left side.
Select the Topology
tab.
Expand the Topology, if needed, then right-click the Master
and select Delete.
This deletes the sample one-node Thor.
You will replace this with a multi-node cluster.
Right-click on the Topology and select Add Master.
Select a computer from the list, then press the OK
button.
Right-click on the Master and select Add Slaves.
Select the computers to use as slaves from the list, then
press the OK button. Use CTRL+CLICK to multi-select or SHIFT+CLICK
to select a range.
The Nodes now display below the Thor Master
node.
Select Thor Cluster - mythor in the Navigator panel on the
left side.
Select the Attributes tab.
Change the value of localThor to false.
Click the
disk icon to save
Changing Thor topology
If you want to designate a different node as the Thor master
when setting up a multi-node system, follow these steps.
Select Thor Cluster -
mythor in the Navigator panel on the left
side.
Select the Topology
tab.
Right-click on the Master node
Select the Replace Master
option.
You should only use this feature when initially
setting up your system. If there is data on the nodes
when attempting to Swap Master, you run the risk of
losing or corrupting some data.
ThorCluster Attributes
This section describes the Thor Cluster Attributes tab.
Thor Memory Settings
When the globalMemorySize
is left unset, Thor[master] detects total physical memory and
allocates 75% of it. If there are multiple slaves per node
(slavesPerNode>1) it divides the total among the slaves. If
globalMemorySize is defined, then it allocates that amount of
memory to each slave. The masterMemorySize attribute allocates
memory for the Thor master. If omitted, Thor master uses
globalMemorySize, or the default 75% of memory.
On systems with a lot of memory, the default 75% of physical
memory is probably too conservative, and reserving total physical
memory minus 2GB (for the OS and other processes) is sensible. You should
then divide that number by the number of slavesPerNode.
If there are multiple Thors sharing the same nodes, then
globalMemorySize must be configured to take that into
account.
For example, if there are 2 Thors each with 2 slaves per
box, that will mean there are 4 slaves per physical node. So you
should use a formula similar to the following in your calculations
when configuring globalMemorySize:
globalMemorySize = ((total physical memory) - 2GB) / (2 * 2)
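For instance, assuming a node with 64 GB of physical memory (a hypothetical figure), the two Thors with two slaves per node described above would give:
globalMemorySize = (64 GB - 2 GB) / (2 * 2) = 15.5 GB per slave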
Without any specified setting, Thor assumes it has exclusive
access to the memory and would therefore use too much (because
each Thor is unaware of the other's configuration and memory
usage).
If localThor is set to true
and masterMemorySize and
globalMemorySize are unspecified,
then the defaults will be 50% for globalMemorySize (divided by slavesPerNode) and 25% for masterMemorySize.
Although a configuration may be set using upper memory
limits that exceed total physical memory, Thor will not actually
reserve the memory ahead of time and will only hit memory problems
when and if your jobs use all of the memory. So, for example, two
Thors that are configured to use all available memory could
peacefully co-exist until queries on each are simultaneously using
more memory than the node has available.
ThorCluster SSH Options
This section describes the ThorCluster SSH Options tab.
ThorCluster Debug
The Debug tab is for internal use only.
ThorCluster Swap Node
This section describes the ThorCluster Swap Node tab.
ThorCluster Notes
This tab allows you to add any notes pertinent to the
component's configuration. This can be useful to keep a record of
changes and to communicate this information to peers.
Roxie
This section details how to define a Rapid Data Delivery Engine
(Roxie) cluster. Before you begin, you should decide the width of the
cluster (i.e., how many agent nodes will you have).
Select Roxie Cluster in the
Navigator panel on the left side.
Note: If you did not
specify a value in the Number of nodes for Roxie cluster
field when you first set up your environment, you will
not have a Roxie Cluster. To add a Roxie Cluster component:
Right-click on the Software
component in the Navigator Panel, then select New Components then roxie from the drop lists.
Select the Servers
tab.
Right-click the Roxie Servers and select
Reconfigure Servers.
Select the computers to use as Servers from the list, then
press the OK button.
Select the Redundancy
tab.
Select the redundancy scheme to use. Typically, this is
cyclic redundancy, as shown below.
Click the
disk icon to save
Close Configuration Manager by pressing ctrl+C in the
command window where it is running.
Roxie Configuration Attributes
Roxie has many configurable attributes which can be used
for customizing and tuning to your specific needs. The following
section expands on each of the Roxie tabs and the available
attributes. There is additional Roxie configuration information in
the section immediately following these tables.
Additional Roxie Configuration items
Add Servers to Roxie Farm
To add servers to a Roxie farm:
Select the Roxie Cluster -
myroxie (default) from the Navigator window on the
left side.
Select the Servers
tab.
Right-click on Roxie
Servers.
Select Reconfigure
Servers.
Press the Add Hardware
button.
Enter the values for the new server(s) in the dialog
then press OK.
All configured servers are then used when you create
a port to listen on.
NOTE:
If you are working with an older environment file, this process
has changed. You no longer have to specify a specific port for
each server.
Redundancy
Roxie can be configured to utilize a few different redundancy
models.
Simple Redundancy - One channel per slave. Most commonly
used for a single node Roxie.
Full Redundancy - More slaves than the number of
channels. Multiple slaves host each channel.
Overloaded Redundancy - There are multiple channels per
slave.
Cyclic Redundancy - Each node hosts multiple channels in
rotation. The most commonly used configuration.
Topology
This section describes the topology tab.
Attribute name     Definition
Topology           Describes the system topology
Cluster - thor     Describes the Thor clusters
Cluster - hthor    Describes the hthor clusters
Cluster - roxie    Describes the Roxie clusters
Distribute Configuration Changes to all Nodes
Once your environment is set up as desired, you must copy the
configuration file to the other nodes.
If it is running, stop the system.
Be sure the system is stopped before attempting to
copy the environment.xml file.
Back up the original environment.xml file
# for example
sudo -u hpcc cp /etc/HPCCSystems/environment.xml /etc/HPCCSystems/environment.bak
Note: the "live environment.xml file is located in your
/etc/HPCCSystems/ directory.
ConfigManager works on files in /etc/HPCCSystems/source directory. You
must copy from this location to make an environment.xml file
active.
Copy the NewEnvironment.xml file from the source directory
to /etc/HPCCSystems and rename the file to
environment.xml
# for example
sudo -u hpcc cp /etc/HPCCSystems/source/NewEnvironment.xml /etc/HPCCSystems/environment.xml
Copy the /etc/HPCCSystems/environment.xml to the
/etc/HPCCSystems/ on every node.
Restart the HPCC system
You might prefer to script this process, especially if you
have many nodes. See the Example Scripts section in the Appendix
of the Installing and Running the HPCC Platform document. You can
use the scripts as a model to create your own script to copy the
environment.xml file out to all your nodes.