Docker images related to HPCC are structured as follows:
hpccsystems/platform-build-base
This image contains all the development packages required to build the HPCC platform, but no HPCC code or sources. It changes rarely. The current version is tagged 7.10 and is based on the Ubuntu 20.04 base image.
hpccsystems/platform-build
Building this image builds and installs the HPCC codebase for a specified git tag of the HPCC platform sources. The Dockerfile takes arguments naming the version of the platform-build-base image to use and the git tag to build. Sources are fetched from GitHub. An image is pushed to Docker Hub for every public tag on the HPCC-Platform repository in GitHub, which developers can use as a base for their own development.
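For local experimentation you can drive such a build by hand. The sketch below is hedged: the real ARG names are defined in platform-build/Dockerfile and are normally supplied by buildall.sh, so the BASE_VER/BUILD_TAG names and the tag used here are assumptions.

```sh
# Hedged sketch; the actual ARG names and tagging scheme come from
# platform-build/Dockerfile and buildall.sh.
docker build \
  --build-arg BASE_VER=7.10 \
  --build-arg BUILD_TAG=community_7.10.0-1 \
  -t hpccsystems/platform-build:community_7.10.0-1 \
  platform-build/
```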
There is a second Dockerfile in platform-build-incremental that can be used by developers working on a branch that is not yet tagged or merged upstream; it uses hpccsystems/platform-build as a base in order to avoid full rebuilds each time the image is built.
hpccsystems/platform-core
This uses the build artefacts from an hpccsystems/platform-build image to install a copy of the full platform code, which can be used to run any HPCC component.
If you need additional components installed on your cluster, such as Python libraries, you can create a Docker image based on platform-core with the additional components installed. An example can be found in the examples/numpy directory. You then override the image name when deploying the helm chart in order to enable your additional components.
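A minimal sketch of such an image, loosely modeled on the examples/numpy example, is shown below; the base tag and the hpcc user name are assumptions here, so check the example and the platform-core image for the real values.

```dockerfile
# Hedged sketch: extend platform-core with numpy; base tag and user name are assumptions
FROM hpccsystems/platform-core:community_7.10.0-1
USER root
RUN apt-get update && \
    apt-get install -y --no-install-recommends python3-pip && \
    pip3 install numpy && \
    rm -rf /var/lib/apt/lists/*
USER hpcc
```

When deploying, you would then point the chart at your image rather than the stock one via the image settings in your values override (the chart reads the image version from global.image, as used by the global.json snippet later in this document).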
The following scripts in this directory help with building and testing the images (a usage sketch follows this list):

- buildall.sh - Used to create and publish a Docker image corresponding to a GitHub tag
- cleanup.sh - Clean up old Docker images (if disk gets full)
- incr.sh - Build local images for testing (delta from a published image)
- startall.sh - Start a local k8s cluster, and optionally the Elastic Stack for log processing purposes
- stopall.sh - Stop a local k8s cluster, and the optional Elastic Stack
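A rough local development loop using these scripts might look like the following; the options each script accepts vary between versions, so treat this as a sketch rather than exact invocations.

```sh
# Hedged sketch of a local workflow; check each script's usage output for real options
./incr.sh          # build incremental local images on top of a published base image
./startall.sh      # start a local k8s cluster (optionally with the Elastic Stack)
# ... run tests against the local cluster ...
./stopall.sh       # shut everything down again
./cleanup.sh       # reclaim disk space from old images when needed
```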
The Helm chart in helm/hpcc/ can be used to deploy an entire HPCC environment to a K8s cluster. Its values file has the following overall structure (an illustrative override is sketched after the block):

    global:
      # The global section applies to all components within the HPCC system.

    dali:
    esp:
    roxie:
    eclccserver:
    # etc.

    # Each section specifies a list of one or more components of the specified type.
    # Within each section, there is a map of settings specific to that instance of the
    # component, including (at least) name, plus any other required settings (which vary
    # according to component type).
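As an illustration only (the component names and settings below are hypothetical, not copied from the shipped values.yaml), a small override file following that structure might look like:

```yaml
# Hedged illustration of the overall shape, not the chart's actual defaults
global:
  image:
    version: "7.10.0"   # the chart reads .Values.global.image.version (see the global.json snippet below)

dali:
- name: mydali

esp:
- name: eclwatch
```

It could then be deployed with something like `helm install mycluster helm/hpcc -f myvalues.yaml`.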
There are some helper templates in _util.tpl to assist in generating the k8s yaml for each component. Many of these are used for standard boilerplate that ends up in every component, for example:
- hpcc.utils.addImageAttrs (see the sketch below)
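The sketch below shows roughly how such a helper might be consumed from a component template; the surrounding pod spec and the indentation are assumptions, and the real call sites live in the chart's templates.

```yaml
# Hedged sketch: invoking a named helper from a component's container spec
spec:
  containers:
  - name: {{ .name }}
{{ include "hpcc.utils.addImageAttrs" . | indent 4 }}
```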
Each component can specify local configuration via config: or configFile: settings. configFile names a file that is copied verbatim into the relevant ConfigMap, while config: allows the config file's contents to be specified inline.
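A hedged illustration of the two styles follows; the component type, names, file name, and config keys below are made up for illustration and are not taken from the shipped chart.

```yaml
esp:
- name: myesp
  configFile: myesp.yaml    # this file is copied verbatim into the component's ConfigMap
- name: myesp2
  config: |
    # inline equivalent: the config file's contents, specified directly
    someSetting: someValue
```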
In addition, global config info (the same for every component) is generated into a global.json file and made available via the ConfigMap mechanism. So far, this only contains

    "version": {{ .root.Values.global.image.version | quote }}

but we can add more.
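So, with a hypothetical global.image.version of 7.10.0, the rendered global.json would look roughly like:

```json
{
  "version": "7.10.0"
}
```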
When running under K8s, Roxie has three fundamental modes of operation (a configuration sketch follows this list):

1. Scalable array of one-way roxie servers
   - Set localSlave=true, replicas=initial number of pods
2. Per-channel-scalable array of combined servers/slaves
   - localSlave=false, numChannels=nn, replicas=initial number of pods per channel (default 2)
   - There will be numChannels*replicas pods in total
3. Scalable array of servers with a per-channel-scalable array of slaves
   - localSlave=false, numChannels=nn, replicas=pods per channel, serverReplicas=initial number of server pods
   - There will be numChannels*replicas slave pods and serverReplicas server pods in total
   - This mode is somewhat experimental at present!
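A hedged sketch of how the second mode might be expressed in the values file, using only the settings named above; the component name and the exact nesting are assumptions.

```yaml
roxie:
- name: roxie
  localSlave: false    # combined server/slave pods, scaled per channel
  numChannels: 2
  replicas: 2          # pods per channel, so numChannels*replicas = 4 pods in total
  serverReplicas: 0    # set > 0 to add a separate scalable server array (experimental mode)
```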