**Why do Kubernetes pods show `ImagePullBackOff` or `ErrImagePull` errors in their status?**
Potential Cause:
* These errors occur when the Docker pull limit is exceeded.
Resolution:
* For `omnia.yml` and `control_plane.yml`: Provide the Docker username and password for the Docker Hub account in the *omnia_config.yml* file and execute the playbook.
* For the HPC cluster, during `omnia.yml` execution, a Kubernetes secret named 'dockerregcred' is created in the default namespace and patched to the service account. Users need to patch this secret into their respective namespace while deploying custom applications and use the secret as imagePullSecrets in the YAML file to avoid ErrImagePull. [Click here for more info](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/)
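For example, a minimal sketch of reusing the secret in a custom namespace (the namespace name `myapp` is a placeholder, not an Omnia default):

```bash
# Copy the dockerregcred secret from the default namespace into an
# assumed custom namespace "myapp", then attach it to that namespace's
# default service account so pods there can pull images.
kubectl get secret dockerregcred -n default -o yaml \
  | sed 's/namespace: default/namespace: myapp/' \
  | kubectl apply -f -
kubectl patch serviceaccount default -n myapp \
  -p '{"imagePullSecrets": [{"name": "dockerregcred"}]}'
```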
**Note**: If the playbook has already been executed and the pods are in the `ImagePullBackOff` state, run `kubeadm reset -f` on all the nodes before re-executing the playbook with the Docker credentials.
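If the nodes are reachable over Ansible, one hedged way to run the reset everywhere (the inventory file path is an assumption for this sketch):

```bash
# Assumes an Ansible inventory file named "inventory" listing all nodes.
ansible all -i inventory -b -m command -a "kubeadm reset -f"
```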
**What to do when the error "The connection to the server head_node_ip:port was refused - did you specify the right host or port?" is displayed:**
Resolution:
On the control plane or the manager node, run the following commands:
* `swapoff -a`
* `systemctl restart kubelet`
**What to do when `control_plane.yml` fails at the webui_awx stage:**
In the `webui_awx/files` directory, delete the `.tower_cli.cfg` and `.tower_vault_key` files, and then re-run `control_plane.yml`.
**Why does FreeIPA server or client installation fail?**
Potential Cause:
The hostnames of the manager and login nodes are not set in the correct format.
Resolution:
If you have enabled the option to install the login node in the cluster, set the hostnames of the nodes in the format *hostname.domainname*. For example, *manager.omnia.test* is a valid hostname for the login node. **Note**: To find the cause of a FreeIPA server or client installation failure, see */var/log/ipaserver-install.log* on the manager node or */var/log/ipaclient-install.log* on the login node.
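For instance, a valid hostname can be set as follows (reusing the example name from above):

```bash
# Set a hostname in the required hostname.domainname format.
hostnamectl set-hostname manager.omnia.test
```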
**Why are device details not updated in the AWX inventories?**
Potential Cause:
The provided device credentials may be invalid.
Resolution:
Manually validate/update the relevant login information on the AWX settings screen and re-run `device_inventory_job`. Optionally, wait 24 hours for the scheduled inventory job to run.
**Why are hosts not listed in the AWX UI after running `control_plane.yml`?**
Hosts that are not in DHCP mode do not get populated in the host list when `control_plane.yml` is run.
**Why is the error "Failure in talking to yum: Cannot find a valid baseurl for repo: base/7/x86_64" displayed?**
Potential Cause:
There are connections missing on the NFS node.
Resolution:
Ensure that three NICs are being used on the NFS node (see the quick check after this list):
1. For provisioning the OS
2. For connecting to the internet (Management purposes)
3. For connecting to PowerVault (Data Connection)
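A quick sanity check, assuming standard iproute2 tooling on the NFS node:

```bash
# List the interfaces that are up; the NFS node should show three NICs,
# one each for provisioning, internet access, and the PowerVault link.
ip -br link show up
```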
**Why is the error "Error creating pod: container failed to start, ImagePullBackOff" displayed?**
Potential Cause:
After running `control_plane.yml`, the AWX image got deleted.
Resolution:
Run the following commands:
1. `cd omnia/control_plane/roles/webui_awx/files`
2. `buildah bud -t custom-awx-ee awx_ee.yml`
**What to do when the device name and connection name of a NIC do not match:**
Potential Cause:
The device name and connection name listed by the network manager in `/etc/sysconfig/network-scripts/ifcfg-<nic name>` do not match.
Resolution:
1. Run `nmcli connection` to list all available connections and their attributes.
2. Update `/etc/sysconfig/network-scripts/ifcfg-<nic name>` using the vi editor so that the names match (a sketch follows).
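A minimal sketch of the fix, assuming the NIC is named `eno1` (replace with your device name):

```bash
# Compare NetworkManager's connection names with the ifcfg file.
nmcli connection show
# Inspect NAME= and DEVICE= in the matching ifcfg file; they must agree.
cat /etc/sysconfig/network-scripts/ifcfg-eno1
# Edit the file so NAME matches the device, then reload the profiles.
vi /etc/sysconfig/network-scripts/ifcfg-eno1
nmcli connection reload
```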
**Are hosts automatically removed from the AWX UI when the cluster is re-deployed?**
No. Before re-deploying the cluster, users have to manually delete all hosts from the AWX UI.
**What to do when `control_plane.yml` fails because the AWX UI is not yet up:**
Resolution:
Wait for the AWX UI to be accessible at http://<management-station-IP>:8081, and then run `control_plane.yml` again, where management-station-IP is the IP address of the management node.
**Why does the task `control_plane_common: Assert Value of idrac_support if mngmt_network container needed` fail?**
When `device_config_support` is set to true, `idrac_support` also needs to be set to true.
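A hedged check of the two flags (the relative path to `base_vars.yml` depends on where the input parameters live in your clone):

```bash
# Both flags must be true together, e.g.:
#   device_config_support: true
#   idrac_support: true
grep -E 'device_config_support|idrac_support' base_vars.yml
```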
**What to do when the Kubernetes cluster is unresponsive after a reboot:**
Wait for 15 minutes after the Kubernetes cluster reboots. Then verify the status of the cluster using the following commands on the manager node:
* `kubectl get nodes` to get the real-time k8s cluster status.
* `kubectl get pods --all-namespaces` to check which pods are in the Running state.
* `kubectl cluster-info` to verify that both the k8s master and kubeDNS are in the Running state.

**What to do when Kubernetes services are not in the Running state:**
1. Run `kubectl get pods --all-namespaces` to verify that all pods are in the Running state.
2. If a pod is not in the Running state, delete it with `kubectl delete pods <name of pod>` and re-run the corresponding playbook: `omnia.yml`, `jupyterhub.yml`, or `kubeflow.yml`.
3. Run `kubectl get pods --namespace default` to ensure that the nfs-client pod and all Prometheus server pods are in the Running state.
**Why does `control_plane.yml` fail during the Run import command?**
Resolution:
Run `docker rm -f cobbler` and re-run `control_plane.yml`.

**How to enable internet access for compute nodes connected only to the host network:**
To enable routing, update `primary_dns` and `secondary_dns` in `base_vars` with the appropriate IPs (hostnames are currently not supported). For compute nodes that are not directly connected to the internet (i.e., only the host network is configured), this configuration allows for internet connectivity.
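A sketch of the corresponding entries (the IP addresses below are placeholders, not defaults):

```bash
# Assumed base_vars entries; substitute DNS servers reachable from
# your host network:
#   primary_dns: "10.0.0.1"
#   secondary_dns: "10.0.0.2"
# After re-provisioning, verify internet connectivity from a compute node:
ping -c 3 8.8.8.8
```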
**What to do when PXE boot fails:**
Resolution:
1. Delete the Cobbler container and image by running `docker rm -f cobbler` and `docker image rm -f cobbler`.
2. Go to `BIOS Setup -> Network Settings -> PXE Device`. For each listed device (typically 4), configure an active NIC under `PXE device settings`.
**What to do when Slurm services do not start automatically after the cluster reboots:**
* Manually restart the following services on the manager node:
  * `systemctl restart slurmdbd`
  * `systemctl restart slurmctld`
  * `systemctl restart prometheus-slurm-exporter`
* Run `systemctl restart slurmd` on all the compute nodes, and confirm the result with `systemctl status slurmd`.

**Why do Slurm services fail?**
Potential Cause: The `slurm.conf` file is not configured properly.
Recommended Actions:
1. Run the following commands:
   * `slurmdbd -Dvvv`
   * `slurmctld -Dvvv`
2. Refer to the `/var/lib/log/slurmctld.log` file for more information.
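One hedged way to cross-check `slurm.conf` against the actual node hardware (both are standard Slurm utilities):

```bash
# Print this node's hardware as slurmd detects it; compare the output
# with the NodeName definitions in slurm.conf.
slurmd -C
# Confirm the configuration the running daemons actually loaded.
scontrol show config | less
```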
**What to do when the Slurm database connection fails:**
Recommended Actions:
1. Run the following commands:
   * `slurmdbd -Dvvv`
   * `slurmctld -Dvvv`
2. Refer to the `/var/lib/log/slurmctld.log` file.
3. Check the output of `netstat -antp | grep LISTEN` for PIDs in the listening state.
4. If stale processes are holding the ports, restart the services:
   * `systemctl restart slurmctld` on the manager node
   * `systemctl restart slurmdbd` on the manager node
   * `systemctl restart slurmd` on the compute nodes
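For reference, a quick filter on the default Slurm ports (6817 for slurmctld, 6819 for slurmdbd; adjust if your site overrides them):

```bash
# Show only the listeners relevant to the Slurm daemons.
netstat -antp | grep LISTEN | grep -E ':6817|:6819'
```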
**What to do when the Kubernetes cluster fails because DNS is unresponsive:**
Potential Cause: The host network is faulty, causing DNS to be unresponsive.
Resolution:
1. Run `kubeadm reset -f` on all the nodes.
2. Update the `omnia_config.yml` file to change the Kubernetes pod network CIDR. The suggested IP range is 192.168.0.0/16; ensure that the IP range provided is not in use on your host network.
3. Re-run the playbook, skipping Slurm: `ansible-playbook omnia.yml --skip-tags slurm`.
**What to do when Kubernetes deployment fails due to connectivity issues:**
Potential Cause: Unstable or slow internet connectivity.
Resolution:
Update the Kubernetes CNI from `calico` to `flannel` in `omnia_config.yml` and re-run the playbook.

**What to do when an AWX job fails to run:**
Run `kubectl rollout restart deployment awx -n awx` from the control plane and try to re-run the job.
If the above solution doesn't work:
1. Delete `/var/nfs_awx`.
2. Delete `omnia/control_plane/roles/webui_awx/files/.tower_cli.cfg`.
3. Re-run `control_plane.yml`.
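A condensed sketch of that cleanup, assuming the repository was cloned to `omnia/` in the current directory:

```bash
# Remove the stale AWX state, then re-run the control plane playbook.
rm -rf /var/nfs_awx
rm -f omnia/control_plane/roles/webui_awx/files/.tower_cli.cfg
ansible-playbook omnia/control_plane/control_plane.yml
```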
After the cleanup, run `kubectl get pods --all-namespaces` to check the status of all pods, and refer to `/var/log/omnia.log` for more information.

**How to restore deleted Cobbler profiles:**
Update `provision_os` and `iso_file_path` in `base_vars.yml`, and re-run `control_plane.yml` with different values for `provision_os` and `iso_file_path` to restore the profiles.

**Are reboots required when `device_config_support` is updated?**
If `device_config_support` is set to TRUE, a reboot is required; if `device_config_support` is set to FALSE, no reboots are required.

**Why does a PermissionError occur when running `idrac.yml` or other .yml files from AWX?**
Potential Cause: The "PermissionError: [Errno 13] Permission denied" error is displayed if you have used the ansible-vault decrypt or encrypt commands.
Resolution:
Run `chmod 664 <filename>.yml`.
It is recommended to use the ansible-vault view or edit commands instead of the ansible-vault decrypt or encrypt commands.
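A hedged example of the recommended workflow (the vault key file name below is an assumption; use whichever key file your setup created):

```bash
# View or edit the vaulted file in place instead of decrypting it.
ansible-vault view omnia_config.yml --vault-password-file .omnia_vault_key
ansible-vault edit omnia_config.yml --vault-password-file .omnia_vault_key
# If permissions were already broken by a decrypt/encrypt cycle:
chmod 664 omnia_config.yml
```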
**How to verify that the iDRAC Lifecycle Controller is ready:**
Run `racadm getremoteservicesstatus`.
**Why is the error "Error: The specified disk is not available. - Unavailable disk (0.x) in disk range '0.x-x'" displayed?**
Resolution:
Run `show disks` on the PowerVault to verify that the disk in question is available.
**Why is the error "You cannot create a linear disk group when a virtual disk group exists on the system" displayed?**
At any given time, only one type of disk group can exist on the system. That is, all disk groups on the system have to be exclusively linear or virtual. To fix the issue, either delete the existing disk group or change the type of pool you are creating.
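A sketch using the PowerVault CLI session (the disk-group name `dgA01` is a placeholder):

```bash
# List existing disk groups and their types (linear vs. virtual).
show disk-groups
# If you choose to remove the conflicting group rather than change the
# pool type, delete it by name (destructive; back up data first).
remove disk-groups dgA01
```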
**Why does Omnia fail to manage some PowerEdge servers?**
Potential Cause: Older firmware versions on the PowerEdge servers. Omnia supports only iDRAC 8 based Dell EMC PowerEdge servers with firmware versions 2.75.75.75 and above, and iDRAC 9 based Dell EMC PowerEdge servers with firmware versions 4.40.40.00 and above.
**What to do before re-running `control_plane.yml` after an AWX configuration failure:**
Delete the following:
* `/var/nfs_awx`
* `/<project name>/control_plane/roles/webui_awx/files/.tower_cli.cfg`
Once complete, it's safe to re-run `control_plane.yml`.
**Why does `control_plane.yml` fail when the management station hostname contains an underscore?**
Potential Cause: The control_plane playbook does not support hostnames with an underscore in them, such as 'mgmt_station'.
As defined in RFC 822, the only legal characters are the following:
* Alphanumeric (a-z and 0-9): Both uppercase and lowercase letters are acceptable, and the hostname is case-insensitive. In other words, dvader.empire.gov is identical to DVADER.EMPIRE.GOV and Dvader.Empire.Gov.
* Hyphen (-): Neither the first nor the last character in a hostname field should be a hyphen.
* Period (.): The period should be used only to delimit fields in a hostname (e.g., dvader.empire.gov).
**Why does JupyterHub deployment fail?**
Potential Cause: Your Docker pull limit has been exceeded.
Resolution:
Delete the existing deployment with `helm delete jupyterhub -n jupyterhub`, and then re-run `jupyterhub.yml`.
**Why does Kubeflow deployment fail?**
Potential Cause: Your Docker pull limit has been exceeded.
Resolution:
Delete the existing deployment with `kfctl delete -V -f /root/k8s/omnia-kubeflow/kfctl_k8s_istio.v1.0.2.yaml`, and then re-run `kubeflow.yml`.
**Can Cobbler deploy two different operating systems at the same time?**
No. During Cobbler-based deployment, only one OS is supported at a time. To deploy both, deploy one first, unmount `/mnt/iso`, and then re-run Cobbler for the second OS.
**Why do firmware updates fail for some components?**
Due to the latest `catalog.xml` file, firmware updates may fail for certain components. Omnia execution doesn't get interrupted, but an error gets logged on AWX. For now, download those individual updates manually.
**Why does `infiniband.yml` fail?**
To configure a new InfiniBand switch, the HTTP and JSON gateways must be enabled. To verify that they are enabled, run:
* `show web` (to check whether HTTP is enabled)
* `show json-gw` (to check whether the JSON gateway is enabled)
To correct the issue, run:
* `web http enable` (to enable the HTTP gateway)
* `json-gw enable` (to enable the JSON gateway)