
Merge branch 'devel' into devel

Lucas A. Wilson, 4 years ago
Parent commit: 67307e4d38
100 changed files with 3,705 additions and 1,140 deletions
  1. appliance/input_config.yml (+39 -0)
  2. appliance/inventory.yml (+4 -8)
  3. appliance/roles/common/tasks/docker_installation.yml (+15 -3)
  4. appliance/roles/common/tasks/main.yml (+7 -1)
  5. appliance/roles/common/tasks/package_installation.yml (+1 -1)
  6. appliance/roles/common/tasks/password_config.yml (+117 -0)
  7. appliance/roles/common/tasks/pre_requisite.yml (+3 -3)
  8. appliance/roles/common/vars/main.yml (+29 -7)
  9. appliance/roles/inventory/files/add_host.yml (+43 -0)
  10. appliance/roles/inventory/files/create_inventory.yml (+93 -0)
  11. appliance/roles/inventory/files/inventory (+3 -0)
  12. appliance/roles/inventory/tasks/main.yml (+73 -0)
  13. appliance/roles/inventory/vars/main.yml (+16 -0)
  14. appliance/roles/provision/files/.users.digest (+0 -1)
  15. appliance/roles/provision/files/Dockerfile (+7 -5)
  16. appliance/roles/provision/files/ifcfg-eno1 (+3 -3)
  17. appliance/roles/provision/files/inventory_creation.yml (+34 -0)
  18. appliance/roles/provision/files/kickstart.yml (+47 -12)
  19. appliance/roles/provision/files/settings (+2 -2)
  20. appliance/roles/provision/files/start_cobbler.yml (+27 -0)
  21. appliance/roles/provision/files/temp_centos7.ks (+63 -0)
  22. appliance/roles/provision/files/temp_centos8.ks (+0 -51)
  23. appliance/roles/provision/files/tftp.yml (+32 -0)
  24. appliance/roles/provision/tasks/check_prerequisites.yml (+11 -0)
  25. appliance/roles/provision/tasks/configure_cobbler.yml (+14 -6)
  26. appliance/roles/provision/tasks/configure_nic.yml (+4 -4)
  27. appliance/roles/provision/tasks/firewall_settings.yml (+1 -1)
  28. appliance/roles/provision/tasks/main.yml (+1 -2)
  29. appliance/roles/provision/tasks/mount_iso.yml (+3 -3)
  30. appliance/roles/provision/tasks/provision_password.yml (+40 -80)
  31. appliance/roles/provision/vars/main.yml (+6 -19)
  32. appliance/roles/web_ui/files/awx_configuration.yml (+0 -153)
  33. appliance/roles/web_ui/tasks/awx_configuration.yml (+187 -0)
  34. appliance/roles/web_ui/tasks/awx_password.yml (+0 -118)
  35. appliance/roles/web_ui/tasks/check_prerequisites.yml (+0 -8)
  36. appliance/roles/web_ui/tasks/clone_awx.yml (+22 -0)
  37. appliance/roles/web_ui/tasks/install_awx.yml (+25 -8)
  38. appliance/roles/web_ui/tasks/install_awx_cli.yml (+4 -2)
  39. appliance/roles/web_ui/tasks/main.yml (+45 -8)
  40. appliance/roles/web_ui/vars/main.yml (+13 -19)
  41. appliance/test/cobbler_inventory (+0 -3)
  42. appliance/test/input_config_empty.yml (+39 -0)
  43. appliance/test/input_config_test.yml (+39 -0)
  44. appliance/test/provisioned_hosts.yml (+3 -0)
  45. appliance/test/test_common.yml (+1333 -15)
  46. appliance/test/test_provision_cc.yml (+365 -67)
  47. appliance/test/test_provision_cdip.yml (+83 -322)
  48. appliance/test/test_provision_ndod.yml (+90 -41)
  49. appliance/test/test_vars/test_common_vars.yml (+18 -11)
  50. appliance/test/test_vars/test_provision_vars.yml (+21 -8)
  51. omnia.yml (+79 -20)
  52. roles/cluster_preperation/tasks/main.yml (+36 -0)
  53. roles/cluster_preperation/tasks/passwordless_ssh.yml (+65 -0)
  54. roles/cluster_preperation/vars/main.yml (+19 -0)
  55. roles/cluster_validation/tasks/fetch_password.yml (+34 -0)
  56. roles/cluster_validation/tasks/main.yml (+22 -0)
  57. roles/cluster_validation/tasks/validations.yml (+36 -0)
  58. roles/cluster_validation/vars/main.yml (+22 -0)
  59. roles/common/files/daemon.json (+1 -1)
  60. roles/common/files/inventory.fact (+1 -1)
  61. roles/common/handlers/main.yml (+17 -12)
  62. roles/common/tasks/main.yml (+5 -59)
  63. roles/common/tasks/ntp.yml (+25 -25)
  64. roles/common/tasks/nvidia.yml (+1 -1)
  65. roles/common/templates/chrony.conf.j2 (+1 -2)
  66. roles/common/templates/ntp.conf.j2 (+1 -3)
  67. roles/common/vars/main.yml (+7 -17)
  68. roles/k8s_common/files/k8s.conf (+0 -0)
  69. roles/k8s_common/files/kubernetes.repo (+0 -0)
  70. roles/k8s_common/handlers/main.yml (+28 -0)
  71. roles/k8s_common/tasks/main.yml (+77 -0)
  72. roles/k8s_common/vars/main.yml (+31 -0)
  73. roles/firewalld/tasks/main.yml (+2 -2)
  74. roles/firewalld/vars/main.yml (+1 -2)
  75. roles/k8s_manager/tasks/main.yml (+0 -0)
  76. roles/k8s_manager/vars/main.yml (+0 -0)
  77. roles/k8s_nfs_client_setup/tasks/main.yml (+40 -0)
  78. roles/k8s_nfs_client_setup/vars/main.yml (+20 -0)
  79. roles/k8s_nfs_server_setup/tasks/main.yml (+84 -0)
  80. roles/k8s_nfs_server_setup/vars/main.yml (+25 -0)
  81. roles/k8s_start_manager/files/create_admin_user.yaml (+0 -0)
  82. roles/k8s_start_manager/files/create_clusterRoleBinding.yaml (+0 -0)
  83. roles/startmanager/files/data-pv.yaml (+0 -0)
  84. roles/startmanager/files/data2-pv.yaml (+0 -0)
  85. roles/startmanager/files/data3-pv.yaml (+0 -0)
  86. roles/startmanager/files/data4-pv.yaml (+0 -0)
  87. roles/startmanager/files/flannel_net.sh (+0 -0)
  88. roles/startmanager/files/katib-pv.yaml (+0 -0)
  89. roles/k8s_start_manager/files/kube-flannel.yaml (+0 -0)
  90. roles/k8s_start_manager/files/kubeflow_persistent_volumes.yaml (+0 -0)
  91. roles/startmanager/files/minio-pvc.yaml (+0 -0)
  92. roles/startmanager/files/mysql-pv.yaml (+0 -0)
  93. roles/k8s_start_manager/files/nfs-class.yaml (+0 -0)
  94. roles/k8s_start_manager/files/nfs-deployment.yaml (+0 -0)
  95. roles/k8s_start_manager/files/nfs-serviceaccount.yaml (+0 -0)
  96. roles/k8s_start_manager/files/nfs_clusterrole.yaml (+0 -0)
  97. roles/k8s_start_manager/files/nfs_clusterrolebinding.yaml (+0 -0)
  98. roles/startmanager/files/notebook-pv.yaml (+0 -0)
  99. roles/startmanager/files/persistent_volumes.yaml (+0 -0)
  100. roles/startmanager/files/pvc.yaml (+0 -0)

+ 39 - 0
appliance/input_config.yml

@@ -0,0 +1,39 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+# Password used while deploying OS on bare metal servers and for Cobbler UI.
+# The Length of the password should be atleast 8.
+# The password must not contain -,\, ',"
+provision_password: ""
+
+# Password used for the AWX UI.
+# The Length of the password should be atleast 8.
+# The password must not contain -,\, ',"
+awx_password: ""
+
+# Password used for Slurm database.
+# The Length of the password should be atleast 8.
+# The password must not contain -,\, ',"
+mariadb_password: ""
+
+# The nic/ethernet card that needs to be connected to the HPC switch.
+# This nic will be configured by Omnia for the DHCP server.
+# Default value of nic is em1.
+hpc_nic: "em1"
+
+# The nic card that needs to be connected to the public internet.
+# The public_nic should be em2, em1 or em3
+# Default value of nic is em2.
+public_nic: "em2"

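Note: for reference, a filled-in input_config.yml might look like the sketch below. All values are illustrative, not defaults shipped with this commit; per the comments above, passwords must be 8-30 characters, must not contain the characters - \ ' " and hpc_nic must differ from public_nic.

    ---
    provision_password: "C0bblerPass123"   # hypothetical value
    awx_password: "AwxPass12345"           # hypothetical value
    mariadb_password: "MariaDbPass123"     # hypothetical value
    hpc_nic: "em1"
    public_nic: "em2"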
+ 4 - 8
appliance/inventory.yml

@@ -1,4 +1,4 @@
-# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
 #
 # Licensed under the Apache License, Version 2.0 (the "License");
 # you may not use this file except in compliance with the License.
@@ -12,13 +12,9 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 ---
-
-# inventory playbook. Will be updated later
-- name: omnia
+- name: Dynamic Inventory
   hosts: localhost
   connection: local
   gather_facts: no
-  tasks:
-    - name: Hello
-      debug:
-        msg: "Hello inventory.yml"
+  roles:
+    - inventory

+ 15 - 3
appliance/roles/common/tasks/docker_installation.yml

@@ -30,8 +30,8 @@
 
 - name: Install docker
   package:
-    name: "{{ container_repo_install }}" 
-    state: latest
+    name: "{{ container_repo_install }}"
+    state: present
   become: yes
   tags: install
 
@@ -43,6 +43,18 @@
   become: yes
   tags: install
 
+- name: Uninstall docker-py using pip
+  pip:
+    name: ['docker-py','docker']
+    state: absent
+  tags: install
+
+- name: Install docker using pip
+  pip:
+    name: docker
+    state: present
+  tags: install
+
 - name: Installation using python3
   pip:
     name: "{{ docker_compose }}"
@@ -57,5 +69,5 @@
 
 - name: Restart docker
   service:
-    name: docker 
+    name: docker
     state: restarted

+ 7 - 1
appliance/roles/common/tasks/main.yml

@@ -12,6 +12,9 @@
 #  See the License for the specific language governing permissions and
 #  limitations under the License.
 ---
+- name: Mount Path
+  set_fact:
+    mount_path: "{{ role_path + '/../../..'  }}"
 
 - name: Pre-requisite validation
   import_tasks: pre_requisite.yml
@@ -26,4 +29,7 @@
   import_tasks: docker_installation.yml
 
 - name: Docker volume creation
-  import_tasks: docker_volume.yml
+  import_tasks: docker_volume.yml
+
+- name: Basic Configuration
+  import_tasks: password_config.yml

+ 1 - 1
appliance/roles/common/tasks/package_installation.yml

@@ -16,5 +16,5 @@
 - name: Install packages
   package:
     name: "{{ common_packages }}"
-    state: latest
+    state: present
   tags: install

+ 117 - 0
appliance/roles/common/tasks/password_config.yml

@@ -0,0 +1,117 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+- name: Check input config file is encrypted
+  command: cat {{ input_config_filename }}
+  changed_when: false
+  register: config_content
+
+- name: Decrpyt input_config.yml
+  command: ansible-vault decrypt {{ input_config_filename }} --vault-password-file {{ role_path }}/files/{{ vault_filename }}
+  changed_when: false
+  when: "'$ANSIBLE_VAULT;' in config_content.stdout"
+
+- name: Include variable file input_config.yml
+  include_vars: "{{ input_config_filename }}"
+
+- name: Validate input parameters are not empty
+  fail:
+    msg: "{{ input_config_failure_msg }}"
+  register: input_config_check
+  when: (provision_password | length < 1) or (awx_password | length < 1) or (mariadb_password | length < 1) or (hpc_nic | length < 1) or (public_nic | length < 1)
+
+- name: Save input variables from file
+  set_fact:
+    cobbler_password: "{{ provision_password }}"
+    admin_password: "{{ awx_password }}"
+    input_mariadb_password: "{{ mariadb_password }}"
+    nic:  "{{ hpc_nic }}"
+    internet_nic: "{{ public_nic }}"
+
+- name: Assert provision_password
+  assert:
+    that:
+      - cobbler_password | length > min_length | int - 1
+      - cobbler_password | length < max_length | int + 1
+      - '"-" not in cobbler_password '
+      - '"\\" not in cobbler_password '
+      - '"\"" not in cobbler_password '
+      - " \"'\" not in cobbler_password "
+    success_msg: "{{ success_msg_provision_password }}"
+    fail_msg: "{{ fail_msg_provision_password }}"
+  register: cobbler_password_check
+
+- name: Assert awx_password
+  assert:
+    that:
+        - admin_password | length > min_length | int - 1
+        - admin_password | length < max_length | int + 1
+        - '"-" not in admin_password '
+        - '"\\" not in admin_password '
+        - '"\"" not in admin_password '
+        - " \"'\" not in admin_password "
+    success_msg: "{{ success_msg_awx_password }}"
+    fail_msg: "{{ fail_msg_awx_password }}"
+  register: awx_password_check
+
+- name: Assert mariadb_password
+  assert:
+    that:
+        - input_mariadb_password | length > min_length | int - 1
+        - input_mariadb_password | length < max_length | int + 1
+        - '"-" not in input_mariadb_password '
+        - '"\\" not in input_mariadb_password '
+        - '"\"" not in input_mariadb_password '
+        - " \"'\" not in input_mariadb_password "
+    success_msg: "{{ success_msg_mariadb_password }}"
+    fail_msg: "{{ fail_msg_mariadb_password }}"
+  register: mariadb_password_check
+
+- name: Assert hpc_nic
+  assert:
+    that:
+      - nic | length > nic_min_length | int - 1
+      - nic != internet_nic
+    success_msg: "{{ success_msg_hpc_nic }}"
+    fail_msg: "{{ fail_msg_hpc_nic }}"
+  register: hpc_nic_check
+
+- name: Assert public_nic
+  assert:
+    that:
+      - internet_nic | length > nic_min_length | int - 1
+      - nic != internet_nic
+      - "('em1' in internet_nic) or ('em2' in internet_nic) or ('em3' in internet_nic)"
+    success_msg: "{{ success_msg_public_nic }}"
+    fail_msg: "{{ fail_msg_public_nic }}"
+  register: public_nic_check
+
+- name: Create ansible vault key
+  set_fact:
+    vault_key: "{{ lookup('password', '/dev/null chars=ascii_letters') }}"
+  when: "'$ANSIBLE_VAULT;' not in config_content.stdout"
+
+- name: Save vault key
+  copy:
+    dest: "{{ role_path }}/files/{{ vault_filename }}"
+    content: |
+      {{ vault_key }}
+    owner: root
+    force: yes
+  when: "'$ANSIBLE_VAULT;' not in config_content.stdout"
+
+- name: Encrypt input config file
+  command: ansible-vault encrypt {{ input_config_filename }} --vault-password-file {{ role_path }}/files/{{ vault_filename }}
+  changed_when: false

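Note: the length checks above are written as "> min_length | int - 1" and "< max_length | int + 1"; an equivalent and arguably clearer formulation (a sketch, not what the commit uses; min_length and max_length come from the role's vars/main.yml below, 8 and 30) is:

    - name: Assert provision_password
      assert:
        that:
          # inclusive bounds instead of the off-by-one exclusive form
          - cobbler_password | length >= min_length | int
          - cobbler_password | length <= max_length | int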
+ 3 - 3
appliance/roles/common/tasks/pre_requisite.yml

@@ -20,8 +20,8 @@
     replace: 'log_path = /var/log/omnia.log'
   tags: install
 
-- name: Check OS support 
-  fail: 
+- name: Check OS support
+  fail:
     msg: "{{ os_status }}"
   when: not(ansible_distribution == os_name and ansible_distribution_version >= os_version)
   register: os_value
@@ -33,7 +33,7 @@
   tags: install
 
 - name: Status of SElinux
-  fail: 
+  fail:
     msg: "{{ selinux_status }}"
   when: ansible_selinux.status != 'disabled'
   register: selinux_value

+ 29 - 7
appliance/roles/common/vars/main.yml

@@ -15,7 +15,7 @@
 
 # vars file for common
 
-# Usage: tasks/package_installation.yml
+# Usage: package_installation.yml
 common_packages:
   - epel-release
   - yum-utils
@@ -25,23 +25,27 @@ common_packages:
   - nodejs
   - device-mapper-persistent-data
   - bzip2
+  - python2-pip
   - python3-pip
   - nano
   - lvm2
   - gettext
+  - python-docker
 
-# Usage: tasks/pre_requisite.yml
+# Usage: pre_requisite.yml
 internet_delay: 0
 internet_timeout: 1
 hostname: github.com
 port_no: 22
 os_name: CentOS
-os_version: '8' 
-internet_status: "Failed:No Internet connection.Connect to Internet."
+os_version: '7.9' 
+internet_status: "Failed: No Internet connection.Connect to Internet."
 os_status: "Unsupported OS or OS version.OS must be {{ os_name }} and Version must be {{ os_version }} or more"
 selinux_status: "SElinux is not disabled. Disable it in /etc/sysconfig/selinux and reboot the system"
+iso_name: CentOS-7-x86_64-Minimal-2009.iso
+iso_fail: "Iso file absent: Download and copy the iso file in omnia/appliance/roles/provision/files"
 
-# Usage: tasks/docker_installation.yml
+# Usage: docker_installation.yml
 docker_repo_url: https://download.docker.com/linux/centos/docker-ce.repo
 docker_repo_dest: /etc/yum.repos.d/docker-ce.repo
 success: '0'
@@ -50,5 +54,23 @@ container_repo_install: docker-ce
 docker_compose: docker-compose
 daemon_dest: /etc/docker/
 
-# Usage: tasks/docker_volume.yml
-docker_volume_name: omnia-storage
+# Usage: docker_volume.yml
+docker_volume_name: omnia-storage
+
+# Usage: password_config.yml
+input_config_filename: "input_config.yml"
+fail_msg_provision_password: "Failed. Incorrect provision_password format provided in input_config.yml file"
+success_msg_provision_password: "provision_password validated"
+fail_msg_awx_password: "Failed. Incorrect awx_password format provided in input_config.yml file"
+success_msg_awx_password: "awx_password validated"
+fail_msg_mariadb_password: "Failed. Incorrect mariadb_password format provided in input_config.yml file"
+success_msg_mariadb_password: "mariadb_password validated"
+fail_msg_hpc_nic: "Failed. Incorrect hpc_nic format provided in input_config.yml file"
+success_msg_hpc_nic: "hpc_nic validated"
+fail_msg_public_nic: "Failed. Incorrect public_nic format provided in input_config.yml file"
+success_msg_public_nic: "public_nic validated"
+input_config_failure_msg: "Please provide all the required parameters in input_config.yml"
+min_length: 8
+max_length: 30
+nic_min_length: 3
+vault_filename: .vault_key

+ 43 - 0
appliance/roles/inventory/files/add_host.yml

@@ -0,0 +1,43 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+- name: Check if host already exists
+  command: awk "{{ '/'+ item + '/' }}" inventory
+  register: check_host
+  changed_when: no
+
+- name: Initialise host description
+  set_fact:
+    host_description: "Description Unavailable"
+
+- name: Fetch description
+  set_fact:
+    host_description: "CPU:{{ hostvars[item]['ansible_processor_count'] }}
+    Cores:{{ hostvars[item]['ansible_processor_cores'] }}
+    Memory:{{ hostvars[item]['ansible_memtotal_mb'] }}MB
+    BIOS:{{ hostvars[item]['ansible_bios_version']}}"
+  changed_when: no
+  ignore_errors: yes
+
+- name: Add host
+  lineinfile:
+    path:  "inventory"
+    line: "    {{ item }}:\n      _awx_description: {{ host_description }}"
+  when: not check_host.stdout | regex_search(item)
+
+- name: Host added msg
+  debug:
+    msg: "{{ host_added_msg + item }}"
+  when: not check_host.stdout | regex_search(item)

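Note: after add_host.yml has processed a reachable node, the inventory file seeded later in this commit ends up with one mapping per host. For a node 172.17.0.10 (address and hardware figures hypothetical) the result would look roughly like:

    all:
      hosts:
        172.17.0.10:
          _awx_description: CPU:2 Cores:8 Memory:64000MB BIOS:2.8.2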
+ 93 - 0
appliance/roles/inventory/files/create_inventory.yml

@@ -0,0 +1,93 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+- name: Find reachable hosts
+  hosts: all
+  gather_facts: false
+  ignore_unreachable: true
+  ignore_errors: true
+  tasks:
+    - name: Check for reachable nodes
+      command: ping -c1 {{ inventory_hostname }}
+      delegate_to: localhost
+      register: ping_result
+      ignore_errors: yes
+      changed_when: false
+
+    - name: Group reachable hosts
+      group_by:
+        key: "reachable"
+      when: "'100% packet loss' not in ping_result.stdout"
+
+- name: Get provision password
+  hosts: localhost
+  connection: local
+  gather_facts: false
+  tasks:
+    - name: Include vars file of inventory role
+      include_vars: ../vars/main.yml
+
+- name: Set hostname on reachable nodes and gather facts
+  hosts: reachable
+  gather_facts: False
+  remote_user: "{{ cobbler_username }}"
+  vars:
+    ansible_password: "{{ cobbler_password }}"
+    ansible_become_pass: "{{ cobbler_password }}"
+  tasks:
+    - name: Setup
+      setup:
+       filter: ansible_*
+
+    - name: Set the system hostname
+      hostname:
+        name: "compute{{ inventory_hostname.split('.')[-2] + '.' + inventory_hostname.split('.')[-1] }}"
+      register: result_name
+
+    - name: Add new hostname to /etc/hosts
+      lineinfile:
+        dest: /etc/hosts
+        regexp: '^127\.0\.0\.1[ \t]+localhost'
+        line: "127.0.0.1 localhost 'compute{{ inventory_hostname.split('.')[-1] }}'"
+        state: present
+
+    - name: Ensure networking connection
+      command: nmcli networking off
+      changed_when: false
+
+    - name: Ensure networking connection
+      command: nmcli networking on
+      changed_when: false
+
+    - name: Ensure networking connection
+      command: nmcli networking on
+      changed_when: false
+
+- name: Update inventory
+  hosts: localhost
+  connection: local
+  gather_facts: false
+  tasks:
+    - name: Update inventory file
+      block:
+        - name: Fetch facts and add new hosts
+          include_tasks: add_host.yml
+          with_items: "{{ groups['reachable'] }}"
+      when: "'reachable' in groups"
+
+    - name: Show unreachable hosts
+      debug:
+        msg: "{{ host_unreachable_msg }} + {{ groups['ungrouped'] }}"
+      when: "'ungrouped' in groups"

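Note: a worked example of the hostname rule above, with a hypothetical address:

    # inventory_hostname = "172.17.0.10"
    # split('.')[-2] -> "0", split('.')[-1] -> "10"
    # hostname task sets:    compute0.10
    # /etc/hosts line adds:  127.0.0.1 localhost 'compute10'   (last octet only)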
+ 3 - 0
appliance/roles/inventory/files/inventory

@@ -0,0 +1,3 @@
+---
+all:
+  hosts:

+ 73 - 0
appliance/roles/inventory/tasks/main.yml

@@ -0,0 +1,73 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+- name: Set Facts
+  set_fact:
+    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
+
+- name: Disable key host checking
+  replace:
+    path: /etc/ansible/ansible.cfg
+    regexp: '#host_key_checking = False'
+    replace: 'host_key_checking = False'
+
+- name: Disable host key checking
+  replace:
+    path: /etc/ssh/ssh_config
+    regexp: '#   StrictHostKeyChecking ask'
+    replace: 'StrictHostKeyChecking no'
+
+- name: Check if provisioned host file exists
+  stat:
+    path: "{{ role_path }}/files/provisioned_hosts.yml"
+  register: provisioned_file_result
+
+- name: Include vars file of common role
+  include_vars: "{{ role_path }}/../common/vars/main.yml"
+
+- name: Include vars file of web_ui role
+  include_vars: "{{ role_path }}/../web_ui/vars/main.yml"
+
+- name: Update inventory file
+  block:
+    - name: Decrpyt input_config.yml
+      command: >-
+        ansible-vault decrypt {{ input_config_filename }}
+        --vault-password-file roles/common/files/{{ vault_filename }}
+      changed_when: false
+
+    - name: Include variable file input_config.yml
+      include_vars: "{{ input_config_filename }}"
+
+    - name: Save input variables from file
+      set_fact:
+        cobbler_password: "{{ provision_password }}"
+
+    - name: Encrypt input config file
+      command: >-
+        ansible-vault encrypt {{ input_config_filename }}
+        --vault-password-file roles/common/files/{{ vault_filename }}
+
+    - name: add hosts with description to inventory file
+      command: >-
+        ansible-playbook -i {{ role_path }}/files/provisioned_hosts.yml
+        {{ role_path }}/files/create_inventory.yml
+        --extra-vars "cobbler_username={{ cobbler_username }} cobbler_password={{ cobbler_password }}"
+      ignore_errors: yes
+
+  when: provisioned_file_result.stat.exists
+
+- name: push inventory to AWX
+  command: awx-manage inventory_import --inventory-name {{ omnia_inventory_name }} --source {{ role_path }}/files/inventory
+  changed_when: no

+ 16 - 0
appliance/roles/inventory/vars/main.yml

@@ -0,0 +1,16 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+host_added_msg: "Added host to inventory: "
+host_unreachable_msg: "Following hosts are unreachable: "

+ 0 - 1
appliance/roles/provision/files/.users.digest

@@ -1 +0,0 @@
-cobbler:Cobbler:

+ 7 - 5
appliance/roles/provision/files/Dockerfile

@@ -15,12 +15,12 @@ RUN yum install -y \
   cobbler-web \
   ansible \
   pykickstart \
+  cronie \
   debmirror \
   curl \
-  wget \
   rsync \
   httpd\
-  dhcp\
+  dhcp \
   dnsmasq\
   xinetd \
   net-tools \
@@ -28,6 +28,8 @@ RUN yum install -y \
   && yum clean all \
   &&  rm -rf /var/cache/yum
 
+RUN mkdir /root/omnia
+
 #Copy Configuration files
 COPY settings /etc/cobbler/settings
 COPY dhcp.template  /etc/cobbler/dhcp.template
@@ -36,7 +38,9 @@ COPY modules.conf  /etc/cobbler/modules.conf
 COPY tftp /etc/xinetd.d/tftp
 COPY .users.digest /etc/cobbler/users.digest
 COPY kickstart.yml /root
-COPY centos8.ks /var/lib/cobbler/kickstarts
+COPY tftp.yml /root
+COPY inventory_creation.yml /root
+COPY centos7.ks /var/lib/cobbler/kickstarts
 COPY first-sync.sh /usr/local/bin/first-sync.sh
 
 EXPOSE 69 80 443 25151
@@ -48,6 +52,4 @@ RUN systemctl enable httpd
 RUN systemctl enable rsyncd
 RUN systemctl enable dnsmasq
 
-#RUN ansible-playbook /root/kickstart.yml
-
 CMD ["sbin/init"]

+ 3 - 3
appliance/roles/provision/files/ifcfg-eno1

@@ -9,9 +9,9 @@ IPV6_AUTOCONF=yes
 IPV6_DEFROUTE=yes
 IPV6_FAILURE_FATAL=no
 IPV6_ADDR_GEN_MODE=stable-privacy
-NAME=eno1
-UUID=468847a9-d146-4062-813b-85f74ffd6e2a
-DEVICE=eno1
+NAME=em1
+UUID=485d7133-2c49-462d-bbb4-b854fe98e0fe
+DEVICE=em1
 ONBOOT=yes
 IPV6_PRIVACY=no
 IPADDR=172.17.0.1

+ 34 - 0
appliance/roles/provision/files/inventory_creation.yml

@@ -0,0 +1,34 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- hosts: localhost
+  connection: local
+  gather_facts: false
+  tasks:
+    - name: Read dhcp file
+      set_fact:
+        var: "{{ lookup('file', '/var/lib/dhcpd/dhcpd.leases').split()| unique | select| list }}"
+
+    - name: Filter the ip
+      set_fact:
+        vars_new: "{{ var| ipv4('address')| to_nice_yaml}}"
+
+    - name: Create the inventory
+      shell: |
+        echo "[all]" > omnia/appliance/roles/inventory/files/provisioned_hosts.yml
+        echo "{{ vars_new }}" > temp.txt
+        egrep -o '[1-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' temp.txt >>omnia/appliance/roles/inventory/files/provisioned_hosts.yml
+      changed_when: false
+

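Note: run from cron inside the cobbler container, the play above distils /var/lib/dhcpd/dhcpd.leases down to a flat host list. With two leased addresses (hypothetical), the generated provisioned_hosts.yml would contain:

    [all]
    172.17.0.10
    172.17.0.11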
+ 47 - 12
appliance/roles/provision/files/kickstart.yml

@@ -17,53 +17,88 @@
   connection: local
   gather_facts: false
   vars:
-    name_iso: CentOS8
-    distro_name: CentOS8-x86_64
-    kernel_path: /var/www/cobbler/ks_mirror/CentOS8-x86_64/isolinux/vmlinuz
-
+    name_iso: CentOS7
+    distro_name: CentOS7-x86_64
   tasks:
   - name: Inside cobbler container
     debug:
       msg: "Hiii! I am cobbler"
 
-  - name: Start services
+  - name: Start xinetd
     service:
       name: "{{ item }}"
       state: started
     loop:
       - cobblerd
-      - httpd
-      - rsyncd
       - xinetd
+      - rsyncd
       - tftp
+      - httpd
 
   - name: Cobbler get-loaders
     command: cobbler get-loaders
     changed_when: false
 
+  - name: Get fence agents
+    package:
+      name: fence-agents
+      state: present
+
+  - name: Replace in /etc/debian
+    replace:
+      path: "/etc/debmirror.conf"
+      regexp: "^@dists=\"sid\";"
+      replace: "#@dists=\"sid\";"
+
+  - name: Replace in /etc/debian
+    replace:
+      path: "/etc/debmirror.conf"
+      regexp: "^@arches=\"i386\";"
+      replace: "#@arches=\"i386\";"
+
+  - name: Adding curl
+    shell: export PATH="/usr/bin/curl:$PATH"
+    changed_when: true
+
   - name: Run import command
     command: cobbler import --arch=x86_64 --path=/mnt --name="{{ name_iso }}"
     changed_when: false
 
   - name: Distro list
-    command: >-
-      cobbler distro edit --name="{{ distro_name }}" --kernel="{{ kernel_path }}" --initrd=/var/www/cobbler/ks_mirror/CentOS8-x86_64/isolinux/initrd.img
+    command: cobbler distro edit --name="{{ distro_name }}" --kernel=/var/www/cobbler/ks_mirror/CentOS7-x86_64/isolinux/vmlinuz --initrd=/var/www/cobbler/ks_mirror/CentOS7-x86_64/isolinux/initrd.img
     changed_when: false
 
   - name: Kickstart profile
-    command: cobbler profile edit --name="{{ distro_name }}" --kickstart=/var/lib/cobbler/kickstarts/centos8.ks
+    command: cobbler profile edit --name="{{ distro_name }}" --kickstart=/var/lib/cobbler/kickstarts/centos7.ks
     changed_when: false
 
   - name: Syncing of cobbler
     command: cobbler sync
     changed_when: false
 
-  - name: Start xinetd
+  - name: Restart cobbler
+    service:
+      name: cobblerd
+      state: restarted
+
+  - name: Restart xinetd
     service:
       name: xinetd
       state: restarted
 
-  - name: Start dhcp
+  - name: Restart dhcpd
     service:
       name: dhcpd
       state: restarted
+
+  - name: Add tftp cron job
+    cron:
+      name: Start tftp service
+      minute: "*"
+      job: "ansible-playbook /root/tftp.yml"
+
+  - name: Add inventory cron job
+    cron:
+      name: Create inventory
+      minute: "*/5"
+      job: "ansible-playbook /root/inventory_creation.yml"

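Note: the two cron tasks above translate to entries like the following in root's crontab inside the container; the "#Ansible:" marker comments are how the cron module identifies jobs it manages:

    #Ansible: Start tftp service
    * * * * * ansible-playbook /root/tftp.yml
    #Ansible: Create inventory
    */5 * * * * ansible-playbook /root/inventory_creation.yml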
+ 2 - 2
appliance/roles/provision/files/settings

@@ -98,7 +98,7 @@ default_ownership:
 # The simplest way to change the password is to run
 # openssl passwd -1
 # and put the output between the "" below.
-default_password_crypted: "$1$mF86/UHC$WvcIcX2t6crBz2onWxyac."
+default_password_crypted: "password"
 
 # the default template type to use in the absence of any
 # other detected template. If you do not specify the template
@@ -243,7 +243,7 @@ manage_dhcp: 1
 
 # set to 1 to enable Cobbler's DNS management features.
 # the choice of DNS mangement engine is in /etc/cobbler/modules.conf
-manage_dns: 1
+manage_dns: 0
 
 # set to path of bind chroot to create bind-chroot compatible bind
 # configuration files.  This should be automatically detected.

+ 27 - 0
appliance/roles/provision/files/start_cobbler.yml

@@ -0,0 +1,27 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- name: Start cobbler on reboot
+  hosts: localhost
+  connection: local
+  gather_facts: false
+  tasks:
+    - name: Wait for 2 minutes
+      pause:
+        minutes: 2
+
+    - name: Execute cobbler sync in cobbler container
+      command: docker exec cobbler cobbler sync
+      changed_when: true

+ 63 - 0
appliance/roles/provision/files/temp_centos7.ks

@@ -0,0 +1,63 @@
+#version=DEVEL
+
+# Use network installation
+url --url http://ip/cblr/links/CentOS7-x86_64/
+
+# Install OS instead of upgrade
+install
+
+# Use text install
+text
+
+# SELinux configuration
+selinux --disabled
+
+# Firewall configuration
+firewall --disabled
+
+# Do not configure the X Window System
+skipx
+
+# Run the Setup Agent on first boot
+#firstboot --enable
+ignoredisk --only-use=sda
+
+# Keyboard layouts
+keyboard us
+
+# System language
+lang en_US
+
+# Network information
+network  --bootproto=dhcp --device=nic --onboot=on
+
+# Root password
+rootpw --iscrypted password
+
+# System services
+services --enabled="chronyd"
+
+# System timezone
+timezone Asia/Kolkata --isUtc
+
+# System bootloader configuration
+bootloader --location=mbr --boot-drive=sda
+
+# Partition clearing information
+clearpart --all --initlabel --drives=sda
+
+# Clear the Master Boot Record
+zerombr
+
+# Disk Partitioning
+partition /boot/efi --asprimary --fstype=vfat --label EFI  --size=200
+partition /boot     --asprimary --fstype=ext4 --label BOOT --size=500
+partition /         --asprimary --fstype=ext4 --label ROOT --size=4096 --grow
+
+# Reboot after installation
+reboot
+
+%packages
+@core
+%end
+

+ 0 - 51
appliance/roles/provision/files/temp_centos8.ks

@@ -1,51 +0,0 @@
-#platform=x86, AMD64, or Intel EM64T
-#version=DEVEL
-# Firewall configuration
-firewall --disabled
-# Install OS instead of upgrade
-install
-# Use network installation
-url --url http://ip/cblr/links/CentOS8-x86_64/
-#repo --name="CentOS" --baseurl=cdrom:sr0 --cost=100
-#Root password
-rootpw --iscrypted password
-# Use graphical install
-#graphical
-#Use text mode install
-text
-#System language
-lang en_US
-#System keyboard
-keyboard us
-#System timezone
-timezone America/Phoenix --isUtc
-# Run the Setup Agent on first boot
-#firstboot --enable
-# SELinux configuration
-selinux --disabled
-# Do not configure the X Window System
-skipx
-# Installation logging level
-#logging --level=info
-# Reboot after installation
-reboot
-# System services
-services --disabled="chronyd"
-ignoredisk --only-use=sda
-# Network information
-network  --bootproto=dhcp --device=em1 --onboot=on
-# System bootloader configuration
-bootloader --location=mbr --boot-drive=sda
-# Clear the Master Boot Record
-zerombr
-# Partition clearing information
-clearpart --all --initlabel
-# Disk partitioning information
-part /boot --fstype="xfs" --size=300
-part swap --fstype="swap" --size=2048
-part pv.01 --size=1 --grow
-volgroup root_vg01 pv.01
-logvol / --fstype xfs --name=lv_01 --vgname=root_vg01 --size=1 --grow
-%packages
-@core
-%end

+ 32 - 0
appliance/roles/provision/files/tftp.yml

@@ -0,0 +1,32 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- name: Start tftp
+  hosts: localhost
+  connection: local
+  tasks:
+    - name: Fetch tftp status
+      command: systemctl is-active tftp
+      args:
+        warn: no
+      register: tftp_status
+      ignore_errors: yes
+      changed_when: false
+
+    - name: Start tftp if inactive state
+      command: systemctl start tftp.service
+      args:
+        warn: no
+      when: "('inactive' in tftp_status.stdout) or ('unknown' in tftp_status.stdout)"

+ 11 - 0
appliance/roles/provision/tasks/check_prerequisites.yml

@@ -13,6 +13,17 @@
 # limitations under the License.
 ---
 
+- name: Check availability of iso file
+  stat:
+    path: "{{ role_path }}/files/{{ iso_name }}"
+  register: iso_status
+
+- name: Iso file not present
+  fail:
+    msg: "{{ iso_fail }}"
+  when: iso_status.stat.exists == false
+  register: iso_file_check
+
 - name: Initialize variables
   set_fact:
     cobbler_status: false

+ 14 - 6
appliance/roles/provision/tasks/configure_cobbler.yml

@@ -13,13 +13,21 @@
 # limitations under the License.
 ---
 
-- name: Stop the firewall
-  service:
-    name: firewalld
-    state: stopped
-  tags: install
-
 - name: Configuring cobbler inside container (It may take 5-10 mins)
   command: docker exec cobbler ansible-playbook /root/kickstart.yml
   changed_when: false
   tags: install
+  when: not cobbler_status
+
+- name: Schedule task
+  cron:
+    name: "start cobbler on reboot"
+    special_time: reboot
+    job: "ansible-playbook {{ role_path }}/files/start_cobbler.yml"
+  tags: install
+  when: not cobbler_status
+
+- name: Execute cobbler sync in cobbler container
+  command: docker exec cobbler cobbler sync
+  changed_when: true
+  when: cobbler_status == true

+ 4 - 4
appliance/roles/provision/tasks/configure_nic.yml

@@ -15,17 +15,17 @@
 
 - name: Configure NIC-1
   copy:
-    src: "ifcfg-{{ eno }}"
-    dest: "/etc/sysconfig/network-scripts/ifcfg-{{ eno }}"
+    src: "ifcfg-{{ nic }}"
+    dest: "/etc/sysconfig/network-scripts/ifcfg-{{ nic }}"
     mode: 0644
   tags: install
 
 - name: Restart NIC
-  command: ifdown {{ eno }}
+  command: ifdown {{ nic }}
   changed_when: false
   tags: install
 
 - name: Restart NIC
-  command: ifup {{ eno }}
+  command: ifup {{ nic }}
   changed_when: false
   tags: install

+ 1 - 1
appliance/roles/provision/tasks/firewall_settings.yml

@@ -45,7 +45,7 @@
 
 - name:  Permit traffic in default zone on port 69/udp
   firewalld:
-    port: 69/tcp
+    port: 69/udp
     permanent: yes
     state: enabled
   tags: install

+ 1 - 2
appliance/roles/provision/tasks/main.yml

@@ -46,7 +46,6 @@
 
 - name: Cobbler configuration
   import_tasks: configure_cobbler.yml
-  when: not cobbler_status
 
 - name: Cobbler container status message
   block:
@@ -58,4 +57,4 @@
         msg: "{{ message_installed }}"
         verbosity: 2
       when: not cobbler_status
-  tags: install
+  tags: install

+ 3 - 3
appliance/roles/provision/tasks/mount_iso.yml

@@ -32,13 +32,13 @@
 
 - name: Update mount status
   set_fact:
-    mount_check: result.failed
+    mount_check: "{{ result.failed }}"
   tags: install
 
 - name: Mount the iso file
-  command: mount -o loop {{ role_path }}/files/{{ iso_image }} /mnt/{{ iso_path }}
+  command: mount -o loop {{ role_path }}/files/{{ iso_name }} /mnt/{{ iso_path }}
   changed_when: false
   args:
     warn: no
-  when:  mount_check
+  when: mount_check == true
   tags: install

+ 40 - 80
appliance/roles/provision/tasks/provision_password.yml

@@ -26,97 +26,46 @@
     mode: 0644
   tags: install
 
-- name: Take provision Password
-  block:
-  - name: Provision Password (Min length should be 8)
-    pause:
-      prompt: "{{ prompt_password }}"
-      echo: no
-    register: prompt_admin_password
-    until:
-      - prompt_admin_password.user_input | length >  min_length| int  - 1
-    retries: "{{ no_of_retry }}"
-    delay: "{{ retry_delay }}"
-    when: admin_password is not defined and no_prompt is not defined
-  rescue:
-  - name: Abort if password validation fails
-    fail:
-      msg: "{{ msg_incorrect_format }}"
-  tags: install
-
-- name: Assert admin_password if prompt not given
-  assert:
-    that:
-        - admin_password | length >  min_length| int  - 1
-    success_msg: "{{ success_msg_pwd_format }}"
-    fail_msg: "{{ fail_msg_pwd_format }}"
-  register: msg_pwd_format
-  when: admin_password is defined and no_prompt is defined
-  tags: install
-
-- name: Save admin password
-  set_fact:
-    admin_password: "{{ prompt_admin_password.user_input }}"
-  when: no_prompt is not defined
-  tags: install
-
-- name: Confirm password
-  block:
-  - name: Confirm provision password
-    pause:
-      prompt: "{{ confirm_password }}"
-      echo: no
-    register: prompt_admin_password_confirm
-    until: admin_password == prompt_admin_password_confirm.user_input
-    retries: "{{ no_of_retry }}"
-    delay: "{{ retry_delay }}"
-    when: admin_password_confirm is not defined and no_prompt is not defined
-  rescue:
-  - name: Abort if password confirmation failed
-    fail:
-      msg: "{{ msg_failed_password_confirm }}"
-  tags: install
-
-- name: Assert admin_password_confirm if prompt not given
-  assert:
-    that: admin_password == admin_password_confirm
-    success_msg: "{{ success_msg_pwd_confirm }}"
-    fail_msg: "{{ fail_msg_pwd_confirm }}"
-  register: msg_pwd_confirm
-  when: admin_password_confirm is defined and no_prompt is defined
-  tags: install
-
 - name: Encrypt cobbler password
-  shell: >
-     set -o pipefail && \
-     digest="$( printf "%s:%s:%s" {{ username }} "Cobbler" {{ admin_password }} | md5sum | awk '{print $1}' )"
-     printf "%s:%s:%s\n" "{{ username }}" "Cobbler" "$digest" > "{{ role_path }}/files/.users.digest"
-  args:
-    executable: /bin/bash
+  shell: printf "%s:%s:%s" {{ username }} "Cobbler" {{ cobbler_password }} | md5sum | awk '{print $1}'
   changed_when: false
+  register: encrypt_password
   tags: install
 
-- name: Read password file
-  set_fact:
-    var: "{{ lookup('file', role_path+'/files/.users.digest').splitlines() }}"
+- name: Copy cobbler password to cobbler config file
+  shell: printf "%s:%s:%s\n" "{{ username }}" "Cobbler" "{{ encrypt_password.stdout }}" > "{{ role_path }}/files/.users.digest"
+  changed_when: false
   tags: install
 
-- name: Get encrypted password
-  set_fact:
-    encrypted_pass: "{{ var[0].split(':')[2] }}"
-
 - name: Create the kickstart file
   copy:
-    src: "{{ role_path }}/files/temp_centos8.ks"
-    dest: "{{ role_path }}/files/centos8.ks"
+    src: "{{ role_path }}/files/temp_centos7.ks"
+    dest: "{{ role_path }}/files/centos7.ks"
     mode: 0775
   tags: install
 
 - name: Configure kickstart file
   replace:
-    path: "{{ role_path }}/files/centos8.ks"
-    regexp: '^url --url http://ip/cblr/links/CentOS8-x86_64/'
-    replace: url --url http://{{ ansible_eno2.ipv4.address }}/cblr/links/CentOS8-x86_64/
+    path: "{{ role_path }}/files/centos7.ks"
+    regexp: '^url --url http://ip/cblr/links/CentOS7-x86_64/'
+    replace: url --url http://{{ ansible_em1.ipv4.address }}/cblr/links/CentOS7-x86_64/
+  when: internet_nic == "em1"
+  tags: install
+
+- name: Configure kickstart file
+  replace:
+    path: "{{ role_path }}/files/centos7.ks"
+    regexp: '^url --url http://ip/cblr/links/CentOS7-x86_64/'
+    replace: url --url http://{{ ansible_em2.ipv4.address }}/cblr/links/CentOS7-x86_64/
+  when: internet_nic == "em2"
+  tags: install
+
+- name: Configure kickstart file
+  replace:
+    path: "{{ role_path }}/files/centos7.ks"
+    regexp: '^url --url http://ip/cblr/links/CentOS7-x86_64/'
+    replace: url --url http://{{ ansible_em3.ipv4.address }}/cblr/links/CentOS7-x86_64/
+  when: internet_nic == "em3"
   tags: install
 
 - name: Random phrase generation
@@ -131,14 +80,25 @@
   tags: install
 
 - name: Login password
-  command: openssl passwd -1 -salt {{ random_phrase }} {{ admin_password }}
+  command: openssl passwd -1 -salt {{ random_phrase }} {{ cobbler_password }}
   changed_when: false
   register: login_pass
   tags: install
 
 - name: Configure kickstart file
   replace:
-    path: "{{ role_path }}/files/centos8.ks"
+    path: "{{ role_path }}/files/centos7.ks"
     regexp: '^rootpw --iscrypted password'
     replace: 'rootpw --iscrypted {{ login_pass.stdout }}'
   tags: install
+
+- name: Configure kickstart file
+  replace:
+    path: "{{ role_path }}/files/centos7.ks"
+    regexp: '^network  --bootproto=dhcp --device=nic --onboot=on'
+    replace: 'network  --bootproto=dhcp --device={{ nic }} --onboot=on'
+  tags: install
+
+- name: Configure default password in settings
+  local_action: copy content="{{ login_pass.stdout }}" dest="{{ role_path }}/files/.node_login.digest"
+  tags: install

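Note: the three near-identical "Configure kickstart file" URL tasks above differ only in which gathered fact they read. A single task could cover em1, em2 and em3 by indexing the facts dictionary (a sketch, not the committed approach, assuming facts for the chosen NIC have been gathered):

    - name: Configure kickstart file
      replace:
        path: "{{ role_path }}/files/centos7.ks"
        regexp: '^url --url http://ip/cblr/links/CentOS7-x86_64/'
        # ansible_facts keys interfaces by name, so internet_nic selects the fact
        replace: "url --url http://{{ ansible_facts[internet_nic].ipv4.address }}/cblr/links/CentOS7-x86_64/"
      tags: install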
+ 6 - 19
appliance/roles/provision/vars/main.yml

@@ -15,36 +15,23 @@
 
 # vars file for provision
 
+#Usage: check_prerequisite.yml
+iso_name: CentOS-7-x86_64-Minimal-2009.iso
+iso_fail: "Iso file absent: Download and copy the iso file in omnia/appliance/roles/provision/files"
+
 # Usage: provision_password.yml
 provision_encrypted_dest: ../files/
-min_length: 8
-no_of_retry: 3
-retry_delay: 0.001
 username: cobbler
-prompt_password: "Enter cobbler password.( Min. Length of Password should be {{ min_length| int }}." 
-confirm_password: "Confirm cobbler Password"
-msg_incorrect_format: "Failed. Incorrect format."
-msg_failed_password_confirm: "Failed. Passwords did not match"
-success_msg_pwd_format: "admin_password validated"
-fail_msg_pwd_format: "admin_password validation failed"
-success_msg_pwd_confirm: "admin_password confirmed"
-fail_msg_pwd_confirm: "admin_password confirmation failed"
-success_msg_format: "random_phrase validated"
-fail_msg_format: "random_phrase validation failed"
 
 # Usage: cobbler_image.yml
 docker_image_name: cobbler
 docker_image_tag: latest
-cobbler_run_command: docker run -itd --privileged --net=host --restart=always -v cobbler_www:/var/www/cobbler:Z -v cobbler_backup:/var/lib/cobbler/backup:Z -v /mnt/iso:/mnt:Z -p 69:69/udp -p 81:80 -p 443:443 -p 25151:25151 --name cobbler  cobbler:latest  /sbin/init
+cobbler_run_command: docker run -itd --privileged --net=host --restart=always -v {{ mount_path }}:/root/omnia  -v cobbler_www:/var/www/cobbler:Z -v cobbler_backup:/var/lib/cobbler/backup:Z -v /mnt/iso:/mnt:Z -p 69:69/udp -p 81:80 -p 443:443 -p 25151:25151 --name cobbler  cobbler:latest  /sbin/init
 
 
 # Usage: main.yml
 message_skipped: "Installation Skipped: Cobbler instance is already running on your system"
 message_installed: "Installation Successful"
 
-# Usage: os_provsion.yml
-iso_image: CentOS-8.2.2004-x86_64-minimal.iso 
+# Usage: mount_iso.yml
 iso_path: iso
-
-# Usage: configure_nic.yml
-eno: eno1

+ 0 - 153
appliance/roles/web_ui/files/awx_configuration.yml

@@ -1,153 +0,0 @@
-# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
----
-
-# Playbook to configure AWX
-- name: Configure AWX
-  hosts: localhost
-  connection: local
-  gather_facts: no
-  tasks:
-    - name: Include vars file
-      include_vars: ../vars/main.yml
-
-    # Get Current AWX configuration
-    - name: Get organization list
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        organizations list -f human
-      register: organizations_list
-
-    - name: Get project list
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        projects list -f human
-      register: projects_list
-
-    - name: Get inventory list
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        inventory list -f human
-      register: inventory_list
-
-    - name: Get template list
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        job_templates list -f human
-      register: job_templates_list
-
-    - name: If omnia-inventory exists, fetch group names in the inventory
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        groups list --inventory "{{ omnia_inventory_name }}" -f human
-      register: groups_list
-      when: omnia_inventory_name in inventory_list.stdout
-
-    - name: Get schedules list
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        schedules list -f human
-      register: schedules_list
-
-    # Delete Default Configurations
-    - name: Delete default organization
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        organizations delete "{{ default_org }}"
-      when: default_org in organizations_list.stdout
-
-    - name: Delete default job template
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        job_templates delete "{{ default_template }}"
-      when: default_template in job_templates_list.stdout
-
-    - name: Delete default project
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        projects delete "{{ default_projects }}"
-      when: default_projects in projects_list.stdout
-
-    # Create required configuration if not present
-    - name: Create organisation
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        organizations create --name "{{ organization_name }}"
-      when: organization_name not in organizations_list.stdout
-
-    - name: Create new project
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        projects create --name "{{ project_name }}" --organization "{{ organization_name }}"
-        --local_path "{{ dir_name }}"
-      when: project_name not in projects_list.stdout
-
-    - name: Create new omnia inventory
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        inventory create --name "{{ omnia_inventory_name }}" --organization "{{ organization_name }}"
-      when: omnia_inventory_name not in inventory_list.stdout
-
-    - name: Create groups in omnia inventory
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        groups create --name "{{ item }}" --inventory "{{ omnia_inventory_name }}"
-      when: omnia_inventory_name not in inventory_list.stdout or item not in groups_list.stdout
-      loop: "{{ group_names }}"
-
-    - name: Create template to deploy omnia
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        job_templates create
-        --name "{{ omnia_template_name }}"
-        --job_type run
-        --inventory "{{ omnia_inventory_name }}"
-        --project "{{ project_name }}"
-        --playbook "{{ omnia_playbook }}"
-        --verbosity "{{ playbooks_verbosity }}"
-        --ask_skip_tags_on_launch true
-      when: omnia_template_name not in job_templates_list.stdout
-
-    - name: Create template to fetch dynamic inventory
-      command: >-
-        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-        job_templates create
-        --name "{{ inventory_template_name }}"
-        --job_type run
-        --inventory "{{ omnia_inventory_name }}"
-        --project "{{ project_name }}"
-        --playbook "{{ inventory_playbook }}"
-        --verbosity "{{ playbooks_verbosity }}"
-        --use_fact_cache true
-      when: inventory_template_name not in job_templates_list.stdout
-
-    - name: Schedule dynamic inventory template
-      block:
-        - name: Get unified job template list
-          command: >-
-            awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-            unified_job_templates list --name "{{ inventory_template_name }}" -f human
-          register: unified_job_template_list
-
-        - name: Get job ID
-          set_fact:
-            job_id: "{{ unified_job_template_list.stdout | regex_search('[0-9]+') }}"
-
-        - name: Schedule dynamic inventory job
-          command: >-
-            awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
-            schedules create --name "{{ schedule_name }}"
-            --unified_job_template="{{ job_id }}" --rrule="{{ schedule_rule }}"
-
-      when: schedule_name not in schedules_list.stdout

+ 187 - 0
appliance/roles/web_ui/tasks/awx_configuration.yml

@@ -0,0 +1,187 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+# Get Current AWX configuration
+- name: Get organization list
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    organizations list -f human
+  register: organizations_list
+  changed_when: no
+
+- name: Get project list
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    projects list -f human
+  register: projects_list
+  changed_when: no
+
+- name: Get inventory list
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    inventory list -f human
+  register: inventory_list
+  changed_when: no
+
+- name: Get credentials list
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    credentials list -f human
+  register: credentials_list
+  changed_when: no
+
+- name: Get template list
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    job_templates list -f human
+  register: job_templates_list
+  changed_when: no
+
+- name: If omnia-inventory exists, fetch group names in the inventory
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    groups list --inventory "{{ omnia_inventory_name }}" -f human
+  register: groups_list
+  when: omnia_inventory_name in inventory_list.stdout
+
+- name: Get schedules list
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    schedules list -f human
+  register: schedules_list
+  changed_when: no
+
+# Delete Default Configurations
+- name: Delete default organization
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    organizations delete "{{ default_org }}"
+  when: default_org in organizations_list.stdout
+
+- name: Delete default job template
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    job_templates delete "{{ default_template }}"
+  when: default_template in job_templates_list.stdout
+
+- name: Delete default project
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    projects delete "{{ default_projects }}"
+  when: default_projects in projects_list.stdout
+
+- name: Delete default credential
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    credentials delete "{{ default_credentials }}"
+  when: default_credentials in credentials_list.stdout
+
+# Create required configuration if not present
+- name: Create organisation
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    organizations create --name "{{ organization_name }}"
+  when: organization_name not in organizations_list.stdout
+
+- name: Create new project
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    projects create --name "{{ project_name }}" --organization "{{ organization_name }}"
+    --local_path "{{ dir_name }}"
+  when: project_name not in projects_list.stdout
+
+- name: Create new omnia inventory
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    inventory create --name "{{ omnia_inventory_name }}" --organization "{{ organization_name }}"
+  when: omnia_inventory_name not in inventory_list.stdout
+
+- name: Create groups in omnia inventory
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    groups create --name "{{ item }}" --inventory "{{ omnia_inventory_name }}"
+  when: omnia_inventory_name not in inventory_list.stdout or item not in groups_list.stdout
+  loop: "{{ group_names }}"
+
+- name: Create credentials for omnia
+  command: >-
+    awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+    credentials create --name "{{ credential_name }}" --organization "{{ organization_name }}"
+    --credential_type "{{ credential_type }}"
+    --inputs '{"username": "{{ cobbler_username }}", "password": "{{ cobbler_password }}"}'
+  when: credential_name not in credentials_list.stdout
+
+- name: DeployOmnia Template
+  block:
+    - name: Create template to deploy omnia
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        job_templates create
+        --name "{{ omnia_template_name }}"
+        --job_type run
+        --inventory "{{ omnia_inventory_name }}"
+        --project "{{ project_name }}"
+        --playbook "{{ omnia_playbook }}"
+        --verbosity "{{ playbooks_verbosity }}"
+        --ask_skip_tags_on_launch true
+
+    - name: Associate credential
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        job_templates associate "{{ omnia_template_name }}"
+        --credential ""{{ credential_name }}""
+
+  when: omnia_template_name not in job_templates_list.stdout
+
+- name: DynamicInventory template
+  block:
+    - name: Create template to fetch dynamic inventory
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        job_templates create
+        --name "{{ inventory_template_name }}"
+        --job_type run
+        --inventory "{{ omnia_inventory_name }}"
+        --project "{{ project_name }}"
+        --playbook "{{ inventory_playbook }}"
+        --verbosity "{{ playbooks_verbosity }}"
+        --use_fact_cache true
+
+    - name: Associate credential
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        job_templates associate "{{ inventory_template_name }}"
+        --credential ""{{ credential_name }}""
+  when: inventory_template_name not in job_templates_list.stdout
+
+- name: Schedule dynamic inventory template
+  block:
+    - name: Get unified job template list
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        unified_job_templates list --name "{{ inventory_template_name }}" -f human
+      register: unified_job_template_list
+
+    - name: Get job ID
+      set_fact:
+        job_id: "{{ unified_job_template_list.stdout | regex_search('[0-9]+') }}"
+
+    - name: Schedule dynamic inventory job
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        schedules create --name "{{ schedule_name }}"
+        --unified_job_template="{{ job_id }}" --rrule="{{ schedule_rule }}"
+
+  when: schedule_name not in schedules_list.stdout
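The `regex_search('[0-9]+')` lookup above works only because the template ID happens to be the first integer in the human-format table. A sturdier variant would ask for JSON and read the ID field directly; a minimal sketch, not part of this commit, assuming the awx CLI's `-f json` output (which awxkit provides):

    - name: Get unified job template list
      command: >-
        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
        unified_job_templates list --name "{{ inventory_template_name }}" -f json
      register: unified_job_template_list
      changed_when: no

    - name: Get job ID from the first result
      set_fact:
        job_id: "{{ (unified_job_template_list.stdout | from_json).results[0].id }}"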

+ 0 - 118
appliance/roles/web_ui/tasks/awx_password.yml

@@ -1,118 +0,0 @@
-# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-#     http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
----
-
-#Tasks for getting and encrypting AWX Password
-- name: Clone AWX repo
-  git:
-    repo: "{{ awx_git_repo }}"
-    dest: "{{ awx_repo_path }}"
-    force: yes
-  tags: install
-
-- name: AWX password
-  block:
-    - name: Take awx password
-      pause:
-        prompt: "{{ prompt_password }}"
-        echo: no
-      register: prompt_admin_password
-      until:
-        - prompt_admin_password.user_input | length >  min_length| int  - 1
-        - '"-" not in prompt_admin_password.user_input '
-        - '"\\" not in prompt_admin_password.user_input '
-        - '"\"" not in prompt_admin_password.user_input '
-        - " \"'\" not in prompt_admin_password.user_input "
-      retries: "{{ retries }}"
-      delay: "{{ retry_delay }}"
-      when: admin_password is not defined and no_prompt is not defined
-  rescue:
-    - name: Abort if password validation fails
-      fail:
-        msg: "{{ msg_incorrect_password_format }}"
-  tags: install
-
-- name: Assert admin_password if prompt not given
-  assert:
-    that:
-        - admin_password | length >  min_length| int  - 1
-        - '"-" not in admin_password '
-        - '"\\" not in admin_password '
-        - '"\"" not in admin_password '
-        - " \"'\" not in admin_password "
-    success_msg: "{{ success_msg_pwd_format }}"
-    fail_msg: "{{ fail_msg_pwd_format }}"
-  register: msg_pwd_format
-  when: admin_password is defined and no_prompt is defined
-
-- name: Save admin password
-  set_fact:
-    admin_password: "{{ prompt_admin_password.user_input }}"
-  when: no_prompt is not defined
-
-- name: Confirmation
-  block:
-    - name: Confirm AWX password
-      pause:
-        prompt: "{{ confirm_password }}"
-        echo: no
-      register: prompt_admin_password_confirm
-      until: admin_password == prompt_admin_password_confirm.user_input
-      retries: "{{ confirm_retries }}"
-      delay: "{{ retry_delay }}"
-      when: admin_password_confirm is not defined and no_prompt is not defined
-  rescue:
-    - name: Abort if password confirmation failed
-      fail:
-        msg: "{{ msg_failed_password_confirm }}"
-  tags: install
-
-- name: Assert admin_password_confirm if prompt not given
-  assert:
-    that: admin_password == admin_password_confirm
-    success_msg: "{{ success_msg_pwd_confirm }}"
-    fail_msg: "{{ fail_msg_pwd_confirm }}"
-  register: msg_pwd_confirm
-  when: admin_password_confirm is defined and no_prompt is defined
-
-- name: Create ansible vault key
-  set_fact:
-    vault_key: "{{ lookup('password', '/dev/null chars=ascii_letters') }}"
-  tags: install
-
-- name: Save vault key
-  copy:
-    dest: "{{ awx_installer_path + vault_file }}"
-    content: |
-      {{ vault_key }}
-    owner: root
-    force: yes
-  tags: install
-
-- name: Encrypt awx password
-  command: ansible-vault encrypt_string "{{ admin_password }}" --name admin_password --vault-password-file "{{ vault_file }}"
-  register: encrypt_password
-  args:
-    chdir: "{{ awx_installer_path }}"
-  tags: install
-
-- name: Store encrypted password
-  copy:
-    dest: "{{ awx_installer_path + awx_password_file }}"
-    content: |
-      ---
-      {{ encrypt_password.stdout }}
-    force: yes
-    owner: root
-  tags: install

+ 0 - 8
appliance/roles/web_ui/tasks/check_prerequisites.yml

@@ -17,7 +17,6 @@
 - name: Initialize variables
   set_fact:
     awx_status: false
-    awx_task_status: false
   tags: install
 
 - name: Check awx_task status on the machine
@@ -26,17 +25,10 @@
   register: awx_task_result
   tags: install
 
-- name: Update awx status
-  set_fact:
-    awx_task_status: true
-  when: awx_task_result.exists
-  tags: install
-
 - name: Check awx_web status on the machine
   docker_container_info:
     name: awx_web
   register: awx_web_result
-  when: awx_task_status
   tags: install
 
 - name: Update awx status
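The closing `Update awx status` task falls outside this hunk. With `awx_task_status` gone, it presumably gates on both registered checks; a minimal sketch, assuming the `awx_task_result` and `awx_web_result` variables registered above:

    - name: Update awx status
      set_fact:
        awx_status: true
      when: awx_task_result.exists and awx_web_result.exists
      tags: install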

+ 22 - 0
appliance/roles/web_ui/tasks/clone_awx.yml

@@ -0,0 +1,22 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+- name: Clone AWX repo
+  git:
+    repo: "{{ awx_git_repo }}"
+    dest: "{{ awx_repo_path }}"
+    force: yes
+    version: devel
+  tags: install
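`version: devel` pins a moving branch, so two runs of this task can check out different code. For a reproducible build the checkout could pin a release tag instead; a sketch only, not part of this commit (15.0.1 was a contemporary AWX release tag):

    - name: Clone AWX repo
      git:
        repo: "{{ awx_git_repo }}"
        dest: "{{ awx_repo_path }}"
        force: yes
        version: 15.0.1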

+ 25 - 8
appliance/roles/web_ui/tasks/install_awx.yml

@@ -14,10 +14,6 @@
 ---
 
 # Tasks for installing AWX
-- name: Store omnia parent directory path
-  set_fact:
-     dir_path:
-  tags: install
 
 - name: Change inventory file
   replace:
@@ -32,14 +28,35 @@
     label: "{{ item.name }}"
   tags: install
 
+- name: Ensure port is 8081
+  lineinfile:
+    path: "{{ awx_inventory_path }}"
+    regexp: "{{ port_old }}"
+    line: "{{ port_new }}"
+    state: present
+
 - name: Create pgdocker directory
   file:
     path: "{{ pgdocker_dir_path }}"
     state: directory
+    mode: 0775
   tags: install
 
-- name: Run AWX install.yml file
-  command: ansible-playbook -i inventory install.yml -e @"{{ awx_password_file }}" --vault-password-file "{{ vault_file }}"
-  args:
-    chdir: "{{ awx_installer_path }}"
+- name: Install AWX
+  block:
+    - name: Run AWX install.yml file
+      command: ansible-playbook -i inventory install.yml --extra-vars "admin_password={{ admin_password }}"
+      args:
+        chdir: "{{ awx_installer_path }}"
+      register: awx_installation
+
+  rescue:
+    - name: Check AWX status on machine
+      include_tasks: check_awx_status.yml
+
+    - name: Fail if containers are not running
+      fail:
+        msg: "AWX installation failed."
+      when: not awx_status
+
   tags: install
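Passing the password with `--extra-vars` exposes it in the process list for the duration of the installer run, unlike the vault flow this commit removes. One way to keep it off the command line is a mode-0600 vars file; a sketch only (the `.admin_vars.yml` filename is hypothetical):

    - name: Write admin password to a protected vars file
      copy:
        dest: "{{ awx_installer_path }}/.admin_vars.yml"
        content: |
          admin_password: "{{ admin_password }}"
        mode: 0600

    - name: Run AWX install.yml file
      command: ansible-playbook -i inventory install.yml -e @.admin_vars.yml
      args:
        chdir: "{{ awx_installer_path }}"

Note also that `check_awx_status.yml`, included in the rescue, does not appear elsewhere in this diff; it is presumably the trimmed status check shown earlier, expected to set `awx_status`.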

+ 4 - 2
appliance/roles/web_ui/tasks/install_awx_cli.yml

@@ -16,10 +16,12 @@
 # Tasks for installing AWX-CLI
 - name: Add AWX CLI repo
   block:
-    - get_url:
+    - name: Get repo
+      get_url:
         url: "{{ awx_cli_repo }}"
         dest: "{{ awx_cli_repo_path }}"
-    - replace:
+    - name: Disable gpgcheck
+      replace:
         path: "{{ awx_cli_repo_path }}"
         regexp: 'gpgcheck=1'
         replace: 'gpgcheck=0'
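The get_url + replace pair downloads the vendor .repo file and flips `gpgcheck=1` to `gpgcheck=0`. The same result can be declared in one step with `yum_repository`; a sketch only (the baseurl is illustrative and should mirror the one inside the vendor .repo file):

    - name: Add AWX CLI repo
      yum_repository:
        name: ansible-tower-cli
        description: Ansible Tower CLI
        baseurl: https://releases.ansible.com/ansible-tower/cli/rpm/epel-7-x86_64/
        gpgcheck: no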

+ 45 - 8
appliance/roles/web_ui/tasks/main.yml

@@ -15,7 +15,7 @@
 
 # Tasks for Deploying AWX on the system
 - name: Check AWX status on machine
-  include_tasks: check_prerequisites.yml
+  include_tasks: check_awx_status.yml
   tags: install
 
 - name: Include common variables
@@ -28,7 +28,7 @@
   tags: install
 
 - name: Get and encrypt AWX password
-  include_tasks: awx_password.yml
+  include_tasks: clone_awx.yml
   when: not awx_status
   tags: install
 
@@ -58,13 +58,50 @@
   include_tasks: ../../common/tasks/internet_validation.yml
   tags: install
 
+- name: Wait for AWX UI to be accessible
+  wait_for:
+    timeout: 300
+  delegate_to: localhost
+  tags: install
+
+- name: Re-install if in migrating state
+  block:
+    - name: Check if AWX UI is accessible
+      command: >-
+        awx --conf.host "{{ awx_ip }}" --conf.username "{{ awx_user }}" --conf.password "{{ admin_password }}"
+        organizations list -f human
+      changed_when: no
+
+  rescue:
+    - name: Remove old containers
+      docker_container:
+        name: "{{ item }}"
+        state: absent
+      loop:
+        - awx_task
+        - awx_web
+
+    - name: Restart docker
+      service:
+        name: docker
+        state: restarted
+
+    - name: Run AWX install.yml file
+      command: ansible-playbook -i inventory install.yml --extra-vars "admin_password={{ admin_password }}"
+      args:
+        chdir: "{{ awx_installer_path }}"
+      ignore_errors: yes
+
+    - name: Wait for AWX UI to be accessible
+      wait_for:
+        timeout: 150
+      delegate_to: localhost
+  tags: install
+
 - name: Install AWX-CLI
   include_tasks: install_awx_cli.yml
   tags: install
 
-- name: AWX configuration
-  command: >-
-    ansible-playbook "{{ role_path }}"/files/awx_configuration.yml
-    -e @"{{ awx_installer_path + awx_password_file }}"
-    --vault-password-file "{{ awx_installer_path + vault_file }}"
-  tags: install
+- name: Configure AWX
+  include_tasks: awx_configuration.yml
+  tags: install
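The vars file below defines `return_status: 200` and `migrating_msg: "IsMigrating"`, although the tasks above probe AWX through the CLI instead. A sketch of the uri-based probe those vars suggest (not part of this commit):

    - name: Check if AWX UI is accessible
      uri:
        url: "{{ awx_ip }}"
        status_code: "{{ return_status }}"
        return_content: yes
      register: awx_ui_probe
      failed_when: migrating_msg in awx_ui_probe.content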

+ 13 - 19
appliance/roles/web_ui/vars/main.yml

@@ -15,25 +15,11 @@
 
 # vars file for web_ui
 
-# Usage: awx_password.yml
+# Usage: clone_awx.yml
 awx_git_repo: "https://github.com/ansible/awx.git"
-min_length: 8
-retries: 3
-confirm_retries: 1
-retry_delay: 0.01
-prompt_password: "Enter AWX password.( Min. Length of Password should be {{ min_length| int }}. Dont use chars: - \' \\ \" )"
-confirm_password: "Confirm AWX Password"
-msg_incorrect_password_format: "Failed. Password format not correct."
-msg_failed_password_confirm: "Failed. Passwords did not match"
 docker_volume: "/var/lib/docker/volumes/{{ docker_volume_name }}"
 awx_repo_path: "{{ docker_volume }}/awx/"
 awx_installer_path: "{{ awx_repo_path }}/installer/"
-vault_file: .vault_key
-awx_password_file: .password.yml
-success_msg_pwd_format: "admin_password validated"
-fail_msg_pwd_format: "admin_password validation failed"
-success_msg_pwd_confirm: "admin_password confirmed"
-fail_msg_pwd_confirm: "admin_password confirmation failed"
 
 # Usage: install_awx.yml
 awx_inventory_path: "{{ awx_repo_path }}/installer/inventory"
@@ -44,21 +30,26 @@ awx_alternate_dns_servers_old: '#awx_alternate_dns_servers="10.1.2.3,10.2.3.4"'
 awx_alternate_dns_servers_new: 'awx_alternate_dns_servers="8.8.8.8,8.8.4.4"'
 admin_password_old: "admin_password=password"
 admin_password_new: "#admin_password=password"
+port_old: "host_port=80"
+port_new: "host_port=8081"
 
 # Usage: main.yml
 message_skipped: "Installation Skipped: AWX instance is already running on your system"
 message_installed: "Installation Successful"
+awx_ip: http://localhost:8081
+return_status: 200
+migrating_msg: "IsMigrating"
 
 # Usage: install_awx_cli.yml
-awx_cli_repo: "https://releases.ansible.com/ansible-tower/cli/ansible-tower-cli-centos8.repo"
-awx_cli_repo_path: "/etc/yum.repos.d/ansible-tower-cli-centos8.repo"
+awx_cli_repo: "https://releases.ansible.com/ansible-tower/cli/ansible-tower-cli-centos7.repo"
+awx_cli_repo_path: "/etc/yum.repos.d/ansible-tower-cli-centos7.repo"
 
 # Usage: awx_configuration.yml
-awx_ip: http://localhost
 awx_user: admin         #Don't change it. It is set as admin while installing AWX
 default_org: Default
 default_template: 'Demo Job Template'
 default_projects: 'Demo Project'
+default_credentials: 'Demo Credential'
 dir_name: omnia
 organization_name: DellEMC
 project_name: omnia
@@ -66,10 +57,13 @@ omnia_inventory_name: omnia_inventory
 group_names:
   - manager
   - compute
+credential_name: omnia_credential
+credential_type: Machine
+cobbler_username: root
 omnia_template_name: DeployOmnia
 omnia_playbook: omnia.yml
 inventory_template_name: DynamicInventory
 inventory_playbook: appliance/inventory.yml
 playbooks_verbosity: 0
 schedule_name: DynamicInventorySchedule
-schedule_rule: "DTSTART:20201201T000000Z RRULE:FREQ=MINUTELY;INTERVAL=10"
+schedule_rule: "DTSTART:20201201T000000Z RRULE:FREQ=MINUTELY;INTERVAL=10"
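For reference, `schedule_rule` is an iCalendar recurrence rule: starting 2020-12-01 00:00 UTC, the DynamicInventory job fires every 10 minutes (`FREQ=MINUTELY;INTERVAL=10`). An hourly cadence, for example, would be `DTSTART:20201201T000000Z RRULE:FREQ=HOURLY;INTERVAL=1`.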

+ 0 - 3
appliance/test/cobbler_inventory

@@ -1,3 +0,0 @@
-[cobbler_servers]
-172.17.0.10
-100.98.24.231

+ 39 - 0
appliance/test/input_config_empty.yml

@@ -0,0 +1,39 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+# Password used while deploying OS on bare metal servers and for Cobbler UI.
+# The Length of the password should be more than 7.
+# The password must not contain -,\, ',"
+provision_password: ""
+
+# Password used for the AWX UI.
+# The Length of the password should be more than 7.
+# The password must not contain -,\, ',"
+awx_password: ""
+
+# Password used for Slurm database.
+# The Length of the password should be more than 7.
+# The password must not contain -,\, ',"
+mariadb_password: ""
+
+# The nic/ethernet card that needs to be connected to the HPC switch.
+# This nic will be configured by Omnia for the DHCP server.
+# Default value of nic is em1.
+hpc_nic: "em1"
+
+# The nic card that needs to be connected to the public internet.
+# The public_nic should be em2, em1 or em3
+# Default value of nic is em2.
+public_nic: "em2"

+ 39 - 0
appliance/test/input_config_test.yml

@@ -0,0 +1,39 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+# Password used while deploying OS on bare metal servers and for Cobbler UI.
+# The Length of the password should be more than 7.
+# The password must not contain -,\, ',"
+provision_password: "omnia@123"
+
+# Password used for the AWX UI.
+# The Length of the password should be more than 7.
+# The password must not contain -,\, ',"
+awx_password: "omnia@123"
+
+# Password used for Slurm database.
+# The Length of the password should be more than 7.
+# The password must not contain -,\, ',"
+mariadb_password: "omnia@123"
+
+# The nic/ethernet card that needs to be connected to the HPC switch.
+# This nic will be configured by Omnia for the DHCP server.
+# Default value of nic is em1.
+hpc_nic: "em1"
+
+# The nic card that needs to be connected to the public internet.
+# The public_nic should be em2, em1 or em3
+# Default value of nic is em2.
+public_nic: "em2"

+ 3 - 0
appliance/test/provisioned_hosts.yml

@@ -0,0 +1,3 @@
+[all]
+172.17.0.10
+172.17.0.15

The diff is not shown because the file is too large.
+ 1333 - 15
appliance/test/test_common.yml


+ 365 - 67
appliance/test/test_provision_cc.yml

@@ -13,37 +13,204 @@
 #  limitations under the License.
 ---
 
-# Testcase OMNIA_DIO_US_CC_TC_010
+# Testcase OMNIA_DIO_US_CC_TC_004
 # Execute provision role in management station and verify cobbler configuration
-- name: OMNIA_DIO_US_CC_TC_010
+- name: OMNIA_DIO_US_CC_TC_004
   hosts: localhost
   connection: local
   vars_files:
     - test_vars/test_provision_vars.yml
     - ../roles/provision/vars/main.yml
   tasks:
+    - name: Check the iso file is present
+      stat:
+        path: "{{ iso_file_path }}/{{ iso_name }}"
+      register: iso_status
+      tags: TC_004
+
+    - name: Fail if iso file is missing
+      fail:
+        msg: "{{ iso_fail }}"
+      when: iso_status.stat.exists == false
+      tags: TC_004
+
    - name: Delete the cobbler container if exists
       docker_container:
         name: "{{ docker_container_name }}"
         state: absent
-      tags: TC_010
+      tags: TC_004
 
     - name: Delete docker image if exists
       docker_image:
         name: "{{ docker_image_name }}"
         tag: "{{ docker_image_tag }}"
         state: absent
-      tags: TC_010
+      tags: TC_004
 
     - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
+          vars:
+            input_config_filename: "{{ test_input_config_filename }}"
+
         - name: Call provision role
           include_role:
             name: ../roles/provision
+      tags: TC_004
+
+    - name: Check that the cobbler UI is reachable and returns status 200
+      uri:
+        url: https://localhost/cobbler_web
+        status_code: 200
+        return_content: yes
+        validate_certs: no
+      tags: TC_004,VERIFY_004
+
+    - name: Fetch cobbler version in cobbler container
+      command: docker exec {{ docker_container_name }} cobbler version
+      changed_when: false
+      register: cobbler_version
+      tags: TC_004,VERIFY_004
+
+    - name: Verify cobbler version
+      assert:
+        that:
+          - "'Cobbler' in cobbler_version.stdout"
+          - "'Error' not in cobbler_version.stdout"
+        fail_msg: "{{ cobbler_version_fail_msg }}"
+        success_msg: "{{ cobbler_version_success_msg }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Run cobbler check command in cobbler container
+      command: docker exec {{ docker_container_name }} cobbler check
+      changed_when: false
+      register: cobbler_check
+      tags: TC_004,VERIFY_004
+
+    - name: Verify cobbler check command output
+      assert:
+        that:
+          - "'The following are potential configuration items that you may want to fix' not in cobbler_check.stdout"
+          - "'Error' not in cobbler_check.stdout"
+        fail_msg: "{{ cobbler_check_fail_msg }}"
+        success_msg: "{{ cobbler_check_success_msg }}"
+      ignore_errors: yes
+      tags: TC_004,VERIFY_004
+
+    - name: Run cobbler sync command in cobbler container
+      command: docker exec {{ docker_container_name }} cobbler sync
+      changed_when: false
+      register: cobbler_sync
+      tags: TC_004,VERIFY_004
+
+    - name: Verify cobbler sync command output
+      assert:
+        that:
+          - "'TASK COMPLETE' in cobbler_sync.stdout"
+          - "'Fail' not in cobbler_sync.stdout"
+          - "'Error' not in cobbler_sync.stdout"
+        fail_msg: "{{ cobbler_sync_fail_msg }}"
+        success_msg: "{{ cobbler_sync_success_msg }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Fetch cobbler distro list
+      command: docker exec {{ docker_container_name }} cobbler distro list
+      changed_when: false
+      register: cobbler_distro_list
+      tags: TC_004,VERIFY_004
+
+    - name: Verify cobbler distro list
+      assert:
+        that:
+          - "'CentOS' in cobbler_distro_list.stdout"
+        fail_msg: "{{ cobbler_distro_list_fail_msg }}"
+        success_msg: "{{ cobbler_distro_list_success_msg }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Fetch cobbler profile list
+      command: docker exec cobbler cobbler profile list
+      changed_when: false
+      register: cobbler_profile_list
+      tags: TC_004,VERIFY_004
+
+    - name: Verify cobbler profile list
+      assert:
+        that:
+          - "'CentOS' in cobbler_profile_list.stdout"
+        fail_msg: "{{ cobbler_profile_list_fail_msg }}"
+        success_msg: "{{ cobbler_profile_list_success_msg }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Check kickstart file
+      shell: |
+        docker exec {{ docker_container_name }} [ -f /var/lib/cobbler/kickstarts/{{ kickstart_filename }} ] && echo "File exist" || echo "File does not exist"
+      changed_when: false
+      register: kickstart_file_status
+      tags: TC_004,VERIFY_004
+
+    - name: Verify kickstart file present
+      assert:
+        that:
+          - "'File exist' in kickstart_file_status.stdout"
+        fail_msg: "{{ kickstart_file_fail_msg }}"
+        success_msg: "{{ kickstart_file_success_msg }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Check crontab list
+      command: docker exec cobbler crontab -l
+      changed_when: false
+      register: crontab_list
+      tags: TC_004,VERIFY_004
+
+    - name: Verify crontab list
+      assert:
+        that:
+          - "'* * * * * ansible-playbook /root/tftp.yml' in crontab_list.stdout"
+          - "'5 * * * * ansible-playbook /root/inventory_creation.yml' in crontab_list.stdout"
+        fail_msg: "{{ crontab_list_fail_msg }}"
+        success_msg: "{{ crontab_list_success_msg }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Check that the tftp, dhcpd, xinetd and cobblerd services are running
+      command: docker exec cobbler systemctl is-active {{ item }}
+      changed_when: false
+      ignore_errors: yes
+      register: cobbler_service_check
+      with_items: "{{ cobbler_services }}"
+      tags: TC_004,VERIFY_004
+
+    - name: Verify that the tftp, dhcpd, xinetd and cobblerd services are running
+      assert:
+        that:
+          - "'active' in cobbler_service_check.results[{{ item }}].stdout"
+          - "'inactive' not in cobbler_service_check.results[{{ item }}].stdout"
+          - "'unknown' not in cobbler_service_check.results[{{ item }}].stdout"
+        fail_msg: "{{ cobbler_service_check_fail_msg }}"
+        success_msg: "{{ cobbler_service_check_success_msg }}"
+      with_sequence: start=0 end=3
+      tags: TC_004,VERIFY_004
+
+# Testcase OMNIA_DIO_US_CC_TC_005
+# Execute provision role in management station where the cobbler container is already configured
+- name: OMNIA_DIO_US_CC_TC_005
+  hosts: localhost
+  connection: local
+  vars_files:
+    - test_vars/test_provision_vars.yml
+    - ../roles/provision/vars/main.yml
+  tasks:
+    - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
           vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-      tags: TC_010
+            input_config_filename: "{{ test_input_config_filename }}"
+
+        - name: Call provision role
+          include_role:
+            name: ../roles/provision
+      tags: TC_005
 
    - name: Check that the cobbler UI is reachable and returns status 200
       uri:
@@ -51,13 +218,13 @@
         status_code: 200
         return_content: yes
         validate_certs: no
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Fetch cobbler version in cobbler container
       command: docker exec {{ docker_container_name }} cobbler version
       changed_when: false
       register: cobbler_version
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Verify cobbler version
       assert:
@@ -66,13 +233,13 @@
           - "'Error' not in cobbler_version.stdout"
         fail_msg: "{{ cobbler_version_fail_msg }}"
         success_msg: "{{ cobbler_version_success_msg }}"
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Run cobbler check command in cobbler container
       command: docker exec {{ docker_container_name }} cobbler check
       changed_when: false
       register: cobbler_check
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Verify cobbler check command output
       assert:
@@ -82,13 +249,13 @@
         fail_msg: "{{ cobbler_check_fail_msg }}"
         success_msg: "{{ cobbler_check_success_msg }}"
       ignore_errors: yes
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Run cobbler sync command in cobbler container
       command: docker exec {{ docker_container_name }} cobbler sync
       changed_when: false
       register: cobbler_sync
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Verify cobbler sync command output
       assert:
@@ -98,13 +265,13 @@
           - "'Error' not in cobbler_sync.stdout"
         fail_msg: "{{ cobbler_sync_fail_msg }}"
         success_msg: "{{ cobbler_sync_success_msg }}"
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Fetch cobbler distro list
       command: docker exec {{ docker_container_name }} cobbler distro list
       changed_when: false
       register: cobbler_distro_list
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Verify cobbler distro list
       assert:
@@ -112,13 +279,13 @@
           - "'CentOS' in cobbler_distro_list.stdout"
         fail_msg: "{{ cobbler_distro_list_fail_msg }}"
         success_msg: "{{ cobbler_distro_list_success_msg }}"
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Fetch cobbler profile list
       command: docker exec cobbler cobbler profile list
       changed_when: false
       register: cobbler_profile_list
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Verify cobbler profile list
       assert:
@@ -126,14 +293,14 @@
           - "'CentOS' in cobbler_profile_list.stdout"
         fail_msg: "{{ cobbler_profile_list_fail_msg }}"
         success_msg: "{{ cobbler_profile_list_success_msg }}"
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Check kickstart file
       shell: |
         docker exec {{ docker_container_name }} [ -f /var/lib/cobbler/kickstarts/{{ kickstart_filename }} ] && echo "File exist" || echo "File does not exist"
       changed_when: false
       register: kickstart_file_status
-      tags: TC_010
+      tags: TC_005,VERIFY_005
 
     - name: Verify kickstart file present
       assert:
@@ -141,11 +308,45 @@
           - "'File exist' in kickstart_file_status.stdout"
         fail_msg: "{{ kickstart_file_fail_msg }}"
         success_msg: "{{ kickstart_file_success_msg }}"
-      tags: TC_010
+      tags: TC_005,VERIFY_005
+
+    - name: Check crontab list
+      command: docker exec cobbler crontab -l
+      changed_when: false
+      register: crontab_list
+      tags: TC_005,VERIFY_005
+
+    - name: Verify crontab list
+      assert:
+        that:
+          - "'* * * * * ansible-playbook /root/tftp.yml' in crontab_list.stdout"
+          - "'5 * * * * ansible-playbook /root/inventory_creation.yml' in crontab_list.stdout"
+        fail_msg: "{{ crontab_list_fail_msg }}"
+        success_msg: "{{ crontab_list_success_msg }}"
+      tags: TC_005,VERIFY_005
+
+    - name: Check that the tftp, dhcpd, xinetd and cobblerd services are running
+      command: docker exec cobbler systemctl is-active {{ item }}
+      changed_when: false
+      ignore_errors: yes
+      register: cobbler_service_check
+      with_items: "{{ cobbler_services }}"
+      tags: TC_005,VERIFY_005
 
-# Testcase OMNIA_DIO_US_CC_TC_011
+    - name: Verify that the tftp, dhcpd, xinetd and cobblerd services are running
+      assert:
+        that:
+          - "'active' in cobbler_service_check.results[{{ item }}].stdout"
+          - "'inactive' not in cobbler_service_check.results[{{ item }}].stdout"
+          - "'unknown' not in cobbler_service_check.results[{{ item }}].stdout"
+        fail_msg: "{{ cobbler_service_check_fail_msg }}"
+        success_msg: "{{ cobbler_service_check_success_msg }}"
+      with_sequence: start=0 end=3
+      tags: TC_005,VERIFY_005
+
+# Testcase OMNIA_DIO_US_CC_TC_006
 # Execute provision role in management station where one container is already present
-- name: OMNIA_DIO_US_CC_TC_011
+- name: OMNIA_DIO_US_CC_TC_006
   hosts: localhost
   connection: local
   vars_files:
@@ -156,21 +357,21 @@
       docker_container:
         name: "{{ docker_container_name }}"
         state: absent
-      tags: TC_011
+      tags: TC_006
 
     - name: Delete docker image if exists
       docker_image:
         name: "{{ docker_image_name }}"
         tag: "{{ docker_image_tag }}"
         state: absent
-      tags: TC_011
+      tags: TC_006
 
     - name: Create docker image
       docker_image:
         name: ubuntu
         tag: latest
         source: pull
-      tags: TC_011
+      tags: TC_006
 
     - name: Create docker container
       command: docker run -dit ubuntu
@@ -178,17 +379,19 @@
       changed_when: true
       args:
         warn: false
-      tags: TC_011
+      tags: TC_006
 
     - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
+          vars:
+            input_config_filename: "{{ test_input_config_filename }}"
+
         - name: Call provision role
           include_role:
             name: ../roles/provision
-          vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-      tags: TC_011
+      tags: TC_006
 
    - name: Check that the cobbler UI is reachable and returns status 200
       uri:
@@ -196,13 +399,13 @@
         status_code: 200
         return_content: yes
         validate_certs: no
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Fetch cobbler version in cobbler container
       command: docker exec {{ docker_container_name }} cobbler version
       changed_when: false
       register: cobbler_version
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Verify cobbler version
       assert:
@@ -211,13 +414,13 @@
           - "'Error' not in cobbler_version.stdout"
         fail_msg: "{{ cobbler_version_fail_msg }}"
         success_msg: "{{ cobbler_version_success_msg }}"
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Run cobbler check command in cobbler container
       command: docker exec {{ docker_container_name }} cobbler check
       changed_when: false
       register: cobbler_check
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Verify cobbler check command output
       assert:
@@ -227,13 +430,13 @@
         fail_msg: "{{ cobbler_check_fail_msg }}"
         success_msg: "{{ cobbler_check_success_msg }}"
       ignore_errors: yes
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Run cobbler sync command in cobbler container
       command: docker exec {{ docker_container_name }} cobbler sync
       changed_when: false
       register: cobbler_sync
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Verify cobbler sync command output
       assert:
@@ -243,13 +446,13 @@
           - "'Error' not in cobbler_sync.stdout"
         fail_msg: "{{ cobbler_sync_fail_msg }}"
         success_msg: "{{ cobbler_sync_success_msg }}"
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Fetch cobbler distro list
       command: docker exec {{ docker_container_name }} cobbler distro list
       changed_when: false
       register: cobbler_distro_list
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Verify cobbler distro list
       assert:
@@ -257,13 +460,13 @@
           - "'CentOS' in cobbler_distro_list.stdout"
         fail_msg: "{{ cobbler_distro_list_fail_msg }}"
         success_msg: "{{ cobbler_distro_list_success_msg }}"
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Fetch cobbler profile list
       command: docker exec cobbler cobbler profile list
       changed_when: false
       register: cobbler_profile_list
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Verify cobbler profile list
       assert:
@@ -271,14 +474,14 @@
           - "'CentOS' in cobbler_profile_list.stdout"
         fail_msg: "{{ cobbler_profile_list_fail_msg }}"
         success_msg: "{{ cobbler_profile_list_success_msg }}"
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Check kickstart file
       shell: |
         docker exec {{ docker_container_name }} [ -f /var/lib/cobbler/kickstarts/{{ kickstart_filename }} ] && echo "File exist" || echo "File does not exist"
       changed_when: false
       register: kickstart_file_status
-      tags: TC_011
+      tags: TC_006,VERIFY_006
 
     - name: Verify kickstart file present
       assert:
@@ -286,23 +489,57 @@
           - "'File exist' in kickstart_file_status.stdout"
         fail_msg: "{{ kickstart_file_fail_msg }}"
         success_msg: "{{ kickstart_file_success_msg }}"
-      tags: TC_011
+      tags: TC_006,VERIFY_006
+
+    - name: Check crontab list
+      command: docker exec cobbler crontab -l
+      changed_when: false
+      register: crontab_list
+      tags: TC_006,VERIFY_006
+
+    - name: Verify crontab list
+      assert:
+        that:
+          - "'* * * * * ansible-playbook /root/tftp.yml' in crontab_list.stdout"
+          - "'5 * * * * ansible-playbook /root/inventory_creation.yml' in crontab_list.stdout"
+        fail_msg: "{{ crontab_list_fail_msg }}"
+        success_msg: "{{ crontab_list_success_msg }}"
+      tags: TC_006,VERIFY_006
+
+    - name: Check that the tftp, dhcpd, xinetd and cobblerd services are running
+      command: docker exec cobbler systemctl is-active {{ item }}
+      changed_when: false
+      ignore_errors: yes
+      register: cobbler_service_check
+      with_items: "{{ cobbler_services }}"
+      tags: TC_006,VERIFY_006
+
+    - name: Verify that the tftp, dhcpd, xinetd and cobblerd services are running
+      assert:
+        that:
+          - "'active' in cobbler_service_check.results[{{ item }}].stdout"
+          - "'inactive' not in cobbler_service_check.results[{{ item }}].stdout"
+          - "'unknown' not in cobbler_service_check.results[{{ item }}].stdout"
+        fail_msg: "{{ cobbler_service_check_fail_msg }}"
+        success_msg: "{{ cobbler_service_check_success_msg }}"
+      with_sequence: start=0 end=3
+      tags: TC_006,VERIFY_006
 
     - name: Delete the ubuntu container
       docker_container:
         name: "{{ create_docker_container.stdout }}"
         state: absent
-      tags: TC_011
+      tags: TC_006
 
    - name: Delete the ubuntu image
       docker_image:
         name: ubuntu
         state: absent
-      tags: TC_011
+      tags: TC_006
 
-# Testcase OMNIA_DIO_US_CC_TC_012
+# Testcase OMNIA_DIO_US_CC_TC_007
 # Execute provision role in management station and reboot management station
-- name: OMNIA_DIO_US_CC_TC_012
+- name: OMNIA_DIO_US_CC_TC_007
   hosts: localhost
   connection: local
   vars_files:
@@ -310,54 +547,115 @@
     - ../roles/provision/vars/main.yml
   tasks:
     - name: Check last uptime of the server
-      shell: |
-        current_time=$(date +"%Y-%m-%d %H")
-        uptime -s | grep "$current_time"
+      command: uptime -s
       register: uptime_status
       changed_when: false
       ignore_errors: yes
-      tags: TC_012
+      tags: TC_007
+
+    - name: Check current date
+      command: date +"%Y-%m-%d %H"
+      register: current_time
+      changed_when: false
+      ignore_errors: yes
+      tags: TC_007
 
    - name: Delete the cobbler container if exists
       docker_container:
         name: "{{ docker_container_name }}"
         state: absent
-      when: uptime_status.stdout|length < 1
-      tags: TC_012
+      when: current_time.stdout not in uptime_status.stdout
+      tags: TC_007
 
     - name: Delete docker image if exists
       docker_image:
         name: "{{ docker_image_name }}"
         tag: "{{ docker_image_tag }}"
         state: absent
-      when: uptime_status.stdout|length < 1
-      tags: TC_012
+      when: current_time.stdout not in uptime_status.stdout
+      tags: TC_007
 
     - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
+          vars:
+            input_config_filename: "{{ test_input_config_filename }}"
+
         - name: Call provision role
           include_role:
             name: ../roles/provision
-          vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-      when: uptime_status.stdout|length < 1
-      tags: TC_012
+      when: current_time.stdout not in uptime_status.stdout
+      tags: TC_007
 
     - name: Reboot localhost
       command: reboot
-      when: uptime_status.stdout|length < 1
-      tags: TC_012
+      when: current_time.stdout not in uptime_status.stdout
+      tags: TC_007
 
     - name: Inspect cobbler container
       docker_container_info:
         name: "{{ docker_container_name }}"
       register: cobbler_cnt_status
-      tags: TC_012
+      tags: TC_007,VERIFY_007
 
     - name: Verify cobbler container is running after reboot
       assert:
         that: "'running' in cobbler_cnt_status.container.State.Status"
         fail_msg: "{{ cobbler_reboot_fail_msg }}"
         success_msg: "{{ cobbler_reboot_success_msg }}"
-      tags: TC_012
+      tags: TC_007,VERIFY_007
+
+# Testcase OMNIA_DIO_US_CC_TC_008
+# Execute provision role in management station with the CentOS ISO file not present in the files folder of the provision role
+- name: OMNIA_DIO_US_CC_TC_008
+  hosts: localhost
+  connection: local
+  vars_files:
+    - test_vars/test_provision_vars.yml
+    - ../roles/provision/vars/main.yml
+  tasks:
+    - name: Check the iso file is present
+      stat:
+        path: "{{ iso_file_path }}/{{ iso_name }}"
+      register: iso_status
+      tags: TC_008
+
+    - name: Copy iso file to a different name
+      copy:
+        src: "{{ iso_file_path }}/{{ iso_name }}"
+        dest: "{{ iso_file_path }}/{{ temp_iso_name }}"
+      when: iso_status.stat.exists == true
+      tags: TC_008
+
+    - name: Delete iso file
+      file:
+        path: "{{ iso_file_path }}/{{ iso_name }}"
+        state: "absent"
+      when: iso_status.stat.exists == true
+      tags: TC_008
+
+    - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
+          vars:
+            input_config_filename: "{{ test_input_config_filename }}"
+
+        - name: Call provision role
+          include_role:
+            name: ../roles/provision
+      rescue:
+        - name: Validate iso missing error
+          assert:
+            that: iso_fail in iso_file_check.msg
+            success_msg: "{{ iso_check_success_msg }}"
+            fail_msg: "{{ iso_check_fail_msg }}"
+      tags: TC_008
+
+    - name: Copy iso file back to its original name
+      copy:
+        src: "{{ iso_file_path }}/{{ temp_iso_name }}"
+        dest: "{{ iso_file_path }}/{{ iso_name }}"
+      when: iso_status.stat.exists == true
+      tags: TC_008
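The crontab assertions in the test cases above expect two jobs inside the cobbler container: `tftp.yml` every minute and `inventory_creation.yml` at minute 5 of every hour. A sketch of how such entries could be installed with the cron module, shown only to make the expected schedule concrete (the provision role presumably sets these up while building the container):

    - name: Schedule tftp playbook every minute
      cron:
        name: tftp
        job: ansible-playbook /root/tftp.yml

    - name: Schedule inventory creation at minute 5 of every hour
      cron:
        name: inventory_creation
        minute: "5"
        job: ansible-playbook /root/inventory_creation.yml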

+ 83 - 322
appliance/test/test_provision_cdip.yml

@@ -14,7 +14,7 @@
 ---
 
 # Testcase OMNIA_DIO_US_CDIP_TC_001
-# Execute provison role in management station with cobbler as empty
+# Execute provision role in management station with CentOS 7 installed
 - name: OMNIA_DIO_US_CDIP_TC_001
   hosts: localhost
   connection: local
@@ -36,320 +36,25 @@
       tags: TC_001
 
     - block:
-        - name: Test cobbler password with empty string
+        - name: Call common role
           include_role:
-            name: ../roles/provision
-            tasks_from: "{{ item }}"
-          with_items:
-           - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ empty_password }}"
-            admin_password_confirm: "{{ empty_password }}"
-      rescue:
-        - name: Validate failure message
-          assert:
-            that: fail_msg_pwd_format in msg_pwd_format.msg
-            success_msg: "{{ validate_password_success_msg }}"
-            fail_msg: "{{ validate_password_fail_msg }}"
-      tags: TC_001
-
-# Testcase OMNIA_DIO_US_CDIP_TC_002
-# Execute provison role in management station with cobbler password of length 8 characters
-- name: OMNIA_DIO_US_CDIP_TC_002
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_002
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_002
-
-    - block:
-        - name: Test cobbler password with 8 characters
-          include_role:
-            name: ../roles/provision
-            tasks_from: "{{ item }}"
-          with_items:
-           - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-      always:
-        - name: Validate success message
-          assert:
-            that:  success_msg_pwd_format in msg_pwd_format.msg
-            success_msg: "{{ validate_password_success_msg }}"
-            fail_msg: "{{ validate_password_fail_msg }}"
-      tags: TC_002
-
-# Testcase OMNIA_DIO_US_CDIP_TC_003
-# Execute provison role in management station with cobbler password of length greather than 15 characters
-- name: OMNIA_DIO_US_CDIP_TC_003
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_003
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_003
-
-    - block:
-        - name: Test cobbler password with lengthy string
-          include_role:
-             name: ../roles/provision
-             tasks_from: "{{ item }}"
-          with_items:
-           - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ lengthy_password }}"
-            admin_password_confirm: "{{ lengthy_password }}"
-      always:
-        - name: Validate success message
-          assert:
-            that:  success_msg_pwd_format in msg_pwd_format.msg
-            success_msg: "{{ validate_password_success_msg }}"
-            fail_msg: "{{ validate_password_fail_msg }}"
-      tags: TC_003
-
-# Testcase OMNIA_DIO_US_CDIP_TC_004
-# Execute provison role in management station with cobbler password contains white spaces
-- name: OMNIA_DIO_US_CDIP_TC_004
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_004
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_004
-
-    - block:
-        - name: Test cobbler password with string contains white space
-          include_role:
-            name: ../roles/provision
-            tasks_from: "{{ item }}"
-          with_items:
-           - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ whitespace_password }}"
-            admin_password_confirm: "{{ whitespace_password }}"
-      always:
-        - name: Validate success message
-          assert:
-            that:  success_msg_pwd_format in msg_pwd_format.msg
-            success_msg: "{{ validate_password_success_msg }}"
-            fail_msg: "{{ validate_password_fail_msg }}"
-      tags: TC_004
-
-# Testcase OMNIA_DIO_US_CDIP_TC_005
-# Execute provison role in management station with cobbler password as string with special characters
-- name: OMNIA_DIO_US_CDIP_TC_005
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_005
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_005
-
-    - block:
-        - name: Test cobbler password with string contains special characters
-          include_role:
-            name: ../roles/provision
-            tasks_from: "{{ item }}"
-          with_items:
-           - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ special_character_password }}"
-            admin_password_confirm: "{{ special_character_password }}"
-      always:
-        - name: Validate success message
-          assert:
-            that:  success_msg_pwd_format in msg_pwd_format.msg
-            success_msg: "{{ validate_password_success_msg }}"
-            fail_msg: "{{ validate_password_success_msg }}"
-      tags: TC_005
-
-# Testcase OMNIA_DIO_US_CDIP_TC_006
-# Execute provison role in management station with cobbler password and cobbler password confirm having unequal values
-- name: OMNIA_DIO_US_CDIP_TC_006
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_006
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_006
-
-    - block:
-        - name: Test cobbler password with unequal values
-          include_role:
-            name: ../roles/provision
-            tasks_from: "{{ item }}"
-          with_items:
-           - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ lengthy_password }}"
-      rescue:
-        - name: Validate failure message
-          assert:
-            that:  fail_msg_pwd_confirm in msg_pwd_confirm.msg
-            success_msg: "{{ validate_password_success_msg }}"
-            fail_msg: "{{ validate_password_success_msg }}"
-      tags: TC_006
-
-# Testcase OMNIA_DIO_US_CDIP_TC_007
-# Execute provison role in management station where docker service not running
-- name: OMNIA_DIO_US_CDIP_TC_007
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_007
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_007
-
-    - name: Stop docker service
-      service:
-        name: docker
-        state: stopped
-      tags: TC_007
-
-    - block:
-        - name: Call provision role
-          include_role:
-            name: ../roles/provision
+            name: ../roles/common
           vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-
-        - name: Docker service stopped usecase fail message
-          fail:
-            msg: "{{ docker_check_fail_msg }}"
-      rescue:
-        - name: Docker service stopped usecase success message
-          debug:
-            msg: "{{ docker_check_success_msg }}"
-      always:
-        - name: Start docker service
-          service:
-            name: docker
-            state: started
-      tags: TC_007
-
-# Testcase OMNIA_DIO_US_CDIP_TC_008
-# Execute provison role in management station with os installed centos 8.2
-- name: OMNIA_DIO_US_CDIP_TC_008
-  hosts: localhost
-  connection: local
-  vars_files:
-    - test_vars/test_provision_vars.yml
-    - ../roles/provision/vars/main.yml
-  tasks:
-    - name: Delete the cobbler container if exits
-      docker_container:
-        name: "{{ docker_container_name }}"
-        state: absent
-      tags: TC_008
-
-    - name: Delete docker image if exists
-      docker_image:
-        name: "{{ docker_image_name }}"
-        tag: "{{ docker_image_tag }}"
-        state: absent
-      tags: TC_008
+            input_config_filename: "{{ test_input_config_filename }}"
 
-    - block:
         - name: Call provision role
           include_role:
             name: ../roles/provision
             tasks_from: "{{ item }}"
           with_items:
            - "{{ cobbler_image_files }}"
-          vars:
-            no_prompt: true
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-      tags: TC_008
+      tags: TC_001
 
     - name: Inspect cobbler docker image
       docker_image_info:
         name: "{{ docker_image_name }}"
       register: cobbler_image_status
-      tags: TC_008
+      tags: TC_001,VERIFY_001
 
     - name: Validate cobbler docker image
       assert:
@@ -357,13 +62,13 @@
           - cobbler_image_status.images
         fail_msg: "{{ cobbler_img_fail_msg }}"
         success_msg: "{{ cobbler_img_success_msg }}"
-      tags: TC_008
+      tags: TC_001,VERIFY_001
 
     - name: Inspect cobbler container
       docker_container_info:
         name: "{{ docker_container_name }}"
       register: cobbler_cnt_status
-      tags: TC_008
+      tags: TC_001,VERIFY_001
 
     - name: Validate cobbler docker container
       assert:
@@ -371,7 +76,7 @@
           - cobbler_cnt_status.exists
         fail_msg: "{{ cobbler_cnt_fail_msg }}"
         success_msg: "{{ cobbler_cnt_success_msg }}"
-      tags: TC_008
+      tags: TC_001,VERIFY_001
 
     - name: Validate first NIC is not assigned to public internet
       shell: |
@@ -382,18 +87,18 @@
         executable: /bin/bash
       failed_when: first_nic in nic_output.stdout
       changed_when: false
-      tags: TC_008
+      tags: TC_001,VERIFY_001
 
     - name: "Validate NIC-1 is assigned to IP {{ nic1_ip_address }}"
       assert:
-        that: "'{{ nic1_ip_address }}' in ansible_eno1.ipv4.address"
+        that: "'{{ nic1_ip_address }}' in ansible_em1.ipv4.address"
         fail_msg: "{{ nic_check_fail_msg }}"
         success_msg: "{{ nic_check_success_msg }}"
-      tags: TC_008
+      tags: TC_001,VERIFY_001
 
-# Testcase OMNIA_DIO_US_CDIP_TC_009
+# Testcase OMNIA_DIO_US_CDIP_TC_002
 # Execute provision role in management station where the cobbler container and image are already created
-- name: OMNIA_DIO_US_CDIP_TC_009
+- name: OMNIA_DIO_US_CDIP_TC_002
   hosts: localhost
   connection: local
   vars_files:
@@ -401,21 +106,22 @@
     - ../roles/provision/vars/main.yml
   tasks:
     - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
+          vars:
+            input_config_filename: "{{ test_input_config_filename }}"
+
         - name: Call provision role
           include_role:
             name: ../roles/provision
-          vars:
-            no_prompt: true
-            username: "{{ cobbler_username }}"
-            admin_password: "{{ boundary_password }}"
-            admin_password_confirm: "{{ boundary_password }}"
-      tags: TC_009
+      tags: TC_002
 
     - name: Inspect cobbler docker image
       docker_image_info:
         name: "{{ docker_image_name }}"
       register: cobbler_image_status
-      tags: TC_009
+      tags: TC_002,VERIFY_002
 
     - name: Validate cobbler docker image
       assert:
@@ -423,13 +129,13 @@
           - cobbler_image_status.images
         fail_msg: "{{ cobbler_img_fail_msg }}"
         success_msg: "{{ cobbler_img_success_msg }}"
-      tags: TC_009
+      tags: TC_002,VERIFY_002
 
     - name: Inspect cobbler container
       docker_container_info:
         name: "{{ docker_container_name }}"
       register: cobbler_cnt_status
-      tags: TC_009
+      tags: TC_002,VERIFY_002
 
     - name: Validate cobbler docker container
       assert:
@@ -437,7 +143,7 @@
           - cobbler_cnt_status.exists
         fail_msg: "{{ cobbler_cnt_fail_msg }}"
         success_msg: "{{ cobbler_cnt_success_msg }}"
-      tags: TC_009
+      tags: TC_002,VERIFY_002
 
     - name: Validate first NIC is not assigned to public internet
       shell: |
@@ -448,11 +154,66 @@
         executable: /bin/bash
       failed_when: first_nic in nic_output.stdout
       changed_when: false
-      tags: TC_009
+      tags: TC_002,VERIFY_002
 
     - name: "Validate NIC-1 is assigned to IP {{ nic1_ip_address }}"
       assert:
-        that: "'{{ nic1_ip_address }}' in ansible_eno1.ipv4.address"
+        that: "'{{ nic1_ip_address }}' in ansible_em1.ipv4.address"
         fail_msg: "{{ nic_check_fail_msg }}"
         success_msg: "{{ nic_check_success_msg }}"
-      tags: TC_009
+      tags: TC_002,VERIFY_002
+
+# Testcase OMNIA_DIO_US_CDIP_TC_003
+# Execute provision role on the management station where the docker service is not running
+- name: OMNIA_DIO_US_CDIP_TC_003
+  hosts: localhost
+  connection: local
+  vars_files:
+    - test_vars/test_provision_vars.yml
+    - ../roles/provision/vars/main.yml
+  tasks:
+    - name: Delete the cobbler container if it exists
+      docker_container:
+        name: "{{ docker_container_name }}"
+        state: absent
+      tags: TC_003
+
+    - name: Delete docker image if exists
+      docker_image:
+        name: "{{ docker_image_name }}"
+        tag: "{{ docker_image_tag }}"
+        state: absent
+      tags: TC_003
+
+    - name: Stop docker service
+      service:
+        name: docker
+        state: stopped
+      tags: TC_003
+
+    - block:
+        - name: Call common role
+          include_role:
+            name: ../roles/common
+          vars:
+            input_config_filename: "{{ test_input_config_filename }}"
+
+        - name: Call provision role
+          include_role:
+            name: ../roles/provision
+
+        - name: Docker service stopped usecase success message
+          debug:
+            msg: "{{ docker_check_success_msg }}"
+
+      rescue:
+        - name: Docker service stopped usecase fail message
+          fail:
+            msg: "{{ docker_check_fail_msg }}"
+
+      always:
+        - name: Start docker service
+          service:
+            name: docker
+            state: started
+      tags: TC_003
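TC_003 above leans on Ansible's block/rescue/always: the provision role is expected to fail while the docker service is stopped (the rescue branch reports that case), and the always section restarts docker regardless of the outcome. Assuming the test playbooks are run from appliance/test with Ansible installed, a single testcase can be exercised by its tag, for example:

    ansible-playbook test_provision_cdip.yml --tags TC_003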

+ 90 - 41
appliance/test/test_provision_ndod.yml

@@ -13,44 +13,61 @@
 #  limitations under the License.
 ---
 
-# OMNIA_DIO_US_NDOD_TC_013
-# Execute provison role in management station and  PXE boot one compute node 
-- name: OMNIA_DIO_US_NDOD_TC_013
+# OMNIA_DIO_US_NDOD_TC_009
+# Execute provision role on the management station and PXE boot one compute node
+- name: OMNIA_DIO_US_NDOD_TC_009
   hosts: localhost
   connection: local
   gather_subset:
     - 'min'
   vars_files:
     - test_vars/test_provision_vars.yml
+    - ../roles/common/vars/main.yml
   tasks:
     - name: Set ip address of the compute node
       set_fact:
         single_node_ip_address: "{{ groups[cobbler_groupname][0] }}"
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - name: Delete inventory if exists
       file:
         path: inventory
         state: absent
-      tags: TC_013
+      tags: TC_009,VERIFY_009
+
+    - name: Check if input config file is encrypted
+      command: cat {{ test_input_config_filename }}
+      changed_when: false
+      register: config_content
+      tags: TC_009,VERIFY_009
+
+    - name: Decrypt input_config.yml
+      command: ansible-vault decrypt {{ test_input_config_filename }} --vault-password-file {{ vault_path }}
+      changed_when: false
+      when: "'$ANSIBLE_VAULT;' in config_content.stdout"
+      tags: TC_009,VERIFY_009
+
+    - name: Include variable file input_config.yml
+      include_vars: "{{ test_input_config_filename }}"
+      tags: TC_009,VERIFY_009
 
     - name: Create inventory file
       lineinfile:
         path: inventory
-        line: "{{ single_node_ip_address }} ansible_user=root ansible_password={{ boundary_password }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'"
+        line: "{{ single_node_ip_address }} ansible_user=root ansible_password={{ provision_password }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'"
         create: yes
         mode: '{{ file_permission }}'
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - meta: refresh_inventory
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - name: Validate authentication of username and password
       command: ansible {{ single_node_ip_address }} -m ping -i inventory
       register: validate_login
       changed_when: false
       ignore_errors: yes
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - name: Validate the authentication output
       assert:
@@ -60,31 +77,31 @@
           - "'UNREACHABLE' not in validate_login.stdout"
         fail_msg: "{{ authentication_fail_msg }}"
         success_msg: "{{ authentication_success_msg }}"
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - name: Check hostname
       command: ansible {{ single_node_ip_address }} -m shell -a hostname -i inventory
       register: validate_hostname
       changed_when: false
       ignore_errors: yes
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - name: Validate the hostname
       assert:
         that: "'localhost' not in validate_hostname.stdout"
         fail_msg: "{{ hostname_fail_msg }}"
         success_msg: "{{ hostname_success_msg }}"
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
     - name: Delete inventory if exists
       file:
         path: inventory
         state: absent
-      tags: TC_013
+      tags: TC_009,VERIFY_009
 
-# OMNIA_DIO_US_NDOD_TC_014
+# OMNIA_DIO_US_NDOD_TC_010
 # Execute provision role on the management station and PXE boot two compute nodes
-- name: OMNIA_DIO_US_NDOD_TC_014
+- name: OMNIA_DIO_US_NDOD_TC_010
   hosts: localhost
   connection: local
   gather_subset:
@@ -97,7 +114,23 @@
       file:
         path: inventory
         state: absent
-      tags: TC_014
+      tags: TC_010,VERIFY_010
+
+    - name: Check if input config file is encrypted
+      command: cat {{ test_input_config_filename }}
+      changed_when: false
+      register: config_content
+      tags: TC_010,VERIFY_010
+
+    - name: Decrypt input_config.yml
+      command: ansible-vault decrypt {{ test_input_config_filename }} --vault-password-file {{ vault_path }}
+      changed_when: false
+      when: "'$ANSIBLE_VAULT;' in config_content.stdout"
+      tags: TC_010,VERIFY_010
+
+    - name: Include variable file input_config.yml
+      include_vars: "{{ test_input_config_filename }}"
+      tags: TC_010,VERIFY_010
 
     - name: Create inventory file
       lineinfile:
@@ -105,18 +138,18 @@
         line: "[nodes]"
         create: yes
         mode: '{{ file_permission }}'
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
     - name: Edit inventory file
       lineinfile:
         path: inventory
-        line: "{{ item }} ansible_user=root ansible_password={{ boundary_password }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'"
+        line: "{{ item }} ansible_user=root ansible_password={{ provision_password }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'"
       with_items:
         - "{{ groups[cobbler_groupname] }}"
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
     - meta: refresh_inventory
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
     - name: Validate ip address is different for both servers
       assert:
@@ -125,14 +158,14 @@
         success_msg: "{{ ip_address_success_msg }}"
       delegate_to: localhost
       run_once: yes
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
     - name: Check hostname of both servers
       command: ansible nodes -m shell -a hostname -i inventory
       register: node_hostname
       changed_when: false
       ignore_errors: yes
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
     - name: Validate hostname is different for both servers
       assert:
@@ -144,7 +177,7 @@
         success_msg: "{{ hostname_success_msg }}"
       delegate_to: localhost
       run_once: yes
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
     - name: Delete inventory if exists
       file:
@@ -152,11 +185,11 @@
         state: absent
       delegate_to: localhost
       run_once: yes
-      tags: TC_014
+      tags: TC_010,VERIFY_010
 
-# OMNIA_DIO_US_NDOD_TC_015
+# OMNIA_DIO_US_NDOD_TC_011
 # Validate whether passwordless ssh connection is established with compute nodes
-- name: OMNIA_DIO_US_NDOD_TC_015
+- name: OMNIA_DIO_US_NDOD_TC_011
   hosts: localhost
   gather_subset:
     - 'min'
@@ -165,11 +198,11 @@
     - ../roles/provision/vars/main.yml
   tasks:
     - name: Validate authentication of username and password
-      command: "ansible {{ cobbler_groupname }} -m ping -i cobbler_inventory"
+      command: "ansible {{ cobbler_groupname }} -m ping -i {{ inventory_file }}"
       register: validate_login
       changed_when: false
       ignore_errors: yes
-      tags: TC_015
+      tags: TC_011,VERIFY_011
 
     - name: Validate the passwordless SSH connection
       assert:
@@ -179,11 +212,11 @@
           - "'UNREACHABLE' not in validate_login.stdout"
         success_msg: "{{ authentication_success_msg }}"
         fail_msg: "{{ authentication_fail_msg }}"
-      tags: TC_015
+      tags: TC_011,VERIFY_011
 
-# OMNIA_DIO_US_NDOD_TC_016
+# OMNIA_DIO_US_NDOD_TC_012
 # Execute provision role on the management station and reboot the compute node after OS provisioning
-- name: OMNIA_DIO_US_NDOD_TC_016
+- name: OMNIA_DIO_US_NDOD_TC_012
   hosts: localhost
   connection: local
   gather_subset:
@@ -194,13 +227,29 @@
     - name: Set ip address of the compute node
       set_fact:
         single_node_ip_address: "{{ groups[cobbler_groupname][0] }}"
-      tags: TC_016
+      tags: TC_012,VERIFY_012
 
     - name: Delete inventory if exists
       file:
         path: inventory
         state: absent
-      tags: TC_016
+      tags: TC_012,VERIFY_012
+
+    - name: Check if input config file is encrypted
+      command: cat {{ test_input_config_filename }}
+      changed_when: false
+      register: config_content
+      tags: TC_012,VERIFY_012
+
+    - name: Decrypt input_config.yml
+      command: ansible-vault decrypt {{ test_input_config_filename }} --vault-password-file {{ vault_path }}
+      changed_when: false
+      when: "'$ANSIBLE_VAULT;' in config_content.stdout"
+      tags: TC_012,VERIFY_012
+
+    - name: Include variable file input_config.yml
+      include_vars: "{{ test_input_config_filename }}"
+      tags: TC_012,VERIFY_012
 
     - name: Create inventory file
       lineinfile:
@@ -208,38 +257,38 @@
         line: "[nodes]"
         create: yes
         mode: '{{ file_permission }}'
-      tags: TC_016
+      tags: TC_012,VERIFY_012
 
     - name: Edit inventory file
       lineinfile:
         path: inventory
-        line: "{{ single_node_ip_address }} ansible_user=root ansible_password={{ boundary_password }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'"
-      tags: TC_016
+        line: "{{ single_node_ip_address }} ansible_user=root ansible_password={{ provision_password }} ansible_ssh_common_args='-o StrictHostKeyChecking=no'"
+      tags: TC_012,VERIFY_012
 
     - meta: refresh_inventory
-      tags: TC_016
+      tags: TC_012,VERIFY_012
 
     - name: Reboot servers
       command: ansible nodes -m command -a reboot -i inventory
       ignore_errors: yes
       changed_when: true
-      tags: TC_016
+      tags: TC_012,VERIFY_012
 
     - name: Wait for 10 minutes
       pause:
         minutes: 10
-      tags: TC_016
+      tags: TC_012,VERIFY_012
 
     - name: Check ip address of servers
       command: ansible nodes -m command -a 'ip a' -i inventory
       ignore_errors: yes
       changed_when: false
       register: ip_address_after_reboot
-      tags: TC_016
+      tags: TC_012,VERIFY_012
 
     - name: Validate ip address is same after reboot
       assert:
         that: "'{{ single_node_ip_address }}' in ip_address_after_reboot.stdout"
         fail_msg: "{{ ip_address_fail_msg }}"
         success_msg: "{{ ip_address_success_msg }}"
-      tags: TC_016
+      tags: TC_012,VERIFY_012
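The decrypt-before-include pattern repeated in these testcases works because ansible-vault writes a plain-text marker as the first line of any file it encrypts (e.g. $ANSIBLE_VAULT;1.1;AES256), which is what the "'$ANSIBLE_VAULT;' in config_content.stdout" condition detects. A minimal shell sketch of the same check (file name illustrative):

    head -n 1 input_config_test.yml | grep -q '^\$ANSIBLE_VAULT;' && echo encrypted || echo plain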

+ 18 - 11
appliance/test/test_vars/test_common_vars.yml

@@ -14,24 +14,31 @@
 ---
 
 # vars file for test_common.yml file
-docker_volume_fail_msg: "Docker volume omnia-storage does not exist"
-
-docker_volume_success_msg: "Docker volume omnia-storage exists"
-
 centos_version: '7.8'
+test_input_config_filename: "input_config_test.yml"
+empty_input_config_filename: "input_config_empty.yml"
+new_input_config_filename: "input_config_new.yml"
+password_config_file: "password_config"
+min_length_password: "testpass"
+max_length_password: "helloworld123helloworld12hello"
+long_password: "helloworld123hellowordl12hello3"
+white_space_password: "hello world 123"
+special_character_password1: "hello-world/"
+special_character_password2: "hello@$%!world"
 
+docker_volume_success_msg: "Docker volume omnia-storage exists"
+docker_volume_fail_msg: "Docker volume omnia-storage does not exist"
+input_config_success_msg: "Input config file is encrypted using ansible-vault successfully"
+input_config_fail_msg: "Input config file failed to encrypt using ansible-vault"
 os_check_success_msg: "OS check passed"
-
 os_check_fail_msg: "OS check failed"
-
 internet_check_success_msg: "Internet connectivity check passed"
-
 internet_check_fail_msg: "Internet connectivity check failed"
-
 different_user_check_success_msg: "Different user execution check passed"
-
 different_user_check_fail_msg: "Different user execution check failed"
-
 selinux_check_success_msg: "selinux check passed"
-
 selinux_check_fail_msg: "selinux check failed"
+input_config_check_success_msg: "input_config.yml validation passed"
+input_config_check_fail_msg: "input_config.yml validation failed"
+install_package_success_msg: "Installation of package is successful"
+install_package_fail_msg: "Installation of package failed"

+ 21 - 8
appliance/test/test_vars/test_provision_vars.yml

@@ -14,11 +14,7 @@
 ---
 
 # Usage: test_provision_cdip.yml
-empty_password: ""
-lengthy_password: "a1b2c3d4e5f6g7h8i9j10k11"
-whitespace_password: "hello world 123"
-special_character_password: "hello@123#%"
-first_nic: "eno1"
+first_nic: "em1"
 nic1_ip_address: 172.17.0.1
 validate_password_success_msg: "Password validation successful"
 validate_password_fail_msg: "Password validation failed"
@@ -35,6 +31,8 @@ cobbler_image_files:
  - firewall_settings
  - provision_password
  - cobbler_image
+password_config_file: "password_config"
+test_input_config_filename: "input_config_test.yml"
 
 # Usage: test_provision_cc.yml
 docker_check_success_msg: "Docker service stopped usecase validation successful"
@@ -55,7 +53,20 @@ kickstart_file_fail_msg: "Kickstart file validation failed"
 kickstart_file_success_msg: "Kickstart file validation successful"
 cobbler_reboot_fail_msg: "Cobbler container failed to start after reboot"
 cobbler_reboot_success_msg: "Cobbler container started successfully after reboot"
-kickstart_filename: "centos8.ks"
+crontab_list_fail_msg: "Crontab list validation failed"
+crontab_list_success_msg: "Crontab list validation successful"
+iso_check_fail_msg: "centos iso file check validation failed"
+iso_check_success_msg: "centos iso file check validation successful"
+cobbler_service_check_fail_msg: "TFTP service validation failed"
+cobbler_service_check_success_msg: "TFTP service validation successful"
+kickstart_filename: "centos7.ks"
+iso_file_path: "../roles/provision/files"
+temp_iso_name: "temp_centos.iso"
+cobbler_services:
+ - tftp
+ - dhcpd
+ - cobblerd
+ - xinetd
 
 # Usage: test_provision_cdip.yml, test_provision_cc.yml, test_provision_ndod.yml
 docker_container_name: "cobbler"
@@ -68,5 +79,7 @@ authentication_fail_msg: "Server authentication validation failed"
 authentication_success_msg: "Server authentication validation successful"
 ip_address_fail_msg: "IP address validation failed"
 ip_address_success_msg: "IP address validation successful"
-cobbler_groupname: "cobbler_servers"
-file_permission: 0644
+cobbler_groupname: "all"
+inventory_file: "provisioned_hosts.yml"
+file_permission: 0644
+vault_path: ../roles/common/files/.vault_key

+ 79 - 20
omnia.yml

@@ -13,72 +13,124 @@
 # limitations under the License.
 ---
 
-# Omnia playbook. Will be updated later.
+- name: Validate the cluster
+  hosts: localhost
+  connection: local
+  gather_facts: no
+  roles:
+    - cluster_validation
 
 - name: Gather facts from all the nodes
   hosts: all
 
+- name: Prepare the cluster with passwordless ssh from manager to compute
+  hosts: manager
+  gather_facts: false
+  pre_tasks:
+    - name: Set Fact
+      set_fact:
+        ssh_to: "{{ groups['compute'] }}"
+  roles:
+    - cluster_preperation
+
+- name: Prepare the cluster with passwordless ssh from compute to manager
+  hosts: compute
+  gather_facts: false
+  pre_tasks:
+    - name: Set Fact
+      set_fact:
+        ssh_to: "{{ groups['manager'] }}"
+  roles:
+    - cluster_preperation
+    
 - name: Apply common installation and config
   hosts: manager, compute
   gather_facts: false
   roles:
     - common
- 
-#- name: Apply GPU node config
-#  hosts: gpus
-#  gather_facts: false
-#  roles:
-#    - compute_gpu
+  tags: common
+
+- name: Apply common K8s installation and config
+  hosts: manager, compute
+  gather_facts: false
+  roles:
+    - k8s_common
+  tags: kubernetes
+
+- name: Apply GPU node config
+  hosts: gpus
+  gather_facts: false
+  roles:
+    - compute_gpu
 
 - name: Apply K8s manager config
   hosts: manager
   gather_facts: true
   roles:
-    - manager
+    - k8s_manager
+  tags: kubernetes
 
 - name: Apply K8s firewalld config on manager and compute nodes
   hosts: manager, compute
   gather_facts: false
   roles:
-    - firewalld
+    - k8s_firewalld
+  tags: kubernetes
+
+- name: Apply NFS server setup on manager node
+  hosts: manager
+  gather_facts: false
+  roles:
+    - k8s_nfs_server_setup
+  tags: kubernetes
+
+- name: Apply NFS client setup on compute nodes
+  hosts: compute
+  gather_facts: false
+  roles:
+    - k8s_nfs_client_setup
+  tags: kubernetes
 
 - name: Start K8s on manager server
   hosts: manager
   gather_facts: true
   roles:
-    - startmanager
+    - k8s_start_manager
+  tags: kubernetes
 
 - name: Start K8s worker servers on compute nodes
   hosts: compute
   gather_facts: false
   roles:
-    - startworkers
+    - k8s_start_workers
+  tags: kubernetes
 
 - name: Start K8s worker servers on manager nodes
   hosts: manager
   gather_facts: false
   roles:
-    - startservices
+    - k8s_start_services
+  tags: kubernetes
 
-- name: Apply SLURM manager config
-  hosts: manager
+- name: Apply common Slurm installation and config
+  hosts: manager, compute
   gather_facts: false
   roles:
-    - slurm_manager
+    - slurm_common
   tags: slurm
 
-- name: Apply common Slurm installation and config
-  hosts: manager, compute
+- name: Apply Slurm manager config
+  hosts: manager
   gather_facts: false
   roles:
-    - slurm_common
+    - slurm_manager
   tags: slurm
 
-- name: Start slurm workers
+- name: Start Slurm workers
   hosts: compute
   gather_facts: false
   roles:
-    - start_slurm_workers
+    - slurm_workers
   tags: slurm
 
 - name: Start Slurm services
@@ -87,3 +139,10 @@
   roles:
     - slurm_start_services
   tags: slurm
+
+- name: Install Slurm exporter
+  hosts: manager
+  gather_facts: false
+  roles:
+    - slurm_exporter
+  tags: slurm
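Every Kubernetes play above carries tags: kubernetes and every Slurm play carries tags: slurm, so either stack can be skipped at run time; the cluster_validation role fails early only when both are skipped. A typical invocation (inventory path illustrative) might be:

    ansible-playbook omnia.yml -i inventory --skip-tags slurm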

+ 36 - 0
roles/cluster_preperation/tasks/main.yml

@@ -0,0 +1,36 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+- name: Set Facts
+  set_fact:
+    ansible_ssh_common_args: '-o StrictHostKeyChecking=no'
+
+- name: Disable host key checking
+  replace:
+    path: /etc/ssh/ssh_config
+    regexp: '#   StrictHostKeyChecking ask'
+    replace: 'StrictHostKeyChecking no'
+
+- name: Install sshpass
+  package:
+    name: sshpass
+    state: present
+
+- name: Verify and set passwordless ssh from manager to compute nodes
+  block:
+    - name: Execute on individual hosts
+      include_tasks: passwordless_ssh.yml
+      with_items: "{{ ssh_to }}"
+      loop_control:
+        pause: 5

+ 65 - 0
roles/cluster_preperation/tasks/passwordless_ssh.yml

@@ -0,0 +1,65 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- name: Initialize variables
+  set_fact:
+    ssh_status: false
+    current_host: "{{ item }}"
+
+- name: Verify whether passwordless ssh is set on the remote host
+  command: ssh -o PasswordAuthentication=no root@"{{ current_host }}" 'hostname'
+  register: ssh_output
+  ignore_errors: yes
+  changed_when: False
+
+- name: Update ssh connection status
+  set_fact:
+    ssh_status: true
+  when: "'Permission denied' not in ssh_output.stderr"
+
+- name: Verify the public key file existence
+  stat:
+    path: "{{ rsa_id_file }}"
+  register: verify_rsa_id_file
+  when: not ssh_status
+
+- name: Generate ssh key pair
+  command: ssh-keygen -t rsa -b 4096 -f "{{ rsa_id_file }}" -q -N "{{ passphrase }}"
+  when:
+    - not ssh_status
+    - not verify_rsa_id_file.stat.exists
+
+- name: Add the key identity
+  shell: |
+    eval `ssh-agent -s`
+    ssh-add "{{ rsa_id_file }}"
+  when: not ssh_status
+
+- name: Create .ssh directory
+  command: >-
+    sshpass -p "{{ hostvars['127.0.0.1']['cobbler_password'] }}"
+    ssh root@"{{ current_host }}" mkdir -p /root/.ssh
+  when: not ssh_status
+
+- name: Copy the public key to remote host
+  shell: >-
+    set -o pipefail && cat "{{ rsa_id_file }}".pub
+    | sshpass -p "{{ hostvars['127.0.0.1']['cobbler_password'] }}"
+    ssh root@"{{ current_host }}" 'cat >> "{{ auth_key_path }}"'
+  when: not ssh_status
+
+- name: Change permissions on the remote host
+  shell: sshpass -p "{{ hostvars['127.0.0.1']['cobbler_password'] }}" ssh root@"{{ current_host }}" 'chmod 700 .ssh; chmod 640 "{{ auth_key_path }}"'
+  when: not ssh_status
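The sequence above hand-rolls what ssh-copy-id automates: create /root/.ssh on the remote host, append the public key to authorized_keys, and tighten permissions. Where ssh-copy-id is acceptable, an equivalent one-liner sketch (password variable and host placeholder illustrative) would be:

    sshpass -p "$COBBLER_PASSWORD" ssh-copy-id -i /root/.ssh/id_rsa.pub -o StrictHostKeyChecking=no root@<compute-node>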

+ 19 - 0
roles/cluster_preperation/vars/main.yml

@@ -0,0 +1,19 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+#Usage: passwordless_ssh.yml
+rsa_id_file: "/root/.ssh/id_rsa"
+passphrase: ""
+auth_key_path: "/root/.ssh/authorized_keys"

+ 34 - 0
roles/cluster_validation/tasks/fetch_password.yml

@@ -0,0 +1,34 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- name: Decrypt input_config.yml
+  command: >-
+    ansible-vault decrypt {{ role_path }}/../../appliance/{{ input_config_filename }}
+    --vault-password-file {{ role_path }}/../../appliance/roles/common/files/{{ vault_filename }}
+  changed_when: false
+
+- name: Include variable file input_config.yml
+  include_vars: "{{ role_path }}/../../appliance/{{ input_config_filename }}"
+
+- name: Save input variables from file
+  set_fact:
+    cobbler_password: "{{ provision_password }}"
+    db_password: "{{ mariadb_password }}"
+
+- name: Encrypt input config file
+  command: >-
+    ansible-vault encrypt {{ role_path }}/../../appliance/{{ input_config_filename }}
+    --vault-password-file {{ role_path }}/../../appliance/roles/common/files/{{ vault_filename }}
+  changed_when: false
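Note that the file sits decrypted on disk between the decrypt and encrypt tasks above. For manual inspection the same values can be read without that window, since ansible-vault can print the plaintext while leaving the file encrypted (paths illustrative):

    ansible-vault view ../../appliance/input_config.yml --vault-password-file .vault_key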

+ 22 - 0
roles/cluster_validation/tasks/main.yml

@@ -0,0 +1,22 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+- name: Include vars file of common role
+  include_vars: "{{ role_path }}/../../appliance/roles/common/vars/main.yml"
+
+- name: Perform validations
+  include_tasks: validations.yml
+
+- name: Fetch cobbler password
+  include_tasks: fetch_password.yml

+ 36 - 0
roles/cluster_validation/tasks/validations.yml

@@ -0,0 +1,36 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+- name: Validate skip tags
+  fail:
+    msg: "{{ skip_tag_fail_msg }}"
+  when: "'slurm' in ansible_skip_tags and 'kubernetes' in ansible_skip_tags"
+
+- name: Manager group to contain exactly 1 node
+  assert:
+    that: "groups['manager'] | length | int == 1"
+    fail_msg: "{{ manager_group_fail_msg }}"
+    success_msg: "{{ manager_group_success_msg }}"
+
+- name: Compute group to contain at least 1 node
+  assert:
+    that: "groups['compute'] | length | int >= 1"
+    fail_msg: "{{ compute_group_fail_msg }}"
+    success_msg: "{{ compute_group_success_msg }}"
+
+- name: Manager and compute groups should be disjoint
+  assert:
+    that: "groups['manager'][0] not in groups['compute']"
+    fail_msg: "{{ disjoint_fail_msg }}"
+    success_msg: "{{ disjoint_success_msg }}"
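These assertions encode the expected inventory shape: exactly one host in manager, at least one in compute, and no overlap between the two groups. A minimal inventory satisfying them (host names are placeholders) looks like:

    [manager]
    node001

    [compute]
    node002
    node003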

+ 22 - 0
roles/cluster_validation/vars/main.yml

@@ -0,0 +1,22 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+skip_tag_fail_msg: "Can't skip both slurm and kubernetes"
+manager_group_fail_msg: "manager group should contain exactly 1 node"
+manager_group_success_msg: "manager group check passed"
+compute_group_fail_msg: "compute group should contain at least 1 node"
+compute_group_success_msg: "compute group check passed"
+disjoint_fail_msg: "manager and compute groups should be disjoint"
+disjoint_success_msg: "manager and compute groups are disjoint"

+ 1 - 1
roles/common/files/daemon.json

@@ -6,4 +6,4 @@
     }
   },
   "default-runtime": "nvidia"
-}
+}

+ 1 - 1
roles/common/files/inventory.fact

@@ -15,4 +15,4 @@ cat << EOF
 }
 EOF
 
-rm -f $INVENTORY
+rm -f $INVENTORY

+ 17 - 12
roles/common/handlers/main.yml

@@ -1,18 +1,23 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
 ---
 
-- name: Start and Enable docker service
-  service:
-    name: docker
-    state: restarted
-    enabled: yes
-  #tags: install
-
-- name: Start and Enable Kubernetes - kubelet
-  service:
-    name: kubelet
+- name: Restart ntpd
+  systemd:
+    name: ntpd
+    state: restarted
     enabled: yes
-  #tags: install
 
 - name: Restart chrony
   service:
@@ -32,4 +37,4 @@
   register: chrony_src
   until:  chrony_src.stdout.find('^*') > -1
   retries: "{{ retry_count }}"
-  delay: "{{ delay_count }}"
+  delay: "{{ delay_count }}"

+ 5 - 59
roles/common/tasks/main.yml

@@ -15,23 +15,17 @@
 
 - name: Create a custom fact directory on each host
   file:
-    path: /etc/ansible/facts.d
+    path: "{{ custom_fact_dir }}"
     state: directory
+    mode: "{{ custom_fact_dir_mode }}"
 
 - name: Install accelerator discovery script
   copy:
     src: inventory.fact
-    dest: /etc/ansible/facts.d/inventory.fact
-    mode: 0755
-
-- name: Add kubernetes repo
-  copy:
-    src: kubernetes.repo
-    dest: "{{ k8s_repo_dest }}"
+    dest: "{{ accelerator_discovery_script_dest }}"
     owner: root
     group: root
-    mode: "{{ k8s_repo_file_mode }}"
-  tags: install
+    mode: "{{ accelerator_discovery_script_mode }}"
 
 - name: Add elrepo GPG key
   rpm_key:
@@ -45,26 +39,6 @@
     state: present
   tags: install
 
-- name: Add docker community edition repository
-  get_url:
-    url: "{{ docker_repo_url }}"
-    dest: "{{ docker_repo_dest }}"
-  tags: install
-
-- name: Update sysctl to handle incorrectly routed traffic when iptables is bypassed
-  copy:
-    src: k8s.conf
-    dest: "{{ k8s_conf_dest }}"
-    owner: root
-    group: root
-    mode: "{{ k8s_conf_file_mode }}"
-  tags: install
-
-- name: Update sysctl
-  command: /sbin/sysctl --system
-  changed_when: true
-  tags: install
-
 - name: Disable swap
   command: /sbin/swapoff -a
   changed_when: true
@@ -84,44 +58,16 @@
 - name: Collect host facts (including accelerator information)
   setup: ~
 
-- name: Install k8s packages
-  package:
-    name: "{{ k8s_packages }}"
-    state: present
-  tags: install
-
-- name: Versionlock kubernetes
-  command: "yum versionlock '{{ item }}'"
-  args:
-    warn: false
-  with_items:
-    - "{{ k8s_packages }}"
-  changed_when: true
-  tags: install
-
 - name: Install infiniBand support
   package:
     name: "@Infiniband Support"
     state: present
   tags: install
 
-- name: Start and enable docker service
-  service:
-    name: docker
-    state: restarted
-    enabled: yes
-  tags: install
-
-- name: Start and enable kubernetes - kubelet
-  service:
-    name: kubelet
-    state: restarted
-    enabled: yes
-
 - name: Deploy time ntp/chrony
   include_tasks: ntp.yml
   tags: install
 
 - name: Install Nvidia drivers and software components
   include_tasks: nvidia.yml
-  when: ansible_local.inventory.nvidia_gpu > 0
+  when: ansible_local.inventory.nvidia_gpu > 0

+ 25 - 25
roles/common/tasks/ntp.yml

@@ -13,28 +13,28 @@
 #  limitations under the License.
 ---
 
-#- name: Deploy ntp servers
-#block:
-#- name: Deploy ntpd
-#package:
-#name: ntp
-#state: present
-#- name: Deploy ntpdate
-#package:
-#name: ntpdate
-#state: present
-#- name: Update ntp servers
-#template:
-#src: ntp.conf.j2
-#dest: "{{ ntp_path }}"
-#owner: root
-#group: root
-#mode: "{{ ntp_mode }}"
-          #backup: yes
-          #notify:
-          #- restart ntpd
-            #- sync ntp clocks
-            #when:  ( ansible_distribution == "CentOS" or   ansible_distribution == "RedHat" ) and ansible_distribution_major_version  < os_higher_version
+  - name: Deploy ntp servers
+    block:
+      - name: Deploy ntpd
+        package:
+          name: ntp
+          state: present
+      - name: Deploy ntpdate
+        package:
+          name: ntpdate
+          state: present
+      - name: Update ntp servers
+        template:
+          src: ntp.conf.j2
+          dest: "{{ ntp_path }}"
+          owner: root
+          group: root
+          mode: "{{ ntp_mode }}"
+          backup: yes
+        notify:
+          - Restart ntpd
+          - Sync ntp clocks
+    when: (ansible_distribution == "CentOS" or ansible_distribution == "RedHat") and ansible_distribution_major_version < os_higher_version
 
   - name: Deploy chrony server
     block:
@@ -51,6 +51,6 @@
           mode: "{{ ntp_mode }}"
           backup: yes
         notify:
-          - restart chrony
-          - sync chrony sources
-    when:  ( ansible_distribution == "CentOS" or   ansible_distribution == "RedHat" ) and ansible_distribution_major_version  > os_version
+          - Restart chrony
+          - Sync chrony sources
+    when: (ansible_distribution == "CentOS" or ansible_distribution == "RedHat") and ansible_distribution_major_version > os_version
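Which daemon gets deployed depends on the distribution major version: ntpd on older CentOS/RedHat releases, chrony on newer ones. Either way, synchronization can be verified by hand with the matching client (commands shown for reference):

    chronyc tracking   # chrony: prints the selected reference source and offset
    ntpq -p            # ntpd: lists peers; a leading '*' marks the selected source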

+ 1 - 1
roles/common/tasks/nvidia.yml

@@ -58,4 +58,4 @@
     name: kubelet
     state: restarted
     enabled: yes
-  tags: install
+  tags: install

+ 1 - 2
roles/common/templates/chrony.conf.j2

@@ -38,5 +38,4 @@ leapsectz right/UTC
 logdir /var/log/chrony
 
 # Select which information is logged.
-#log measurements statistics tracking
-
+#log measurements statistics tracking

+ 1 - 3
roles/common/templates/ntp.conf.j2

@@ -11,6 +11,4 @@ server  {{ item }} iburst
 
 includefile /etc/ntp/crypto/pw
 
-keys /etc/ntp/keys
-
-
+keys /etc/ntp/keys

+ 7 - 17
roles/common/vars/main.yml

@@ -19,32 +19,22 @@ common_packages:
   - gcc
   - nfs-utils
   - python3-pip
-  - docker-ce
   - bash-completion
   - nvidia-detect
   - chrony
   - pciutils
 
-k8s_packages:
-  - kubelet-1.16.7
-  - kubeadm-1.16.7
-  - kubectl-1.16.7
+custom_fact_dir: /etc/ansible/facts.d
 
-k8s_repo_dest: /etc/yum.repos.d/
+custom_fact_dir_mode: 0755
 
-elrepo_gpg_key_url: https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
-
-elrepo_rpm_url: https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
-
-docker_repo_url: https://download.docker.com/linux/centos/docker-ce.repo
+accelerator_discovery_script_dest: /etc/ansible/facts.d/inventory.fact
 
-docker_repo_dest: /etc/yum.repos.d/docker-ce.repo
+accelerator_discovery_script_mode: 0755
 
-k8s_conf_dest: /etc/sysctl.d/
-
-k8s_repo_file_mode: 0644
+elrepo_gpg_key_url: https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
 
-k8s_conf_file_mode: 0644
+elrepo_rpm_url: https://www.elrepo.org/elrepo-release-7.el7.elrepo.noarch.rpm
 
 chrony_path: "/etc/chrony.conf"
 ntp_path: "/etc/ntp.conf"
@@ -73,4 +63,4 @@ nvidia_packages:
   - nvidia-docker2
 
 daemon_file_dest: /etc/docker/
-daemon_file_mode: 0644
+daemon_file_mode: 0644

roles/common/files/k8s.conf → roles/k8s_common/files/k8s.conf


roles/common/files/kubernetes.repo → roles/k8s_common/files/kubernetes.repo


+ 28 - 0
roles/k8s_common/handlers/main.yml

@@ -0,0 +1,28 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- name: Start and Enable docker service
+  service:
+    name: docker
+    state: restarted
+    enabled: yes
+  tags: install
+
+- name: Start and Enable Kubernetes - kubelet
+  service:
+    name: kubelet
+    state: started
+    enabled: yes
+  tags: install

+ 77 - 0
roles/k8s_common/tasks/main.yml

@@ -0,0 +1,77 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+- name: Add kubernetes repo
+  copy:
+    src: kubernetes.repo
+    dest: "{{ k8s_repo_dest }}"
+    owner: root
+    group: root
+    mode: "{{ k8s_repo_file_mode }}"
+  tags: install
+
+- name: Add docker community edition repository
+  get_url:
+    url: "{{ docker_repo_url }}"
+    dest: "{{ docker_repo_dest }}"
+  tags: install
+
+- name: Update sysctl to handle incorrectly routed traffic when iptables is bypassed
+  copy:
+    src: k8s.conf
+    dest: "{{ k8s_conf_dest }}"
+    owner: root
+    group: root
+    mode: "{{ k8s_conf_file_mode }}"
+  tags: install
+
+- name: Update sysctl
+  command: /sbin/sysctl --system
+  changed_when: true
+  tags: install
+
+- name: Install docker
+  package:
+    name: docker-ce
+    state: present
+  tags: install
+
+- name: Install k8s packages
+  package:
+    name: "{{ k8s_packages }}"
+    state: present
+  tags: install
+
+- name: Versionlock kubernetes
+  command: "yum versionlock '{{ item }}'"
+  args:
+    warn: false
+  with_items:
+    - "{{ k8s_packages }}"
+  changed_when: true
+  tags: install
+
+- name: Start and enable docker service
+  service:
+    name: docker
+    state: restarted
+    enabled: yes
+  tags: install
+
+- name: Start and enable kubernetes - kubelet
+  service:
+    name: kubelet
+    state: restarted
+    enabled: yes

+ 31 - 0
roles/k8s_common/vars/main.yml

@@ -0,0 +1,31 @@
+#  Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+#  Licensed under the Apache License, Version 2.0 (the "License");
+#  you may not use this file except in compliance with the License.
+#  You may obtain a copy of the License at
+#
+#      http://www.apache.org/licenses/LICENSE-2.0
+#
+#  Unless required by applicable law or agreed to in writing, software
+#  distributed under the License is distributed on an "AS IS" BASIS,
+#  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#  See the License for the specific language governing permissions and
+#  limitations under the License.
+---
+
+k8s_packages:
+  - kubelet-1.16.7
+  - kubeadm-1.16.7
+  - kubectl-1.16.7
+
+k8s_repo_dest: /etc/yum.repos.d/
+
+docker_repo_url: https://download.docker.com/linux/centos/docker-ce.repo
+
+docker_repo_dest: /etc/yum.repos.d/docker-ce.repo
+
+k8s_conf_dest: /etc/sysctl.d/
+
+k8s_repo_file_mode: 0644
+
+k8s_conf_file_mode: 0644

+ 2 - 2
roles/firewalld/tasks/main.yml

@@ -40,7 +40,7 @@
     port: "{{ item }}/tcp"
     permanent: yes
     state: enabled
-  with_items: '{{ k8s_worker_ports }}'
+  with_items: '{{ k8s_compute_ports }}'
   when: "'compute' in group_names"
   tags: firewalld
 
@@ -81,4 +81,4 @@
     name: firewalld
     state: stopped
     enabled: no
-  tags: firewalld
+  tags: firewalld

+ 1 - 2
roles/firewalld/vars/main.yml

@@ -25,7 +25,7 @@ k8s_master_ports:
   - 10252
 
 # Worker nodes firewall ports
-k8s_worker_ports:
+k8s_compute_ports:
   - 10250
   - 30000-32767
 
@@ -35,7 +35,6 @@ calico_udp_ports:
 calico_tcp_ports:
   - 5473
   - 179
-  - 5473
 
 # Flannel CNI firewall ports
 flannel_udp_ports:

roles/manager/tasks/main.yml → roles/k8s_manager/tasks/main.yml


roles/manager/vars/main.yml → roles/k8s_manager/vars/main.yml


+ 40 - 0
roles/k8s_nfs_client_setup/tasks/main.yml

@@ -0,0 +1,40 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+- name: Install nfs-utils
+  package:
+    name: nfs-utils
+    state: present
+  tags: nfs_client
+
+- name: Creating directory to mount NFS Share
+  file:
+    path: "{{ nfs_mnt_dir }}"
+    state: directory
+    mode: "{{ nfs_mnt_dir_mode }}"
+  tags: nfs_client
+
+- name: Mounting NFS Share
+  command: "mount {{ groups['manager'] }}:{{ nfs_mnt_dir }} {{ nfs_mnt_dir }}"
+  changed_when: true
+  args:
+    warn: false
+  tags: nfs_client
+
+- name: Configuring Automount NFS Shares on reboot
+  lineinfile:
+    path: "{{ fstab_file_path }}"
+    line: "{{ groups['manager'] }}:{{ nfs_mnt_dir }}     {{ nfs_mnt_dir }}  nfs     nosuid,rw,sync,hard,intr 0 0"
+  tags: nfs_client
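With groups['manager'][0] substituted, the mount source resolves to a single <manager-ip>:/home/k8snfs export, and the fstab entry written above renders roughly as (manager IP illustrative):

    10.0.0.1:/home/k8snfs     /home/k8snfs  nfs     nosuid,rw,sync,hard,intr 0 0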

+ 20 - 0
roles/k8s_nfs_client_setup/vars/main.yml

@@ -0,0 +1,20 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+nfs_mnt_dir: /home/k8snfs
+
+nfs_mnt_dir_mode: 0755
+
+fstab_file_path: /etc/fstab

+ 84 - 0
roles/k8s_nfs_server_setup/tasks/main.yml

@@ -0,0 +1,84 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+- name: Install nfs-utils
+  package:
+    name: nfs-utils
+    state: present
+  tags: nfs_server
+
+- name: Install firewalld
+  package:
+    name: firewalld
+    state: present
+  tags: firewalld
+
+- name: Start and enable firewalld
+  service:
+    name: firewalld
+    state: started
+    enabled: yes
+  tags: firewalld
+
+- name: Start and enable rpcbind and nfs-server service
+  service:
+    name: "{{ item }}"
+    state: restarted
+    enabled: yes
+  with_items:
+    - rpcbind
+    - nfs-server
+  tags: nfs_server
+
+- name: Creating NFS share directory
+  file:
+    path: "{{ nfs_share_dir }}"
+    state: directory
+    mode: "{{ nfs_share_dir_mode }}"
+  tags: nfs_server
+
+- name: Adding NFS share entries in /etc/exports
+  lineinfile:
+    path: "{{ exports_file_path }}"
+    line: "{{ nfs_share_dir }} {{ item }}(rw,sync,no_root_squash)"
+  with_items:
+    - "{{ groups['compute'] }}"
+  tags: nfs_server
+
+- name: Exporting the shared directories
+  command: exportfs -r
+  changed_when: true
+  tags: nfs_server
+
+- name: Configuring firewall
+  firewalld:
+    service: "{{ item }}"
+    permanent: true
+    state: enabled
+  with_items:
+    - "{{ nfs_services }}"
+  tags: nfs_server
+
+- name: Reload firewalld
+  command: firewall-cmd --reload
+  changed_when: true
+  tags: nfs_server
+
+- name: Stop and disable firewalld
+  service:
+    name: firewalld
+    state: stopped
+    enabled: no
+  tags: firewalld
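Each compute host receives its own line in /etc/exports before exportfs -r re-reads the file; with two compute nodes (IPs illustrative) the resulting entries would be:

    /home/k8snfs 10.0.0.2(rw,sync,no_root_squash)
    /home/k8snfs 10.0.0.3(rw,sync,no_root_squash)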

+ 25 - 0
roles/k8s_nfs_server_setup/vars/main.yml

@@ -0,0 +1,25 @@
+# Copyright 2020 Dell Inc. or its subsidiaries. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+---
+
+nfs_share_dir: /home/k8snfs
+
+nfs_share_dir_mode: 0777
+
+exports_file_path: /etc/exports
+
+nfs_services:
+  - mountd
+  - rpc-bind
+  - nfs

roles/startmanager/files/create_admin_user.yaml → roles/k8s_start_manager/files/create_admin_user.yaml


roles/startmanager/files/create_clusterRoleBinding.yaml → roles/k8s_start_manager/files/create_clusterRoleBinding.yaml


+ 0 - 0
roles/startmanager/files/data-pv.yaml


+ 0 - 0
roles/startmanager/files/data2-pv.yaml


+ 0 - 0
roles/startmanager/files/data3-pv.yaml


+ 0 - 0
roles/startmanager/files/data4-pv.yaml


+ 0 - 0
roles/startmanager/files/flannel_net.sh


+ 0 - 0
roles/startmanager/files/katib-pv.yaml


roles/startmanager/files/kube-flannel.yaml → roles/k8s_start_manager/files/kube-flannel.yaml


roles/startmanager/files/kubeflow_persistent_volumes.yaml → roles/k8s_start_manager/files/kubeflow_persistent_volumes.yaml


+ 0 - 0
roles/startmanager/files/minio-pvc.yaml


+ 0 - 0
roles/startmanager/files/mysql-pv.yaml


roles/startmanager/files/nfs-class.yaml → roles/k8s_start_manager/files/nfs-class.yaml


roles/startmanager/files/nfs-deployment.yaml → roles/k8s_start_manager/files/nfs-deployment.yaml


roles/startmanager/files/nfs-serviceaccount.yaml → roles/k8s_start_manager/files/nfs-serviceaccount.yaml


roles/startmanager/files/nfs_clusterrole.yaml → roles/k8s_start_manager/files/nfs_clusterrole.yaml


roles/startmanager/files/nfs_clusterrolebinding.yaml → roles/k8s_start_manager/files/nfs_clusterrolebinding.yaml


+ 0 - 0
roles/startmanager/files/notebook-pv.yaml


+ 0 - 0
roles/startmanager/files/persistent_volumes.yaml


+ 0 - 0
roles/startmanager/files/pvc.yaml


Too many files were changed in this diff, so some files are not shown.