post-deployment
===============

.. _post-deployment_ceph-health:

ceph-health
-----------

Check the status of the Ceph cluster.

Uses `ceph health` to check whether the cluster is in the HEALTH_WARN state
and prints a debug message.
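
A minimal manual sketch of the same check, assuming the `ceph` CLI is
available on a monitor node:

.. code-block:: bash

   # Anything other than HEALTH_OK (in particular HEALTH_WARN) deserves
   # a closer look before proceeding.
   sudo ceph health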


- **hosts**: ceph_mon
- **groups**: backup-and-restore, post-deployment, post-ceph
- **parameters**:

  - **tripleo_delegate_to**: {{ groups['ceph_mon'] | default([]) }}

  - **osd_percentage_min**: 0
- **roles**: ceph

Role documentation

.. toctree::

   roles/role-ceph

.. _post-deployment_check-kernel-version:

check-kernel-version
--------------------

Verify the kernel version contains el8 in its name.

This validation checks that the kernel has been upgraded by verifying that
el8 appears in the kernel version string (`uname -r`).
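
A quick manual equivalent of this check:

.. code-block:: bash

   # The validation passes when the running kernel is an el8 build.
   uname -r | grep -q el8 && echo "el8 kernel found" || echo "kernel not upgraded"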


- **hosts**: all
- **groups**: post-deployment
- **parameters**:
- **roles**: check_kernel_version

Role documentation

.. toctree::

   roles/role-check_kernel_version

.. _post-deployment_container-status:

container-status
----------------

Ensure container status.

Detect failed containers and raise an error.
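
A rough manual approximation, assuming podman is the container runtime:

.. code-block:: bash

   # List containers that are no longer running; review any unexpected exits.
   sudo podman ps --all --filter 'status=exited' --format '{{.Names}} {{.Status}}'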


- **hosts**: undercloud, allovercloud
- **groups**: backup-and-restore, pre-upgrade, pre-update, post-deployment, post-upgrade
- **parameters**:
- **roles**: container_status

Role documentation

.. toctree::

   roles/role-container_status

.. _post-deployment_controller-token:

controller-token
----------------

Verify that the keystone admin token is disabled.

This validation checks that the keystone admin token is disabled on both the
undercloud and the overcloud controllers after deployment.
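
A manual spot check using the default `keystone_conf_file` path listed below;
the check passes when `admin_token` is absent or commented out:

.. code-block:: bash

   # No output means the admin token is not set.
   sudo grep -E '^\s*admin_token\s*=' \
     /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf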


- **hosts**: ['undercloud', "{{ controller_rolename | default('Controller') }}"]
- **groups**: post-deployment
- **parameters**:

  - **keystone_conf_file**: /var/lib/config-data/puppet-generated/keystone/etc/keystone/keystone.conf
- **roles**: controller_token

Role documentation

.. toctree::

   roles/role-controller_token

.. _post-deployment_controller-ulimits:

controller-ulimits
------------------

Check controller ulimits.

This will check the ulimits of each controller.
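
A manual equivalent on a controller node, to be compared against the
`nofiles_min` and `nproc_min` parameters below:

.. code-block:: bash

   ulimit -n   # open files, expected to be >= nofiles_min
   ulimit -u   # max user processes, expected to be >= nproc_min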


- **hosts**: {{ controller_rolename | default('Controller') }}
- **groups**: post-deployment
- **parameters**:

  - **nofiles_min**: 1024

  - **nproc_min**: 2048
- **roles**: controller_ulimits

Role documentation

.. toctree::

   roles/role-controller_ulimits

.. _post-deployment_frr-status:

frr-status
----------

FRR Daemons Status Check.

Runs 'show watchfrr' and checks for any non-operational daemon.

A failed status post-deployment indicates at least one enabled FRR
daemon is not operational.
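
A manual sketch of the same query; the container name `frr` is an assumption,
adjust it to match your deployment:

.. code-block:: bash

   # Every enabled daemon should be reported as operational.
   sudo podman exec frr vtysh -c 'show watchfrr'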


- **hosts**: all
- **groups**: post-deployment
- **parameters**:
- **roles**: frr_status

Role documentation

.. toctree::

   roles/role-frr_status

.. _post-deployment_healthcheck-service-status:

healthcheck-service-status
--------------------------

Healthcheck systemd services Check.

Check for failed healthcheck systemd services.
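
A rough manual equivalent; the unit name pattern is an assumption about the
`tripleo_*_healthcheck` naming scheme:

.. code-block:: bash

   # No results means no healthcheck unit is currently in the failed state.
   systemctl list-units --state=failed 'tripleo_*_healthcheck.service'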


- **hosts**: undercloud, allovercloud
- **groups**: backup-and-restore, post-deployment
- **parameters**:

  - **retries_number**: 1

  - **delay_number**: 1

  - **inflight_healthcheck_services**: []
- **roles**: healthcheck_service_status

Role documentation

.. toctree::

   roles/role-healthcheck_service_status

.. _post-deployment_image-serve:

image-serve
-----------

Verify the image-serve service is working and answering requests.

Ensures the image-serve vhost is configured and httpd is running.
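
A manual sketch of the check; port 8787 and the `/v2/` path are assumptions
based on the default undercloud registry configuration, and the vhost may
listen on the ctlplane address rather than localhost:

.. code-block:: bash

   systemctl is-active httpd
   curl -sf http://localhost:8787/v2/ && echo "image-serve is answering"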


- **hosts**: undercloud
- **groups**: backup-and-restore, pre-upgrade, post-deployment, post-upgrade
- **parameters**:
- **roles**: image_serve

Role documentation

.. toctree::

   roles/role-image_serve

.. _post-deployment_mysql-open-files-limit:

mysql-open-files-limit
----------------------

MySQL Open Files Limit.

Verify the `open-files-limit` configuration is high enough.

See https://access.redhat.com/solutions/1598733
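
A manual sketch of the check (on containerized deployments, run the client
inside the galera container); the value should be at least
`min_open_files_limit`:

.. code-block:: bash

   mysql -e "SHOW VARIABLES LIKE 'open_files_limit';"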


- **hosts**: ["{{ controller_rolename | default('Controller') }}", 'mysql']
- **groups**: post-deployment
- **parameters**:

  - **min_open_files_limit**: 16384
- **roles**: mysql_open_files_limit

Role documentation

.. toctree::

   roles/role-mysql_open_files_limit

.. _post-deployment_neutron-sanity-check:

neutron-sanity-check
--------------------

Neutron Sanity Check.

Run `neutron-sanity-check` on the controller nodes to find potential issues
with Neutron's configuration.

The tool expects to be given all of the configuration files that are passed
to the Neutron services.
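
A manual sketch of a run; the exact list of `--config-file` arguments depends
on which Neutron services and plugins are deployed:

.. code-block:: bash

   sudo neutron-sanity-check --config-file /etc/neutron/neutron.conf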


- **hosts**: {{ controller_rolename | default('Controller') }}
- **groups**: backup-and-restore, post-deployment
- **parameters**:
- **roles**: neutron_sanity_check

Role documentation

.. toctree::

   roles/role-neutron_sanity_check

.. _post-deployment_nova-event-callback:

nova-event-callback
-------------------

Nova Event Callback Configuration Check.

This validation verifies that the Nova Event Callback feature is configured;
it is generally enabled by default.
It checks the following files on the Overcloud Controller(s):

- /etc/nova/nova.conf:
  [DEFAULT]/vif_plugging_is_fatal = True
  [DEFAULT]/vif_plugging_timeout >= 300
- /etc/neutron/neutron.conf:
  [nova]/auth_url = 'http://nova_admin_auth_ip:5000'
  [nova]/tenant_name = 'service'
  [DEFAULT]/notify_nova_on_port_data_changes = True
  [DEFAULT]/notify_nova_on_port_status_changes = True
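
A manual spot check of two of the settings, assuming `crudini` is available
and using the default `nova_config_file` path listed below:

.. code-block:: bash

   sudo crudini --get \
     /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf \
     DEFAULT vif_plugging_is_fatal
   sudo crudini --get \
     /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf \
     DEFAULT vif_plugging_timeout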


- **hosts**: {{ controller_rolename | default('Controller') }}
- **groups**: post-deployment
- **parameters**:

  - **nova_config_file**: /var/lib/config-data/puppet-generated/nova/etc/nova/nova.conf

  - **neutron_config_file**: /var/lib/config-data/puppet-generated/neutron/etc/neutron/neutron.conf

  - **vif_plugging_fatal_check**: vif_plugging_is_fatal

  - **vif_plugging_timeout_check**: vif_plugging_timeout

  - **vif_plugging_timeout_value_min**: 300

  - **notify_nova_on_port_data_check**: notify_nova_on_port_data_changes

  - **notify_nova_on_port_status_check**: notify_nova_on_port_status_changes

  - **tenant_name_check**: tenant_name
- **roles**: nova_event_callback

Role documentation

.. toctree::

   roles/role-nova_event_callback

.. _post-deployment_nova-svirt:

nova-svirt
----------

Check nova sVirt support.

Ensures all running VMs are correctly protected with sVirt.
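
A rough manual indication on a compute node; each running qemu process should
carry an svirt SELinux context with its own MCS category pair:

.. code-block:: bash

   ps -eZ | grep qemu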


- **hosts**: Compute
- **groups**: post-deployment, post-upgrade
- **parameters**:
- **roles**: nova_svirt

Role documentation

.. toctree::

   roles/role-nova_svirt

.. _post-deployment_openstack-endpoints:

openstack-endpoints
-------------------

Check connectivity to various OpenStack services.

This validation gets the PublicVip address from the deployment and
tries to access Horizon and get a Keystone token.
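
A manual sketch, assuming the overcloud credentials file (for example
`~/overcloudrc`) has been sourced; the VIP address is a placeholder:

.. code-block:: bash

   # Token issuance exercises Keystone on the public endpoint.
   openstack token issue
   # Horizon is usually served under /dashboard; adjust the scheme if TLS is enabled.
   curl -sI http://<PublicVip>/dashboard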


- **hosts**: undercloud
- **groups**: post-deployment, pre-upgrade, post-upgrade, pre-update, post-update
- **parameters**:
- **roles**: openstack_endpoints

Role documentation

.. toctree::

   roles/role-openstack_endpoints

.. _post-deployment_overcloud-service-status:

overcloud-service-status
------------------------

Verify overcloud service states after running a deployment or an update.

An Ansible role to verify the overcloud service states after a deployment
or an update. It queries the /os-services API and looks for deprecated
services (nova-consoleauth) or any services that are down.
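
A manual equivalent using the overcloud credentials; every listed service
should be up, and deprecated services such as nova-consoleauth should be
absent:

.. code-block:: bash

   openstack compute service list
   openstack volume service list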


- **hosts**: Undercloud
- **groups**: post-deployment, pre-upgrade, post-upgrade, post-overcloud-upgrade, post-overcloud-converge
- **parameters**:

  - **overcloud_service_status_debug**: False

  - **overcloud_service_api**: ['nova', 'cinderv3']

  - **overcloud_deprecated_services**: {'nova': ['nova-consoleauth']}
- **roles**: overcloud_service_status

Role documentation

.. toctree::

   roles/role-overcloud_service_status

.. _post-deployment_ovs-dpdk-pmd-cpus-check:

ovs-dpdk-pmd-cpus-check
-----------------------

Validates OVS DPDK PMD cores from all NUMA nodes.

OVS DPDK PMD cpus must be provided from all NUMA nodes.

A failed status post-deployment indicates the PMD CPU list is not
configured correctly.
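
A manual sketch on an OVS DPDK compute node; the PMD CPU mask should include
CPUs from every NUMA node reported by `lscpu`:

.. code-block:: bash

   sudo ovs-vsctl get Open_vSwitch . other_config:pmd-cpu-mask
   lscpu | grep -i numa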


- **hosts**: {{ compute_ovsdpdk_rolename | default('ComputeOvsDpdk') }}
- **groups**: post-deployment
- **parameters**:
- **roles**: ovs_dpdk_pmd

Role documentation

.. toctree::

   roles/role-ovs_dpdk_pmd

.. _post-deployment_pacemaker-status:

pacemaker-status
----------------

Check the status of the pacemaker cluster.

This runs `pcs status` and checks for any failed actions.

A failed status post-deployment indicates something is not configured
correctly. This should also be run before an upgrade, as the upgrade will
likely fail on a cluster that is not completely healthy.
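
A manual equivalent on a controller node:

.. code-block:: bash

   # Review the resource states and any failed actions reported at the end.
   sudo pcs status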


- **hosts**: {{ controller_rolename | default('Controller') }}
- **groups**: backup-and-restore, post-deployment
- **parameters**:
- **roles**: pacemaker_status

Role documentation

.. toctree::

   roles/role-pacemaker_status

.. _post-deployment_rabbitmq-limits:

rabbitmq-limits
---------------

Rabbitmq limits.

Make sure the RabbitMQ file descriptor limits are set to reasonable values.
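
A rough manual check of the limit applied to the running RabbitMQ (Erlang)
process; on containerized deployments run it inside the rabbitmq container:

.. code-block:: bash

   # Should report a soft/hard limit of at least min_fd_limit.
   grep 'Max open files' /proc/"$(pgrep -f beam.smp | head -1)"/limits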


- **hosts**: {{ controller_rolename | default('Controller') }}
- **groups**: post-deployment
- **parameters**:

  - **min_fd_limit**: 16384
- **roles**: rabbitmq_limits

Role documentation

.. toctree::

   roles/role-rabbitmq_limits

.. _post-deployment_stonith-exists:

stonith-exists
--------------

Validate stonith devices.

Verify that stonith devices are configured for your OpenStack Platform HA cluster.
Stonith devices are not configured by the TripleO installer, because the hardware
configuration may differ in each environment and require different fence agents.
For instructions on configuring fencing, see
https://access.redhat.com/documentation/en/red-hat-openstack-platform/8/paged/director-installation-and-usage/86-fencing-the-controller-nodes
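
A manual equivalent on a controller node; an empty result means no fencing
devices are configured:

.. code-block:: bash

   # On older pcs releases use "pcs stonith show" instead.
   sudo pcs stonith status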


- **hosts**: {{ controller_rolename | default('Controller') }}
- **groups**: post-deployment
- **parameters**:
- **roles**: stonith_exists

Role documentation

.. toctree::

   roles/role-stonith_exists

.. _post-deployment_tls-everywhere-post-deployment:

tls-everywhere-post-deployment
------------------------------

Confirm that overcloud nodes are set up correctly.

Checks that overcloud nodes are registered with IdM
and that all certs being tracked by certmonger are in the
MONITORING state.
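
A manual spot check on an overcloud node; every tracked certificate should
report MONITORING:

.. code-block:: bash

   sudo getcert list | grep -E 'Request ID|status:'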


- **hosts**: allovercloud
- **groups**: post-deployment
- **parameters**:
- **roles**: tls_everywhere

Role documentation

.. toctree::

   roles/role-tls_everywhere
