Opsview Technical Reference

Opsview Monitor 6.4.x or newer Upgrade Notes

Overview

The Opsview upgrade notes provide information about product changes that can affect upgrading from Opsview Monitor 6.4.x or newer to 6.7.x.

If your current version of Opsview Monitor is 6.4.x or higher, you can perform an in-place upgrade.

Prior to installing or upgrading Opsview Monitor to a newer version, please read the following pages:

For major and minor version upgrades, please get in touch with your Account Manager to schedule the upgrade. You can also reach out to our technical team through the Support Portal for assistance.

This section describes the steps required to upgrade an existing Opsview Monitor 6.4.x system running on either a single server instance or a distributed Opsview environment (with a remote database and slaves) to the current version of Opsview Monitor.

Depending on the size and complexity of your current Opsview Monitor system, this process may take between a few hours and a full day. The upgrade includes the following steps:

  • Back up your Opsview data.

  • Upgrade Opsview Deploy.

  • Run the deployment process.

  • Verify the started processes.

  • Upgrade Opspacks.

  • Apply changes in Opsview Monitor.

  • Run the Database Schema Migration script (may be run at a later time).

For guidance on upgrading, you can watch the recorded video about moving from Opsview 5.x to 6.x.

Upgrade process

We recommend you update all your hosts to the latest OS packages before upgrading Opsview Monitor.

Minor upgrades

When performing any upgrade, for example from 6.6.x to a newer version, we advise taking a backup of your system; this is why the minor upgrade steps mirror the main upgrade steps.

We advise that you check what has changed through the versions in the ITRS Opsview 6.x Release Notes.

Once your system is backed up, the process will be based on the Upgrading: Automated section.

Activation key

Ensure you have your activation key for your system. Please contact Opsview Support if you have any issues.

Back up your Opsview data and system

Please refer to the Common Tasks page for more information.

Run the following command as root to back up all databases on the server:

# mysqldump -u root -p --add-drop-database --extended-insert --opt --all-databases | gzip -c > /tmp/databases.sql.gz

The MySQL root user password can be found in /opt/opsview/deploy/etc/user_secrets.yml.

Ensure you copy your database dump (/tmp/databases.sql.gz in the above command) to a secure location.
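
Before moving the dump off the host, it is worth confirming the archive is readable. A minimal sketch; the first lines only simulate a dump so the example is self-contained, and on a real system you would point dump at the file produced by mysqldump above:

```shell
# Simulated dump so this sketch runs anywhere; on a real system, skip the
# printf line and set 'dump' to the file written by mysqldump above.
dump=/tmp/databases.sql.gz
printf 'CREATE DATABASE opsview;\n' | gzip -c > "$dump"

# 'gunzip -t' tests the compressed stream without extracting it.
if gunzip -t "$dump"; then
    echo "dump OK"
else
    echo "dump corrupt"
fi
```

Only copy the dump to its secure location once the integrity test passes.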

Opsview Deploy

Upgrading to a new version of Opsview Monitor requires the following steps:

  1. Add the package repository for the new version of Opsview Monitor.

  2. Install the latest Opsview Deploy (opsview-deploy) package.

  3. Install the latest Opsview Python (opsview-python3) package.

  4. Re-run the installation playbooks to upgrade to the new version.

Once the upgrade has completed, all hosts managed by Opsview Deploy will have been upgraded to the latest version of Opsview Monitor.

Warning: Running the curl commands will start the upgrade process so only run them when you want to upgrade Opsview.

Upgrading: Automated

  1. Configure the correct Opsview Monitor package repository and update opsview-deploy to the corresponding version by running the command:

     curl -sLo- https://deploy.opsview.com/6.x | sudo bash -s -- --only repository,bootstrap

     You must replace 6.x with the correct version you want to upgrade to.

  2. Validate that your system is ready for upgrading, and set up Python on all systems (installing it if needed), by running:

     root:~# cd /opt/opsview/deploy
     root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

  3. If you use opsview-results-exporter, you should upgrade this package first:

     • For Debian and Ubuntu: apt install opsview-results-exporter

     • For CentOS, RHEL, and OEL: yum install opsview-results-exporter

  4. Continue to upgrade your system by running this command:

     root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml

Once completed, continue with Post upgrade process.

Upgrading: Manual

Amend your Opsview repository configuration to point to the 6.7 release rather than 6.4 or 6.5.

For CentOS/RHEL/OL:

Check that the contents of /etc/yum.repos.d/opsview.repo match the following, paying special attention to the version number specified within the baseurl line:

[opsview]
name    = Opsview Monitor
baseurl = https://downloads.opsview.com/opsview-commercial/6.x/yum/rhel/$releasever/$basearch
enabled = yes
gpgkey  = https://downloads.opsview.com/OPSVIEW-RPM-KEY.asc

You must replace 6.x with the correct version you want to upgrade to.

For Debian/Ubuntu:

Check that the contents of /etc/apt/sources.list.d/opsview.list match the following, paying special attention to the version number specified within the URL. You should replace xenial with your OS codename (as per other files within the same directory).

deb https://downloads.opsview.com/opsview-commercial/6.x/apt xenial main
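
If the list file still pins an older release, the version segment can be bumped in place. A sketch run against a local copy so nothing on the system is modified; on a real host you would edit /etc/apt/sources.list.d/opsview.list itself and substitute your OS codename for xenial:

```shell
# Local stand-in for /etc/apt/sources.list.d/opsview.list.
repo=./opsview.list
printf 'deb https://downloads.opsview.com/opsview-commercial/6.4/apt xenial main\n' > "$repo"

# Rewrite the pinned 6.<minor> segment to the target release (6.7 here).
sed -i 's|/6\.[0-9][0-9]*/|/6.7/|' "$repo"
cat "$repo"
```

Run apt-get update afterwards so the new repository metadata is fetched.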

Update Opsview Deploy

Run the commands below for CentOS/RHEL/OL.

yum makecache fast
yum install opsview-deploy

Run the commands below for Debian/Ubuntu.

apt-get update
apt-get install opsview-deploy

Pre-deployment checks

Before running opsview-deploy, we recommend that you check the following list of items.

Manual checks

  • What: All YAML files follow correct YAML format.
    Where: opsview_deploy.yml, user_*.yml
    Why: Each YAML file is parsed each time opsview-deploy runs.

  • What: All hostnames are FQDNs.
    Where: opsview_deploy.yml
    Why: If Opsview Deploy cannot detect the host's domain, the fallback domain opsview.local is used instead.

  • What: SSH user and SSH port have been set on each host.
    Where: opsview_deploy.yml
    Why: If these are not specified, the default SSH client configuration is used instead.

  • What: Any host-specific vars are applied in the host's vars in opsview_deploy.yml.
    Where: opsview_deploy.yml, user_*.yml
    Why: Configuration in user_*.yml is applied to all hosts.

  • What: An IP address has been set on each host.
    Where: opsview_deploy.yml
    Why: If no IP address is specified, the deployment host will try to resolve each host every time.

  • What: All necessary ports are allowed on local and remote firewalls.
    Where: All hosts
    Why: Opsview requires various ports for inter-process communication. See Ports.

  • What: You use rehoming.
    Where: user_upgrade_vars.yml
    Why: Deploy now configures rehoming automatically. See Rehoming.

  • What: You have Ignore IP in Authentication Cookie enabled.
    Where: user_upgrade_vars.yml
    Why: Ignore IP in Authentication Cookie is now controlled in Deploy. See Rehoming.

  • What: Webserver HTTP/HTTPS preference is declared.
    Where: user_vars.yml
    Why: In Opsview 6, HTTPS is enabled by default; to enforce HTTP-only, set opsview_webserver_use_ssl: False. See opsview-web-app.

Example of opsview_deploy.yml:

---
orchestrator_hosts:
  # Use an FQDN here
  my-host.net.local:
    # Ensure that an IP address is specified
    ip: 10.2.0.1
    # Set the remote user for SSH (if not default of 'root')
    ssh_user: cloud-user
    # Set the remote port for SSH (if not default of port 22)
    ssh_port: 9022
    # Additional host-specific vars
    vars:
      # Path to SSH private key
      ansible_ssh_private_key_file: /path/to/ssh/private/key
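
The FQDN item in the manual checks above can be roughly automated. A sketch, assuming host keys sit at two-space indentation as in the example; it writes a sample file locally rather than reading /opt/opsview/deploy/etc/opsview_deploy.yml:

```shell
# Sample deploy file mirroring the example above, plus one deliberately
# short hostname; on a real system point 'f' at your opsview_deploy.yml.
f=./opsview_deploy.yml
cat > "$f" <<'EOF'
orchestrator_hosts:
  my-host.net.local:
    ip: 10.2.0.1
  shortname:
    ip: 10.2.0.2
EOF

# Host keys end with ':' at two-space indent; names without a dot would
# fall back to the opsview.local domain, so flag them.
awk '/^  [A-Za-z0-9._-]+:$/ { h = $1; sub(/:$/, "", h); if (h !~ /\./) print "not an FQDN: " h }' "$f"
```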

Automated checks

Opsview Deploy can also look for, and in some cases automatically resolve, issues. Before executing setup-hosts.yml or setup-everything.yml, run the check-deploy.yml playbook. Beginning with Opsview 6.6.x, this playbook also sets up Python on all systems used:

root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

If any potential issues are detected, a REQUIRED ACTION RECAP will be added to the output when the play finishes.

  • Deprecated variables (MEDIUM): Checks for opsview_domain and opsview_manage_etc_hosts.

  • Connectivity to EMS server (HIGH): No automatic detection of EMS URL in opsview.conf overrides.

  • Connectivity to Opsview repository (HIGH): No automatic detection of overridden repository URLs.

  • Connectivity between remote hosts (MEDIUM): Only includes LoadBalancer ports. Erlang distribution ports, for example, are not checked.

  • FIPS crypto enabled (HIGH): Checks the value of /proc/sys/crypto/fips_enabled.

  • SELinux enabled (LOW): SELinux will be set to permissive mode later in the process by setup-hosts.yml, if necessary.

  • Unexpected umask (LOW): Checks the umask in /bin/bash for the root and nobody users. Expects either 0022 or 0002.

  • Unexpected STDOUT starting shells (LOW): Checks for any data on STDOUT when running /bin/bash -l.

  • Availability of SUDO (HIGH): Checks whether Ansible can escalate permissions (using sudo).

When a check fails, an 'Action' is generated. Each of these actions is formatted and displayed at the end of the output when the play finishes, sorted by severity.

The severity levels are:

  • HIGH: Will certainly prevent Opsview from installing or operating correctly.

  • MEDIUM: May prevent Opsview from installing or operating correctly.

  • LOW: Unlikely to cause issues but may contain useful information.

By default, the check_deploy role will fail if any actions are generated with MEDIUM or HIGH severity. To modify this behaviour, set the following in user_vars.yml:

check_action_fail_severity: MEDIUM

The actions at this severity or higher will result in a failure at the end of the role.

The following example shows two MEDIUM severity issues generated after executing the check-deploy playbook.

REQUIRED ACTION RECAP **************************************************************************************************************************************************************************************************************************
 
[MEDIUM -> my-host] Deprecated variable: opsview_domain
  | To set the host's domain, configure an FQDN in opsview_deploy.yml.
  |
  | For example:
  |
  | >>  opsview-host.my-domain.com:
  | >>    ip: 1.2.3.4
  |
  | Alternatively, you can set the domain globally by adding opsview_host_domain to your user_*.yml:
  |
  | >>  opsview_host_domain: my-domain.com
 
[MEDIUM -> my-host] Deprecated variable: opsview_manage_etc_hosts
  | To configure /etc/hosts, add opsview_host_update_etc_hosts to your user_*.yml:
  |
  | >>  opsview_host_update_etc_hosts: true
  |
  | The options are:
  | - true   Add all hosts to /etc/hosts
  | - auto   Add any hosts which cannot be resolved to /etc/hosts
  | - false  Do not update /etc/hosts
 
 
Thursday 21 February 2019  17:27:31 +0000 (0:00:01.060)       0:00:01.181 *****
===============================================================================
check_deploy : Check deprecated vars in user configuration ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ 1.06s
check_deploy : Check for 'become: yes' -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- 0.03s
 
*** [PLAYBOOK EXECUTION SUCCESS] **********

Run Opsview Deploy

  1. Run the command below to validate that your system is ready for upgrading:

     root:~# cd /opt/opsview/deploy
     root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/check-deploy.yml

  2. If you use opsview-results-exporter, you need to upgrade this package first.

     • For Debian/Ubuntu: apt install opsview-results-exporter

     • For CentOS/RHEL/OEL: yum install opsview-results-exporter

  3. Run the command below to continue the upgrade:

     root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/setup-everything.yml

Post upgrade process

As part of the upgrade process, Opsview Deploy overwrites the contents of the configuration files for snmpd and snmptrapd. If Deploy detects changes in the file it is overwriting, the existing file is backed up and labelled with a timestamp before the new configuration replaces it.

A message similar to the one below is displayed at the end of an Opsview Deploy run, indicating that the named configuration file has been overwritten.

REQUIRED ACTION RECAP *************************************************************************

[MEDIUM -> opsview-orch] SNMP configuration file '/etc/snmp/snmpd.conf' has been overwritten
  | The SNMP configuration file '/etc/snmp/snmpd.conf', has been overwritten by Opsview Deploy.
  | 
  | The original contents of the file have been backed up and can be found in
  | '/etc/snmp/snmpd.conf.15764.2020-12-16@12:31:32~'
  | 
  | Custom snmpd/snmptrapd configuration should be moved to the custom
  | configuration directories documented in the new file.

To avoid this in future, all custom snmpd and snmptrapd configuration should instead be put in new xxxx.conf files in the following directories respectively:

  • /etc/snmp/snmpd.conf.d

  • /etc/snmp/snmptrapd.conf.d
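
For instance, a single custom directive becomes its own drop-in file. A sketch using a throwaway local directory in place of /etc/snmp so it is safe to run anywhere; the rocommunity line is just an illustrative snmpd directive:

```shell
# Throwaway stand-in for /etc/snmp; substitute the real path on your host.
base=./snmpd-demo
mkdir -p "$base/snmpd.conf.d"

# Each custom directive (or group of them) lives in its own *.conf file.
echo 'rocommunity public 127.0.0.1' > "$base/snmpd.conf.d/10-custom.conf"
ls "$base/snmpd.conf.d"
```

After relocating directives, snmpd needs a restart to pick up the drop-in files.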

Verify the started processes

  1. To verify that all Opsview processes are running, run:

     /opt/opsview/watchdog/bin/opsview-monit summary

  2. If the opsview-agent process is not running after deployment, run:

     systemctl stop opsview-agent
     systemctl start opsview-agent
     /opt/opsview/watchdog/bin/opsview-monit start opsview-agent
     /opt/opsview/watchdog/bin/opsview-monit monitor opsview-agent

  3. If watchdog is not running after deployment, run:

     /opt/opsview/watchdog/bin/opsview-monit
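
On systems with many processes, scanning the summary by eye is error-prone, so the non-running entries can be filtered. A sketch over a canned two-column sample, since real input would come from /opt/opsview/watchdog/bin/opsview-monit summary; the process names and states here are invented for illustration:

```shell
# Canned sample standing in for 'opsview-monit summary' output
# (name, state); anything not in the Running state deserves a look.
summary='opsview-agent Running
opsview-web Running
opsview-orchestrator Initializing'

printf '%s\n' "$summary" | awk '$2 != "Running" { print "attention: " $1 }'
```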

Install newer Opspacks

New, non-conflicting Opspacks are installed as part of the Opsview installation. If you want to use the latest Opsview 6.x configuration, the commands below will force the Opspacks to be installed.

On the master server, as the opsview user, run:

/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-self-monitoring.tar.gz 
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-registry.tar.gz 
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-datastore.tar.gz 
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-messagequeue.tar.gz 
/opt/opsview/orchestrator/bin/orchestratorimportopspacks --force -o /opt/opsview/monitoringscripts/opspacks/opsview-component-load-balancer.tar.gz 

Upgrade Opspacks

  1. Run the commands below as the opsview user to update and add new Opspacks for the version of Opsview you are upgrading to:

     tar -zcvf /var/tmp/`date +%F-%R`_opspack.bak.tar.gz /opt/opsview/monitoringscripts/opspacks/*
     /opt/opsview/coreutils/bin/import_all_opspacks -f

     This may take some time to run.

  2. Run the following as the root user:

     cd /opt/opsview/deploy
     ./bin/opsview-deploy lib/playbooks/setup-monitoring.yml

  3. If you have amended your configuration to move the Opsview servers (Orchestrator, Collectors, and Database) into a host group other than Monitoring Servers, ensure the playbook variable opsview_monitoring_host_group is set in /opt/opsview/deploy/etc/user_vars.yml, for example:

     opsview_monitoring_host_group: New Group with Opsview Servers

  4. If you receive Service Check alerts of the form CRITICAL: Could Not Connect to localhost Response Code: 401 Unauthorized, the step above has not been run.
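
The opspack backup created by the tar command above can be spot-checked before continuing. A sketch that builds and lists a stand-in archive in a local directory; on a real system you would run tar -tzf directly against your /var/tmp/..._opspack.bak.tar.gz:

```shell
# Stand-in opspacks tree, mirroring the layout backed up above.
mkdir -p ./opspacks-demo/opspacks
touch ./opspacks-demo/opspacks/example-opspack.tar.gz
tar -zcf ./opspacks-demo/backup.tar.gz -C ./opspacks-demo opspacks

# '-t' lists the archive contents without extracting them.
tar -tzf ./opspacks-demo/backup.tar.gz
```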

Sync all plugins to collectors

This command copies all updated plugins from the Master Server to each of the Collectors and should be run as the root user:

root:~# cd /opt/opsview/deploy
root:/opt/opsview/deploy# ./bin/opsview-deploy lib/playbooks/sync_monitoringscripts.yml

Apply changes in Opsview

In the Opsview application UI, navigate to Configuration > Apply Changes, and click Apply Changes.

Uninstall Python 2 binaries

Caution: If you have written monitoring scripts, notification scripts, or integrations using the Python 2 binaries provided by the opsview-python package, you might be impacted by the Opsview Monitor Python 3 migration. We recommend migrating them to the Python 3 binaries provided by the opsview-python3 package, or to your own Python implementation.

To uninstall the Python 2 binaries provided by the opsview-python package from your Opsview Monitor system after upgrading to 6.7, run the following command as root on your Opsview deployment host (where opsview-deploy is installed; this is often the master host):

root:~# cd /opt/opsview/deploy && bin/opsview-deploy lib/playbooks/python2-uninstall.yml

Run the Database Schema Migration script

This step does not have to be run at the same time as the Opsview Monitor upgrade.

Follow the documentation at Database Migration for SQL Strict Mode.