Opsview 6.x Known Issues

Overview

This page contains a list of bugs and issues that may affect the performance of your applications and components in Opsview 6.x.x.

ITRS Opsview works on isolating and fixing every product issue that we are aware of. A bug or an issue is categorized as a known issue once it has been confirmed in the product but has not yet been resolved in a released version.

The Reported version is also provided for each known issue, which indicates the Opsview version where the issue was found. However, these issues may be present across multiple versions.

OS compatibility and installation issues

Ubuntu 22

Reported version Known issue description

Opsview 6.10.1

(On-premises, Cloud)

This is a known issue affecting Ubuntu 22 repositories when using apt. When deploying or upgrading Opsview or adding collectors, the deployment process might stall on the Update apt-get task within the setup-hosts.yml playbook. This occurs because of a bug within apt that can cause some collectors, such as coll-4 in the example below, to stop responding.
TASK [Update apt-get] ********************************************
Tuesday 13 August 2024  03:11:49 +0000 (0:00:00.471) 0:00:12.826 * 
changed: [lrl-u22-orch]
changed: [lrl-u22-coll-5]
changed: [lrl-u22-coll-1]
changed: [lrl-u22-coll-7]
changed: [lrl-u22-coll-9]
changed: [lrl-u22-coll-3]
changed: [lrl-u22-coll-6]
changed: [lrl-u22-coll-8]
changed: [lrl-u22-coll-2]
Further investigation of the affected collectors shows that the apt-get update processes are hanging.
# ps -ef | grep apt
root 3589 3588 0 03:11 ? 00:00:00 sudo apt-get -y --force-yes update
root 3590 3589 0 03:11 ? 00:00:01 apt-get -y --force-yes update
_apt 3599 3590 0 03:11 ? 00:00:00 /usr/lib/apt/methods/http
_apt 3601 3590 0 03:11 ? 00:00:00 /usr/lib/apt/methods/gpgv
As a workaround, perform the following steps (a sketch of the commands follows this list):
  1. On the orchestrator, stop the deployment process by running the kill -9 <PID> command.
  2. On each affected collector, kill all hanging apt processes.
  3. Rerun the failed opsview-deploy command that encountered the problem, following the normal documentation.
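A minimal sketch of steps 1 and 2, assuming the hung processes match the ps output above (the PID placeholder and the process pattern are illustrative):
# On the orchestrator: stop the stalled opsview-deploy run
kill -9 <PID>

# On each affected collector: kill the hung apt-get update processes
pkill -9 -f 'apt-get.*update'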

Ubuntu 20

Reported version Known issue description

Opsview 6.10.1

(On-premises)

Ubuntu 20 ships with TLS 1.0 and TLS 1.1 disabled by default. This means that you may get errors when any OpenSSL libraries try to connect to external services. Ideally, the external service should be upgraded to support TLS 1.2, but if that is not possible, you can re-enable TLS 1.0 and TLS 1.1.

Warning

By doing this, you are reducing security.
To test the external service, run the following command:
openssl s_client -connect SERVER:443 -tls1_2
This will fail if the external service does not support TLS 1.2. To allow Ubuntu 20 to use TLS 1.0, edit /etc/ssl/openssl.cnf and add this at the top:
openssl_conf = openssl_configuration
Then add this at the bottom:
[openssl_configuration]
ssl_conf = ssl_configuration
[ssl_configuration]
system_default = tls_system_default
[tls_system_default]
MinProtocol = TLSv1
CipherString = DEFAULT:@SECLEVEL=1
Now, check that connections will work.
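For example, you can rerun the openssl test against the older protocol to confirm the handshake now succeeds (SERVER is a placeholder for the external service):
openssl s_client -connect SERVER:443 -tls1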

Opsview 6.8.0

(On-premises, Cloud)

SNMP Polling Checks do not support the aes256 and aes256c SNMPv3 privacy protocols when run on Ubuntu 20 collectors. You may see an UNKNOWN state and an error message containing the following if these are attempted:
Invalid privacy protocol specified after -x flag: aes256
Invalid privacy protocol specified after -x flag: aes256c
See SNMP Privacy Protocol Support for further details.

Opsview 6.8.0

(On-premises, Cloud)

SNMP Traps being sent using the aes256 and aes256c SNMPv3 privacy protocol options will not appear if received by Ubuntu 20 collectors.

Ubuntu 18

Reported version Known issue description

Opsview 6.10.1

(On-premises)

The notify_by_email Notification Method will fail to work due to /usr/bin/mail being deprecated and replaced with /usr/bin/s-nail. Installing bsd-mailx fixes the issue.
apt install bsd-mailx

Debian 10

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

SNMP Polling Checks do not support the aes256 and aes256c SNMPv3 privacy protocols when run on Debian 10 collectors. You may see an UNKNOWN state and an error message starting with the following if these are attempted:
External command error: Invalid privacy protocol specified after -x flag: aes256
External command error: Invalid privacy protocol specified after -x flag: aes256c
See SNMP Privacy Protocol Support for further details.

Opsview 6.8.0

(On-premises, Cloud)

SNMP Traps being sent using the aes256 and aes256c SNMPv3 privacy protocol options will not appear if received by Debian 10 collectors.

CentOS 7, OEL 7, and RHEL 7

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

Email notifications: local mail subject lines do not display 4-byte UTF-8 characters correctly.

Opsview 6.8.0

(On-premises, Cloud)

Plugins, event handlers, and notification scripts that contain 4-byte UTF-8 characters do not display correctly in the filesystem but work properly in Opsview.

Upgrade and installation

Reported version Known issue description

Opsview 6.8.2

(On-premises)

After an upgrade, Cluster Health monitoring may report a degradation of service. This can be caused by duplicate orchestrator processes that have become stuck. To fix this issue, stop the orchestrator component via watchdog, then run pkill -f "orchestratorlauncher$" to clean up any hanging processes. Afterwards, restart the orchestrator via watchdog. If the pkill did not stop the orchestratorlauncher processes, you can run pkill -9 -f "orchestratorlauncher$" instead.
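A sketch of that sequence, assuming the orchestrator component is registered with watchdog as opsview-orchestrator (verify the component name on your system before running it):
# Stop the orchestrator via watchdog (component name assumed)
/opt/opsview/watchdog/bin/opsview-monit stop opsview-orchestrator

# Clean up any hanging launcher processes (add -9 if they do not stop)
pkill -f "orchestratorlauncher$"

# Restart the orchestrator via watchdog
/opt/opsview/watchdog/bin/opsview-monit start opsview-orchestrator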

Opsview 6.8.0

(On-premises)

The opsview-deploy package must be upgraded before running opsview-deploy to upgrade an Opsview Monitor System.

Opsview 6.8.0

(On-premises)

Changing the flow collectors configuration in Opsview Monitor currently requires a manual restart of the flow-collector component for it to start working again.

Opsview 6.8.0

(On-premises)

During upgrade, the following are not preserved:
  • Downtime — we recommend that you cancel any downtime (either active or scheduled) before you upgrade or migrate. Scheduling new downtime also works.
  • Flapping status — the state from pre-upgrade or migration is not retained, but if the host or service is still flapping, the next checks will set the status to a flapping status again.
  • Acknowledgements — at the end of an upgrade or migration, the first reload removes the acknowledgement state from hosts and services. Any further acknowledgement will work as usual.

Opsview 6.8.0

(On-premises)

If you use an HTTP proxy in your environment, the TimeSeries daemons may not be able to communicate. You can work around this by adding the export NO_PROXY=localhost,127.0.0.1 environment variable (note that it is upper case, not lower case) to the Opsview user's .bashrc file.
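A minimal sketch of that change, run as root and assuming the Opsview system user is named opsview:
# Append the variable to the opsview user's .bashrc (NO_PROXY must be upper case)
echo 'export NO_PROXY=localhost,127.0.0.1' >> ~opsview/.bashrc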

Opsview 6.8.0

(On-premises)

Hosts and services in downtime will appear to stay in downtime even when it is cancelled. You can work around this issue by creating a new downtime, then waiting until it starts, and cancelling it afterward. Alternatively, you can add a downtime that lasts only for five minutes and let it expire naturally.

Opsview 6.8.0

(On-premises)

The opsview-messagequeue may occasionally fail to upgrade correctly when running opsview-deploy. See MessageQueue Troubleshooting for steps to fix the issue.

Databases

Reported version Known issue description

Opsview 6.8.0

(On-premises)

All database users created by Opsview will use the mysql_native_password authentication plugin (for MySQL 8, the default is usually caching_sha2_password).
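If you need to confirm which plugin a database user is configured with, a query such as the following can help (the user name pattern is an assumption; adjust it to your setup):
# List the authentication plugin for each Opsview database user
mysql -u root -p -e "SELECT user, host, plugin FROM mysql.user WHERE user LIKE 'opsview%';"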

Opsview 6.8.9

(On-premises)

When using mysqldump with an external database and the --set-gtid-purged=off option is not set, the dump can fail with the error: Couldn't execute 'FLUSH TABLES': Access denied; you need (at least one of) the RELOAD or FLUSH_TABLES privilege(s) for this operation
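A sketch of a mysqldump invocation with the option set (the hostname, user, and schema name are illustrative):
mysqldump --set-gtid-purged=OFF -h db.example.com -u opsview -p opsview > opsview.sql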

Opsview 6.8.0

(On-premises)

The MySQL RPM Repository Key stored within the product has expired. This has been fixed in a later version of Opsview Monitor, but it can be amended locally without upgrading. For APT-based systems, edit /opt/opsview/deploy/lib/roles/opsview_database/vars/apt.yml on the Orchestrator, search for the line repo_key_id, and amend as follows:
mysql:
    ...
    repo_key_id: 3A79BD29
For RPM-based systems, edit /opt/opsview/deploy/lib/roles/opsview_database/vars/yum.yml on the Orchestrator, search for the line gpgkey, and amend as follows:
mysql:
    ...
    gpgkey: http://repo.mysql.com/RPM-GPG-KEY-mysql-2022

Opsview 6.8.0

(On-premises)

Deploy cannot be used to update the database root password. Root user password changes must be made manually, and the /opt/opsview/deploy/etc/user_secrets.yml file must be updated with the new password.

Opsview feature-specific issues

Autodiscovery

Reported version Known issue description

Opsview 6.8.5

(On-premises, Cloud)

If an Infrastructure Agent is detected by Autodiscovery and imported, but has TLS disabled, Service Checks will fail to run against the Agent. This is because the -n flag is required for the check_nrpe command to run in non-TLS mode. To configure this, add the -n flag to the NRPE_EXTRA_FLAGS variable on the affected hosts.

Opsview 6.8.5

(On-premises, Cloud)

Autodiscovery cannot detect Infrastructure Agents that are using custom certificates. This means that you cannot use Autodiscovery to automatically set up monitoring for these agents. Instead, you must set up monitoring manually. To do this, you must copy the custom certificates to the monitoring servers and use the NRPE_CERTIFICATES variable on the affected hosts. This variable specifies the paths to the correct certificates.

Opsview 6.8.0

(On-premises, Cloud)

When running an Autodiscovery Scan via a cluster for the first time, there must be at least one host already being monitored by that cluster. If the cluster does not monitor at least one host, the scan may fail and the following message will appear: Cannot start scan because monitoring server is deactivated.

AutoMonitor

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

When an AutoMonitor Windows Express Scan is set up with an incorrect but reachable Active Directory Server IP or FQDN, the scan can remain in a pending state until it times out (the default timeout is 1 hour). This means that no other scans can run on the same cluster for that period, which is due to PowerShell not timing out correctly.

Opsview 6.8.0

(On-premises, Cloud)

Automonitor automatically creates the Host Groups used for the scan: Opsview > Automonitor > Windows Express Scan > Domain. If any of these Host Groups already exist elsewhere in Opsview Monitor, then the scan will fail. If one of the Host Groups is moved, then it must be renamed to avoid this issue.

Opsview 6.8.0

(On-premises, Cloud)

If you have renamed your Opsview top-level Host Group, the Automonitor scan will fail. You must rename it back or create a new Opsview Host Group for the scan to succeed.

Opsview 6.8.0

(On-premises, Cloud)

The Automonitor application clears local storage on logout, which means that if a scan is in progress and a user logs out, they cannot see that scan’s progress even if it is still running in the background.

Character support

Reported version Known issue description

Opsview 6.10.3

(On-premises, Cloud)

For backwards compatibility, any occurrences of $$ in argument strings are replaced with single $ characters when processed by the system (see the example after the note below). This behavior is not required, and invalid macro strings remain unchanged in argument strings for event handlers, notification scripts, and plugins. For more information, see Variables and macros.

Note

Do not rely on this behavior, as it will be removed in a future version.
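As an illustration only (the plugin option and value are hypothetical), an argument value containing $$ is collapsed to a single $ before the plugin runs:
# Argument string as configured in Opsview
-p 'price$$level'
# Argument string as received by the plugin
-p 'price$level'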

Opsview 6.8.0

(On-premises, Cloud)

Some characters may not display correctly. For more information about the current limitations, see Supported Unicode Characters.

Opsview 6.8.0

(On-premises, Cloud)

While correctly backed up during daily backups or after an Apply Changes, any 4-byte UTF-8 characters (outside the Basic Multilingual Plane) may end up being corrupted when restoring a database backup. Although they appear as ????, these corrupted characters can be fixed by manually updating the database. If necessary, an unzipped backup file can be used to obtain the original characters before the restore. See Recovering from a Database Backup for further details.

Opsview 6.8.0

(On-premises, Cloud)

Opsview Reporting Module: Report Chart legends do not display 4-byte UTF-8 characters correctly, and 4-byte UTF-8 characters are not supported in the report names.

Opsview 6.8.0

(On-premises, Cloud)

For email notifications, emails sent by Opsview that contain non-ASCII UTF-8 characters may get blocked by mail relays with an error similar to status=bounced (SMTPUTF8 is required but it is not offered by host mail.example.com[10.10.10.10]) if the relay server does not advertise UTF-8 support. Additionally, local mail subject lines will not display 4-byte UTF-8 characters correctly on CentOS 7, OEL 7, and RHEL 7.

Opsview 6.8.0

(On-premises, Cloud)

For Netflow Dashlets and Autodiscovery, if international domain names are picked up and fixed, these names may appear corrupted. This means that names in the Autodiscovery sandbox could contain corrupted characters. However, these can be updated manually before importing into Opsview.

Opsview 6.8.0

(On-premises, Cloud)

Plugins, event handlers, and notification scripts that contain 4-byte UTF-8 characters do not display correctly in the filesystem but work properly in Opsview.

Opsview 6.8.0

(On-premises, Cloud)

Syslog messages may display some Unicode characters as UTF-8 bytes or Unicode code points. For example, \x{1F649}.

Opsview 6.8.0

(On-premises, Cloud)

Some Unicode characters that are tall may be cut off at the top and bottom in some fields.

Opsview 6.8.0

(On-premises)

Results Exporter: Regex filtering does not support Unicode categories (but code points still work), and file outputs do not export Unicode characters correctly.

Opsview 6.8.0

(On-premises, Cloud)

If you have multi-service checks where the generated Service Check name (including the host variable) produces a URL-encoded filename exceeding 255 bytes, the performance data stored in RRD for that Service Check cannot be recognized and will not appear in graphs.

Opsview 6.8.0

(On-premises, Cloud)

Exports to CSV (for example, from the Events Viewer) include UTF-8 characters, but these may not import into Excel correctly.

Opsview 6.8.0

(On-premises)

When defining collector names in opsview_deploy.yml, only ASCII characters can be used. However, they can be renamed within the Opsview UI when registering the collectors to clusters.

Opsview 6.8.0

(On-premises, Cloud)

When Hosts are created with names containing 4-byte UTF-8 characters, the Host name uniqueness checks may not work correctly and can consider different names as duplicates.

Opsview 6.8.0

(On-premises, Cloud)

When Hosts are created with names containing 4-byte UTF-8 characters, the Navigator and Host Group configuration pages may not display correctly if viewed in the Firefox browser.

Hosts and Services

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

When a Host has been configured with two or more parents and all of them are DOWN, the Status of the Service Checks on the host will be set to CRITICAL instead of UNKNOWN. Consequently, the Status Information is also inaccurate.

Opsview 6.8.0

(On-premises)

Any services already in dependency failure before upgrading to this release do not return to their previous state when leaving dependency failure, since that state has not been saved. They remain down until the next check occurs, as per the existing behaviour. However, any services that go into dependency failure after the upgrade is completed follow the new recovery behaviour, as documented in Important Concepts.

Logging

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

If an Opsview Monitor system is configured with UDP logging enabled in rsyslog, RabbitMQ logs INFO-level messages to opsview.log and syslog at a high frequency (approximately one message every 20 seconds).

Opsview 6.8.0

(On-premises, Cloud)

Some components such as opsview-web and opsview-executor can log credential information when in Debug mode.

Mobile

Recent known issues affecting the Opsview Mobile app have been addressed. Please refer to the Resolved known issues section for more information.

Notifications

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

Start and End Notifications for flapping states are not implemented in this release. When a Host or Service is flapping, all notifications will be suppressed.

Opspacks

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

Due to changes made to the Windows Active Directory Opspack, Windows hosts must now have PowerShell version 5.0 or higher.

Opsview 6.8.0

(On-premises)

Due to changes made to the Windows Active Directory Opspack, setup-opsview.yml must be rerun to import the new Opspack plugin changes. A reload must also be carried out afterwards to propagate the argument changes through the collection plan for the Schedulers.

Opsview 6.8.0

(On-premises, Cloud)

Windows Active Directory Opspack checks may increase CPU usage on the target Windows servers when running checks.

Opsview 6.8.0

(On-premises, Cloud)

For the Windows WMI - Base Agentless - LAN Status Service Check, the utilization values for network adapter byte send/receive rates are around eight times lower than expected. As a workaround, adjust the warning and critical thresholds accordingly.

Opsview 6.8.0

(On-premises)

For Cloud - AWS-related Opspacks, the directory /opt/opsview/monitoringscripts/etc/plugins/cloud-aws, which is the default location for the aws_credentials.cfg file, is not created automatically by Opsview and must be created manually.
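A minimal sketch of creating that directory, run as root on the relevant host (the ownership shown is an assumption and may need adjusting for your installation):
mkdir -p /opt/opsview/monitoringscripts/etc/plugins/cloud-aws
chown opsview:opsview /opt/opsview/monitoringscripts/etc/plugins/cloud-aws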

Opsview 6.8.0

(On-premises)

If opsview_tls_enabled is set to false, the Cache Manager component used by Application - Kubernetes and OS - VMware vSphere Opspacks will not work correctly on distributed environments.

Opsview 6.8.0

(On-premises)

For Hardware - Cisco UCS, if you migrate this Opspack from an Opsview 5.x system, it may produce the error Error while trying to read configuration file or File "./check_cisco_ucs_nagios", line 25, in <module> from UcsSdk import * ImportError: No module named UcsSdk. If this is encountered, place the configuration file cisco_ucs_nagios.cfg into the plugins path /opt/opsview/monitoringscripts/plugins/ and install the UcsSdk module by running the following:
# as root
wget https://community.cisco.com/kxiwq67737/attachments/\
kxiwq67737/4354j-docs-cisco-dev-ucs-integ/\
862/1/UcsSdk-0.8.3.tar.gz

tar zxfv UcsSdk-0.8.3.tar.gz
cd UcsSdk-0.8.3
sudo python setup.py install

Opsview 6.8.0

(On-premises)

Opsview - Login is critical on a rehomed system. You can fix this by adding an exception to the Service Check on the Host specifying /opsview/login as the destination instead of /login.

Opsview Reporting Module

Reported version Known issue description

Opsview 6.8.0

(On-premises)

During upgrade to the latest version of Reporting Module, email settings must be re-applied.

Email configuration can be found in the file: /opt/opsview/jasper/apache-tomcat/webapps/jasperserver/WEB-INF/js.quartz.properties

To configure email, edit the following lines in the configuration file to match your requirements.

Example configuration for internal email:

report.scheduler.mail.sender.host=localhost
report.scheduler.mail.sender.username=admin
report.scheduler.mail.sender.password=password
report.scheduler.mail.sender.from=admin@localhost
report.scheduler.mail.sender.protocol=smtp
report.scheduler.mail.sender.port=25

Example configuration for an SMTP relay:

report.scheduler.mail.sender.host=mail.example.com

To apply changes, you must restart opsview-reportingmodule:

/opt/opsview/watchdog/bin/opsview-monit restart opsview-reportingmodule

Opsview 6.8.0

(On-premises, Cloud)

When accessing any URL under /jasperserver on an Opsview system without a valid session, a 401 response is returned rather than a redirect to the login page. Users must navigate to the login page manually and log in again.

Opsview 6.8.0

(On-premises, Cloud)

Running reports within the Jaspersoft Studio IDE when connected to the Opsview Reporting Module currently results in a 401 error. See Running Reports in Jaspersoft Studio for alternatives.

Opsview 6.8.0

(On-premises, Cloud)

Using Jaspersoft Studio when connected to the Opsview Reporting Module can quickly use up available sessions. See Session Manager Config for Jaspersoft Studio for mitigation details.

Plugins

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

check_wmi_plus.pl may cause errors relating to files in your /tmp directory because the ownership of these files needs to be updated to the opsview user. This is encountered when upgrading from an earlier version of Opsview, where the nagios user previously ran this plugin.
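One way to tidy this up is to reassign ownership of the plugin's leftover temporary files to the opsview user. The file name pattern below is an assumption, so review the matches before changing anything:
# List /tmp files still owned by the old nagios user, then hand them to the opsview user
find /tmp -maxdepth 1 -user nagios -name '*wmi*' -ls
find /tmp -maxdepth 1 -user nagios -name '*wmi*' -exec chown opsview:opsview {} +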

Opsview 6.8.0

(On-premises, Cloud)

Opsview Golang plugins no longer support legacy certificates. If you encounter the error x509: certificate relies on legacy Common Name field, use SANs instead, recreate the certificates used on your monitored services or devices so that they include the x509 SAN extension.
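For example, a self-signed certificate that carries a SAN entry can be generated as follows (the host and file names are placeholders; -addext requires OpenSSL 1.1.1 or later):
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout monitored.key -out monitored.crt \
  -subj "/CN=monitored.example.com" \
  -addext "subjectAltName=DNS:monitored.example.com"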

Opsview 6.9.0

(On-premises)

On deployments with distributed Python components (those using separate bsm_hosts, downtime-manager_hosts, notification-center_hosts, results-dispatcher_hosts, results-live_hosts, results-performance_hosts, or state-changes_hosts systems in their opsview_deploy.yml deploy file), self-monitoring plugins may fail against these components. This can occur due to missing files, which often results in UNKNOWN results with "No such file or directory" messages. To resolve this issue, copy the necessary files to the correct location on the affected systems by running the following commands:
cp -r /opt/opsview/monitoringscripts/builtin/etc/plugins/self-monitoring /opt/opsview/monitoringscripts/etc/plugins
chown -R root:opsview /opt/opsview/monitoringscripts/etc/plugins/self-monitoring

REST API

Reported version Known issue description

Opsview 6.8.0

(On-premises)

For REST API config/OBJECT list calls, the ordering of results when using MySQL 8 is not necessarily deterministic, so REST API calls may need to specify a subsort field. For example, for hosts, order=hostgroup.name is not sufficiently deterministic and must be order=hostgroup.name,id so that the results come back in a fixed order.
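For example, a host list request with a deterministic sort order might look like this (the server name and authentication headers are illustrative):
curl -H "X-Opsview-Username: admin" \
     -H "X-Opsview-Token: <token>" \
     "https://opsview.example.com/rest/config/host?order=hostgroup.name,id"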

SNMP Traps

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

SNMPTraps daemons are started on all nodes within a cluster. At start-up, a master SNMP trap node is selected and is the only one in the cluster to receive and process traps; other nodes silently drop traps. Most SNMP trap-sending devices can send to at most two different destinations. The current fix is to manually pick two nodes in a given cluster to act as the active SNMP trap node and the standby node, then mark all other nodes within the cluster so that the trap daemons are not installed on them. For example:
collector_clusters:
  Trap Cluster:
    collector_hosts:
      traptest-col01: { ip: 192.168.18.53,  ssh_user: centos }
      traptest-col02: { ip: 192.168.18.157, ssh_user: centos }
      traptest-col03:
        ip: 192.168.18.155
        ssh_user: centos
        vars: { opsview_collector_enable_snmp: False }
      traptest-col04:
        ip: 192.168.18.61
        ssh_user: centos
        vars: { opsview_collector_enable_snmp: False }
      traptest-col05:
        ip: 192.168.18.61
        ssh_user: centos
        vars:
          opsview_collector_enable_snmp: False
On a fresh installation, the daemons will not be installed. On an existing installation, the trap packages must be removed, and the trap daemons on the two active nodes must be restarted to re-elect the master trap node:
# INACTIVE NODES:
CentOS/RHEL: yum remove opsview-snmptraps-base opsview-snmptraps-collector
Ubuntu/Debian: apt-get remove opsview-snmptraps-base opsview-snmptraps-collector

# ACTIVE NODES:
/opt/opsview/watchdog/bin/opsview-monit restart opsview-snmptrapscollector
/opt/opsview/watchdog/bin/opsview-monit restart opsview-snmptraps

Opsview 6.8.0

(On-premises)

To get SNMP Traps working in a hardened environment, the following settings need to be changed:
# Add the following lines to /etc/hosts.allow
    
snmpd:ALL
snmptrapd:ALL
    
# Add the following lines to /etc/hosts.deny
    
snmpd: ALL: allow
snmptrapd: ALL: allow

Opsview 6.8.0

(On-premises, Cloud)

Using Delete All on the SNMP Traps Exceptions page may sometimes hide new ones as they come in. They can be viewed again by changing the page size at the bottom of the window to a different number.

UI

Reported version Known issue description

Opsview 6.8.0

(On-premises, Cloud)

There is no option to set a new home page via the UI yet. For new installations, the home page is set as the Configuration > Navigator page.

Opsview 6.8.0

(On-premises, Cloud)

Despite the UI or API currently allowing it, you must not set parent or child relationships between the collectors themselves in any monitoring cluster; collectors do not have a dependency between each other and are considered equals.

Opsview 6.8.0

(On-premises, Cloud)

When trying to investigate a host, if you get an Opsview Web Exception error with a Caught exception in Opsview message, this can indicate that the cluster monitoring that host has failed and needs to be addressed.

Resolved known issues

This section provides the list of known issues that have been fixed in Opsview 6.x.x.

Affected component Reported version Known issue description
Opsview Mobile

Opsview Mobile app for iOS

Fixed known issue with new iOS registrations for the Opsview Mobile app.

New iOS registrations for the Opsview Mobile app, which were temporarily unavailable as of October 25, 2024, have been restored. Please note that this issue does not impact users of the Android version of the app. For iOS users already logged in, your service may remain unaffected.

Opsview components

Opsview 6.7.x

(On-premises)

Fixed in version 6.7.0 and newer.

All listed Log4j vulnerabilities are fixed in Opsview.

Plugins

Opsview 6.8.0

(On-premises)

Obsolete in version 6.9.0 and newer.

The sync_monitoringscripts.yml playbook failed to execute whenever the SSH connection between the host where opsview-deploy was run and the other instances relied on a user other than root, with the private SSH key defined only through the ansible_ssh_private_key_file property in opsview_deploy.yml.

This is because the underlying rsync command was not passed the private SSH key and thus failed to connect to the instances. As a workaround, the key must be added to the root user's SSH configuration. Consider the following example:
# If you use ansible_ssh_private_key_file on the opsview_deploy.yml file

(...)
collector_clusters:
  cluster-A:
    collector_hosts:
      ip-172-31-9-216:
        ip: 172.31.9.216
        user: ec2-user  
        vars:
          ansible_ssh_private_key_file: /home/ec2-user/.ssh/ec2_key
      ip-172-31-5-98:
        ip: 172.31.5.98
        user: ec2-user  
        vars:
          ansible_ssh_private_key_file: /home/ec2-user/.ssh/ec2_key
(...)

# You need to add the following entries to /root/.ssh/config

Host ip-172-31-9-216 172.31.9.216
    User ec2-user
    IdentityFile /home/ec2-user/.ssh/ec2_key
Host ip-172-31-5-98 172.31.5.98
    User ec2-user
    IdentityFile /home/ec2-user/.ssh/ec2_key
["Opsview On-premises"] ["Release Notes", "Compatibility Matrix"]

Was this topic helpful?