Migrate Opsview 6 to new hardware

In this section, we provide step-by-step instructions to help you migrate Opsview Monitor to a different hardware platform.

Warning

If you have a distributed environment, you should disable the collector (slave) devices on the old Opsview Monitor installation to avoid any contention between the old and new Orchestrator (master) servers.

Also, if you are migrating to a new architecture, read through all the steps in this document before starting, as they guide you through how to export your data.

The data can be migrated to a system running the same or a later version of Opsview Monitor, but you cannot migrate to an older version. Ensure you check the release notes for all Opsview Monitor versions you are upgrading through for any manual upgrade steps.

Warning

There will be an outage to the Opsview Monitor service during the migration.

Assumptions

Your current install (oldMaster) has all services, for example Database, Reporting, and Netflow, installed on the same server.

If any collectors are defined on the oldMaster, these will be migrated to work with the newMaster.

Prerequisites

Ensure that you have met the prerequisites before you migrate Opsview Monitor.

Installation

All commands to be run as root unless otherwise stated.

  1. Run the following command to update your OS packages, set up the Opsview Monitor repositories on your server, and install the opsview-deploy package [newMaster]:
    curl -sLo- https://deploy.opsview.com/6 | sudo bash -s -- -A boot
    

Note

To install a specific version, use /6.x (rather than /6) in the curl command. If you only specify /6, the latest available version of Opsview 6 for your operating system will be installed.
  2. Copy files from [oldMaster] to [newMaster]. In this example scp will be used to transfer files.
    scp /opt/opsview/deploy/etc/user*.yml <newMaster>:/opt/opsview/deploy/etc/
    scp /opt/opsview/var/machine.ref <newMaster>:/opt/opsview/var/machine.ref
    

Note

Verify that the transferred files in [newMaster] keep the same file ownership as in [oldMaster].
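
For example, you can compare the ownership on both servers and correct any differences on the [newMaster] (a quick check; the owner and group to use are whatever ls -l reports on the [oldMaster]):

    # run on both servers and compare the owner/group columns
    ls -l /opt/opsview/deploy/etc/user*.yml /opt/opsview/var/machine.ref
    # on the newMaster, correct any differences, e.g.:
    # chown <owner>:<group> /opt/opsview/deploy/etc/user_vars.yml
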
  3. Check the variables in the files. For example:
    • Ensure that all IP/hostname addresses referenced in the files you are moving and reusing have been updated before proceeding to the next steps. Failure to do so may overwrite or break your current Opsview system.
    • The opsview_database_backend_nodes must not be set to 127.0.0.1 (see the example check after this list).
    • Software key and certificates are up to date.
    • The opsview_messagequeue_password must not be set if upgrading from pre-6.9.2 to a newer version.

      Warning

      If you are migrating from an Opsview version older than 6.9.2, after transferring the /opt/opsview/deploy/etc/user_secrets.yml file to the new orchestrator, run the following command:
      /opt/opsview/deploy/bin/gen_secrets  | grep -A 1 opsview_messagequeue_user_passwords >> /opt/opsview/deploy/etc/user_secrets.yml
      
      This command generates the necessary entry and a password, which will be set during the installation. This is required due to a change in authentication for the opsview-messagequeue.
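
For example, you can review these values on the [newMaster] before deploying (a read-only check; adjust the variable names to those present in your files):

    cd /opt/opsview/deploy/etc
    # confirm the database backend does not point at the loopback address
    grep -H opsview_database_backend_nodes user*.yml
    # list any other IP addresses referenced in the transferred files
    grep -HnE '([0-9]{1,3}\.){3}[0-9]{1,3}' user*.yml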

Deployment

  1. To deploy Opsview on the new infrastructure [newMaster]:

    cd /opt/opsview/deploy/
    
    # configure the base hosts
    ./bin/opsview-deploy lib/playbooks/setup-hosts.yml
    
    # install and configure the core infrastructure (database, datastore, messagequeue, etc)
    ./bin/opsview-deploy lib/playbooks/setup-infrastructure.yml
    
    # install core files for the orchestrator
    ./bin/opsview-deploy lib/playbooks/orchestrator-install.yml
    
  2. Copy files from [oldMaster] to [newMaster].

    scp /opt/opsview/coreutils/etc/opsview.conf <newMaster>:/opt/opsview/coreutils/etc/opsview.conf
    scp /opt/opsview/webapp/opsview_web_local.yml <newMaster>:/opt/opsview/webapp/opsview_web_local.yml
    
  3. Restart all services [newMaster].

    /opt/opsview/watchdog/bin/opsview-monit restart all
    
  4. Install and configure Opsview [newMaster]. After this step completes, you should have a working Opsview 6 server.

    ./bin/opsview-deploy lib/playbooks/setup-opsview.yml
    
  5. Log into the [newMaster] Opsview UI and carry out a successful Reload/Apply Changes.

Migrating config and data

This section explains how to migrate config and data, such as databases, reporting, and Netflow.

  1. Stop all services on [oldMaster], including services on any collectors.

    cd /opt/opsview/deploy
    source bin/rc.ansible
    ansible opsview_all -m opsview_watchdog -a "name=all state=stopped"
    
    # confirm with
    ansible opsview_all -m shell -a "/opt/opsview/watchdog/bin/opsview-monit summary -B"
    
  2. Stop all services on [newMaster].

    /opt/opsview/watchdog/bin/opsview-monit stop all
    

Datastore (Optional)

The datastore information is not essential for a successful migration.

  1. To be able to access the datastore from the newMaster, create migration.cfg on [oldMaster] by running these commands.

    cat <<EOF | install -o root -g opsview -m 640 /dev/fd/0 /opt/opsview/loadbalancer/etc/migration.cfg
    listen datastore-migration
    bind 0.0.0.0:15989
    mode tcp
    timeout client 3h
    timeout server 3h
    option clitcpka
    server          datastore-migration-balance 127.0.0.1:15984 check inter 5s
    EOF
    
  2. Start loadbalancer and datastore on [oldMaster] and [newMaster].

    /opt/opsview/watchdog/bin/opsview-monit start opsview-loadbalancer
    /opt/opsview/watchdog/bin/opsview-monit start opsview-datastore
    
  3. Delete datastore databases on [newMaster].

    DS_PASS=`grep opsview_datastore_password /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`
    
    for DB in opsview-master opsview-collector opsview-logs; do
    curl -u opsview:$DS_PASS -X DELETE http://127.0.0.1:15984/$DB;
    done
    
  4. Replicate the oldMaster datastore databases onto the [newMaster]. Populate OLDMASTER with the oldMaster’s IP.

    DS_PASS=`grep opsview_datastore_password /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`
    OLDMASTER='<add IP of oldMaster>'
    
    for DB in opsview-master opsview-collector opsview-logs; do
    cat <<EOF | curl -u opsview:$DS_PASS -d @- -H "Content-Type: application/json" -X POST http://127.0.0.1:15984/_replicate;
    {
    "_id": "migrate-collector",
    "source": {
    "url": "http://opsview:$DS_PASS@$OLDMASTER:15989/$DB"
    },
    "target": {
    "url": "http://opsview:$DS_PASS@127.0.0.1:15984/$DB"
    },
    "create_target": true,
    "continuous": false
    }
    EOF
    done
    

If an error is seen for ‘opsview-logs’, it can be ignored; it means you are not using that database. The datastores from the oldMaster have now been replicated onto the newMaster.
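
To confirm the replication, you can list the databases now present on the [newMaster] (a verification sketch; this assumes the datastore exposes the standard CouchDB _all_dbs endpoint on port 15984):

    DS_PASS=`grep opsview_datastore_password /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`
    curl -u opsview:$DS_PASS http://127.0.0.1:15984/_all_dbs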

Opsview MySQL databases

  1. To create a full database export, run the following command as root, making sure to include any extra databases you may have (for example, include jasperserver if it exists) [oldMaster].

    mysqldump -u root -p`grep database_root /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'` --default-character-set=utf8mb4 --add-drop-database --opt --databases opsview runtime odw dashboard notifications | sed 's/character_set_client = utf8 /character_set_client = utf8mb4 /' | gzip -c > databases.sql.gz
    
  2. Copy the exported db file over to the [newMaster] and import it (see the verification check after this list).

    gunzip -c databases.sql.gz | mysql -u root -p`grep database_root /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`
    
  3. Upgrade Opsview to apply any database updates. Run the following command as root on the [newMaster].

    /opt/opsview/deploy/bin/opsview-deploy /opt/opsview/deploy/lib/playbooks/setup-everything.yml
    
  4. Run the following command on the orchestrator server, as the opsview user:

    /opt/opsview/coreutils/installer/upgradedb.pl
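
To verify the import, you can list the databases now present on the [newMaster]; the output should include opsview, runtime, odw, dashboard, and notifications (plus jasperserver if you exported it):

    echo "SHOW DATABASES" | mysql -u root -p`grep database_root /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`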
    

Migrating configuration files

You should migrate any configuration files that you may have customised to your new server, such as those listed here.

Migrate Timeseries Data (RRD)

  1. Export your graphing data by running the following command on your [oldMaster].

    /opt/opsview/coreutils/installer/rrd_converter -y export
    
  2. This will produce the file /tmp/rrd_converter.tar.gz. Copy this over to your [newMaster] into the same location.
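
For example, using the same scp transfer method as earlier (USER and newMaster are placeholders for your login and the new server):

    scp /tmp/rrd_converter.tar.gz USER@newMaster:/tmp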

  3. On the [newMaster] run the following commands to import the graphing data.

    cd /tmp
    /opt/opsview/watchdog/bin/opsview-monit summary | grep timeseries | awk '{print $1}' | while read TSSVC; do /opt/opsview/watchdog/bin/opsview-monit stop $TSSVC; done
    rm -rf /opt/opsview/timeseriesrrd/var/data/*
    /opt/opsview/coreutils/installer/rrd_converter -y import /tmp/rrd_converter.tar.gz
    chown -R opsview:opsview /opt/opsview/timeseriesrrd/var/data
    sudo -u opsview -i -- bash -c 'export PATH=$PATH:/opt/opsview/local/bin ; /opt/opsview/timeseriesrrd/installer/migrate-uoms.pl /opt/opsview/timeseriesrrd/var/data/'
    

Migrate Timeseries Data (InfluxDB)

  1. To migrate the InfluxDB graphing data, the new Opsview install must already be running the same InfluxDB version as the source; see the check below.
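
Both servers should report the same version (influxd prints its version and exits):

    influxd version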

  2. Backup the InfluxDB database [oldMaster].

    influxd backup -portable /tmp/influxdb_backup
    cd /tmp
    tar -zcvf influxdb_back.tar.gz influxdb_backup/
    
  3. Backup the Opsview Timeseries InfluxDB Metadata [oldMaster].

    cd /opt/opsview/timeseriesinfluxdb/var/data
    tar -zcvf timeseriesinfluxdb_back.tar.gz +metadata*
    
  4. Transfer the tar.gz files over to the new Opsview install (/tmp). [newMaster].

  5. On the new install, drop the InfluxDB Opsview database and restore the migrated data.

    curl -i -XPOST http://127.0.0.1:8086/query --data-urlencode "q=DROP DATABASE opsview"
    cd /tmp
    tar -zxvf /tmp/influxdb_back.tar.gz
    influxd restore -portable /tmp/influxdb_backup/
    
  6. Restore the Opsview Timeseries InfluxDB metadata [newMaster].

    tar -zxvf /tmp/timeseriesinfluxdb_back.tar.gz -C /opt/opsview/timeseriesinfluxdb/var/data/
    

Final migration steps

  1. On the [newMaster] start all services.

    /opt/opsview/watchdog/bin/opsview-monit start all
    
  2. Deactivate all the collectors via the UI; on the Configuration > Collector Management page, toggle Activated in the Cluster configuration menu.

Master Host IP/Hostname

The main monitoring host will have the oldMaster’s name. Correct the IP/hostname for the master host in the UI and, if necessary, the Host Title too.

If the master's Host Title was changed, then to avoid losing any historic graphing data for the master host, carry out the following steps:

  1. Download script rrdmerge [newMaster].

    wget https://opsview-repository.s3.eu-west-1.amazonaws.com/opsview-support/rrdmerge.py -O /tmp/rrdmerge
    
  2. Create the script rrd_merge_renamed_hosts below on your system [newMaster], say as /tmp/rrd_merge_renamed_hosts.

    #! /bin/bash
    
    #
    # 1. location of rrdmerge
    # 2. location of rrd datadir
    # 3. name of original host name
    # 4. name of new host name
    #
    
    if [ "$#" -ne 4 ]; then
            if [ "$#" -lt 4 ]; then
                    echo "Not enough arguments"
            elif [ "$#" -gt 4 ]; then
                    echo "Too many arguments"
            fi
    
            echo "Req: <rrdmerge-location> <rrd datadir-location> <Original hostname> <New hostname>"
            echo "Hint: rrd datadir can be located by running command 'grep data_dir /opt/opsview/timeseriesrrd/etc/*.yaml'"
    
        exit 1
    fi
    
    
    rrdmerge=$1
    rrd_datadir=$2
    old_hostname=$3
    new_hostname=$4
    
    
    new_host_root=$rrd_datadir/$new_hostname
    old_host_root=$rrd_datadir/$old_hostname
    
    cd $new_host_root
    
    
    for file in `find . -name value.rrd`
    do
            rrd_path=`echo $file | awk -F 'value.rrd' '{print substr($1,3); }'`
            if [ -f $old_host_root/$rrd_path'value.rrd' ]; then
                    # merge these files
                    echo "File $old_host_root/$rrd_path'value.rrd' exists."
                    # old - new - new-file
                    $rrdmerge $old_host_root/$rrd_path'value.rrd' $new_host_root/$rrd_path'value.rrd' $new_host_root/$rrd_path'value.rrd.new'
                    ret_code=$?
                    if [ $ret_code -eq 0 ]; then
    #                        mv $old_host_root/$rrd_path'value.rrd' $old_host_root/$rrd_path'value.rrd.old'
                            mv $new_host_root/$rrd_path'value.rrd' $new_host_root/$rrd_path'value.rrd.orig'
                            mv $new_host_root/$rrd_path'value.rrd.new' $new_host_root/$rrd_path'value.rrd'
                            chown opsview.opsview $new_host_root/$rrd_path'value.rrd'
                    fi
            else
                    # ignore these ones. Nothing to Merge
                    echo "File $old_host_root/$rrd_path'value.rrd' does not exist - Ignoring!!"
            fi
    done
    
  3. Change ownership and permissions.

    chown opsview.opsview /tmp/rrd_merge_renamed_hosts /tmp/rrdmerge
    chmod +x /tmp/rrd_merge_renamed_hosts /tmp/rrdmerge
    
  4. Now run the following command [newMaster]. Substitute <oldmaster> and <newmaster> with the host names that have been used.

    /tmp/rrd_merge_renamed_hosts /tmp/rrdmerge /opt/opsview/timeseriesrrd/var/data/ <oldmaster> <newmaster>
    
  5. Restart Timeseries Services [newMaster].

    /opt/opsview/watchdog/bin/opsview-monit summary | grep timeseries | awk '{print $1}' | while read TSSVC; do /opt/opsview/watchdog/bin/opsview-monit start $TSSVC; done
    

Master Host variables

Also, in the edit screen for the master host, correct the Override Node settings for the following variables, located in the Variables tab.

OPSVIEW_DATASTORE_SETTINGS
OPSVIEW_MESSAGEQUEUE_CREDENTIALS

Now carry out an Opsview Apply Changes - this should be successful.

Collectors

  1. Copy all the collector configuration from the oldMaster's /opt/opsview/deploy/etc/opsview_deploy.yml file to the newMaster's opsview_deploy.yml file, as in the excerpt below.
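
The collector entries in opsview_deploy.yml look similar to this hypothetical excerpt (the cluster, host, and IP values are placeholders); copy your existing entries across verbatim rather than retyping them:

    collector_clusters:
      collectors-de:
        collector_hosts:
          opsview-de-1:
            ip: 10.12.0.9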

  2. Make sure the root user's SSH public key has been copied over to any collectors, otherwise the deploy command in the next step will fail.
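
A minimal sketch using ssh-copy-id, assuming password authentication is still available on the collectors (substitute each collector's hostname or IP):

    ssh-copy-id root@<collector>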

  3. Now run deploy to tell the newMaster about the collectors:

    cd /opt/opsview/deploy
    ./bin/opsview-deploy lib/playbooks/setup-everything.yml
    
  4. Within the UI, re-activate the collectors; on the Configuration > Collector Management page, toggle Activated in the Cluster configuration menu.

  5. Now carry out an Opsview Apply Changes - this should be successful. At this point you should have successfully migrated your oldMaster to your newMaster host and all systems should now be fully working.

  6. Restart all services on the Orchestrator and Collectors [newMaster].

    cd /opt/opsview/deploy
    source bin/rc.ansible
    ansible opsview_all -m opsview_watchdog -a "name=all state=restarted"
    
    # confirm with
    ansible opsview_all -m shell -a "/opt/opsview/watchdog/bin/opsview-monit summary -B"
    

Modules

NetAudit

On the [oldMaster] as the Opsview user:

cd /opt/opsview/netaudit/var/repository/
tar -cvf /tmp/netaudit.tar.gz --gzip rancid/
scp /tmp/netaudit.tar.gz USER@newMaster:/tmp

On the [newMaster] as the Opsview user:

cd /opt/opsview/netaudit/var/repository/
rm -fr rancid
tar -xvf /tmp/netaudit.tar.gz
cd /opt/opsview/netaudit/var
rm -fr svn
svn checkout file:///opt/opsview/netaudit/var/repository/rancid svn

Test by looking at the history of the NetAudit hosts, and verify that a change made on a router is picked up.

Reporting Module

On [newMaster] as root user, stop Reporting Module:

/opt/opsview/watchdog/bin/opsview-monit stop opsview-reportingmodule

On [oldMaster], take a backup of your jasperserver database and transfer to [newMaster] server:

mysqldump -u root -p`grep opsview_database_root_password /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'` --default-character-set=utf8mb4 --add-drop-database --extended-insert --opt --databases jasperserver | sed 's/character_set_client = utf8 /character_set_client = utf8mb4 /' | gzip -c > /tmp/reporting.sql.gz
scp /tmp/reporting.sql.gz USER@newMaster:/tmp

On [newMaster], restore the database:

echo "drop database jasperserver" | mysql -u root -p`grep opsview_database_root_password /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`
( echo "SET FOREIGN_KEY_CHECKS=0;"; zcat /tmp/reporting.sql.gz ) | mysql -u root -p`grep opsview_database_root_password /opt/opsview/deploy/etc/user_secrets.yml |awk '{print $2}'`

On [newMaster] as root user, run the upgrade and start the Reporting Module:

/opt/opsview/jasper/installer/postinstall_root
/opt/opsview/watchdog/bin/opsview-monit start opsview-reportingmodule

In the Reporting Module UI, you will need to reconfigure the ODW datasource connection so that it points to the new database server.

Test with a few reports.

Network Analyzer

For the master, on [oldMaster] as root user, run:

cd /opt/opsview/flowcollector/var/data/
tar -cvf /tmp/netflow.tar.gz .
scp /tmp/netflow.tar.gz USER@newMaster:/tmp

On [newMaster] as root user, run:

cd /opt/opsview/flowcollector/var/data/
tar -xvf /tmp/netflow.tar.gz
chown -R opsview.opsview .

Network devices will need to be reconfigured to send their Flow data to the new master and/or collectors.

Service Desk Connector

  1. Copy the appropriate service desk connector yml file from the [oldMaster] to the [newMaster].

    scp /opt/opsview/servicedeskconnector/etc/config.d/*.yml USER@newMaster:/opt/opsview/servicedeskconnector/etc/config.d/
    
  2. Restart the Service Desk Connector on the [newMaster].

    /opt/opsview/watchdog/bin/opsview-monit restart opsview-servicedeskconnector
    

SNMP Trap MIBs

If you have any specific MIBs for translating incoming SNMP Traps, these need to exist on the new master.

On [oldMaster] as the opsview user, copy over the MIB files to the [newMaster]:

# copy over any MIBs and subdirectories (excluding symlinks)
cd /opt/opsview/snmptraps/var/load/
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/custom-mibs.tar.gz --gzip
scp /tmp/custom-mibs.tar.gz USER@newMaster:/tmp

On the [newMaster], unpack the MIBs:

cd /opt/opsview/snmptraps/var/load/
tar -xvf /tmp/custom-mibs.tar.gz
# now become the root user and run the following command
/opt/opsview/watchdog/bin/opsview-monit restart opsview-snmptrapscollector

Test by sending a trap to the master from a host that is in its cluster, and check that it arrives as a result for the host in the Navigator screen. You can also add the “SNMP Trap - Alert on any trap” service check to the host if it does not have any trap handlers. With the service check added to the host, you can use SNMP Tracing to capture and read any trap that is being sent from that host.
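
For example, you can send a test trap from a host in the cluster using Net-SNMP's snmptrap tool (the community string "public" is a placeholder and the OID shown is the standard coldStart notification; adjust both to your environment):

snmptrap -v 2c -c public <newMaster> '' 1.3.6.1.6.3.1.1.5.1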

SNMP Polling MIBs

If you have any specific MIBs for translating OIDs for check_snmp plugin executions, these need to exist in the /usr/share/snmp/mibs/ or /usr/share/mibs/ location for the orchestrator to use on the newMaster. All OIDs specified in Opsview in the MIB::identifier form are translated during an Opsview reload into their numeric form using the standard MIBs in /usr/share/snmp/mibs and /usr/share/mibs. You should ensure that all your MIBs are transferred from the old folders to the newMaster.
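
You can check that a MIB name resolves to its numeric OID on the newMaster using Net-SNMP's snmptranslate, for example (SNMPv2-MIB ships with the standard MIB packages):

snmptranslate -On SNMPv2-MIB::sysDescr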

On [oldMaster] as the root user:

# copy over any /usr/share MIBs and subdirectories (excluding symlinks)
cd /usr/share/snmp/mibs     #[DEBIAN,UBUNTU]
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/share-snmp-mibs.tar.gz --gzip
scp /tmp/share-snmp-mibs.tar.gz USER@newMaster:/tmp
cd /usr/share/mibs     #[RHEL,OL]
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/share-mibs.tar.gz --gzip
scp /tmp/share-mibs.tar.gz USER@newMaster:/tmp

# copy over any custom MIBs and subdirectories (excluding symlinks)
cd /opt/opsview/snmptraps/var/load/
find . -maxdepth 1 -mindepth 1 -not -type l -print0 | tar --null --files-from - -cvf /tmp/custom-mibs.tar.gz --gzip
scp /tmp/custom-mibs.tar.gz USER@newMaster:/tmp

On [newMaster] as the root user:

# install mib package for Debian/Ubuntu
apt-get install snmp-mibs-downloader

# Note: at this point you should also install any other proprietary MIB packages
# necessary for translating MIBs used for SNMP Polling in your system, e.g.:
#   apt-get install {{your-MIB-Packages}}    #[DEBIAN,UBUNTU]
#   yum install {{your-MIB-Packages}}        #[RHEL,OL]

# unpack and copy the extra MIBs
cd /usr/share/snmp/mibs     #[DEBIAN,UBUNTU]
tar -xvf /tmp/share-snmp-mibs.tar.gz
mkdir opsview && cd opsview && tar -xvf /tmp/custom-mibs.tar.gz
cd /usr/share/mibs     #[RHEL,OL]
tar -xvf /tmp/share-mibs.tar.gz
mkdir opsview && cd opsview && tar -xvf /tmp/custom-mibs.tar.gz
["Opsview On-premises"] ["User Guide", "Technical Reference"]

Was this topic helpful?