Planview Customer Success Center

What should I be aware of when upgrading to 19.2?

Applicable Hub Versions: All

Answer

To facilitate smoother processing, a one-time update to the operational database will be made when upgrading to Planview Hub versions 19.2.1 and later.  

When upgrading from a version earlier than 19.2.1 to 19.2.1 or later, please be aware of the following:

Before Upgrade

While we always recommend backing up the operational database, it is imperative that you back it up before upgrading to this version.

Depending on the size of your operational database, the upgrade may take anywhere from minutes to several hours. Typical processing time is 30 minutes to an hour, but we recommend planning for an hour or more to be safe. The upgrade does not require that external repositories be online.

In rare instances where a customer's configuration does not allocate enough memory for Hub's cache, the upgrade could crash and leave the database in an inconsistent state. A rule of thumb is that the upgrade will require approximately 1KB of memory for each artifact reference. You can count the rows in the Artifact Tracking table in the operational database to get an idea of how many artifact references you have, and from that number calculate how much memory the upgrade will require.

For example, 100,000 artifact reference records will require a minimum of 100MB of memory. A customer of that size is likely to have at least 10 - 20 times more memory than that configured (at least 2GB), so issues with memory should be rare.
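As a rough sketch of the arithmetic above (the 1KB-per-reference figure is the rule of thumb from this article; the function name and the use of decimal megabytes are our own):

```python
def estimated_upgrade_memory_mb(artifact_reference_count: int) -> float:
    """Estimate the memory the 19.2.1 upgrade needs for caching.

    Rule of thumb from this article: ~1KB per artifact reference.
    The reference count comes from counting rows in the Artifact
    Tracking table in your operational database.
    """
    kb_per_reference = 1  # rule of thumb, not a guarantee
    return artifact_reference_count * kb_per_reference / 1000  # KB -> MB

# 100,000 artifact references -> a minimum of ~100MB, matching the example above
print(estimated_upgrade_memory_mb(100_000))  # 100.0
```

Remember that this is only the upgrade's working set; Hub itself needs memory on top of this, so compare the result against your total configured heap, not against free memory alone.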

Please review our general Upgrade instructions here.

During Upgrade

During the upgrade, the Hub UI will be unavailable. To monitor the progress of the upgrade, we recommend inspecting the log files.

The log files will show when each table is being migrated, along with the number of records in that table, and will record a checkpoint every 1,000 entries processed. While this does not give an exact completion time, it should give some insight into how long the process is taking and how much work remains.
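Because the logs checkpoint every 1,000 entries, you can roughly estimate a table's progress yourself. A minimal sketch (the helper name is hypothetical; the 1,000-entry interval and per-table record counts are from the log output described above):

```python
def migration_progress(checkpoints_seen: int, table_record_count: int,
                       entries_per_checkpoint: int = 1_000) -> float:
    """Rough completion fraction for a single table, assuming a log
    checkpoint is written every 1,000 entries processed."""
    processed = min(checkpoints_seen * entries_per_checkpoint, table_record_count)
    return processed / table_record_count

# 45 checkpoints seen against a 100,000-record table -> roughly 45% complete
print(f"{migration_progress(45, 100_000):.0%}")  # 45%
```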

How do I know my upgrade is complete?

The ARTIFACT_STATE table is the last table to be processed. If the logs indicate that this table has successfully been processed and no unexpected errors are in the logs, then the upgrade has completed successfully.

After the ARTIFACT_STATE table is DONE, the startup phase will continue as usual: migration of artifact handles will be performed as required to account for any changes in connector schemas, after which the log should indicate that the application is listening on the configured port.

If you do not see this, you may need to restore your backup database and rerun the Hub upgrade. Please reach out to customer care with any questions.

Error Handling

While it is extremely unlikely that errors will occur, we have outlined some potential errors and their resolution steps below.

Error Case: ARTIFACT_ASSOCIATION_CORRUPT table

Hub maintains two matching records for each artifact association — one for the source artifact and one for the target. It is possible that in the past, one of those matching records may have been deleted while the other remained. This data corruption could have remained undetected for years, but would be exposed during this upgrade. In that case, the offending record(s) will be copied to a new table and the original deleted. Since this data corruption could lead to duplicate records created upon synchronization, Hub will present an error that will stop the affected integration(s) from processing and urge the customer to contact Support to help resolve the issue.

If you encounter this error, be aware that it was not caused by the upgrade; the upgrade is simply exposing a pre-existing data issue.

Error Case: Out of memory

In rare cases, Hub may run out of memory during the upgrade. If this were to happen, the database would have to be restored from backup and the upgrade attempted again with more memory allocated to Hub.

A rule of thumb is to allocate at least 100MB of memory for every 100,000 records in the largest table, plus another 500MB or so for Hub itself. A customer with 500,000 records would then need at least 1GB of heap space (500MB for the records plus 500MB for Hub). Most customers will have much more configured by default.
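That rule of thumb can be expressed as a quick calculation (a sketch only; the function name is ours, and the figures are the approximations given above):

```python
def minimum_heap_mb(largest_table_rows: int, base_mb: int = 500) -> int:
    """Rule-of-thumb heap estimate: ~100MB per 100,000 records in the
    largest table, plus ~500MB or so for Hub itself."""
    record_mb = largest_table_rows / 100_000 * 100
    return int(record_mb + base_mb)

# 500,000 records -> 500MB + 500MB = 1,000MB (~1GB) minimum heap
print(minimum_heap_mb(500_000))  # 1000
```

Compare the result against the heap actually allocated to Hub; if the configured heap is below this estimate, increase it before attempting the upgrade.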