Hi,
I am looking for some input regarding a server migration question for our primary poller.
Our current environment is as follows...
Primary Poller: W2008 physical machine with NPM 11.5.3, NCM 7.4.1, SAM 6.2.3 & VNQM 4.2.3 installed
Secondary Poller: W2012 VM with NPM 11.5.3, NCM 7.4.1, SAM 6.2.3 & VNQM 4.2.3 installed
Database Server: W2008 VM
New Primary Poller VM: W2012 VM
**************************************************
What we are planning to do is migrate our primary poller to a VM. The cluster hosting the new VM does not have our current primary poller's address available on it, so as a result we need to change the primary poller's address.
Changing the address of our primary poller necessitates a lot of changes, both on monitored nodes and on the firewalls between the poller and those nodes.
With all this change, I would expect that once we have migrated to the new VM, some nodes will be unreachable from the new address. With so many changes being made, I wouldn't expect to be able to monitor 100% of what we currently monitor when we attempt to cut over.
There are a few approaches we could consider to ensure nodes are not left unmonitored for a prolonged period...
1. Take a staged approach to the migration.
Install all the modules we currently run on the new VM under an eval licence, then run the Configuration Wizard on the VM to point the new eval instance of SolarWinds at our production database. The eval instance will then have all the polling information it needs to know which addresses to poll and by which method (some SNMP, some ICMP, etc.). We could give that 45 minutes or so, then look at our nodes-with-problems view, record which nodes we are unable to poll, and cut back over to our current primary poller address (10.31.1.1) with the production licence. We could then evaluate which nodes we couldn't poll and make any necessary config or firewall rule changes, repeating the process until we have connectivity to all currently monitored nodes (a sketch of capturing that unreachable list is below).
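To make each pass repeatable, something like the following could dump the down nodes straight out of the database after the eval instance has had its 45 minutes. This is only a sketch: it assumes the standard dbo.Nodes table, a Status value of 2 meaning "Down", and placeholder server/database names, so adjust for your actual schema.

    # Sketch: record which nodes the eval instance cannot reach by dumping
    # everything Orion currently marks as Down. Assumes the standard
    # dbo.Nodes table and Status = 2 meaning "Down"; the server and
    # database names below are placeholders.
    import csv
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=OUR-DB-SERVER;"
        "DATABASE=SolarWindsOrion;Trusted_Connection=yes"
    )
    cursor = conn.cursor()
    cursor.execute(
        "SELECT Caption, IP_Address, ObjectSubType "
        "FROM dbo.Nodes WHERE Status = 2 ORDER BY Caption"
    )

    with open("unreachable_nodes.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["Caption", "IP_Address", "PollingMethod"])
        for row in cursor.fetchall():
            writer.writerow([row.Caption, row.IP_Address, row.ObjectSubType])

    conn.close()

Comparing the CSV from each pass would show whether the firewall changes are actually closing the gap.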
2. Create a bare-bones database and point the eval instance of SolarWinds at it
This approach involves creating a bare-bones database to run side by side with our live database on the same database server. I would like to pre-populate this database with only the necessary polling information for each node currently polled by our live primary poller. What I mean is: if we could pre-populate the database with just enough information to tell the eval instance of SolarWinds which addresses to poll, we could run both instances side by side, and I could work away at eliminating connectivity problems while the live system remains untouched.
I had a look around the DB to see if there was a table we could export and then import, but I can't seem to find anything that would let me populate a new temp database with just the Orion config (customizations, IPs of nodes to poll, etc.). Is there a way to generate a temp database with everything except historical polling data, to keep the size down? We would need to keep this database small so it doesn't consume too much space on the disk, so I would set historical data retention to the bare minimum. At present our DB is 89 GB in size and we have 90 GB free on the disk where the temp DB would go. (A sketch of the minimal export I have in mind is below.)
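I haven't found a supported way to do this, but as an illustration of the kind of minimal export I mean, something like this would pull just the per-node polling essentials into a CSV that a fresh instance could be seeded from (via discovery or a script). The dbo.Nodes column names here are what I believe the schema uses, but they may differ by NPM version, so treat them as an assumption:

    # Sketch: export just enough per-node polling config to seed a temp DB
    # or a discovery job on the eval instance. The dbo.Nodes column names
    # (ObjectSubType, Community, SNMPVersion) are assumed and may vary by
    # NPM version; server/database names are placeholders.
    import csv
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={SQL Server};SERVER=OUR-DB-SERVER;"
        "DATABASE=SolarWindsOrion;Trusted_Connection=yes"
    )
    cursor = conn.cursor()
    cursor.execute(
        "SELECT Caption, IP_Address, ObjectSubType, Community, SNMPVersion "
        "FROM dbo.Nodes ORDER BY IP_Address"
    )

    with open("polling_seed.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(
            ["Caption", "IP_Address", "PollingMethod", "Community", "SNMPVersion"]
        )
        for row in cursor.fetchall():
            writer.writerow(list(row))

    conn.close()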
3. Use SQL Express and host the DB locally.
Install NPM on our new VM and let it create its SQL Express database on the same server. This is a temporary measure to have the DB on the same server as the Orion installation; I know this is not a recommended approach for SolarWinds. The plan is to run this eval instance alongside the production instance. I can then use the discovery tool on the eval instance and feed it every IP currently monitored by production SolarWinds; this way I can establish whether the eval instance has connectivity to the nodes production is currently monitoring. Once I can confirm connectivity from the eval instance to every currently monitored node, I can stop the services on production SolarWinds, run the Configuration Wizard on the eval version, and point it at the production database. When that is up and running I can delete the temp SQL Express database. If there are any problems, I can always reverse the last step and bring the old production instance back up. (A quick reachability pre-check sketch follows.)
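Before kicking off a full discovery, a cheap ICMP sweep from the new VM against the exported IP list would surface most of the firewall gaps up front. A sketch, assuming the polling_seed.csv from option 2 and the Windows ping flags (-n count, -w timeout in ms); note it only tests ICMP, so SNMP reachability still needs the discovery tool:

    # Sketch: ICMP reachability sweep from the new VM against the list of
    # production-monitored IPs (the CSV from the export above is assumed).
    # Only tests ping; SNMP access must still be verified separately.
    import csv
    import subprocess

    def is_reachable(ip, timeout_ms=1000):
        """Send one echo request (Windows ping flags); True if it answers."""
        result = subprocess.run(
            ["ping", "-n", "1", "-w", str(timeout_ms), ip],
            capture_output=True,
        )
        return result.returncode == 0

    with open("polling_seed.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    unreachable = [r["IP_Address"] for r in rows if not is_reachable(r["IP_Address"])]

    print(f"{len(unreachable)} of {len(rows)} nodes unreachable by ICMP:")
    for ip in unreachable:
        print(" ", ip)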
Thoughts?
Thanks!