
SRM 5.0 installer – “database version is not supported”

Got this vague message during the SRM 5.0 install: “The provided database version is not supported. Please enter a supported database.” At first I thought it referred to the version of the database server (SQL Server), which was supported. Digging into the installer log file (example: C:\Users\\AppData\Local\Temp\3), I found the following lines:

VMware: Srm::Installation::Database::CheckDsn: INFORMATION: Validating DB type: SQL Server
VMware: Srm::Installation::Database::CheckDsn: INFORMATION: Successfully connected to database.
VMware: Srm::Installation::Database::CheckDbPopulatedAndNeedsUpgrade: INFORMATION: Database already contains product tables.
VMware: Srm::Installation::Database::CheckDbPopulatedAndNeedsUpgrade: INFORMATION: Database version: ‘4.0.0’
VMware: Srm::Installation::Utility::GetMsgFromErrorTable: INFORMATION: Error message is The provided database version is not supported. Please enter a supported database.
VMware: Srm::Installation::Database::CheckDbPopulatedAndNeedsUpgrade: ERROR: Unsupported database instance.

Caught red-handed: I was trying to “upgrade” an SRM 4.0.0 installation to 5.0, which is not supported. I didn’t need the existing configuration, as it was easy to recreate, so I dropped the tables and was able to continue and finish the installation.
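If you hit the same message, the log is the quickest way to confirm what the installer actually objected to. Here’s a minimal PowerShell sketch for pulling the relevant lines out; the path and file pattern are assumptions, so point them at wherever the installer wrote its log (a numbered subfolder under %TEMP% in my case):

# Search the installer logs for errors and the detected DB version.
# Adjust the path/pattern to match your installer's log location.
Get-ChildItem "$env:TEMP" -Recurse -Include *.log |
    Select-String -Pattern 'ERROR|Database version'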

Thin vs Thick

Imagine a vSphere 4.0 environment running short on space, with a dozen or so small datastores and most VMs using thin-provisioned disks. A decision is made to consolidate to fewer, larger datastores and convert the VMs to thick provisioning. This makes it easier for a distributed administration team to notice when a datastore is low on space and avoid placing new VMs there, and it prevents the catastrophe of an over-subscribed datastore overflowing when its thin-provisioned VMs eventually fill up.

Before doing this, we need an understanding of the different vmdk storage formats. Luckily, EMC guru Chad Sakac outlines these disk types in his article Thin on Thin? Where should you do Thin Provisioning – vSphere 4.0 or Array-Level?.

  • Thin: A VM’s thin-provisioned vmdk consumes only as much space on the datastore as the guest OS has actually written to its volumes (the hypervisor zeroes out the next available block just before committing the VM’s write I/O). This format allows provisioning, up front, the amount of storage the VM is expected to need over its lifetime, without having to expand it later (though it can be expanded if necessary). It also allows over-subscribing a datastore by creating multiple VMs whose thin-provisioned disks add up to more than the datastore can actually hold. This is fine while there is plenty of free space, but once used space creeps up toward the total size of the datastore, extra caution is needed to keep the datastore from filling up entirely and causing a major headache. Simple things such as VM swap files, snapshots, or other normal usage can easily push it over the edge.
  • ZeroedThick: A VM’s zeroedthick-provisioned vmdk appears to the datastore filesystem to be as large as its provisioned size and actively using all of that space. However, from the perspective of the storage array hosting the datastore LUN, only the data actually written by the guest OS is committed to the vmdk. The array sees this smaller usage, not the full size of the vmdk that the datastore reports. This matters when using array-based thin provisioning: LUNs holding zeroedthick-provisioned VMs do not consume any more array space than thin-provisioned VMs, so thin provisioning on the array remains useful. When the guest writes a new block, the hypervisor still zeroes it on first write (as with thin), but no new filesystem allocation is required since the vmdk already lays claim to all the blocks it needs. Space isn’t wasted on the array, and the datastore cannot overflow or be over-provisioned.
  • EagerZeroedThick: A VM’s eagerzeroedthick-provisioned vmdk actually does consume the entire provisioned amount, both on the datastore and on the underlying LUN. Any space the guest OS hasn’t written to is filled with blocks of zeros. Consequently, these vmdks take a while to create: the hypervisor inflates the vmdk with zeros, all of which must be written to the storage array, which in turn has to commit space for not only the guest OS data but also the zeroed blocks that haven’t yet been overwritten by guest data. However, if the array supports a feature such as zero-based page reclaim in a thin-provisioned pool, it can scan for these zeroed blocks and reclaim them as free space, since it recognizes that no data is actually stored there. Note that VMware Fault Tolerance (FT) requires eagerzeroedthick-provisioned disks, as do VMs participating in Microsoft Clustering (MSCS); this guarantees the space is there to support the availability requirements inherent in FT and MSCS VMs.

The concern is that converting from thin- to thick-provisioned VMs during the Storage vMotion to the new, larger datastores would consume a lot of additional space on the datastores and the disk array. The important thing to know is that while the datastore appears to use more space once all VMs are thick-provisioned, that space is not actually consumed on the disk array, because Storage vMotion converts the disks to zeroedthick, not eagerzeroedthick.
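To see where you stand before and after the migration, PowerCLI can report each disk’s current format and perform the conversion as part of the Storage vMotion. A minimal sketch, assuming placeholder datastore names “OldDatastore” and “NewDatastore”:

# Report the current format of every disk on the source datastore
Get-Datastore OldDatastore | Get-VM | Get-HardDisk |
    Select-Object Parent, Name, StorageFormat, CapacityGB

# Storage vMotion the VMs to the new datastore, converting disks to thick.
# Note: PowerCLI's "Thick" is zeroedthick; EagerZeroedThick is a separate value.
Get-Datastore OldDatastore | Get-VM |
    Move-VM -Datastore (Get-Datastore NewDatastore) -DiskStorageFormat Thick

Running the first report again after the move confirms the disks now show as Thick.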

Build numbers

VMware titles its interim releases “Update 1”, “Update 2”, and so on. But in the vSphere Client, the version is displayed with a build number, while the VMware HCL lists versions by their “Update” number.

Trying to make sense of this can be difficult, but luckily Virten.Net (@viFlorian) has an ESXi Release and Build Number History page which helps make sense of it all.
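To pull the build numbers you actually have running, a quick PowerCLI check works (a minimal sketch; run it while connected to vCenter):

# List each host's version and build; match the Build column against
# Virten.Net's table to identify the corresponding "Update" release
Get-VMHost | Select-Object Name, Version, Build | Sort-Object Name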

On a related note, if you need to decipher a SQL Server build number and relate it to version and service pack, check out SQLServerBuilds.
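The SQL Server build can be pulled straight from the instance and matched against that site. A minimal sketch, assuming the SQL Server PowerShell module (for Invoke-Sqlcmd) is loaded and using a placeholder instance name:

# ProductVersion is the build number; ProductLevel is the service pack
Invoke-Sqlcmd -ServerInstance "SQLSERVER\INSTANCE" -Query "SELECT SERVERPROPERTY('ProductVersion') AS Build, SERVERPROPERTY('ProductLevel') AS SP"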

Reporting on vCenter alarm emails

The PowerCLI cmdlet Get-VIEvent can be used to report on any events that occur in vCenter, for any object.

To use it to see which alarms have generated email alerts for all VMs in the cluster, the following one-liner gets the job done:

Get-Cluster ClusterName | Get-VM | Get-VIEvent -Start ([DateTime]"2012-07-10 00:00") -Finish ([DateTime]"2012-07-11 23:59") | Where { $_.GetType().Name -eq "AlarmEmailCompletedEvent"} | Select @{N="Name"; E={$_.Vm.Name}}, CreatedTime, Key, FullFormattedMessage | ft -autosize

Simply put, for all VMs in cluster “ClusterName”, get their events from 7/10/2012 to 7/11/2012 that have an event type of AlarmEmailCompletedEvent, and output the VM’s name, the date/time of the entry, the event key number for future reference, and the text of the message. Output it as a formatted table, autosized to accommodate the long strings of the message.

This can take a while to run, especially if there are a large number of VMs in the cluster and they have a large number of events to sift through. By including start/finish times, the query returns faster, as there’s a shorter window to sift through for AlarmEmailCompletedEvent entries.

Some notes:

  • Dynamic time-frame: the -Start and -Finish parameters can be dynamic so that, for example, the search window begins 12 hours ago and ends 1 hour ago: -Start ((Get-Date).AddHours(-12)) -Finish ((Get-Date).AddHours(-1)) (thanks LucD).
  • Type: the event type can be determined by piping the output of Get-VIEvent to Get-Member. Above the list of methods and properties is a TypeName, which can be used with the GetType().Name property (do not include the “VMware.Vim” portion of the TypeName).
  • Instead of piping to ‘ft’ (Format-Table), the output can be piped to Export-CSV (be sure to use the -NoTypeInformation parameter); see the sketch after this list.
  • As always, bits and pieces were assembled from LucD’s great work.
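For the Export-CSV variant mentioned above, the same pipeline simply ends differently. A minimal sketch; the output path is a placeholder:

Get-Cluster ClusterName | Get-VM |
    Get-VIEvent -Start ([DateTime]"2012-07-10 00:00") -Finish ([DateTime]"2012-07-11 23:59") |
    Where-Object { $_.GetType().Name -eq "AlarmEmailCompletedEvent" } |
    Select-Object @{N="Name"; E={$_.Vm.Name}}, CreatedTime, Key, FullFormattedMessage |
    Export-Csv C:\temp\alarm-emails.csv -NoTypeInformation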