This post was written by Steve Hagerty. Since I find his blog very valuable, I have decided to reblog this article. The original article can be found here.
This is a short post to explain how to configure syslog forwarding from VxRail Manager to vRealize Log Insight.
Depending on the deployment scenario, VxRail may or may not be automatically configured to forward all of its associated logs to vRealize Log Insight. For example, with VCF on VxRail this configuration is automated for all VxRail components, while in other situations it may need to be configured manually.
There are three primary VxRail components to configure:
VxRail Manager
vCenter Server
ESXi Hosts
… with the iDRAC of each VxRail node also being an option.
Configuring the VxRail vCenter Server in vRLI can also include configuring log forwarding from the associated ESXi hosts, if selected, as shown below:
This is all managed under the built-in vSphere integrations for vRLI. What remains then, if required, is to configure VxRail Manager to forward its logs (marvin.log) to vRLI.
As described in KB504644 "VxRail: How to configure a new syslog server", SSH to VxRail Manager as the mystic user and switch to the root user, before editing the /etc/rsyslog.conf file with the additional entries shown below.
Ideally you should use the Log Insight load balancer IP as the target for the <customer remote server ip> (the syslog/vRLI server IP), where 514 is the UDP port.
Update 31/03/2021: In addition to the above, /var/log/mystic/connectors-cluster.log and /var/log/mystic/connectors-esrs.log can be added to this list, simply by adding them as additional $InputFileName entries, as shown below:
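The KB shows the exact entries as screenshots; as a minimal sketch (the tag and state-file names below are my own placeholders, not taken from the KB), the additions look roughly like this:

# Watch the VxRail Manager log files with the imfile module
$ModLoad imfile
$InputFileName /var/log/mystic/marvin.log
$InputFileTag vxrail-marvin:
$InputFileStateFile state-marvin
$InputRunFileMonitor
$InputFileName /var/log/mystic/connectors-cluster.log
$InputFileTag vxrail-connectors-cluster:
$InputFileStateFile state-connectors-cluster
$InputRunFileMonitor
$InputFileName /var/log/mystic/connectors-esrs.log
$InputFileTag vxrail-connectors-esrs:
$InputFileStateFile state-connectors-esrs
$InputRunFileMonitor
# Forward everything to the vRLI server (or load balancer) over UDP port 514
*.* @<customer remote server ip>:514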
Restart the syslog service on VxRail Manager using the command service rsyslog restart (or reboot the VxRail Manager VM if required).
We can then confirm in the vRLI UI, under Administration > Hosts, that the vRLI system is receiving the forwarded logs from our VxRail Manager (vcf2mgmtvxrmgr).
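If you want a quick end-to-end test of the syslog path, independent of VxRail Manager, you can send a test message from any Linux machine with the util-linux logger tool (a sketch; the long option names may vary with your logger version):

# Send a single UDP syslog test message to the vRLI server
logger --server <customer remote server ip> --port 514 --udp "vxrail syslog forwarding test"

The test message should then show up under the sending host in vRLI.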
On the Interactive Analytics tab we can filter for the VxRail Manager hostname of vcf2mgmtvxrmgr in order to get more detail on each event received since the log forwarding was configured.
The events received from the VxRail Manager source will automatically be included in the General vRLI Dashboard, as shown below:
It is also possible to create your own custom VxRail dashboard in vRLI if required. A new (VxRail) dashboard can be created under My Dashboards, where new and existing widgets can be copied and modified as required.
For completeness, if a customer requires the iDRAC logs of the VxRail nodes to be forwarded to vRLI also, then please take a look at this post which covers the required steps, leveraging the Dell iDRAC Content Pack for vRLI, installable directly from the vRLI Content Pack Marketplace, as shown below:
Recently I was asked if it is possible to receive an email notification when a vSAN storage policy with Force Provisioning enabled is applied to a VM. In this blog post I want to show that this is possible.
Use Case – An administrator wants to apply a vSAN storage policy with Force Provisioning enabled to virtual machines because of a possible shortage of vSAN storage capacity. In my opinion, not a very good idea in a production environment!
Goal – Get an email notification when a vSAN storage policy with Force Provisioning enabled is applied to a VM. There is also the wish to make this visible in a dashboard.
Solution – With a bit of reverse engineering and vRealize Log Insight (vRLI) it’s possible to achieve this.
The first step is to create a storage policy with Force Provisioning enabled. We name this policy "FP VM Storage Policy".
Next, we apply the new storage policy "FP VM Storage Policy" to our test VM "sbpm01".
The policy is successfully applied to the VM "sbpm01".
Now we need some reverse engineering, because it's not possible to search for the name of the storage policy in vRLI. We move to the sbpm01 events in vCenter.
This is the information we need to create a new filter in vRLI Interactive Analytics.
Add the associated storage policy ID "98df0443-5244-49af-9069-ad9fdbfedb52" to the text field and add an extra filter (+ ADD FILTER). From the pull-down menu, choose "vc_event_type" contains com.vmware.pbm.profile.associate. Choose a time window; in this example I chose "Latest 24 hours of data". You can also choose the last hour or the last 5 minutes of data, depending on when you applied the policy and when you search in vRLI.
In the results above you don't see the name of the VM with the applied storage policy. In the last image above, to the right of Events, you see the Field Table section. Select Field Table and search for the row named vc_vm_name; it displays the friendly name of the VM with the newly applied storage policy.
Finally, you want an email notification and a dashboard. I am not going to explain here how to create them; this is done in vRLI the same way you normally create notifications and dashboards. Press icon (1) to create an email notification and icon (2) to create a dashboard.
If you want to receive an email notification when the VM gets another storage policy applied, you should create another filter including the following two details.
In this blog post I wanted to demonstrate that it is possible with vRLI to receive an email notification if a VM has a storage policy applied where Force Provisioning is enabled. A disadvantage is that if the VM gets a different storage policy with different settings, this email notification is no longer valid, because the notifications are based on this specific storage policy ID.
I have shown that it works, but in my view it is not a solution for a production environment.
VxRail software version 7.0.300 includes VMware ESXi 7.0 Update 3, VMware vSAN 7.0 Update 3, and VMware vCSA 7.0 Update 3a, with support for external storage and the introduction of satellite nodes.
New features
Operationalize the edge with VxRail satellite nodes: You can deploy the E660, E660F, and V670F as single VMware vSphere nodes with no VMware vSAN to address VxRail edge deployments that require a smaller footprint. You can configure satellite nodes with an optional PowerEdge RAID controller to add resiliency for local disks. The satellite nodes are managed by a new or existing standard cluster with VMware vSAN running 7.0.300.
Control satellite nodes from a central location: You can deploy a VxRail Manager VM that can control all satellite nodes from a centralized host management location in VMware vCenter. You can add, remove, and update satellite nodes from one access point using VxRail Manager.
Expanded storage option for VxRail dynamic nodes: You can deploy VxRail dynamic nodes as part of a PowerFlex 2-layer architecture, deploying a VxRail dynamic node cluster as compute-only nodes that leverage PowerFlex storage for hosting the workload VMs.
Protocol support for VxRail dynamic nodes: NVMe-FC is supported with PowerStore and PowerMax storage arrays that are attached to dynamic nodes.
VMware ESXi 7.0 Update 3, VMware vSAN 7.0 U3, and VMware vCSA 7.0 Update 3a support. The major changes for VxRail include: support for upgrading the VMware vSAN Witness Host (dedicated) in vLCM as part of the coordinated cluster remediation workflow for VMware vSAN 2-Node and Stretched Clusters.
A Stretched Cluster enhancement allows tolerating planned or unplanned downtime of a site and the witness in a stretched cluster deployment.
Sometimes you run into an issue that keeps you busy for hours, while afterwards the fix turns out to be simple. Recently I ran into such an issue.
There was a minor update to be done: a VxRail code upgrade from 7.0.x to 7.0.2xx.
The upgrade was basically like all other upgrades:
Run VxVerify
If there are findings in the results, solve them before starting the upgrade
Upload the desired VxRail target code
Start the upgrade
Done
The VxVerify results were fine: no issues detected.
Uploading the target VxRail code looked fine, but during the extraction of the upgrade bundle it failed at 50%. So I started a retry, but the extraction of the upgrade bundle failed again at 50%. At the cluster level we noticed the following error:
VXR1F4114 ALARM Upload of upgrade composite bundle unsuccessful VxRail Update ran into a problem… Error extracting upgrade bundle 7.0.2xx. Failed to upload bundle. Please refer to log for more details.
I opened a support request with Dell Support and in the meantime started to examine lcm-web.log in /var/log/mystic. I found some errors and failures, but they did not lead directly to the root cause. There were errors about upgrade bundles that couldn't be uploaded, but those events were too general. I did notice which VxRail node was mentioned last in the log before the extraction failed.
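For anyone digging through the same log, a rough first pass can look like this (the grep pattern is just an example, not the exact search I used):

# On VxRail Manager: show recent errors and failures in the LCM log
grep -iE "error|fail" /var/log/mystic/lcm-web.log | tail -n 50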
Dell Support was now also working on the case. The support engineer also suspected that this VxRail node was causing the problem.
I won’t go into too much detail, but at some point we checked the status of the “dcism-netmon-watchdog” service on that particular VxRail node.
[root@ESXi03:~] /etc/init.d/dcism-netmon-watchdog status
iSM is active (not running)
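The obvious first step is to restart the service; as a sketch (restart being the standard SysV init script action, an assumption on my part):

/etc/init.d/dcism-netmon-watchdog restart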
I had recently seen the same service status on other VxRail nodes running code 7.0.x. Restarting the service did not bring it back up, so I rebooted the VxRail node. After the reboot it can take a few minutes before the service is running again. I checked the service once more:
[root@ESXi03:~] /etc/init.d/dcism-netmon-watchdog status
iSM is active (running)
Finally, we retried the VxRail code extraction. Both the VxRail code extraction and the VxRail upgrade were successful.
If there is a hardware issue that could cause problems within a vSAN cluster, you want to know as early as possible. Once you know this, you may have time to resolve the issue before business is compromised.
Cause:
I have seen the following error several times in the results of a VxRail VxVerify check, which is performed to identify issues in a VxRail cluster before an update.
Error:
++++++++++++++++++++++
2021-10-08 15:01:00.012 esxi01.vrmware.nl vcenter-server: vSAN detected an unrecoverable medium or checksum error for component AB1234 on disk group DG5678
++++++++++++++++++++++
It could be possible that an underlying hardware device (a physical disk) is causing this error. This is why you want to be informed as early as possible if there is an error that can cause a vSAN issue in the near future. This allows you to proactively carry out repair work, without any downtime to business operations.
Resolution:
How do you find out on which physical disk the component resides? You need to identify the following information (first three bullets). The fourth bullet is the VM that can possibly be affected by the issue.
VMware Host
Diskgroup
Disk
Virtual Machine the component belongs to
Let’s start to identify the disk where the component resides:
Write down the component and diskgroup from the error
SSH to an arbitrary ESXi host in the vSAN cluster; it doesn't matter which one you choose. Type the following command: esxcli vsan debug object list --all > /tmp/objectlist.txt
Transfer /tmp/objectlist.txt to your local PC
Open objectlist.txt and search for component AB1234.
Snippet from objectlist.txt:
++++++++++++++++++++++
Configuration:
RAID_5
Component: AB1234
Component State: ACTIVE, Address Space(B): 39369834496 (36.67GB), Disk UUID: 52ec6170-5298-7f14-1069-d0d3872b742a, Disk Name: naa.PD9012:1
++++++++++++++++++++++
Almost all the info you need to identify the disk is here: VMware host, disk group, and VM. To identify the possibly affected disk, you need to switch to the vCenter GUI.
Move to Cluster > Host (esxi03.vrmware.local) > Monitor > Performance > Disks > Diskgroup (DG5678) > Whole Group (pull-down). Here you find the disk naa.PD9012.
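If you prefer to stay on the command line, the same lookup can be done roughly like this (component ID as in the example above; the -B/-A context options depend on your grep build):

# On any ESXi host in the vSAN cluster
esxcli vsan debug object list --all > /tmp/objectlist.txt
# Print the component entry plus surrounding context (object, disk UUID, disk name)
grep -B 10 -A 2 "Component: AB1234" /tmp/objectlist.txt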
Conclusion:
Now you know that component AB1234 resides on disk naa.PD9012 in diskgroup DG5678 and the component belongs to vm01.vmdk.
I would advise always contacting VMware GS for support in any production environment, or Dell Support in the case of a VxRail cluster. They will provide further support and help you fix this error.
Last year I wrote a blog post about the VMware vCLS datastore selection. That post is one of the most read articles on my website, which indicates there is a need to be able to choose the datastore on which the vCLS VMs are placed.
Today VMware announced vSphere 7.0 Update 3. In this update there is also an improvement to the vCLS datastore selection: it's now possible to choose the datastore on which the vCLS VMs should be located.
In the following video on the VMware vSphere YouTube channel, skip to the 20-minute mark to learn more about the vCLS VM datastore selection improvement.
Another improvement is that the vCLS VMs now have a unique identifier. This is useful when you have multiple clusters managed by the same vCenter.
It’s always good to see that a vendor is listening to the customers’ needs to further improve a product.
Yesterday VMware released a new security update because of a vCenter vulnerability, VMSA-2021-0020. The CVSSv3 score is 9.8. Affected vCenter versions are 6.5, 6.7, and 7.0.
You can find the complete information and response matrix at the following link.
Last night I spent several hours updating the vCenter in my lab from vCSA 7.0 Update 1d to vCSA 7.0 Update 2. The update kept going wrong. I staged the update package first; after the staging was completed, I started the installation. This resulted in the following error: Exception occurred in postInstallHook.
So I tried Resume.
This seemed to go well so far. Continue.
Continue the Installation.
Same error again! Let’s retry Resume and Cancel in the next step.
Cancel.
Now I got stuck. So I quit to get some sleep :-).
This morning I woke up to an e-mail from William Lam: he has written a new blog post about an error during the upgrade to vCenter vCSA 7.0 Update 2, "Exception occurred in install precheck phase". This is a different error than the one I experienced yesterday, but I did see it during one of my attempts.
Here is an overview of the errors during my attempts:
Exception occurred in postInstallHook: this error appears after staging the update and installing it later.
Exception occurred in install precheck phase: this error appears when staging and installing at the same time.
Now let’s try the workaround from William Lam that should result in a working vCenter vCSA 7.0 Update 2.
Create a snapshot of the vCSA
Stage the update file
SSH to the vCSA
Move to folder /etc/applmgmt/appliance/
Remove the file software_update_state.conf
Move to folder /usr/lib/applmgmt/support/scripts
Run script ./software-packages.py install --url --acceptEulas
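Put together, the SSH portion of the workaround is roughly the following shell session (paths and script name as listed in the steps above):

# Remove the stale update state, then re-run the install
cd /etc/applmgmt/appliance/
rm software_update_state.conf
cd /usr/lib/applmgmt/support/scripts
./software-packages.py install --url --acceptEulas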
The update ended with a PostgreSQL error, and vCenter was not working after the update. I rebooted the appliance one more time, without any result.
Conclusion:
vCenter vCSA 7.0 Update 2 is in my opinion not ready for deployment at this moment. I rolled back the snapshot and decided to wait for an updated version of vCenter vCSA 7.0 Update 2.
Today Duncan Epping posted this video “Introducing VMware vSAN 7.0 U2”.
Since its introduction, I have been a fan of VMware vSAN Native File Services. With the introduction of vSAN 7.0 Update 2, vSAN Native File Services is also available for stretched vSAN clusters. How cool is that!
vSphere 7.0 Update 2 is already available for download.