VMware Communities : Discussion List - vSphere™ Storage

Maximum LUN size. Maximum VMFS volume size.


Hi everyone!

 

Do you have an example of a LUN bigger than 16TB connected to your vSphere? Maybe 32TB? Do you have a VMFS 5 or 6 volume bigger than 16TB?

 

According to the information I have found, the VMFS volume limit is 62TB. Unfortunately, I cannot find confirmation that a storage system can present a LUN bigger than 16TB to vSphere, which I would need in order to create such a big VMFS volume.

There is a known 16TB LUN size limitation from NetApp. But what about other vendors?
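
For what it's worth, once an array does present a large LUN, this is roughly how I would verify the size the host sees and lay a single VMFS-6 volume on it. A minimal sketch only - naa.xxxxxxxx and BigDatastore are placeholders, <lastUsableSector> comes from the getUsableSectors output, and I am assuming ESXi 6.5 or later with shell access:

# Size (in MB) that ESXi reports for the presented device
esxcli storage core device list -d naa.xxxxxxxx | grep -i "Size"

# Lay down a GPT label, one VMFS partition spanning the device, then format it VMFS-6
partedUtil mklabel /vmfs/devices/disks/naa.xxxxxxxx gpt
partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxx
partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxx gpt "1 2048 <lastUsableSector> AA31E02A400F11DB9590000C2911D1B8 0"
vmkfstools -C vmfs6 -S BigDatastore /vmfs/devices/disks/naa.xxxxxxxx:1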

 

Thanks in advance.


HP P2000 G3 SAS Multipath Configuration with VMware Hosts


Hi, we are having a weird problem with our HP SAN setup and our VMware hosts. We have our SAN connected via its two controllers (A and B) to each VMware host.

 

Controller A port 1 goes to vmware hostA

Controller B port 1 goes to vmware hostA

 

Controller A port 2 goes to vmware hostB

Controller B port 2 goes to vmware hostB

 

Controller A port 3 goes to vmware hostC

Controller B port 3 goes to vmware hostC

 

When we fail over one controller on the SAN (by rebooting the controller), the VMware host completely loses its connection to our storage instead of failing over to the path on the other controller, and we can only get it back by rebooting the VMware host, which causes downtime.

 

Our vmware configuration is shown in the attached screenshots.

 

thanks in advance and kind regards

 

Output of esxcfg-mpath --list:

 

~ # esxcfg-mpath --list

usb.vmhba32-usb.0:0-mpx.vmhba32:C0:T0:L0

   Runtime Name: vmhba32:C0:T0:L0

   Device: mpx.vmhba32:C0:T0:L0

   Device Display Name: Local USB Direct-Access (mpx.vmhba32:C0:T0:L0)

   Adapter: vmhba32 Channel: 0 Target: 0 LUN: 0

   Adapter Identifier: usb.vmhba32

   Target Identifier: usb.0:0

   Plugin: NMP

   State: active

   Transport: usb

 

sata.vmhba0-sata.0:0-mpx.vmhba0:C0:T0:L0

   Runtime Name: vmhba0:C0:T0:L0

   Device: mpx.vmhba0:C0:T0:L0

   Device Display Name: Local hp CD-ROM (mpx.vmhba0:C0:T0:L0)

   Adapter: vmhba0 Channel: 0 Target: 0 LUN: 0

   Adapter Identifier: sata.vmhba0

   Target Identifier: sata.0:0

   Plugin: NMP

   State: active

   Transport: sata

 

sas.500605b0045cce00-sas.500c0ff135bcc000-naa.600c0ff00013989d0000000000000000

   Runtime Name: vmhba2:C0:T1:L0

   Device: naa.600c0ff00013989d0000000000000000

   Device Display Name: HP Serial Attached SCSI Enclosure Svc Dev (naa.600c0ff00013989d0000000000000000)

   Adapter: vmhba2 Channel: 0 Target: 1 LUN: 0

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc000

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc000

 

sas.500605b0045cce00-sas.500c0ff135bcc000-naa.600c0ff00013989dc2164e4f01000000

   Runtime Name: vmhba2:C0:T1:L1

   Device: naa.600c0ff00013989dc2164e4f01000000

   Device Display Name: HP Serial Attached SCSI Disk (naa.600c0ff00013989dc2164e4f01000000)

   Adapter: vmhba2 Channel: 0 Target: 1 LUN: 1

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc000

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc000

 

sas.500605b0045cce00-sas.500c0ff135bcc400-naa.600c0ff0001395430000000000000000

   Runtime Name: vmhba2:C0:T0:L0

   Device: naa.600c0ff0001395430000000000000000

   Device Display Name: HP Serial Attached SCSI Enclosure Svc Dev (naa.600c0ff0001395430000000000000000)

   Adapter: vmhba2 Channel: 0 Target: 0 LUN: 0

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc400

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc400

 

sas.500605b0045cce00-sas.500c0ff135bcc400-naa.600c0ff00013989dc2164e4f01000000

   Runtime Name: vmhba2:C0:T0:L1

   Device: naa.600c0ff00013989dc2164e4f01000000

   Device Display Name: HP Serial Attached SCSI Disk (naa.600c0ff00013989dc2164e4f01000000)

   Adapter: vmhba2 Channel: 0 Target: 0 LUN: 1

   Adapter Identifier: sas.500605b0045cce00

   Target Identifier: sas.500c0ff135bcc400

   Plugin: NMP

   State: active

   Transport: sas

   Adapter Transport Details: 500605b0045cce00

   Target Transport Details: 500c0ff135bcc400

 

~ #
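
For completeness, this is what I plan to compare before and after the next controller reboot - the device ID below is the data LUN from the output above, and the esxcli commands assume ESXi 5.x or later:

# SATP and path selection policy that NMP claimed for the data LUN
esxcli storage nmp device list -d naa.600c0ff00013989dc2164e4f01000000

# All paths to that device and their states (active / standby / dead)
esxcli storage core path list -d naa.600c0ff00013989dc2164e4f01000000

# If the claimed PSP looks wrong for the array, it can be changed per device, e.g.
esxcli storage nmp device set -d naa.600c0ff00013989dc2164e4f01000000 -P VMW_PSP_MRU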

Storage recommendations for small 3 host vcenter environment


Hi, I recently took over a half-finished VMware environment and need some direction.

 

There are 3 servers currently:

host 1 - single-proc Supermicro, runs the vCSA (6.7)

host 2 - dual-proc Supermicro, runs several VMs locally (6.5)

host 3 - dual-proc Supermicro, runs several VMs locally (6.7)

 

Total storage is about 1TB across all of the VMs combined.

 

All VMs are in production and need to be up 100% of the time (DCs, Puppet, dev, Veeam, etc.)

 

Budget = $15k give or take.

 

Options:

1 - Should I upgrade the local storage and RAM on each ESXi host and call it a day?

2 - Should I purchase shared (iSCSI) storage? If so, which one? Vendor websites make it annoying to get to the meat and potatoes (prices) without going through the sales team and days of waiting.

3 - Since total storage use is low (don't ever see it going above 5 TB), should I opt for all-flash DAS?

4 - What do you guys recommend?

 

Ideally, I'd like to get to a point where all hosts are on the same ESXi version, vCenter can do vMotion and manage everything, and all VMs are running on fast, local flash storage, so that an ESXi host can fail and the VM will get migrated to a different host. (This will also help me use Update Manager and get things updated when needed.)

 

Thoughts?

 

I'm open to all good ideas.

 

Thanks.

[vCenter 6.7u3] Analysing SIOC Activity Events


Hello,

I want to find out when Storage I/O Control started to throttle IOPS on the hosts in an ESXi cluster.

To do that, I analyse the storageRM log file, and I think I found the correct events; here is an example:

2020-01-08T12:14:24Z hostfqdn.local storageRM[2100418]: Throttling anomaly VOB for naa.id: 59, 0.203814

 

Can someone please tell me what the two values printed right after the naa ID represent? At first I thought the first value is the currently set maximum queue depth (DQLEN); however, it sometimes reaches values much lower (example: 3, 0.00112019) than what is shown in the performance metrics in the vSphere Client, and sometimes the value is much higher than the maximum possible queue depth of 64 on the adapter (example: 168, 0.203217) - so maybe this is the execution throttle/queue depth? (see: http://qgt.qlogic.com/Hidden/support/Current%20Answer%20Attachments/VMware.pdf )

Does SIOC set a larger queue depth per LUN or per host than the default maximum of 64?

 

edit2:

I am trying to compare the following values:

- the ones from the storageRM log, like here: 2020-01-08T12:14:24Z hostfqdn.local storageRM[2100418]: Throttling anomaly VOB for naa.id: 59, 0.203814

- esxtop on the selected host: hostfqdn.local > disk device > check DQLEN value for disk device

- vsphere client - select Datastore - Performance - Hosts - Max Queue Depth per Host > check real time value for host hostfqdn.local

The values do not match.
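
For context, this is how I pull the throttling events and the device-side queue depth I am comparing against. A sketch only - naa.xxxxxxxx stands for the device from the log line, and I am assuming storageRM writes to /var/log/storagerm.log on this build:

# All throttling anomaly events for one device
grep "Throttling anomaly" /var/log/storagerm.log | grep naa.xxxxxxxx

# Queue-related values ESXi reports for the device (includes Device Max Queue Depth)
esxcli storage core device list -d naa.xxxxxxxx | grep -i "Queue"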

 

edit: Change topic so that the thread is not moved again

Mirroring two PCIe NVMe Drives using ESXi 6.5 on DL380 G10


I have a new DL380 G10 with two PCIe NVMe drives. I have set up a Windows 2016 VM as a test, given it equal storage from each NVMe drive, set the two disks to Dynamic, and then set up disk mirroring in Windows on all partitions on disks 0 and 1. I see two boot options when I start the VM - 1. Windows Server 2016 and 2. Windows Server 2016 - secondary plex - so I think the Windows mirroring is successful.

 

My aspiration is that if one NVMe drive fails I'll be able to easily recover and run this VM and others to be set up from the remaining good drive.

 

I've found the VMware document "Set Up Dynamic Disk Mirroring", written for SAN LUNs, a slightly different scenario than mine, and I see that it says to add a couple of advanced options pertaining to the SCSI controller: returnNoConnectDuringAPD and returnBusyOnNoConnectStatus.

 

In my case do I need to do this for the NVMe controller I added to the settings for this VM?
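
For reference, the entries that document describes for the SAN LUN case are VM advanced settings keyed to the virtual SCSI controller, roughly as below (scsi0 is just an example controller number). Whether the virtual NVMe controller honours an equivalent key is exactly what I am unsure about:

scsi0.returnNoConnectDuringAPD = "TRUE"
scsi0.returnBusyOnNoConnectStatus = "FALSE"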

 

I also browsed the datastores NVMe1 and NVMe2 (the not so colorful names I have given the two NVMe drives). The ESXi metadata files - vswp, nvram, vmx, logs, etc. - only live in the folder for the VM on NVMe1. The VM's folder on NVMe2 only has the vmdk file.

 

If NVMe1 were to fail can I recover with just the vmdk file on NVMe2?  If not, which files other than vmdk file are critical and is there a way to keep them in sync between NVMe1 and NVMe2, or would the occasional static copy do?

 

The team who will eventually use this server don't really care; they say they can easily rebuild VMs if a drive fails, and they might actually prefer to have the second NVMe drive available for more VMs.

 

But it offends my IT sensibilities not to try and set up some sort of RAID on this and be able to recover more quickly should a drive fail.

Snapshots on storage


Good day!

There is a Dell SC3020 storage array. A LUN was created on it to store virtual machines. The array took snapshots, after which the LUN filled up and a virtual machine stopped with the error: "msg.hbacommon.outofspace: /vmfs/volumes/5c4b18ea-39e14e84-39e7-f4e9d4cf4810/AD/AD-000002.vmdk. Click Cancel to terminate this session". vCenter did not allow me to increase the LUN; it had to be done through ESXi. Is it a good idea to use snapshots on the Dell SC3020 at all?
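
In case it helps anyone hitting the same out-of-space pause, this is roughly what I checked from the ESXi shell while the VM was stopped - a sketch, with nothing array-specific in it:

# Datastore capacity and free space as the host sees it
esxcli storage filesystem list
df -h

# Device backing the full datastore, needed if the LUN has to be grown from ESXi
esxcli storage vmfs extent list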

Re: Snapshots on storage


Good afternoon!

I talked with a Dell specialist, and he said that they recommend taking snapshots on the SC3020, since snapshots can speed up storage performance.
1. I wanted to clarify: if I create snapshots for each LUN as described above and keep them for only 24 hours, is that right? (2 LUNs are for storing backups, 2 LUNs for storing non-critical virtual machines)
2. I have 4 LUNs on the Dell array, all of them formatted with VMFS. At the moment 3 LUNs are in the red zone. That is logical, since the VMDKs on them were created as Thick Provision Lazy Zeroed.
(Example: a 3TB LUN with one 2.9TB Thick Provision Lazy Zeroed disk.) But the Thick Provision Lazy Zeroed disk itself is not fully used inside the guest. What should I do, or is this normal practice?

DR Orchestration for vVols


I have set up virtual volumes for array-based replication and want to orchestrate failover (planned and unplanned). I find references only to PowerCLI cmdlets (for example, Start-SpbmReplicationFailover).

Are there any programmable interfaces other than the PowerCLI cmdlets to orchestrate DR failover?


Space allocation on NetApp LUN


Hello,

 

I am wondering if anyone else is experiencing this issue.

 

Short facts about our system:

- NetApp AFF220 IP-Metrocluster (ONTAP 9.6 P3)

- VMware ESXi 6.7 U3

- Thin provisioned LUNs with space allocation disabled

 

Problem is:

- Space reclaim is not working properly

 

How the issue gets visible:

- When I use Storage vMotion to migrate a machine with approx. 50% unique data from A to B, no space is freed on the source volume

 

How I can work around this problem:

- Enable space allocation on this LUN, then suddenly the space will get reclaimed

 

Why this is weird:

- We never created our LUNs other than as recommended by NetApp. Plus, we need space allocation as a protection mechanism in case the LUN runs out of space.
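
For comparison, this is what I look at on the ESXi side when reclaim does not behave - a sketch assuming ESXi 6.5+ and a VMFS-6 datastore; Datastore01 is a placeholder name:

# Automatic space reclamation (UNMAP) settings for the datastore
esxcli storage vmfs reclaim config get -l Datastore01

# Manual UNMAP pass over free VMFS blocks
esxcli storage vmfs unmap -l Datastore01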

 

Is anyone else experiencing this problem?

 

Best regards and thanks

Alex

A general runtime error when configuring Dell SC series live volume on vcenter appliance 6.7


I have an environment that consists of 3 hosts and 1 SC5020F storage center. The DR site is also similarly kitted out. When running the live volume configuration wizard, everything is fine, I can see my SC5020F and all its configured volumes.

The problem I am having is that I end up with "A general runtime error occurred" in the Recent Tasks tab. I am stumped as to what the issue could be.

 

 

That's the screenshot of the process failure. I hope it's visible enough.

 

Any help would be highly appreciated.

Why QAVG is so high ?


In vCenter, the datastore latency is very high. In esxtop, DAVG is good, but KAVG and QAVG are very high; yet at the HBA level, QAVG is low. What is the problem? The storage is an EMC VPLEX; on the storage side, the IOPS of the datastore is less than one hundred and the latency is less than 1 ms.
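
High KAVG/QAVG with a low DAVG usually points at queuing inside the host rather than the array, so I would first compare DQLEN against what the device is actually allowed - a sketch, with naa.xxxxxxxx as a placeholder for the datastore's device:

# Device Max Queue Depth and the limit applied when multiple VMs compete ("No of outstanding IOs with competing worlds")
esxcli storage core device list -d naa.xxxxxxxx | grep -i -E "Queue|Outstanding"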

(Attached: 1.png, 2.png, 3.png, 4.png, 5.png)

Problem with recreating a missing descriptor file


Hi,

I am trying to recreate a .vmdk descriptor file from the flat.vmdk as described in the VMware Knowledge Base, but I can't get into the virtual machine's directory:

 

/vmfs/volumes/5cb5fcec-19f64160-4478-14dae93d862e] cd ******

-sh: cd: can't cd to *******

 

Plz help.
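
If the directory name contains spaces or unusual characters, the shell needs it quoted; once inside, the KB procedure amounts to creating a throw-away disk of the same size and keeping only its descriptor. A sketch - VM1 and the byte count are placeholders, and the size passed to vmkfstools must match the flat file's size in bytes exactly:

cd "/vmfs/volumes/5cb5fcec-19f64160-4478-14dae93d862e/VM1"

# Size of the existing flat file in bytes
ls -l VM1-flat.vmdk

# Create a temporary disk of exactly that size; this produces temp.vmdk and temp-flat.vmdk
vmkfstools -c 107374182400 -d thin temp.vmdk

# Keep only the descriptor and point it at the original flat file
rm temp-flat.vmdk
mv temp.vmdk VM1.vmdk
# Edit VM1.vmdk (e.g. with vi) so the extent line references VM1-flat.vmdk instead of temp-flat.vmdk;
# if the original disk was not thin, the ddb.thinProvisioned line may need adjusting too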

When will we get support for 4Kn / 4096 byte sector FC storage targets?


4Kn device targets have been shipping for years, and the lack of support has an increasing impact. There is still only support for locally attached 4Kn SAS/SATA spinning disks. When will we get support for 4K/4Kn targets over other interfaces, e.g. FC, iSCSI, NVMe-oF, iSER, etc.?

 

ref: vSphere 7.0 Storage Guide still shows the same limitations that we had around ~6.4 IIRC...

When you use 4Kn devices, the following considerations apply:

  • ESXi supports only local 4Kn SAS and SATA HDDs
  • ESXi does not support 4Kn SSD and NVMe devices, or 4Kn devices as RDMs

...

Thanks

The need to run a chkdsk equivalent on an attached datastore?


My storage admin just told me that they are seeing possible corruption (maybe a false positive) on an MDisk in a V7000 SAN.
As they can't see inside a LUN they have presented to my hosts, the SAN admin is suggesting I run a chkdsk equivalent on my LUN/datastore.

 

I have never come across this before, and a bunch of googling didn't make me any smarter on the matter.

 

Now, I am not seeing any corruption in my VMs running on that datastore, and obviously they would not see anything, as they are abstracted from the storage.

 

Would this be a chkdsk from the console of the ESXi host, or perhaps something I can do via PowerCLI? Are there any service interruptions to expect, or should I perhaps vacate the LUN/datastore first?

I spoke with my SAN admin and he mentioned he has seen false positives of corruption on this array before.
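
One candidate I have since come across is VOMA (vSphere On-disk Metadata Analyzer), run from the ESXi shell against the device and partition backing the datastore - is this the right tool here? A sketch with a placeholder device ID; as far as I understand, check mode is read-only, but the guidance seems to be to power off or move the VMs on the datastore first:

# Find the device:partition backing the datastore
esxcli storage vmfs extent list

# Metadata check (does not modify the volume)
voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxx:1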

Win 2012r2 physical compatibility RDM free space DELL SCv2020


Hi,

We have a Windows 2012 R2 VM running on ESXi 6.0. It has 4 RDM disks attached in physical compatibility mode. These LUNs belong to a Dell SCv2020 Compellent.

Each LUN/volume is configured with ~6TB in the Compellent. In the VM the disks are not fully used; there is ~1TB free in each one. However, I don't see that free space in the Compellent.

Is this behaviour correct? Do I have to run some command in the VM so the storage can reclaim the free space?

Thanks.


Unable to create virtual machines in new datastore--frustrated beyond belief! :/


Hey all,

 

I am about to tear out the rest of my hair. I am hoping someone here has seen this behavior and can point me to a solution. I have simply tried everything I can think of, and nothing is making sense.

 

Summary: I have a Samsung 840 SSD, good for about 700GB, installed along with other SSDs in a box running ESXi 6.7.

When I try to create a new VMFS6 datastore on this SSD, the process goes without issue. I have tried this under regular ESXi 6.7 UI, under ssh using command-line tools, and under vCenter 6.7 -- regardless, creating the datastore works just fine, as expected.

However, creating a new VM using that DS, or migrating an existing VM to that new DS, fails EVERY TIME. The errors vary; they never seem to be the same, but it is always some combination of:

 

  • Unable to load configuration file '/vmfs/volumes/5ea9ef7c-23eb5578-3738-000af7a1d9e1/VM1/VM1.vmx'
  • An error occurred while creating a temporary dictionary file: Error.
  • Failed - The file is too large

 

No matter what though, the VM fails to be created/migrated. Google and other searches have yielded no help for this problem.

 

Of note: I can browse the new datastore, and upload files. I can create folders. I can ssh to the host and create files. I can create a text file using the text editor vi and read the file using cat. I can delete the files. In other words, the datastore seems to be perfectly happy. It just won't take virtual machines!

 

When I have tried creating the DS from the command line, I used partedUtil to create the partition (starting at sector 2048, as recommended for VMFS6) and then vmkfstools to create the VMFS store. Currently, the SSD has a GPT partition table. I have tried taking the SSD to a Linux machine, clearing out all partitions there, and trying again under ESXi. Same result: the datastore is created but VMs can't be written to it. I have also tried leaving a little space at the front of the partition and some at the end, because when examining the other SSDs' partitions they seemed to have some free space before and after, so I figured I would try that. No dice.

 

I truly am baffled. I don't know what could possibly be wrong with this datastore when it comes to creating VMs on it or migrating VMs to it, while it is totally fine for writing files and directories directly or through the browsing UI.
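
Two more things I would capture from the ssh session in case someone spots something - a sketch, with naa.xxxxxxxx standing for the Samsung 840's device ID, and assuming this build has the capacity namespace:

# Logical/physical sector size and format type (512n / 512e) per device
esxcli storage core device capacity list

# Partition layout as ESXi sees it on the SSD
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx

# Metadata check of the new, still empty datastore
voma -m vmfs -f check -d /vmfs/devices/disks/naa.xxxxxxxx:1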

 

Here is a quick movie I made which shows the failed process, and you can also see I am displaying the info for the device in ESXi under ssh.

I am also attaching an image with the same information being shown in the ssh console.

 

If you would like any more details, I can provide them. If anyone can help, I will be forever grateful.

 

# polo

(Attached: Device Details per ESXi.png)

How Should VMware do Kubernetes Backup for Project Pacific?


Hello All!

 

I'm the product manager responsible for the data protection features of vSphere and I'm here to ask a favor.  VMware has a robust set of solutions for protecting VMs and we want to have a similar set of offerings for containers and their data.  Would you please spend a few minutes to help me understand how you protect containerized workloads today and how you want to do so in the future?

 

The survey: https://vmwarek8sbackup.surveyanalytics.com/

 

If you have any questions, or would prefer to just set up a few minutes to help me understand your needs, please reach out: rhammond@vmware.com

 

Stay safe!

Unable to remove datastore


Hi,

 

When I try to remove a datastore from vsphere I get the following error:

 

Call "HostDatastoreSystem.RemoveDatastore" for object "datastoreSystem-15" on vCenter Server "xxxxxx" failed.

 

The datastore is empty and we want to unpresent it from the hosts. Is there a way to remove a datastore via the CLI? If I unpresented it from the hosts via the SAN, would this cause any issues?
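
From what I can tell, the CLI route would be roughly this (a sketch; DatastoreName and naa.xxxxxxxx are placeholders), though I would like confirmation that the order - unmount, detach, then unpresent - is right, since pulling a LUN that is still mounted can leave hosts in an APD/PDL state:

# Unmount the (empty) datastore on each host
esxcli storage filesystem unmount -l DatastoreName

# Detach the backing device so the host stops probing it
esxcli storage core device set --state=off -d naa.xxxxxxxx

# After the LUN has been unpresented on the SAN, rescan to clean up
esxcli storage core adapter rescan --all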

iSCSI vs NFS


Hi

 

VMware has not released a new version of this paper since 2012 - https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/storage_protocol_comparison-white-paper.pdf

 

I went through some old posts and got the sense from them all that NFS is better, for the reasons below:

 

1. Easy Setup

2. Easy to expand

3. UNMAP is an advantage on iSCSI

4. VMFS is quite fragile if you use thin-provisioned VMDKs. A single power failure can render a VMFS volume unrecoverable.

5. NFS datastores immediately show the benefits of storage efficiency (deduplication, compression, thin provisioning) from both the NetApp and vSphere perspectives

6. Netapp specific : The NetApp NFS Plug-In for VMware is a plug-in for ESXi hosts that allows them to use VAAI features with NFS datastores on ONTAP

7. Netapp specific : NFS has autogrow

8. When using NFS datastores, space is reclaimed immediately when a VM is deleted

9. Performance is almost identical

 

Please point out anything I have missed and share your comments.

 

 

 

Thanks

Not able to increase a LUN


HI

 

I have this strange situation and I hope someone can help.

 

I have expanded a LUN and rescanned storage. The adapter picks up the new size, but when I try to increase the datastore, the list of devices to expand comes up empty.

 

 

I don't know where to start looking to troubleshoot this issue.
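
If it comes to it, I suppose I could grow the partition and the VMFS from the shell instead of the wizard; a rough sketch with placeholders for the device and the new end sector (the start sector must stay whatever getptbl reports for the existing partition - 2048 below is only an example):

# Confirm the host really sees the new device size (in MB)
esxcli storage core device list -d naa.xxxxxxxx | grep -i "Size"

# Current partition table and the last sector the new size allows
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx
partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxx

# Extend partition 1 to the new end sector, then grow the VMFS into it
partedUtil resize /vmfs/devices/disks/naa.xxxxxxxx 1 2048 <newEndSector>
vmkfstools --growfs /vmfs/devices/disks/naa.xxxxxxxx:1 /vmfs/devices/disks/naa.xxxxxxxx:1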

 

Thanks a bunch
