Channel: VMware Communities : Discussion List - vSphere™ Storage

Extending an existing local datastore on a standalone ESXi host


Hi,

 

Looking for some advice on adding disks / extending a single datastore.

 

We have an HP DL380 G6 server running a standalone instance of ESXi 5.1 with local storage only: RAID 1 (2 x 146 GB) for the ESXi OS install, and RAID 5 (4 x 300 GB) presented as the datastore where the VMs reside. There are two more empty bays (bay 7 and bay 8) in the server, and we would like to populate them with another 2 x 300 GB disks, add them to the RAID 5 set, and expand the datastore. Has anyone any thoughts or suggestions on how to approach this? The VMs will be backed up as a precaution.
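If the Smart Array controller lets you expand the existing RAID 5 logical drive online after the two disks are added, the rest happens on the ESXi side: rescan, resize the VMFS partition, and grow the file system. A rough, hedged sketch of the CLI route follows (the device name, partition number and sector values are placeholders; the "Increase..." wizard in the vSphere Client does the same thing once the larger logical drive is visible):

# Rescan so ESXi notices the larger logical drive
esxcli storage core adapter rescan --all

# Check the current partition table on the datastore device (placeholder name)
partedUtil getptbl /vmfs/devices/disks/naa.600508b1XXXXXXXXXXXXXXXX

# Resize the VMFS partition to the new end sector, then grow the file system into it
partedUtil resize /vmfs/devices/disks/naa.600508b1XXXXXXXXXXXXXXXX 3 2048 <newEndSector>
vmkfstools --growfs /vmfs/devices/disks/naa.600508b1XXXXXXXXXXXXXXXX:3 /vmfs/devices/disks/naa.600508b1XXXXXXXXXXXXXXXX:3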

 

Thanks,


Incorrect "Used Storage" reported - ESX 5.5


Hello everybody,

 

We have had a strange problem since we migrated some hosts to ESX 5.5...

 

On our cluster, 14 hosts are on 5.5 and 2 are still on 5.1 (we stopped the migration when we stumbled on this problem). Here it is:

Our biggest VMs (> 500 GB provisioned) are reporting incorrect "Used Storage" (and "Not-Shared Storage", by the way). These VMs are thin provisioned.

The datastores they are located on are all VMFS 5.54 (this problem occurs on all our datastores, not just one).

If we perform a vMotion to an ESX 5.1 host, the problem disappears, but it comes back after a vMotion back to a 5.5 host...
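One hedged way to narrow down whether this is purely a reporting problem is to compare what the VM's folder actually occupies on the datastore with what vCenter shows for one of the affected VMs (paths are placeholders; du and ls come from the busybox shell on the host):

# Space actually consumed by the VM's files
du -sh /vmfs/volumes/<datastore>/<vm-name>/

# Per-file view: the first column of "ls -ls" is blocks actually allocated,
# which for a thin-provisioned disk is normally well below the nominal size
ls -lsh /vmfs/volumes/<datastore>/<vm-name>/*.vmdk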

 

Our storage is provided by 2 DataCore hosts running SANsymphony-V 9.0 PSP3.

 

I attached 2 screenshots showing my problem...

 

 

Has anybody here already experienced this problem?

 

Thanks, and best wishes to you all,

 

V-

Unable to add bind iSCSI initiator


While looking to improve on our configuration by aligning with best practice recommendations, we came across this error:

 

Call "IscsiManager.QueryCandidateNics" for object "iscsiManager-690" on vCenter Server

 

Can anyone assist with the interpretation of the "iscsiManager-690" part of this message?

 

Search results for a similar message refer to having 10 or more physical NICs on a switch, which is not the case for us.
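"iscsiManager-690" looks like the vCenter managed object ID for that host's iSCSI manager rather than anything you configure, so it probably does not point at the cause by itself. A hedged way to cross-check the binding candidates from the host shell (the adapter name is a placeholder):

# Name of the software iSCSI adapter (e.g. vmhba33)
esxcli iscsi adapter list

# VMkernel ports already bound to it
esxcli iscsi networkportal list --adapter vmhba33

# VMkernel interfaces, to see which vmk ports would be valid binding
# candidates (each needs a single active uplink on its port group)
esxcli network ip interface list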

 

Thanks in advance

VVol, EqualLogic and vSphere 6


I have been banging my head against the wall trying to get VVols established in my test environment. The setup is as follows: 3 hosts, two with Broadcom iSCSI HBAs and one using the software iSCSI adapter. The SAN is a mixed group, one Dell 6100 with SSD and one 4100 with spinning disk. I have the Dell VSM loaded and configured, I've built a storage container on the EQL, and I can configure the access rules for hosts. I have also added the EQL as a VASA provider.

 

Hosts vSphere 6.0, vCenter 6.0, EQL 8.0.4, VSM 4.5.1.700

 

For the sake of clarity we'll call the hosts HBA1, HBA2 and SOFT1.

 

The protocol endpoints show up in the storage devices on all three hosts, but the only machine that can see the PEs and access them is the SOFT1 host. It's also the only one that can mount the VVol datastore. If I mount it via SOFT1 and choose 'mount this datastore on additional hosts', the process completes without errors, but on HBA1 and HBA2 the VVol datastore is inaccessible.

 

This is from one of the two HBA hosts (HBA2):

 

esxcli storage vvol protocolendpoint list

naa.6019cb71d121ed38e33165924a54c38b

   Host Id:

   Array Id: com.dell.storageprofile.equallogic.std:MASTER

   Type: SCSI

   Accessible: false

   Configured: false

   Lun Id: naa.6019cb71d121ed38e33165924a54c38b

   Remote Host:

   Remote Share:

   Storage Containers: 6019cb71-d121-cd52-7f62-05a43b05e04d

 

esxcli storage vvol storagecontainer list

MasterVMDK

   StorageContainer Name: MasterVMDK

   UUID: vvol:6019cb71d121cd52-7f6205a43b05e04d

   Array: com.dell.storageprofile.equallogic.std:MASTER

   Size(MB): 1048590

   Free(MB): 1048200

   Accessible: false

   Default Policy:

 

This is from SOFT1

 

esxcli storage vvol protocolendpoint list

naa.6019cb71d121ed38e33165924a54c38b

   Host Id: naa.6019cb71d121ed38e33165924a54c38b

   Array Id: com.dell.storageprofile.equallogic.std:MASTER

   Type: SCSI

   Accessible: true

   Configured: true

   Lun Id: naa.6019cb71d121ed38e33165924a54c38b

   Remote Host:

   Remote Share:

   Storage Containers: 6019cb71-d121-cd52-7f62-05a43b05e04d

 

esxcli storage vvol storagecontainer list

MasterVMDK

   StorageContainer Name: MasterVMDK

   UUID: vvol:6019cb71d121cd52-7f6205a43b05e04d

   Array: com.dell.storageprofile.equallogic.std:MASTER

   Size(MB): 1048590

   Free(MB): 1048200

   Accessible: true

   Default Policy:
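Since the PE device is visible but flagged inaccessible on the HBA hosts, one hedged comparison is whether those hosts actually have logged-in iSCSI sessions and live SCSI paths to the PE LUN the way SOFT1 does (the NAA ID is the one from the output above):

# On HBA1/HBA2: are there iSCSI sessions to the group at all?
esxcli iscsi session list

# Are there paths to the protocol endpoint device, and in what state?
esxcli storage core path list -d naa.6019cb71d121ed38e33165924a54c38b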

 

Secondly, the datastore is flagged with an alarm showing this error:

Issue,Type,Trigger Time,Status

vSphere HA failed to create a configuration vVol for this datastore and so will not be able to protect virtual machines on the datastore until the problem is resolved. Error: (vim.fault.InaccessibleDatastore) ,Configuration Issue,"10/22/15, 10:33:12 AM GMT",

 

I'm guessing that error will clear once I get the VVol datastore working on all three hosts.

 

In addition, when I try to zero out my config to rebuild it, the VSM leaves behind access lists that I cannot destroy.

 

Any thoughts would be appreciated.

DT

If all my VM Datastores are backed by the same set of spindles, will Storage I/O Control compete against itself?


7 ESXi 6.0.0 Hosts

3 Datastores

200+ VMs

 

Each of the three datastores is backed by a single ZFS storage pool across the same disk spindles.

If we enable Storage I/O Control for all the datastores, how will it behave?

 

We are concerned that SIOC on one datastore will detect load from another datastore as a non-VI workload, so it will not distribute I/O evenly across all VMs; it will just create a balanced competition between the VMs on the three different datastores.

 

Is SIOC smart enough to detect this situation and realize it is all VM load that VMware can control?

Installing Kingston HyperX Predator PCI Express storage with ESXi 5.5


Hi Team,

 

We would like to test a PCI Express storage device (Kingston HyperX Predator, 480 GB) with VMware ESXi 5.5.

We have installed the PCIe card in the motherboard and the BIOS recognizes it, but when we log on with the vSphere Client and go to Configuration > Add Storage, the device is not listed.

 

Has anyone in the community used Kingston PCI Express storage? How can this hardware be used with VMware ESXi 5.5?
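The usual reason a PCIe SSD shows up in the BIOS but not under Add Storage is that ESXi has no driver that claims its controller (the card would need to be on the HCL or have a vendor VIB). A hedged way to confirm that from the host shell; nothing here is specific to the HyperX Predator:

# Is the PCIe device visible to the VMkernel, and is a driver module bound to it?
esxcli hardware pci list

# Adapters ESXi has actually claimed; the card should appear with a vmhba name
# if a driver loaded for it
esxcli storage core adapter list

# If an adapter does appear, rescan and list devices
esxcli storage core adapter rescan --all
esxcli storage core device list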

 

Regards.

 

Mauro.

Missing SATA HDD


I have a cheap ThinkServer TS140 booting ESXi 5.5 from USB flash.

It has two 1 TB drives and a DVD drive that show up with no issues.

I also have a SanDisk SSD (120 GB) that will not show up as a device.

The machine is configured to use AHCI, and I see the "missing" drive in the BIOS.

I have also connected the "missing" drive to another computer to verify that it works properly.

 

I can't seem to make any progress with the vSphere Client, and I have tried the following commands via SSH in hopes of finding the drive, but had no success:

 

esxcli storage core device list

 

esxcli storage nmp device list

 

 

What should I try next?
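A few hedged next steps from the host shell, given that the drive is visible to the BIOS but not to the VMkernel:

# Check which storage adapters ESXi claimed and that the AHCI module is loaded
esxcli storage core adapter list
vmkload_mod -l | grep -i ahci

# Force a rescan, then look for the drive again
esxcli storage core adapter rescan --all
esxcli storage core device list | grep -i -A 4 sandisk

# See whether the VMkernel logged anything about the disk at boot
grep -i sandisk /var/log/vmkernel.log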

Grown LUNs do not show the new space


Hi guys, please assist. I grew some VMFS5 LUNs from 5 TB to 15 TB, and from time to time they still report 5 TB in size until you refresh the datastore in the vSphere Client / rescan all of the hosts' Fibre Channel HBAs.

I had to grow the LUNs using a direct vSphere Client connection to the ESXi host, as the Grow option was greyed out in the vCenter views. So I understand that the vCenter view would not immediately reflect the new size. But why is this persisting after HBA rescans?

Also, if you run esxcli storage filesystem list from the CLI, it shows the original size:

 

vmfs/volumes/55f671f9-0eef5ab1-6e38-001517f2da24  DC1_CLOUD_ARCHIVE      55f671f9-0eef5ab1-6e38-001517f2da24     true  VMFS-5  5497289703424  3585331953664

 

How do I fix this, as we need to do many more of these soon?

 

ESXi 5.5 U2

vCenter 5.5 U1

IBM SVC with a Hitachi VSP behind it.

NMP is default AA

 

esxcli storage core device list | grep -A 25 15af

naa.600507680180857128000000000015af

   Display Name: IBM Fibre Channel Disk (naa.600507680180857128000000000015af)

   Has Settable Display Name: true

   Size: 15728640

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/naa.600507680180857128000000000015af

   Vendor: IBM

   Model: 2145

   Revision: 0000

   SCSI Level: 6

   Is Pseudo: false

   Status: on

   Is RDM Capable: true

   Is Local: false

   Is Removable: false

   Is SSD: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

   VAAI Status: supported

   Other UIDs: vml.0200300000600507680180857128000000000015af323134352020

   Is Local SAS Device: false

   Is USB: false

   Is Boot USB Device: false

   No of outstanding IOs with competing worlds: 32
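The device list above already shows the LUN at its new size (Size: 15728640 MB, i.e. 15 TB), so the array-side grow has been picked up; what still reports ~5 TB is the VMFS file system itself. A hedged sketch of checking and growing it from one host follows (the partition number and end sector are placeholders; check the partition table output before resizing anything):

# Refresh VMFS volumes and see what the extent currently spans
vmkfstools -V
esxcli storage vmfs extent list | grep DC1_CLOUD_ARCHIVE

# If the VMFS partition still ends at the old 5 TB boundary, it needs to be
# resized before the file system can be grown
partedUtil getptbl /vmfs/devices/disks/naa.600507680180857128000000000015af

# Resize the partition to the new end sector, then grow VMFS into it
partedUtil resize /vmfs/devices/disks/naa.600507680180857128000000000015af 1 2048 <newEndSector>
vmkfstools --growfs /vmfs/devices/disks/naa.600507680180857128000000000015af:1 /vmfs/devices/disks/naa.600507680180857128000000000015af:1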


Thin provision datastore at SAN level?


Is there a good reason to thin provision a vSphere datastore at the SAN level? I was always under the impression that it's a good idea to thick provision the SAN volumes and then decide whether you want thin or thick VMs within that datastore, partly because of better performance, but mostly because over-committing storage at the array and not keeping an eye on it could get you into trouble.

How to find a Virtual Volume's I/O rate in vSphere 6.0


When we select a Virtual Volume under the Storage tab in vSphere, it seems like we cannot get the read and write rate of the VVol. Is there any way to see it or is there any API we can use to get it? Thank you!
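Not aware of a per-VVol counter in that view, but one hedged host-side workaround is vscsiStats, which counts I/O per virtual disk for the VM that owns the VVol; collected over a fixed interval, that gives an approximate read/write rate. The world group ID below is a placeholder:

# List running VMs with their world group IDs and virtual disk handles
vscsiStats -l

# Collect for one VM for a minute, print the ioLength histograms (the bucket
# counts add up to the I/Os issued over the interval), then stop collection
vscsiStats -s -w 1234567
sleep 60
vscsiStats -p ioLength -w 1234567
vscsiStats -x -w 1234567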

Shared virtual disk in Windows 7 64-bit


Hi, I have a problem using a shared virtual disk between Windows 7 64-bit virtual machines (ESXi 6.0).

 

- I have an SSD datastore (450 GB), and I created 2 Windows 7 64-bit VMs on it (40 GB for the 2 VMs, and ~400 GB unused).

- I created a virtual disk ("disk A") with the Thick Provision Eager Zeroed option (400 GB) and set the SCSI controller to "Virtual" (bus sharing) in both VMs.

- Add "A disk" to VM 1, power on VM1 and Format "A disk" as "D:" volume (NTFS).

- Add "A disk" to VM2, power on VM2 and i see "A disk" as "D:" too. That's ok, im making D Volume as a share Folder where both VM can read and write so fast, no need through Ethenet.


- BUT here is the problem: when I create a new folder ("Test Folder") on the D: volume in VM1, it does not show up in VM2 (the same happens when I try it from VM2), but when I restart VM2, I do see "Test Folder" there.

- I tried setting the SCSI controller to Physical (bus sharing) and adding scsi(x:y).sharing = multi-writer, but it does not work correctly. (I was hoping it would behave like a shared volume in Windows.)

Please give me any help. Thanks so much. I have been trying for a week; it seems too difficult for me.
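For reference, the multi-writer setting mentioned above is a per-disk .vmx entry (VMware's KB on the multi-writer flag describes it); a hedged way to check what is currently set is shown below, with scsi1:0 as an example slot. Note, though, that multi-writer only removes ESXi's own locking: NTFS is not a cluster-aware file system, so two VMs writing the same NTFS volume concurrently will not see each other's changes and will eventually corrupt it. Shared-folder behaviour normally needs either a clustered file system on that disk or an ordinary network share between the two VMs.

# Expected per-disk entry in the VM's .vmx file (example slot):
#   scsi1:0.sharing = "multi-writer"
# Check what the VMs currently have configured:
grep -i sharing /vmfs/volumes/<datastore>/<vm-name>/<vm-name>.vmx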

NetApp storage detection - ALUA vs. AA


We have a small issue where we have LUNs that are being brought into service as AA rather than ALUA. All our hosts are ESX 4.1 U1 with NetApp FC storage. The igroup is ALUA-enabled. Some hosts bring in all the LUNs as ALUA, and some have a few that get detected as AA. Here are a few messages I've come across that might be related.

 

From messages log:

Aug 18 08:44:33 esxp007 sfcb-vmware_base[9119]: -#- Serious provider id / provider process mismatch ---  (PID=9119, ProcName=vmware_base, ReqProvId=20185095)

Aug 18 08:44:33 esxp007 sfcb-vmware_base[9119]: Provider is not found

 

 

Here is another entry as well vmkernel:

 

Aug 18 09:48:46 esxp007 vmkernel: 34:14:05:02.678 cpu4:24519)WARNING: NMP: nmp_AddPathToDevice: The SATP for physical path "vmhba0:C0:T3:L9" (VMW_SATP_DEFAULT_AA) does not match the SATP and options already associated with NMP device with primary uid "naa.60a98000486e2f422f4a4c6d

Aug 18 09:48:46 esxp007 vmkernel: 34:14:05:02.678 cpu4:24519)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo: Could not find relative target port ID for path "vmhba0:C0:T3:L9" - Not found (195887107)

Aug 18 09:48:46 esxp007 vmkernel: 34:14:05:02.678 cpu4:24519)WARNING: NMP: nmp_SatpClaimPath: SATP "VMW_SATP_ALUA" could not add  path "vmhba0:C0:T3:L9" for device "naa.60a98000486e2f422f4a4c6d30395467". Error Not foundd

Aug 18 09:48:46 esxp007 vmkernel: 34:14:05:02.678 cpu4:24519)WARNING: ScsiPath: 3815: Plugin 'NMP' had an error (Not found) while claiming path 'vmhba0:C0:T3:L9'.Skipping the path.

Aug 18 09:48:46 esxp007 vmkernel: 34:14:05:02.678 cpu4:24519)ScsiClaimrule: 1183: Plugin NMP specified by claimrule 65535 was not able to claim path vmhba0:C0:T3:L9. Busy

Aug 18 09:48:46 esxp007 vmkernel: 34:14:05:02.678 cpu4:24519)ScsiClaimrule: 1405: Error claiming path vmhba0:C0:T3:L9. Busy.

 

I might have to open a ticket to get to the bottom of this.

Any thoughts here? I inherited these hosts and am new to NetApp storage as well.
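A hedged starting point, with the 4.x CLI syntax from memory (on 4.1 the namespace is esxcli nmp rather than esxcli storage nmp): check which SATP actually claimed the problem device and what the ALUA claim rules look like, since the vmkernel messages suggest VMW_SATP_ALUA could not read the target port group for that path while the device had already been claimed with the default AA SATP.

# Which SATP/PSP currently owns the device from the log above?
esxcli nmp device list -d naa.60a98000486e2f422f4a4c6d30395467

# SATP claim rules; the NetApp LUNs should be matching a VMW_SATP_ALUA rule
esxcli nmp satp listrules | grep -i alua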

ALUA state change


Hello,

 

I'm testing iSCSI storage based on the SCST project with VMware ESXi (5.5 and 6.0).

 

The storage works in Active/Active mode and is connected to ESXi with two paths.

The storage supports implicit ALUA with the states Active and Standby.

 

esxcli storage nmp device list -d eui.6665316438643664

eui.6665316438643664

    Device Display Name: SCST_BIO iSCSI Disk (eui.71446433436a6168)

    Storage Array Type: VMW_SATP_ALUA

    Storage Array Type Device Config: {implicit_support=on;explicit_support=off; explicit_allow=off;alua_followover=off; action_OnRetryErrors=off; {TPG_id=41080,TPG_state=AO}{TPG_id=41801,TPG_state=AO}}

    Path Selection Policy: VMW_PSP_FIXED

    Path Selection Policy Device Config: {preferred=vmhba37:C0:T1:L0;current=vmhba37:C0:T1:L0}

    Path Selection Policy Device Custom Config: 

    Working Paths: vmhba37:C0:T1:L0

 

I'm also trying to use MRU policy.

 

How we perform the test: we implicitly change the state of one of the paths (iSCSI connections).

The problem is that ESXi does not react to the ALUA state change and does not change the path state (and also does not switch traffic to the new active path).

 

Just after implicit state change we get:

 

- AEN to initiator (0x06 0x2a 0x06 Asymmetric access state changed)

- REPORT LUNs from initiator

- REPORT TARGET PORT GROUPS from initiator

 

No TEST UNIT READY

 

In the Standby state, commands (except the allowed ones) return an error with 0x02 0x04 0x0b "Logical unit not accessible, target port in standby state".

 

Why has the ALUA state not changed? Does ESXi use the AEN to detect the state change, or does it poll with TUR commands? Is something wrong with the policy/array type?
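A hedged way to see what ESXi currently believes about each path and its target port group, and to force the paths to be re-evaluated, is below (the device ID is the one from the output above). It may also be worth confirming on the SCST side that REPORT TARGET PORT GROUPS really returns one group as Standby, since the device config above still lists both TPGs as AO.

# Per-path state and the ALUA target port group each path belongs to
esxcli storage nmp path list -d eui.6665316438643664

# SATP device config (TPG IDs and states as ESXi last read them)
esxcli storage nmp device list -d eui.6665316438643664

# Force a rescan so the paths are re-evaluated
esxcli storage core adapter rescan --all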

Success probability of removing a 3 TB snapshot when the system itself takes about 3 TB and only about 70 GB of space is left on the VMFS5 partition (vSphere 5.1)


Hello.

 

I have a problem... A VM located on a vSphere 5.1 VMFS5 datastore has 3 VMDK files (one for each data partition in a Windows 2008 server, ~1 TB each). A snapshot was taken of this system, which has by now used nearly all available space on the datastore. I need to remove this snapshot as it is no longer needed, but I doubt there is enough free space left on this datastore to delete the snapshot (~70 GB left). Would I be able to remove the snapshot successfully, or do I have to migrate the VM to another VMFS5 datastore with more space before this operation?

Using VMware 4.0 and having storage issues


I was trying to extend the storage space on one of 2 servers using Datastore1. I powered down the server. It said that I could extend Disk1 to 80 GB, with 2 remaining.

I adjusted it up to 80 GB and then tried to power up the server. It now says "insufficient space on datastore". When I try to reset it back to the original size of 69 GB, it keeps going back to the 80 GB that was previously in the box (the size I thought I could expand into) when tabbing out of the field. Does anybody have any ideas on how to either get it to accept what it thinks is available space or return it to its previously provisioned amount? I am running vSphere 4.0 and ESXi 4.0.

Thanks,


Physical RDMs to Virtual RDMs Supported in vSphere 6.0


Does anyone know if this process in KB 1006599 works in vSphere 6.0+?

 

VMware KB: Switching a raw data mapping between physical and virtual compatibility modes in ESX/ESXi

 

We are about to migrate a bunch of RDMs to VMDKs; however, we need to convert from physical to virtual RDMs first (you can live storage-migrate [to VMDKs] with virtual RDMs but not with physical ones). The other part of the story is that we are currently running vSphere 5.5 and are about to upgrade to vSphere 6.0.

 

The KB was written on Oct 17, 2014, and I am wondering whether it is still the supported method in vSphere 6.0 and simply hasn't been updated, or whether it is no longer supported, in which case I must do the migrations before the upgrade.

 

Thanks

Merging / combining multiple VMDKs on different datastores into one VMDK on yet another datastore


So, I think I know the answer to this (do it the long way), but I wanted to run it by the community just in case I was wrong.

 

The Environment:

vSphere 5 Enterprise

ESXi 5.1.0.

VM: Windows Server 2008 Standard (not R2).

Upgrading to a 2012 R2 server.

 

Originally, no LUNs/datastores were ever made over 500 GB because of the old-school cluster sizing before VMFS5.

 

Basically, we had a 225 GB drive on a Windows file server. More space was needed, so rather than increasing the size of the virtual disk and expanding it in Windows, they added another 225 GB disk to the VM, from a different datastore, and spanned the disks in Windows. This was performed again later, so this VM currently has three separate 225 GB disks (four if you count the system disk) on three separate datastores, which the OS sees as a single 675 GB drive.

 

What I would LIKE to do is simply merge those three VMDKs, via magic, into a single VMDK, then just mount that VMDK on the new server. I've read some documentation and some posts here and there about merging split VMDKs into monolithic ones, but that's still VMware presenting the OS with a single disk that happens to be broken up into pieces, not VMware presenting the OS with multiple disks that the OS itself combines into one piece.

 

I'm 90% sure that this is just a pipe dream, but it (usually) never hurts to ask if anyone has run across a similar experience.

 

Thanks!

ESXi multiple storage array design iSCSI?


Hi,

 

I have a scenario where I have to connect 3 storage arrays to the same ESXi host, and I have only 2 x 10 Gb interfaces available in the server.

Each 10 Gb card is connected to a different switch, and each storage array network is in a different VLAN and subnet.

 

This is my setup:

Array1: 10.10.1.1/24

Array2: 10.10.2.1/24

Array3: 10.10.3.1/24

 

ESXi

PortGroupArray1-A: 10.10.1.2/24 vmkernel active/unused

PortGroupArray1-B: 10.10.1.3/24 vmkernel unused/active

 

PortGroupArray2-A: 10.10.2.2/24 vmkernel active/unused

PortGroupArray2-B: 10.10.2.3/24 vmkernel unused/active

 

PortGroupArray3-A: 10.10.3.2/24 vmkernel active/unused

PortGroupArray3-B: 10.10.3.3/24 vmkernel unused/active

 

I want to use port binding, but this KB (VMware KB: Considerations for using software iSCSI port binding in ESX/ESXi) explains that to use port binding all the VMkernel ports must be in the same broadcast domain, and each storage array is in a separate VLAN. I've opened a case with VMware, and the proposed solution was to add network cards to the ESXi host or move all the storage arrays to the same subnet.

 

I'm using the software iSCSI adapter.

If I use port binding, the HBA rescans are really slow and I get a lot of errors in the logs, because the software iSCSI stack asks all VMkernel ports to log in to all available targets on the storage arrays.

VMware KB: Rescanning takes a long time when using multiple VMkernel ports with port binding to access two or more s…

 

Without port-binding I cannot achieve multipathing and load balancing.

What would the supported setup be?
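For what it's worth, a hedged reading of the KBs above is that port binding is meant for the single-subnet case; with three routed/VLAN-separated subnets the usual approach is to leave binding off and let each pair of VMkernel ports reach only its own array's subnet, which can still give multiple paths per array when the array presents more than one target portal. If bindings are already configured and causing the slow rescans, they can be listed and removed per adapter (adapter and vmk names are placeholders):

# Show the VMkernel ports currently bound to the software iSCSI adapter
esxcli iscsi networkportal list --adapter vmhba33

# Remove a binding that points at a subnet the targets cannot answer on
esxcli iscsi networkportal remove --adapter vmhba33 --nic vmk2

# Rescan afterwards and check the paths per device
esxcli storage core adapter rescan --adapter vmhba33
esxcli storage core path list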

 

Cheers

VAAI XCOPY not working on ESX 6.0; used to work with ESX 5.5


We are seeing that VMware VAAI XCOPY is not working with ESX 6.0. This used to work fine with ESX 5.5. The storage has not been updated; only the ESX server was updated to 6.0, and XCOPY stopped working. The same operation that works with an ESX 5.5 cluster on the storage node does not work with an ESX 6.0 cluster on the same storage node. So it does not seem to be an array vendor issue, as it clearly works with ESX 5.5 and not with the newer ESX 6.0.

 

Does anyone have a clue as to why this could be?

 

I have made sure of the following:

  1. The source and destination volumes were the same block size, as I was cloning within the same VMFS volume
  2. The source file was not an RDM
  3. The disk was a flat disk format
  4. The VMFS datastore was created using the vSphere Web Client
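Before chasing this with the array vendor, a hedged check that the 6.0 hosts still advertise VAAI for the device and that the XCOPY (hardware accelerated move) primitive has not been disabled host-side (the device name is a placeholder):

# Per-primitive VAAI status for the device backing the datastore
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# Host-wide switch for the clone/XCOPY primitive; 1 means enabled
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove

# Quick look at the VAAI support column for all devices
esxcli storage core device list | grep -i "VAAI Status"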

iSCSI or NAS storage for vSphere Essentials Plus


We are a small church and have volunteer technical members who have been very helpful in assisting us to set up a VMware environment consisting of 2 ESXi 5 hosts. We are about to purchase Essentials Plus so that we can use vCenter and some of the HA and vMotion features to enhance what we already have. We also have a third server that contains 2 TB of storage. It was once a Windows 2003 file server. We would like to keep it around and use it as our shared storage device in the Essentials Plus environment. The question is: can we convert this server to either a NAS storage device or an iSCSI storage device and share the storage space as our VMware datastore? If so, how?
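Both routes can work, with the caveat that a single repurposed file server is itself a single point of failure for whatever HA protects. As a hedged sketch: if that third server is rebuilt with a storage-oriented OS (FreeNAS or similar is a common volunteer-friendly choice, but that is an assumption, not a recommendation) or gets an NFS export / iSCSI target service added, the ESXi side of consuming it looks like the following (addresses, share and adapter names are placeholders):

# NFS: mount the exported path as a datastore on each host
esxcli storage nfs add --host 192.168.1.50 --share /mnt/vmstore --volume-name SharedDS

# iSCSI: enable the software initiator, point it at the target, and rescan
esxcli iscsi software set --enabled true
esxcli iscsi adapter discovery sendtarget add --adapter vmhba33 --address 192.168.1.50:3260
esxcli storage core adapter rescan --adapter vmhba33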


