Channel: VMware Communities : Discussion List - vSphere™ Storage

HP VSA 11.0 (2014) - Bad storage performance


2 hosts: Dell PowerEdge R720xd running vSphere 5.5 Update 1

10 disks per host: 10K RPM, 1.2 TB, 2.5-inch

One HP VSA per host, 2 vCPUs and 8 GB memory per VSA.

 

All the latest updates for the VSA are applied, and almost all the latest VMware updates (they keep releasing more after my update cycle).

 

(Attached: 2014-05-02 16_42_43-pe720-host1 - PuTTY.png)

 

My disk latency for a VM (when producing a significant storage load) is very bad. My HP StoreVirtual hardware functions just fine under a similar load, so it seems to be something related to the VSA + VMware. The vmhba0 (Dell PERC card) seems to perform fine.

 

The KAVG value seems high. According to this article, KAVG is the time the VMkernel (VMware) spends processing a command:

VMware KB: Using esxtop to identify storage performance issues for ESX / ESXi (multiple versions)  http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1008205
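For reference, this is roughly how I have been capturing those numbers (a minimal sketch; the sample count and output path are just values I picked):

# interactive: run esxtop, press 'd' for the disk adapter view and compare DAVG/cmd (device) vs KAVG/cmd (kernel)
# batch: capture 30 samples at 2-second intervals to a CSV for offline review
esxtop -b -d 2 -n 30 > /tmp/esxtop-capture.csv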

 

 

Thoughts on what I may have configured wrong or how I should proceed with troubleshooting?

 

Also, if I use IOmeter and test with high IOPS but small writes, everything performs well. If I choose large writes (4 MB), performance is terrible.


Storage issue


I am facing a storage issue in VMware ESX 5.0 on a Dell PowerEdge 2950 server. Kindly help me with this.

 

(Attached: Capture.JPG, error.JPG)

Recompute Datastore Groups every two minutes


Hi All,

 

I recently added an NFS datastore on my ESXi hosts, and now every two minutes I see a "Recompute Datastore Groups" task. My VMkernel port is configured for MTU 9000 and is on a dvSwitch port group. The load balancing teaming and failover policy is set to "Route based on physical NIC load". Any idea why this may be happening? If this has happened to anyone, please point me in the right direction.
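In case it helps, this is the check I was planning to run to confirm jumbo frames pass end-to-end to the NFS server (a quick sketch; vmk1 and 192.168.10.50 are placeholders for my NFS VMkernel port and the NFS server's IP):

# 8972-byte payload with don't-fragment set, sent from the NFS VMkernel interface
vmkping -I vmk1 -d -s 8972 192.168.10.50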

 

Thank you for any help!!!!

VMware vSphere 5.5 with VSA version 5.5.0.2


Hello, my name is Ezequiel and I am having problems with VMware vSphere 5.5 and VSA 5.5.0.2.

I am getting poor disk read/write performance in virtual machines. When I copy a 5.5 GB file on a virtual machine running Windows 7, the copy starts at around 150 Mb/s, and after a few seconds (about 20) the transfer drops to 10 Mb/s.

Sorry for my English.
I am asking for help to see if I can solve the problem.


Thank you

(Attached: Untitled.jpg, 4.jpg, 2.jpg, 3.jpg)

VMDKs that can't be copied, renamed, deleted or used with a VM due to I/O errors - possible workaround


Hopefully you never have to work with a VM that blocks all operations and reports I/O errors.
Having such a problem on a datastore with active production VMs can be a real pain - solving it with the ESXi built-in tools may appear impossible.

In some easy cases, using Linux to extract such a VMDK may work - but often that also fails.
If you ever run into a problem like this, don't give up too early.
Last weekend I tried a quite obscure procedure to recover a domain-controller VM with one 80 GB thick, unfragmented VMDK and another 512 GB thick VMDK split across 85 fragments.
The ESXi tools all reported I/O errors, so the customer already expected a complete loss.
But after two days I could present a completely healthy VM that did not even need a checkdisk on first start after the recovery.
It is a bit too early to assume this will work in similar cases - but at least I now have an idea how to handle such a problem.

 

If you run into a problem like that and run out of options - let me have a look - maybe I can help.

 

Ulli

Poor disk performance in virtual machines


Hi,

 

I have a VMware ESXi 5.1 Update 2 infrastructure. It consists of five hosts in a cluster.

 

I have attached a Fibre Channel storage array to the hosts.

 

I have presented a LUN to the hosts.

 

I have created two Windows 2012 R2 VMs and put them on separate hosts.

 

I have attached the LUN to the virtual machines as a shared physical-mode raw device mapping (RDM). I have created an MSCS file server cluster with those two Windows 2012 R2 servers and the shared storage.

 

I have installed IOmeter in one of the virtual machines.

 

I also have two Windows 2003 servers. They are physical machines with a file server cluster installed on them. Another Fibre Channel storage array is connected to them to provide the shared space needed for the cluster.

I have installed IOmeter on one of the Windows 2003 servers.

 

The storage array used in the ESXi scenario is, in performance terms, at least twice as fast as the one used in the physical Windows 2003 scenario.

 

I ran the standard tests on both Windows 2003 and Windows 2012: 4 KB, 100% read, 100% sequential; 4 KB, 75% read, 100% sequential; 4 KB, 50% read, 100% sequential; 4 KB, 25% read, 100% sequential; 4 KB, 0% read, 100% sequential.

 

The results are surprising, at least I think so: all the measurements (I/Os per second, total MB per second, average I/O response time, maximum I/O response time and % CPU utilization) are between two and four times better in the Windows 2003 scenario. For example, in any of the tests, if the I/Os per second are 1200 on the Windows 2012 machine, they are 4000 on the Windows 2003 machine.

 

Since the array attached to the ESXi hosts is supposed to give twice the performance of the one attached to the physical Windows 2003 machines, I guess there is something not well configured, or something not optimized, in the ESXi host or virtual machine configuration. But I can't imagine what it could be.

 

I need help to improve the disk performance of these VMs.

VMware 5.5 major latencies! - System down


Hi Everyone,

I'm really in trouble and could do with opening my issue up to the Community!

I have VMware 5.5u3 with NetApp FAS2240 storage.

Ever since Thursday last week the system has been unstable, showing latencies of over 2 seconds whenever we do anything.

I have taken 80% of the load offline and it still causes the issues. NetApp have been all over the system; even when we reproduce the failure and see massive latency, they don't see anything significant in their advanced monitoring tools. So I'm back to looking at VMware.

 

VMware have recommended a couple of changes, but they seem to be having no effect.

 

Any ideas???

 

We are trying to bring other SAN and NAS Boxes online to test things, but some direction would be useful!

 

Thanks everyone!

 

Peter

vSphere 6, IBM V3700, 10GbE best practice


Hi Guys.

 

Scenario :

 

1 x V3700 with dual controllers, each equipped with 2 x 10GbE (4 x 10GbE total).

3 x X Series servers with dual 10GbE.

2 x 10GbE switches for redundancy.

 

I need multiple VLANs within the vSwitches.

 

I am a little unsure about the best practice for configuring the vSwitches in vSphere.
The documentation I found guides me to create one separate vSwitch per 10GbE connection.

 

Is the only way to do it to use a single vSwitch with both 10GbE connections when I need multiple VLANs?

 

iSCSI and management are running in separate VLANs; I use a dedicated storage VLAN for the V3700.
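For what it's worth, the layout I had in mind is a single vSwitch with both 10GbE uplinks, two iSCSI VMkernel ports each overridden to use only one active uplink, and both bound to the software iSCSI adapter (a sketch only; vmhba33, vmk1 and vmk2 are placeholders for my environment):

# bind each iSCSI VMkernel port to the software iSCSI adapter, then confirm the bindings
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
esxcli iscsi networkportal list --adapter=vmhba33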

 

Sorry for my poor English, and have a nice day.


vCenter MOB UI shows incorrect CBT status


hello!

 

I have vCenter 5.5 and ESXi 5.5. I enabled CBT using the VMware KB article and have power cycled the VM. The VM even has -ctk disk files when I checked the datastore.

 

But when I go and look at the MOB UI at https://<vcenter>/mob, the value shows false.

 

As a result of this, I see that the backup application is ignoring CBT.
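In case it is relevant, this is the kind of check I can run on the host side (a quick sketch; the datastore and VM names are placeholders for my actual paths):

# check whether CBT is actually enabled in the VM's configuration file
grep -i ctkEnabled /vmfs/volumes/MyDatastore/MyVM/MyVM.vmx
# I expect to see ctkEnabled = "TRUE" plus a scsiX:Y.ctkEnabled = "TRUE" entry per disk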

 

Could I request some guidance on solving this?

 

Thanks!

How to delete an absent disk in a VSAN node?


Hi All

I am learning VSAN and built a 3-node cluster in my VSAN lab. One day a server could not see its data disk, so I replaced the disk, but that generated an "absent disk". I tried to delete it but failed. I then removed the server from the cluster and formatted the SSD and data disk to clean them; the absent disk seemed to be gone, but when I rejoined the node to VSAN, the absent disk showed up again.

 

Is there a better method to clear this absent disk from my VSAN lab?
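In case it matters, this is the kind of command I was expecting to use from the host shell (a sketch only; the naa identifier is a placeholder for my actual disk, and removing by VSAN disk UUID with -u may be needed when the physical disk is gone):

# list the disks VSAN knows about on this host, then remove the stale entry
esxcli vsan storage list
esxcli vsan storage remove -d naa.xxxxxxxxxxxxxxxx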

 

thanks!!

 

wencheng

P2000 G3 FC Disk Latency ESXi 5.5


Hi All,

 

First and foremost, hello to everyone in the VMware community. I am new here and this is my first post; I am hoping someone will be able to assist me, as I am at my wits' end.

 

Our Setup

 

Hosts: DL380 Gen8s running ESXi 5.5

HBAs: Emulex LPe12000 8Gb

Switch: HP SAN 8/8 fabric switch

SAN: P2000 G3 FC (3 enclosures with 10 x 900GB 10K RPM disks in each)

 

I have noticed high disk latency on a couple of our servers. For example, on a SQL server running one database, the drive holding that database is generating latency of 60 ms. We have another server where we have a raw LUN mapped for file-share purposes, and writing back to that LUN is generating latency of around 60 ms or more.

 

Hardware acceleration is enabled, along with Storage I/O Control. We are using RAID 5 vdisks with a couple of small 2 TB volumes (approximately 3) presented to the ESXi hosts to form a storage cluster.

 

I've tried everything to reduce the latency, from ensuring there is adequate RAM/CPU, to making sure the disks are eager-zeroed and the drives are split out across multiple SCSI controllers, etc.

 

I've also done the following, as per HP's recommended best practices for the HP P2000 G3 SAN:

 

Change the default PSP to Round Robin for the HP P2000 G3:

esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA

Next, I ran this to set the PSP for all existing volumes to Round Robin:

for i in `esxcli storage nmp device list | grep naa.600` ; do esxcli storage nmp device set --device $i --psp VMW_PSP_RR; done

Finally, I set the path-change frequency from the default of 1000 IOPS to 1, so the other (optimized) path is used for every I/O:

for i in `esxcli storage nmp device list | grep naa.600` ; do esxcli storage nmp psp roundrobin deviceconfig set -t iops -I 1 -d $i; done
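To double-check that the settings were applied, this is the kind of output I have been reviewing (the naa identifier below is just a placeholder for one of the P2000 volumes):

# confirm the PSP and the round-robin IOPS limit on a single device
esxcli storage nmp device list -d naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
esxcli storage nmp psp roundrobin deviceconfig get -d naa.600xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx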


This has made no difference to the latency

 

I also installed the latest VAAI plugin.

 

I've taken some random snippets from two hosts.

 

The VMkernel logs for host 2 are reporting:

 

2014-01-14T20:18:45.807Z cpu8:33531)WARNING: Migrate: 262: Invalid message type for new connection: 542393671.  Expecting message of type INIT (0).

2014-01-14T20:19:01.661Z cpu12:34002)Config: 346: "HostLocalSwapDirEnabled" = 0, Old Value: 0, (Status: 0x0)

2014-01-14T20:21:02.771Z cpu6:32811)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e803e8cc0, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:21:02.771Z cpu6:32811)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:21:02.771Z cpu6:32811)ScsiDeviceIO: 2337: Cmd(0x412e803e8cc0) 0x1a, CmdSN 0x21cb from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:21:02.787Z cpu6:32811)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e803e8cc0, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:21:02.806Z cpu4:32809)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e803e8cc0, 0) to dev "mpx.vmhba35:C0:T0:L0" on path "vmhba35:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2014-01-14T20:21:02.806Z cpu4:32809)ScsiDeviceIO: 2337: Cmd(0x412e803e8cc0) 0x1a, CmdSN 0x21cf from world 0 to dev "mpx.vmhba35:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2014-01-14T20:26:02.773Z cpu3:32808)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e82d28680, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:26:02.773Z cpu3:32808)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:26:02.773Z cpu3:32808)ScsiDeviceIO: 2337: Cmd(0x412e82d28680) 0x1a, CmdSN 0x21e8 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:26:02.792Z cpu3:32808)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e82d28680, 0) to dev "mpx.vmhba35:C0:T0:L0" on path "vmhba35:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2014-01-14T20:26:02.792Z cpu3:32808)ScsiDeviceIO: 2337: Cmd(0x412e82d28680) 0x1a, CmdSN 0x21ee from world 0 to dev "mpx.vmhba35:C0:T0:L0" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

 

Host 5 is reporting

 

2014-01-14T20:17:00.194Z cpu15:32820)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x2a failed <3/23> sid x010400, did x010000, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:03.400Z cpu16:33500)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x2a failed <1/3> sid x010400, did x010800, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.242Z cpu10:33500)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.242Z cpu10:33322)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.242Z cpu10:33322)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.242Z cpu10:33500)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.242Z cpu10:33322)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.242Z cpu10:33322)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.243Z cpu10:33322)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.243Z cpu10:33322)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:17:35.245Z cpu10:33500)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x28 failed <0/4> sid x010400, did x010900, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:18:09.402Z cpu1:32800)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd x2a failed <1/3> sid x010400, did x010800, oxid xffff SCSI Reservation Conflict -

2014-01-14T20:20:42.407Z cpu6:32811)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 1:(0):3271: FCP cmd xa3 failed <0/1> sid x010400, did x010900, oxid xffff SCSI Chk Cond - Unit Attn: Data(x2:x6:x3f:xe)

2014-01-14T20:20:42.409Z cpu23:514490)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd xa3 failed <3/2> sid x010400, did x010900, oxid xffff SCSI Chk Cond - Unit Attn: Data(x2:x6:x3f:xe)

2014-01-14T20:20:42.409Z cpu23:514490)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd xa3 failed <3/1> sid x010400, did x010900, oxid xffff SCSI Chk Cond - Unit Attn: Data(x2:x6:x3f:xe)

2014-01-14T20:20:42.409Z cpu23:514490)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd xa3 failed <3/8> sid x010400, did x010900, oxid xffff SCSI Chk Cond - Unit Attn: Data(x2:x6:x3f:xe)

2014-01-14T20:20:42.409Z cpu23:33320)lpfc: lpfc_scsi_cmd_iocb_cmpl:2145: 0:(0):3271: FCP cmd xa3 failed <3/0> sid x010400, did x010900, oxid xffff SCSI Chk Cond - Unit Attn: Data(x2:x6:x3f:xe)

2014-01-14T20:20:42.414Z cpu19:32824)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412fc00128c0, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:20:42.414Z cpu19:32824)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:20:42.414Z cpu19:32824)ScsiDeviceIO: 2337: Cmd(0x412fc00128c0) 0x1a, CmdSN 0x888e from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:25:42.423Z cpu21:32826)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412fc6702c80, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x4 0x0 0x0. Act:EVAL

2014-01-14T20:25:42.423Z cpu21:32826)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:25:42.423Z cpu21:32826)ScsiDeviceIO: 2337: Cmd(0x412fc6702c80) 0x1a, CmdSN 0x88aa from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x4 0x0 0x0.

2014-01-14T20:30:42.432Z cpu21:32826)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412fc42be600, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:30:42.432Z cpu21:32826)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:30:42.432Z cpu21:32826)ScsiDeviceIO: 2337: Cmd(0x412fc42be600) 0x1a, CmdSN 0x88c2 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:32:07.873Z cpu11:32816)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412e8a0d9ec0, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:32:07.873Z cpu11:32816)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:32:07.873Z cpu11:32816)ScsiDeviceIO: 2337: Cmd(0x412e8a0d9ec0) 0x1a, CmdSN 0x88cc from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:32:07.891Z cpu11:32816)ScsiDeviceIO: 2337: Cmd(0x412e8a0d9ec0) 0x1a, CmdSN 0x88cd from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:35:42.441Z cpu19:32824)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412fc6531c00, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:35:42.441Z cpu19:32824)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:35:42.441Z cpu19:32824)ScsiDeviceIO: 2337: Cmd(0x412fc6531c00) 0x1a, CmdSN 0x88db from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

2014-01-14T20:40:42.447Z cpu12:32817)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412fc23d6200, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x2 0x3a 0x0. Act:EVAL

2014-01-14T20:40:42.447Z cpu12:32817)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:40:42.447Z cpu12:32817)ScsiDeviceIO: 2337: Cmd(0x412fc23d6200) 0x1a, CmdSN 0x88f1 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x2 0x3a 0x0.

2014-01-14T20:45:42.457Z cpu19:32824)NMP: nmp_ThrottleLogForDevice:2321: Cmd 0x1a (0x412fc36e9240, 0) to dev "mpx.vmhba32:C0:T0:L0" on path "vmhba32:C0:T0:L0" Failed: H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0. Act:EVAL

2014-01-14T20:45:42.457Z cpu19:32824)WARNING: NMP: nmp_DeviceRequestFastDeviceProbe:237: NMP device "mpx.vmhba32:C0:T0:L0" state in doubt; requested fast path state update...

2014-01-14T20:45:42.457Z cpu19:32824)ScsiDeviceIO: 2337: Cmd(0x412fc36e9240) 0x1a, CmdSN 0x8907 from world 0 to dev "mpx.vmhba32:C0:T0:L0" failed H:0x7 D:0x0 P:0x0 Possible sense data: 0x0 0x0 0x0.

 

 

Attached some pictures

 

Latency 1: this is a server using VMDKs only, no RDMs.

Latency 2: this server has an RDM attached (scsi1:0); the latency is the same.

 

I've also updated the HBA firmware on host 2 to 2.01a12, which is the latest one available for the LPe12000; this again has made no difference.

 

The SAN management controllers are also running the latest firmware available.

 

 

(Attached: latency1.JPG, latency2.JPG, hba.JPG, p2000_management.JPG, p2000_vdisk_overview.JPG)

 

Any help would be greatly appreciated please.

I can't change directory in ESX 5.5


Hello,

I'm logged in as the root account with administrator privileges and trying to access the directory /vmfs/volumes/datastore1, but I'm getting the error message: Permission Denied.
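In case it helps with troubleshooting, this is what I was planning to run next to confirm how the datastore is mounted (a minimal sketch; the UUID printed by the first command would replace the placeholder in the second):

# list mounted filesystems with their UUID paths, then change into the datastore by UUID
esxcli storage filesystem list
cd /vmfs/volumes/<datastore-UUID>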

Error: no more space for virtual disk. Two 50TB VMDKs into one 100TB volume, VM compatibility 9 or 11 with Win2012 Server


I have a VM that is currently under 5.1. It has 3 disks mounted: 30 GB, 50 TB, and 50 TB. Each VMDK is below the imposed 62 TB limit. The datastore they are stored on has 117.3 TB free. Inside the VM I have spanned the two 50 TB volumes to get one 100 TB volume.

 

I have been transferring data to the one larger volume (Windows now sees just the one volume, but Disk Management shows the two 50 TB disks). I get an error that the VMDK is full when it stands at 49.15 TB of used space.

 

"There is no more space for virtual disk RF-DATA_1.vmdk. You might be able to continue this session by freeing disk space on the relevant volume, and clicking Retry. Click Cancel to terminate this session."

 

I know that there is a virtual disk size limit of 62 TB for virtual machines; with the volumes being 50 TB, I am below that limit.

 

There are currently no snapshots for this taking up any space.

 

It seems the issue is somehow tied to the robocopy transfer from another NFS share on our network to this VM, and to the VM not spilling the data over to the other VMDK as Windows indicates it should.
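For reference, this is the kind of check I intended to run from the host to compare what the datastore and the flat files actually report (a sketch; the datastore and VM folder names are placeholders):

# free space on the datastore vs. the on-disk size of each data file
df -h /vmfs/volumes/MyDatastore
ls -lh /vmfs/volumes/MyDatastore/MyVM/*-flat.vmdk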

 

Can someone possibly shed some light on this for me and help me with a solution to fixing this spanning issue between the two VMDKs?

 

Here is a link to an older blog post on the idea; however, it doesn't address the issue I am having.

Is Spanning VMDKs Using Windows Dynamic Disks a Good Idea? - VMtoday

I understand the issues presented with such a large volume in a single VM. I have a specific use case where all the files need to be co-located due to HPC Software limitations.

Cannot Shrink/Convert 2048GB VMDK


Hello everyone, I have several 2 TB (2048 GB) thick VMDK files that I need to shrink or convert to thin/growable. The actual file sizes are 2,147,483,647 KB.

 

I have tried VMware Standalone Converter, but received the error "unable to obtain hardware information for the selected machine". I've tested this with smaller VMDKs and didn't have an issue.

 

I also tried the vmware-vdiskmanager.exe that's included with VMware Workstation Pro, but received the error "failed to defragment: The specified file is not a virtual disk (0x3ebf)", and then the same error when trying to convert to thin/growable. I tested the same commands on a smaller VMDK and it worked without a problem.

 

I just seem to be really stuck because of the 2TB size. Nothing seems to be able to shrink or convert them.
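One thing I have not tried yet, and may attempt next, is cloning a disk to a thin copy directly on an ESXi host with vmkfstools rather than the Workstation-side tools (a sketch only; the paths are placeholders and it needs enough free space on the datastore for the clone):

# clone the thick 2 TB disk to a new thin-provisioned copy
vmkfstools -i /vmfs/volumes/MyDatastore/MyVM/MyDisk.vmdk -d thin /vmfs/volumes/MyDatastore/MyVM/MyDisk-thin.vmdk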

 

Any help would be greatly appreciated. Thank you!

Extremely high latency when migrating from local datastore to shared datastore.


Hi guys, I hope you can help me. Sorry for my English, by the way; I'm not a native speaker.

 

Let's begin!

 

I have:

 

1 vCenter
1 host
1 distributed switch (with one port group for the management network / IP storage of the ESXi host)
1 standard switch (empty)
1 FreeNAS server to provide iSCSI LUNs
1 Microsoft iSCSI target server to provide iSCSI LUNs

 

 

When I try to migrate VMs between shared datastores, or from a shared datastore to local, everything is fine. The problem appears when I try to migrate VMs from local to a shared datastore. All the datastores go down (all paths down) and come back up again, and I receive this error:

 

 

" Error caused by file /vmfs/volumes/volumenID/VMDirectory/Disk.vmdk "


When I try to migrate VMs from local to the FreeNAS iSCSI datastore, it fails immediately.
When I do the same from local to the Microsoft iSCSI datastore, it takes a very long time to migrate the VM and gives me the same all-paths-down and uplinks-down errors, but the migration does not fail.
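For reference, this is the check I was planning to run to catch the all-paths-down events in the host logs (a minimal sketch):

# look for APD / path-down messages around the time of the failed migration
grep -iE "APD|path.*down" /var/log/vmkernel.log | tail -n 50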

 

 

 

I'll give you some screenshots to see the errors.

 

Thanks a lot!

 

EDIT: I have noticed extremely high latency when I try to migrate from local to shared datastores: 2000 ms on average, with peaks of 50,000 ms (see my reply below for more info).


Datastore Clusters and Different versions of ESXi hosts


I'm looking at building out a new ESXi cluster in our datacenter, and I was thinking of going with version 6 on the new hosts, but I was wondering if I could connect to datastore clusters that are currently connected to another ESXi cluster running version 5.5. I saw in the documentation that it will work with multiple versions of 5.x, but I wanted to make sure it would also work between 5.5 and 6. Thanks!

How to make the VMware iSCSI initiator behave like the Linux or Microsoft iSCSI initiator?


Hi, I am a newbie with VMware and I am trying to connect an iSCSI target (jscsi.org) in vCenter through its initiator. I can already do this through NFS when I mount with the open-iscsi initiator and the NFS protocol. But for some reason the VMware iSCSI initiator does not behave the same way as the Linux open-iscsi initiator. I can get as far as partitioning the disk, but VMware cannot format it (image attached).

 

1. Do you guys know how to configure the VMware iSCSI initiator to behave like the open-iscsi Linux initiator or the Microsoft initiator?

2. Where can I see more log messages about the refused connection? I am looking for errors in /var/log/vmkernel.log.

3. I attached Wireshark and I can see that the VMware initiator makes more than one connection login, whereas the open-iscsi and Microsoft initiators make just one. Maybe it is trying to open several connections in the iSCSI session? How can I check this (see the check sketched after this list)? When I allowed more connections on my iSCSI target (jscsi.org), it got as far as finishing the partition table on the disk, but it still cannot format.
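Regarding question 3, the check sketched here is what I had in mind for counting connections on the ESXi side (vmhba33 is a placeholder for the software iSCSI adapter name):

# show the iSCSI sessions and the connections inside each session
esxcli iscsi session list --adapter=vmhba33
esxcli iscsi session connection list --adapter=vmhba33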

 

(Attached: image.png)

Datastore to VVol migration problem


I have 13 VMs that I cannot storage-migrate from a datastore to a VVol, and I cannot find anything that these VMs have in common that might be causing this.

Has anyone else seen this before? I have a call open with support, but they can't figure it out either.
Anyone else see this before. I have a call open with support, but they cant figure it out either.

 

This is the error I receive.

Failed to create one or more destination disks.

A fatal internal error occurred. See the virtual machine's log for more details.

Failed waiting for data. Error 195887107. Not found.

Cannot add VASA provider in vSphere 6 like I could in vSphere 5


I have two vSphere environments: one on vSphere 5 (production) and one on vSphere 6 (test). The production environment has no issue connecting to our VNX5200 VASA service URL at https://vnxcs0:5989/vasa/services/vasaService. However, the vSphere 6 test environment I am evaluating before making it production is not going so well. I go to add the provider in vSphere 6 and I get a security alert:

Unable to verify the authenticity of the specified host.

Issuer:CN=emcnas_i0,O=VNX Certificate Authority

Do you wish to proceed connecting anyway?

 

 

Choose "Yes" if you trust the host. The above information will be remembered until the host is removed from the inventory.

 

I choose Yes; it says it is registering, but then it comes back with "The provider certificate is invalid. It is either empty, malformed, expired, not yet valid, revoked, or fails host name verification."

 

I did NOT have this problem on vSphere 5. Any ideas? I explicitly clicked Yes to trust the certificate, but it seems like it errors out on the certificate anyway.

HP ProLiant local logical disks not showing up. Please help


Pretty new to this whole thing, but have a decent IT background and do minor stuff in VMware at work.

 

I have an HP DL380 that I installed ESXi 6.0 on. This server has a 120 GB HDD for the OS. It has 3 additional 1 TB HDDs that are configured as RAID 0.

 

When I log in to the server through the vSphere Client, it does not detect the 3 TB logical drive.

 

In Storage Adapters I have the following:

     631xESB/632xESB IDE Controller
          vmhba0               ---           CD-ROM drive
          vmhba32              ---           ???
     Smart Array P400
          vmhba1               ---           120GB OS HDD

 

I have read that local drives show up as remote. But that should not prevent them from showing up here… correct?

 

I have also read that if the drives have any preexisting data on them, they may not show up. The drives are new and have been formatted.
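In case it is useful, this is what I planned to run from the host shell to force a rescan and see whether the logical volume shows up at the device level at all (a minimal sketch):

# rescan all adapters, then list every device the host sees along with its reported size
esxcli storage core adapter rescan --all
esxcli storage core device list | grep -iE "Display Name|Size"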

 

What am I missing? It can’t possibly be this complicated.
