Channel: VMware Communities : Discussion List - vSphere™ Storage

Obscure VMFS behaviour - via iSCSI to a QNAP


This weekend I was busy with a very obscure VMFS recovery.
An iSCSI volume on a QNAP storage box had about 2/3 of its vmdk files damaged.
The flat.vmdks that appeared to be damaged had actually been moved 224 sectors towards the end of the physical VMFS volume.
So we could fix some of them by adding another vmdk of the same size to the VM.
Then we booted the VM into Linux and cloned the disks like this:
dd if=/dev/sda of=/dev/sdb bs=512 skip=224
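For anyone trying to reproduce this, the offset can be double-checked before cloning (a minimal sketch, assuming the guest disk is MBR-partitioned):

# Read the sector at offset 224 and inspect its last bytes
dd if=/dev/sda bs=512 skip=224 count=1 2>/dev/null | hexdump -C | tail -2
# A valid MBR/boot sector ends with the signature bytes 55 aa
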
This obscure offset is something I have never seen before.
The VMFS volume itself was properly aligned, with a 1MB offset for the start of the VMFS partition.
Has anybody seen something like this before, or does anyone have a theory that would explain such behaviour?
Additional info: no RAID5 problems, no unexpected power failures; the behaviour appeared without any apparent problems or warnings beforehand.
Suggestions or theories are welcome.
Recovery results so far: the most important SQL servers were recovered; we still have some problems with Windows 2008 and Windows 2012 boot disks that are readable but no longer bootable.
Ulli


vSphere 6u2 and EMC UnityVSA and VVOLs -> Protocol Endpoints are not created -> Certificate problem: ESXi doesn't like UnityVSA (updated)


Hi all,

 

I've just installed UnityVSA from EMC (Version 4.0.1) in my lab with vSphere 6u2.

 

I've created an old-fashioned VMFS datastore on a LUN and an NFS datastore on a share. Everything is working fine.

 

Then, using Unisphere, I created a datastore for block-based VVOLs (via iSCSI) and one for file-based VVOLs (via NFS). Everything went fine in the UnityVSA.

 

Then I added the Storage Provider in vCenter - this also worked without error.

 

BUT: there are no PEs created on the vSphere side.

 

The PEs are present on the UnityVSA (output from uemcli executed directly on the UnityVSA):

 

18:25:40 service@VIRT1636W2LHJW-spa spa:~> uemcli -d localhost -u admin -securePassword /stor/prov/vmware/pe show

Password:

Storage system address: localhost

Storage system port: 443

HTTPS connection

 

 

1:    ID                   = rfc4122.b835c8dd-5cc2-48d8-9bb6-4e1483c35ecf

      Name                 = nas_pe_3

      Type                 = NAS

      VMware UUID          = rfc4122.b835c8dd-5cc2-48d8-9bb6-4e1483c35ecf

      Export path          = 192.168.4.33:/rfc4122.3c70af02-00ad-4d9e-be70-16fe20fe07e7

      IP address           = 192.168.4.33

      WWN                  =

      Default SP           =

      Current SP           =

      NAS Server           = nas_1

      VMware NAS PE server = PES_1

      VVol datastore       = res_9

      Host                 =

      Health state         = OK (5)

      Health details       = "The protocol endpoint is operating normally. No action is required."

 

 

2:    ID                   = rfc4122.60060160-b2ed-2021-a932-d2f0067c0253

      Name                 = scsi_pe_13

      Type                 = SCSI

      VMware UUID          = rfc4122.60060160-b2ed-2021-a932-d2f0067c0253

      Export path          =

      IP address           =

      WWN                  = 60:06:01:60:B2:ED:20:21:A9:32:D2:F0:06:7C:02:53

      Default SP           = SPA

      Current SP           = SPA

      NAS Server           =

      VMware NAS PE server =

      VVol datastore       =

      Host                 = Host_1

      Health state         = OK (5)

      Health details       = "The protocol endpoint is operating normally. No action is required."

 

 

3:    ID                   = rfc4122.60060160-b2ed-2021-a2df-e66d9f94b2d1

      Name                 = scsi_pe_14

      Type                 = SCSI

      VMware UUID          = rfc4122.60060160-b2ed-2021-a2df-e66d9f94b2d1

      Export path          =

      IP address           =

      WWN                  = 60:06:01:60:B2:ED:20:21:A2:DF:E6:6D:9F:94:B2:D1

      Default SP           = SPA

      Current SP           = SPA

      NAS Server           =

      VMware NAS PE server =

      VVol datastore       =

      Host                 = Host_2

      Health state         = OK (5)

      Health details       = "The protocol endpoint is operating normally. No action is required."

 

 

4:    ID                   = rfc4122.60060160-b2ed-2021-b146-73a102b508cf

      Name                 = scsi_pe_15

      Type                 = SCSI

      VMware UUID          = rfc4122.60060160-b2ed-2021-b146-73a102b508cf

      Export path          =

      IP address           =

      WWN                  = 60:06:01:60:B2:ED:20:21:B1:46:73:A1:02:B5:08:CF

      Default SP           = SPA

      Current SP           = SPA

      NAS Server           =

      VMware NAS PE server =

      VVol datastore       =

      Host                 = Host_3

      Health state         = OK (5)

      Health details       = "The protocol endpoint is operating normally. No action is required."

 

 

19:02:21 service@VIRT1636W2LHJW-spa spa:~>

 

Also, the appropriate LUNs for the block-based VVOLs are created and provisioned (snippet from the vmkernel.log of one of the ESXi hosts; the iSCSI LUNs are available via two paths):

 

2016-09-09T18:38:00.810Z cpu3:32925)ScsiPath: 604: Path vmhba33:C0:T3:L1023 is a VVol PE (ver:6)

2016-09-09T18:38:00.811Z cpu1:32924)ScsiPath: 604: Path vmhba33:C0:T2:L1023 is a VVol PE (ver:6)

 

I can create the VVOL datastore from the vSphere Web Client, and the appropriate Storage Containers (i.e. "Datastores" from the UnityVSA) are presented. But as there are no PEs, the datastores are inaccessible and show a size of 0 bytes (output from PowerCLI):

 

PowerCLI C:\> get-datastore vvol*

Name                               FreeSpaceGB      CapacityGB

----                               -----------      ----------

vvol-nfs-vsa-1                           0,000           0,000

vvol-iscsi-vsa-1                         0,000           0,000

PowerCLI C:\>

 

By the way, the vvold on the ESXi host is not running; it says that there is no VVOL configuration.

 

[/etc/init.d/vvold: /etc/init.d/vvold start, called by pid 43665]

[/etc/init.d/vvold: vvold max reserve memory set to 200]

2016-09-09T20:20:15.054Z Section for VMware ESX, pid=43696, version=6.0.0, build=4192238, option=Release

------ Early init logs start --------

2016-09-09T20:20:15.052Z info -[FFD82350] [Originator@6876 sub=Default] Successfully registered SIGHUP handler

2016-09-09T20:20:15.052Z info -[FFD82350] [Originator@6876 sub=Default] Successfully registered SIGPIPE handler

2016-09-09T20:20:15.052Z info -[FFD82350] [Originator@6876 sub=Default] Successfully registered SIGTERM handler

------ Early init logs end   --------

2016-09-09T20:20:15.054Z info vvold[FFD82350] [Originator@6876 sub=Default] Logging uses fast path: true

2016-09-09T20:20:15.054Z info vvold[FFD82350] [Originator@6876 sub=Default] The bora/lib logs WILL be handled by VmaCore

2016-09-09T20:20:15.054Z info vvold[FFD82350] [Originator@6876 sub=Default] Initialized channel manager

2016-09-09T20:20:15.055Z info vvold[FFD82350] [Originator@6876 sub=Default] Current working directory: /var/log/vmware

2016-09-09T20:20:15.055Z info vvold[FFDC3B70] [Originator@6876 sub=ThreadPool] Thread enlisted

2016-09-09T20:20:15.055Z info vvold[FFE04B70] [Originator@6876 sub=ThreadPool] Thread enlisted

2016-09-09T20:20:15.055Z info vvold[FFE45B70] [Originator@6876 sub=ThreadPool] Thread enlisted

2016-09-09T20:20:15.055Z info vvold[FFD82350] [Originator@6876 sub=ThreadPool] Thread pool on asio: Min Io, Max Io, Min Task, Max Task, Max Concurency: 2, 22, 2, 52, 2147483647

2016-09-09T20:20:15.055Z info vvold[FFD82350] [Originator@6876 sub=ThreadPool] Thread enlisted

2016-09-09T20:20:15.055Z info vvold[FFE86B70] [Originator@6876 sub=ThreadPool] Thread enlisted

2016-09-09T20:20:15.055Z info vvold[FFD82350] [Originator@6876 sub=Default] Syscommand enabled: true

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default] ReaperManager Initialized

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default] Initalized App with config file:/etc/vmware/vvold/config.xml

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default] Listening on port 8090 (NOT using SSL) using version 'vvol.version.version1'.

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default] Initializing SOAP tcp adapter

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default.HTTPService] Using default for nonChunkingAgents: 'VMware VI Client|VMware-client|VMware-client/3.*'

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default.HTTPService] Using default for agentsNeedingContentLength: 'VMware-client'

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default.HTTPService] Max buffered response size is 104857600 bytes

2016-09-09T20:20:15.056Z info vvold[FFD82350] [Originator@6876 sub=Default] enableChunkedResponses: true

2016-09-09T20:20:15.057Z info vvold[FFD82350] [Originator@6876 sub=Libs] UUID: Running in UW, but cannot verify vmk syscall version, giving up.

2016-09-09T20:20:15.057Z info vvold[FFD82350] [Originator@6876 sub=Libs] UUID: Valid gethostid routine. Value = A8C01704.

2016-09-09T20:20:15.058Z info vvold[FFD82350] [Originator@6876 sub=Default] Creating SOAP body handler for version 'vvol.version.version1'

2016-09-09T20:20:15.058Z info vvold[FFD82350] [Originator@6876 sub=SOAP-1] Created SOAP body handler for vvol.version.version1 (vvol/1.0)

2016-09-09T20:20:15.058Z info vvold[FFD82350] [Originator@6876 sub=Default] Creating SOAP body handler for internal version 'vvol.version.version1'

2016-09-09T20:20:15.058Z info vvold[FFD82350] [Originator@6876 sub=SOAP-2] Created SOAP body handler for vvol.version.version1 (internalvvol/1.0)

2016-09-09T20:20:15.066Z error vvold[FFD82350] [Originator@6876 sub=Default] VVold SI:main, no VVol config available, exiting

[/etc/init.d/vvold: /etc/init.d/vvold stopnomemclear, called by pid 43701]

[/etc/init.d/vvold: vvold stopped.]

[/etc/init.d/vvold: WaitVvoldToComeUp /var/run/vmware/.vmware-vvol.started created]

[/etc/init.d/vvold: vvold stopped after start!]

[/etc/init.d/vvold: /var/run/vmware/.vmware-vvol.started is not created]

[/etc/init.d/vvold: Successfully cleared vvold memory reservation]

 

With "esxcli storage vvol vasaprovider list" I don't get a Storage Provider listed - although it's registered in vCenter (and listed as online vie the Web Client).

 

I must be missing something, but at the moment I don't have any idea how to proceed. Any help is very much appreciated.

 

Best regards,

Christian

VMDK is larger than Max Size


 

I have read about the option of changing the block size on the datastore from 1MB to 2MB, but I only have 3 virtual machines here, and I am not willing to rebuild just yet.

 

 

Here is the error.

 

 

Create virtual machine 192.168.13.90: File Intel-OpenSuse/Intel-Ope-Suse.vmdk is larger than the maximum size supported by datastore Storage1

 

 

My datastore is ~135GB and it has ~106GB free; I have attached that screenshot. I had this VM up before, but I destroyed it.
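
In case it helps, the datastore's VMFS block size (which caps the maximum vmdk size) can be read directly on the host - a quick sketch, using the datastore name from the error above:

# Show VMFS version, block size, and free space for the datastore
vmkfstools -Ph /vmfs/volumes/Storage1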

 

 

Help understanding reclamation for thin provisioning


Hi,

Can someone confirm or correct my understanding of the potential areas that require disk reclamation in a vSphere environment? I also have a few questions. As far as I know, reclamation can happen at 2 levels:

1. at the guest OS level when thin provisioned vmdk's are used

2. at the esxi hypervisor level when the LUNs on the SAN are thin provisioned

 

Further clarification for #1... This becomes necessary when space is allocated within the VM and then deleted within the VM. To reclaim this, it is required to have VM hardware version 11, EnableBlockDelete=1 on the hosts, ESXi 6, thin-provisioned vmdks, and a guest OS that is capable of issuing UNMAP. But which guest OSes are capable? I've read some articles saying Windows 2008+ is capable; others say only Windows 2012 R2+. What about Linux? Lastly, is it required for ESXi 6.0 to have CBT disabled? What about 6.5?
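
For illustration, the in-guest trigger on Linux looks something like this (a sketch, assuming the prerequisites above are met; on Windows 2012 R2+ the rough equivalent is the Optimize-Volume cmdlet with -ReTrim):

# Release freed blocks on a mounted filesystem back to the thin vmdk
fstrim -v /
# (or mount with the "discard" option for continuous TRIM)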

 

Further clarification on #2... This becomes necessary when a VM is Storage vMotioned from one datastore to another, or when snapshots are consolidated. I know that starting with ESXi 5.0 U1 this became a manual process (vmkfstools or esxcli storage unmap)... is this still true as of ESXi 6.0 and 6.5?
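
The manual process I mean is along these lines (a sketch; datastore label assumed, reclaim unit optional):

# Reclaim dead space on a VMFS datastore (ESXi 5.5+ syntax)
esxcli storage vmfs unmap --volume-label=Datastore1 --reclaim-unit=200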

 

Thanks in advance

Cannot delete a vmdk file from datastore in the GUI or using CLI, getting an "invalid argument" error


Hi,

 

I cannot delete a vmdk file from my datastore. The file in question (S9_1-flat.vmdk) cannot be deleted using the vSphere Client or using CLI. I've attached the output from my SSH session:

 

/vmfs/volumes/50bb5d47-5677d7b4-cb59-90b11c1479d7/S9 # ls -al

drwxr-xr-x    1 root     root          3640 Aug 17 15:45 .

drwxr-xr-t    1 root     root          1540 Aug 17 14:59 ..

-rw-r--r--    1 root     root            27 Jul 10 09:15 S9-365b1e05.hlog

-rw-------    1 root     root     17179869184 Aug 17 15:45 S9-365b1e05.vswp

-rw-------    1 root     root     107374182400 Aug 18 11:56 S9-flat.vmdk

-rw-------    1 root     root          8684 Aug 17 15:46 S9.nvram

-rw-------    1 root     root           491 Aug 17 15:45 S9.vmdk

-rw-r--r--    1 root     root             0 Mar 21  2014 S9.vmsd

-rw-------    1 root     root          4172 Aug 17 15:45 S9.vmx

-rw-------    1 root     root             0 Aug 17 15:45 S9.vmx.lck

-rw-r--r--    1 root     root          3033 Aug 17 15:39 S9.vmxf

-rw-------    1 root     root          4068 Aug 17 15:45 S9.vmx~

-rwxrwxrwx    1 root     root     1099511627776 Jun 29 10:07 S9_1-flat.vmdk

-rw-------    1 root     root     586263035904 Aug 18 11:01 S9_2-flat.vmdk

-rw-------    1 root     root           520 Aug 17 15:45 S9_2.vmdk

-rw-------    1 root     root       8561355 Aug 15 19:44 vmmcores-1.gz

-rw-------    1 root     root       3581220 Aug 16 07:52 vmmcores-2.gz

-rw-------    1 root     root       2968111 Aug 16 07:53 vmmcores-3.gz

-rw-r--r--    1 root     root        227605 Aug 17 14:54 vmware-79.log

-rw-r--r--    1 root     root        227281 Aug 17 15:02 vmware-80.log

-rw-r--r--    1 root     root        232708 Aug 17 15:16 vmware-81.log

-rw-r--r--    1 root     root        226958 Aug 17 15:28 vmware-82.log

-rw-r--r--    1 root     root        235389 Aug 17 15:35 vmware-83.log

-rw-r--r--    1 root     root        230933 Aug 17 15:38 vmware-84.log

-rw-r--r--    1 root     root        236001 Aug 18 09:18 vmware.log

-rw-------    1 root     root      97517568 Aug 17 15:45 vmx-S9-911941125-1.vswp

/vmfs/volumes/50bb5d47-5677d7b4-cb59-90b11c1479d7/S9 # rm -f S9_1-flat.vmdk

rm: can't remove 'S9_1-flat.vmdk': Invalid argument

 

(Note that the permissions on the file are currently set as 777 as I initially thought that this could be a permissions issue).

 

Another thing to add is that the "S9_1.vmdk" descriptor file has gone missing, and if I attempt to create a new hard disk in Settings I get the message "S9_1.vmdk already exists". What is the best way to remove "S9_1.vmdk", which seems to be hidden, and also remove "S9_1-flat.vmdk"? Will I have to delete the entire datastore?
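
One thing worth checking is whether another host still holds a lock on the file - a sketch using the path from above:

# Dump the file's VMFS lock metadata; a non-zero owner MAC address
# identifies the host holding the lock
vmkfstools -D /vmfs/volumes/50bb5d47-5677d7b4-cb59-90b11c1479d7/S9/S9_1-flat.vmdk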

 

Any assistance would be much appreciated.

bnxtnet HWRM Hardware Resource Manager errors, ESXi 6.5


Seeing an issue attempting to connect to Dell SC storage via BCM57402 NICs using bnxtnet driver version 20.6.302.0-1OEM.650.0.0.4598673 on FW version 20.06.04.03 (boot code 20.06.77).

 

Dell branded BCM57402 adapters:

VID: 14e4

DID: 16d0

SVID: 14e4

SSID: 4020

 

Using in conjunction with software iSCSI adapter, 1 subnet, port binding enabled, 2 vmkernel adapters.

 

Disabled TSO and LRO, however the issue remains. Originally I was using driver version bnxtnet 20.2.16.0, then updated to 20.6.34.0 to align with the HCL for FW version 20.06.77, however the issue remained. Updated the driver to 20.6.302.0 and the issue still remains; vmkernel logs with bnxtnet debug logging enabled show:

 

2017-08-21T15:18:23.037Z cpu70:65725)WARNING: bnxtnet: hwrm_send_msg:201: [vmnic9 : 0x410029a96000] HWRM cmd error, cmd_type 0x90(HWRM_CFA_L2_FILTER_ALLOC) error 0x4(RESOURCE_ALLOC_ERROR) seq 2393

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: bnxtnet_uplink_stop_rxq: 358 : [vmnic9 : 0x410029a96000] RXQ 1 stopped

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: rxq_quiesce: 391 : [vmnic9 : 0x410029a96000] host stop rxq 1

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: uplink_rxq_free: 639 : [vmnic9 : 0x410029a96000] uplink request to free rxq 1

2017-08-21T15:18:23.037Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic9 : 0x410029a96000] HWRM send cmd (type: 0x41(HWRM_VNIC_FREE) seq 2394)

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic9 : 0x410029a96000] HWRM completed cmd (type: 0x41(HWRM_VNIC_FREE) seq 2394)

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: bnxtnet_rxq_free: 1140 : [vmnic9 : 0x410029a96000] attempt to free rxq 1

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic9 : 0x410029a96000] HWRM send cmd (type: 0x51(HWRM_RING_FREE) seq 2395)

2017-08-21T15:18:23.038Z cpu54:66284)bnxtnet: bnxtnet_process_cmd_cmpl: 2126 : [vmnic9 : 0x410029a96000] HWRM cmd (type 0x20 seq 2395) completed

2017-08-21T15:18:23.038Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic9 : 0x410029a96000] HWRM completed cmd (type: 0x51(HWRM_RING_FREE) seq 2395)

2017-08-21T15:18:23.039Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic9 : 0x410029a96000] HWRM send cmd (type: 0x61(HWRM_RING_GRP_FREE) seq 2396)

2017-08-21T15:18:23.039Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic9 : 0x410029a96000] HWRM completed cmd (type: 0x61(HWRM_RING_GRP_FREE) seq 2396)

2017-08-21T15:18:23.039Z cpu70:65725)bnxtnet: bnxtnet_rxq_free: 1192 : [vmnic9 : 0x410029a96000] freed rxq 1 successfully

2017-08-21T15:18:23.759Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic8 : 0x4100299fa000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40627)

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic8 : 0x4100299fa000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40627)

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic8 : 0x4100299fa000] driver private stats size: 40384

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic8 : 0x4100299fa000] requested stat buf size is 40385

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic11 : 0x41000f07a000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36420)

2017-08-21T15:18:23.760Z cpu70:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic11 : 0x41000f07a000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36420)

2017-08-21T15:18:23.761Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic11 : 0x41000f07a000] driver private stats size: 40384

2017-08-21T15:18:23.761Z cpu70:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic11 : 0x41000f07a000] requested stat buf size is 40385

2017-08-21T15:18:23.972Z cpu70:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic8 : 0x4100299fa000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40629)

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic8 : 0x4100299fa000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 40629)

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic8 : 0x4100299fa000] driver private stats size: 40384

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic8 : 0x4100299fa000] requested stat buf size is 40385

2017-08-21T15:18:23.972Z cpu65:65725)bnxtnet: hwrm_send_msg: 151 : [vmnic11 : 0x41000f07a000] HWRM send cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36422)

2017-08-21T15:18:23.973Z cpu65:65725)bnxtnet: hwrm_send_msg: 209 : [vmnic11 : 0x41000f07a000] HWRM completed cmd (type: 0x18(HWRM_FUNC_QSTATS) seq 36422)

2017-08-21T15:18:23.973Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get_len: 2410 : [vmnic11 : 0x41000f07a000] driver private stats size: 40384

2017-08-21T15:18:23.973Z cpu65:65725)bnxtnet: bnxtnet_priv_stats_get: 2449 : [vmnic11 : 0x41000f07a000] requested stat buf size is 40385

 

At this point I'm just curious whether anyone has run into this, particularly the HWRM_CFA_L2_FILTER_ALLOC errors. Per https://reviews.freebsd.org/file/data/jhjnayad4lkbrjvsspn3/PHID-FILE-lbmmlmi2bjqh54xmpwsd/D6555.id16878.diff the issue appears possibly related to NIC FW, however currently Dell only has version 20.06.04.03 (boot code 20.06.77) available. Excerpt from freebsd.org indicating HWRM is FW-related:

 

The Hardware Resource Manager (HWRM) manages various hardware resources inside the chip. The HWRM is implemented in firmware, and runs on embedded processors inside the chip.

 

Connectivity is established, however we are seeing storage-related issues on the array and in the guests. Any advice is welcome, and thank you in advance.
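
For anyone comparing versions, the driver/firmware pairing can be confirmed on the host like this (vmnic name taken from the logs above):

# Show driver and firmware versions for the affected NIC
esxcli network nic get -n vmnic9
# Confirm the installed bnxtnet driver VIB
esxcli software vib list | grep bnxtnet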

SCSI LUN id query


In the past there were issues observed on ESXi hosts due to LUN ID mismatches: when the LUN ID is not uniform across the hosts in a cluster, it causes problems with RDM vMotion and multipathing.

 

If a device is presented to a set of hosts where, for example, a few hosts see the device as LUN1 and other hosts see it as LUN2, then problems arise. So we have to make the presentation uniform, so that all hosts see that particular device with the same LUN number.

 

My query now is: I have two storage boxes, and each storage box starts numbering its LUNs from LUN0. If I present a LUN from storage1 to the cluster as LUN0 and present a new LUN from the second storage box also as LUN0, we now have two devices with LUN0 from two different storage boxes. Is this supported?

 

What would be the implications on the hosts of having multiple devices with the same LUN number?
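
For what it's worth, ESXi tracks devices by their NAA identifier rather than by LUN number, and the mapping can be compared across hosts (a sketch):

# Show runtime names (vmhbaX:C0:T0:L0) alongside the NAA device IDs
esxcli storage core path list | grep -E "Runtime Name|Device:"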

Live migration of a VM between VVOL datastores is slow


I am doing live migration of VMs between VVOL datastores, and it's very slow. I checked the logs and found that it calls createNewVirtualVolume on the destination datastore. Then it issues a VAAI XCOPY, which fails. Then it calls the VASA method to copy the data. I think the reason the migration is slow is this failed VAAI attempt. But I'm confused, because the blogs say:

"

XCOPY – With VVOLs, ESX will always try to use array based VVOLs copy mechanism defined using copyDiffsToVirtualVolume or CloneVirtualVolume primitives. If these are not supported, it will fall back to software copy.

"

My question is: why didn't it issue the copyDiffsToVirtualVolume or CloneVirtualVolume primitives? Does live storage migration not support them?


Cannot deploy from template -- error caused by file


 

Hello all,

 

 

I have a fairly simple setup with 4 ESX servers connected to the same NFS datastore on a NetApp storage system. I have a Debian template that I am deploying virtual machines from across all hosts. All hosts get their VMs deployed normally without any problems, except for one... let's call it host A. On host A, when I try to deploy a VM from the template, it takes a long time and finally errors out with the following message.

 

 

Note: all hosts are 4.0 and are being managed from a vCenter Server.

 

 

"Error caused by file Linux90/Linux90.vmdk"

 

 

Couple of observations:

 

 

  • The file in question, Linux90.vmdk, actually belongs to the template

 

 

  • When I monitor the progress of the VM deployment by watching the /vmfs/volumes/.../ directory, I can see that files are being created for the new VM; however, something goes wrong towards the end of the procedure that causes the deployment to fail

 

 

What files can I look at in order to find the possible cause of this failure? Has anyone seen this before? Please note that there are other hosts where the template deploys without any problems, so I suspect the problem is local to host A.

 

 

 

 

 

vsantraces folder in datastore


I'm looking to clean up some underutilized datastores, but I am now seeing a vsantraces folder. Is there something that needs to be reconfigured before deleting this datastore? I unfortunately did not set up vSAN; is there a location I can check to see the vSAN configuration, if it has been implemented by another administrator?
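
A quick way to check from the host itself (a sketch; the trace command exists on 6.x builds):

# Is this host part of a vSAN cluster?
esxcli vsan cluster get
# Where are vSAN traces currently being written?
esxcli vsan trace get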

 

Thanks!

Datastore Sizing on NFS


I'm using VMware on NetApp NFS All Flash FAS, where there is an option to autogrow volumes instead of creating a new volume and VMware datastore when I run out of space.

 

Traditionally, I've been using a standard 4TB datastore size.

 

Is it better to keep a standard datastore size of 4TB and create new volumes when I run out of space, or should I use autogrow and let the volumes become different sizes as autogrow grows them?

vSphere datastore details


Hello all,

 

I would like to verify something regarding the datastores view in vCenter.

 

There is one LUN that is presented to 2 clusters.

 

One cluster is 10 hosts with 100VMs and the other cluster is 8 hosts with 56 VMs.

 

The LUN size and remaining capacity are consistent if you look at the datastore from both clusters.
But why is the provisioned space different between the 2 clusters, to the point that if we add the two provisioned-space values, the result is more than the LUN size?

 

Thanks!

Datastore import failure


Hi guys,

 

Today I've been working on a RAID reconfiguration (added one more LUN on local storage), but after the ESXi host reboot one of the datastores disappeared.

 

Tried this article but with no luck:

Recreating a missing VMFS datastore partition in VMware vSphere 5.x and 6.x (2046610) | VMware KB

 

ESXi version: 6.0.0 (build 3620759)

This datastore has been created from local disks

 

Does anyone have any idea how to fix this?
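
For reference, the first step from that KB is checking the partition table on the device - a sketch using the device ID from the log below:

# Inspect the partition table on the affected device
partedUtil getptbl /vmfs/devices/disks/naa.600605b003b170f020721bf665b2d1c2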

 

VMkernel.log:

2017-09-25T20:55:28.017Z cpu2:32979)NMP: nmp_ThrottleLogForDevice:3286: Cmd 0x1a (0x439d803b1840, 0) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE

2017-09-25T20:55:28.017Z cpu2:32979)ScsiDeviceIO: 2651: Cmd(0x439d803b1840) 0x1a, CmdSN 0x1042 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T20:55:28.017Z cpu2:32979)NMP: nmp_ThrottleLogForDevice:3286: Cmd 0x85 (0x439d803b1840, 34273) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2017-09-25T20:55:28.017Z cpu2:32979)ScsiDeviceIO: 2651: Cmd(0x439d803b1840) 0x4d, CmdSN 0xa from world 34273 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2017-09-25T20:55:28.017Z cpu2:32979)NMP: nmp_ThrottleLogForDevice:3286: Cmd 0x1a (0x439d803b1840, 34273) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE

2017-09-25T20:55:28.017Z cpu2:32979)ScsiDeviceIO: 2651: Cmd(0x439d803b1840) 0x1a, CmdSN 0xb from world 34273 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T20:55:28.018Z cpu2:32979)ScsiDeviceIO: 2651: Cmd(0x439d803b1840) 0x1a, CmdSN 0x1048 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T20:55:28.018Z cpu2:32979)NMP: nmp_ThrottleLogForDevice:3286: Cmd 0x85 (0x439d803b1840, 34273) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2017-09-25T21:31:25.960Z cpu0:33080)ScsiDeviceIO: 8409: Get VPD 86 Inquiry for device "naa.600605b003b170f020721bf665b2d1c2" from Plugin "NMP" failed. Not supported

2017-09-25T21:31:25.960Z cpu3:32794)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x12 (0x439d80436540, 0) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE

2017-09-25T21:31:25.960Z cpu0:33080)ScsiDeviceIO: 7030: Could not detect setting of QErr for device naa.600605b003b170f020721bf665b2d1c2. Error Not supported.

2017-09-25T21:31:25.960Z cpu0:33080)ScsiDeviceIO: 7544: Could not detect setting of sitpua for device naa.600605b003b170f020721bf665b2d1c2. Error Not supported.

2017-09-25T21:31:25.965Z cpu5:32779)ScsiDeviceIO: 2651: Cmd(0x439d80436540) 0x1a, CmdSN 0x26 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:25.965Z cpu0:33080)ScsiDevice: 3835: Successfully registered device "naa.600605b003b170f020721bf665b2d1c2" from plugin "NMP" of type 0

2017-09-25T21:31:31.542Z cpu2:33271)ScsiDeviceIO: 2651: Cmd(0x439d80434a40) 0x1a, CmdSN 0x33 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:31.567Z cpu2:32793)ScsiDeviceIO: 2651: Cmd(0x439d80434a40) 0x1a, CmdSN 0x56 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.


2017-09-25T21:31:31.752Z cpu2:33198)NMP: nmp_ThrottleLogForDevice:3231: last error status from device naa.600605b003b170f020721bf665b2d1c2 repeated 10 times

2017-09-25T21:31:31.752Z cpu2:33198)ScsiDeviceIO: 2651: Cmd(0x439d80424d40) 0x1a, CmdSN 0xda from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:31.754Z cpu2:33198)ScsiDeviceIO: 2651: Cmd(0x439d80424740) 0x1a, CmdSN 0xe0 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.


2017-09-25T21:31:31.757Z cpu6:33080)FSS: 5334: No FS driver claimed device 'naa.600605b003b170f020721bf665b2d1c2:1': No filesystem on the device

2017-09-25T21:31:31.908Z cpu2:33198)ScsiDeviceIO: 2651: Cmd(0x439d80415940) 0x1a, CmdSN 0x148 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:31.963Z cpu2:33198)ScsiDeviceIO: 2651: Cmd(0x439d804100c0) 0x1a, CmdSN 0x1c5 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:31.963Z cpu6:33080)FSS: 5334: No FS driver claimed device 'naa.600605b003b170f020721bf665b2d1c2:1': No filesystem on the device

2017-09-25T21:31:32.158Z cpu2:33198)ScsiDeviceIO: 2651: Cmd(0x439d80419a00) 0x1a, CmdSN 0x23d from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:32.175Z cpu2:32793)ScsiDeviceIO: 2651: Cmd(0x439d80419a00) 0x1a, CmdSN 0x260 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:32.193Z cpu2:32793)ScsiDeviceIO: 2651: Cmd(0x439d8041c700) 0x1a, CmdSN 0x27e from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:32.307Z cpu2:32793)ScsiDeviceIO: 2651: Cmd(0x439d80428500) 0x1a, CmdSN 0x2c6 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:32.308Z cpu2:32793)NMP: nmp_ThrottleLogForDevice:3231: last error status from device naa.600605b003b170f020721bf665b2d1c2 repeated 20 times

2017-09-25T21:31:32.308Z cpu2:32793)ScsiDeviceIO: 2651: Cmd(0x439d80428680) 0x1a, CmdSN 0x2cc from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.


2017-09-25T21:31:32.317Z cpu2:33198)ScsiDeviceIO: 2651: Cmd(0x439d80429400) 0x1a, CmdSN 0x2f0 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:32.317Z cpu6:33080)FSS: 5334: No FS driver claimed device 'naa.600605b003b170f020721bf665b2d1c2:1': No filesystem on the device

Starting up services
2017-09-25T21:31:41.146Z cpu3:32771)ScsiDeviceIO: 2651: Cmd(0x439d8040bb40) 0x1a, CmdSN 0x3bd from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:41.272Z cpu5:32779)ScsiDeviceIO: 2651: Cmd(0x439d803c4d00) 0x1a, CmdSN 0x44f from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:41.272Z cpu5:32779)ScsiDeviceIO: 2651: Cmd(0x439d803c4e80) 0x1a, CmdSN 0x455 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:41.273Z cpu0:33632)FSS: 5334: No FS driver claimed device 'naa.600605b003b170f020721bf665b2d1c2:1': No filesystem on the device

2017-09-25T21:31:41.847Z cpu7:32775)ScsiDeviceIO: 2651: Cmd(0x439d803beb40) 0x1a, CmdSN 0x4b1 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:41.874Z cpu7:32798)ScsiDeviceIO: 2651: Cmd(0x439d803beb40) 0x1a, CmdSN 0x4d9 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:41.893Z cpu4:32772)ScsiDeviceIO: 2651: Cmd(0x439d803bc5c0) 0x1a, CmdSN 0x4f6 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

 

2017-09-25T21:31:42.047Z cpu7:32776)ScsiDeviceIO: 2651: Cmd(0x439d803dd180) 0x1a, CmdSN 0x580 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:42.047Z cpu0:33669)FSS: 5334: No FS driver claimed device 'naa.600605b003b170f020721bf665b2d1c2:1': No filesystem on the device

2017-09-25T21:31:44.631Z cpu1:34248)ScsiDeviceIO: 2651: Cmd(0x439d8037d900) 0x1a, CmdSN 0x598 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:44.687Z cpu0:33973)ScsiDeviceIO: 2651: Cmd(0x439d8037d900) 0x1a, CmdSN 0x5b5 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.


2017-09-25T21:31:44.857Z cpu6:34135)FSS: 5334: No FS driver claimed device 'naa.600605b003b170f020721bf665b2d1c2:1': No filesystem on the device

2017-09-25T21:31:45.048Z cpu4:32795)NMP: nmp_ThrottleLogForDevice:3231: last error status from device naa.600605b003b170f020721bf665b2d1c2 repeated 40 times

2017-09-25T21:31:45.048Z cpu4:32795)ScsiDeviceIO: 2651: Cmd(0x439d80379100) 0x1a, CmdSN 0x66d from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:45.048Z cpu4:32795)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x85 (0x439d80379100, 34339) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

2017-09-25T21:31:45.048Z cpu4:32795)ScsiDeviceIO: 2635: Cmd(0x439d80379100) 0x85, CmdSN 0x0 from world 34339 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2017-09-25T21:31:45.048Z cpu4:32795)ScsiDeviceIO: 2651: Cmd(0x439d80379100) 0x4d, CmdSN 0x1 from world 34339 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0.

2017-09-25T21:31:45.048Z cpu4:32795)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x1a (0x439d80379100, 34339) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0. Act:NONE

2017-09-25T21:31:45.048Z cpu4:32795)ScsiDeviceIO: 2651: Cmd(0x439d80379100) 0x1a, CmdSN 0x2 from world 34339 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:45.049Z cpu4:32785)ScsiDeviceIO: 2651: Cmd(0x439d80379100) 0x1a, CmdSN 0x673 from world 0 to dev "naa.600605b003b170f020721bf665b2d1c2" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x24 0x0.

2017-09-25T21:31:45.049Z cpu4:32785)NMP: nmp_ThrottleLogForDevice:3298: Cmd 0x85 (0x439d80379100, 34339) to dev "naa.600605b003b170f020721bf665b2d1c2" on path "vmhba2:C2:T2:L0" Failed: H:0x0 D:0x2 P:0x0 Valid sense data: 0x5 0x20 0x0. Act:NONE

FC WWPN vs WWNN


Hello there.

I've never paid much attention to these terms. It was enough for me to see WWNs on my Brocade SAN switches that correspond to the WWNs in the iLO/OA/AMM/IMM (or other IPMI) of the servers in order to create zones.

Well, now I'm trying to figure out why the ESXi console shows 2 WWNs instead of a single one for each HBA port. Okay, many sources say that there are 2 entities: WWPN and WWNN. The WWPN is a 64-bit physical address of an FC port (interface) and the WWNN is a 64-bit physical address of the HBA itself. Okay then, there should be 1 WWNN and 2 WWPNs. But there are 2 WWNNs and 2 WWPNs.

 

My IPMI thinks that server got single HBA with 2 ports:

mezzanine.png

My brocade SAN switches see only WWPNs (the image is from 2 switches):

brocadeview.png

But ESXi says that there are 2 different WWNNs (50:01:43:80:21:de:29:6d and 50:01:43:80:21:de:29:6f):

esxihba.png

sanfclist.png

How so? And why do so many sources say that a host (its HBA) has a single WWNN?

 

Is it possible that this mezzanine card has 2 processors and therefore has 2 WWNNs?
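
For reference, the per-adapter node and port names can be listed on the host itself (a sketch; presumably this is the same data as in sanfclist.png):

# List FC adapters with their NodeName (WWNN) and PortName (WWPN)
esxcli storage san fc list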

Thin on Thin?


Hi, up until now I have always used thick-provisioned LUNs on the storage array with thin-provisioned disks on the VMFS datastores. My question is: with the automated unmap feature in VMFS6, I assume both the vdisks and the array LUNs should be thin provisioned? Can anyone confirm whether this is the case for me, please? I haven't been able to find any articles that confirm the same.

Thanks in advance!

Steve


Thin provisioning doesn't seem to be working correctly


I am using ESXi 6.5 Update 1.

I'm not overly keen on the web-based client interface. I never used 6.0, so I haven't experienced the PC-based client, but a web-based UI has way more opportunities to skip back screens and lose input than an application. (Side bar, $0.02.)

 

I created a VM in the ESXi web client with a 40GB disk marked as "Thin Provision". I looked at WinSCP (I normally keep it running so I can see a folder on the ESXi host and one on my Windows workstation side by side, in case I need to move stuff) and in the datastore folder I saw that the <VM name>_flat.vmdk file was 41,943,040 KB big. I went to the command line via PuTTY, navigated to the datastore directory, and saw the same thing: 42949672960 bytes. Figuring I did something wrong, I went to the VM properties page and confirmed that the hard drive section said "Thin Provisioned: Yes" - which it did. I then went to the datastore browser, clicked on the vmdk file, and it said "0 B".

 

What am I missing here? A thin-provisioned drive means it's only supposed to consume space on the physical hard drive corresponding to how much space is used in the virtual disk. In Hyper-V, if you create a fresh dynamically expanding disk of whatever size, the initial physical size of the VHD will be on the order of 2 KB per 1GB of virtual disk capacity.
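
One thing worth trying on the ESXi shell: ls reports the logical, provisioned size of a flat file, while du reports the space actually allocated on VMFS - a sketch with a hypothetical path:

# Provisioned (logical) size - shows the full 40GB
ls -lh /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk
# Space actually allocated on VMFS - small for a fresh thin disk
du -h /vmfs/volumes/datastore1/MyVM/MyVM-flat.vmdk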

 

I went ahead and created a thick-provisioned disk of 40GB. In WinSCP and PuTTY, it's exactly the same size on the hard drive as the thin-provisioned one. The datastore browser does show that it's 40GB in size (as opposed to 0 B), so this differentiation makes sense, but it seems to defeat the purpose of saving physical hard drive space.

 

One of my biggest concerns is that if I have to move vmdk files off the ESXi host, I'd rather move only the consumed amount of disk space and not all the empty space as well, especially since WinSCP and SFTP add encryption overhead to the transfer, so gigabit Ethernet feels like 10BASE-T speeds. (No, I do not have vMotion or HA, and yes, I do have to move vmdks on and off the ESXi host fairly regularly.) With Hyper-V, if only 10GB of a 200GB virtual disk is used, then copying the VHD file only requires moving 10GB. With VMware, it requires moving the entire 200GB. This doesn't seem right.

 

Unless I'm just not seeing something, Microsoft's dynamically expanding drives get the concept of only consuming physical drive space as it's needed, whereas thin provisioning seems to be just an illusion, because whether you choose thin or thick, it still consumes the maximum size of the virtual drive on the hard disk.

 

If I'm wrong, please set me straight.

VAAI XCOPY not working on ESX 6.0, used to work with ESX 5.5


We are seeing that VMware VAAI XCOPY is not working with ESX 6.0. This used to work fine with ESX 5.5. The storage has not been updated; only the ESX server was updated to 6.0, and XCOPY stopped working. The same operation that works with an ESX 5.5 cluster on the storage node does not work with an ESX 6.0 cluster on the same storage node. So it does not seem to be an array vendor issue, as it clearly works with ESX 5.5 and not the newer ESX 6.0.

 

Does anyone have a clue as to why this could be?

 

I have made sure of the following:

  1. The source and destination volumes had the same block size, as I was cloning within the same VMFS volume
  2. The source file was not an RDM
  3. The disk was in flat disk format
  4. The VMFS datastore was created using the vSphere Web Client
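
Two host-side checks that may help rule things out (a sketch; the device ID placeholder needs replacing with a real naa ID):

# Confirm hardware-accelerated move (XCOPY) is still enabled host-wide
esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
# Check the per-device VAAI primitive status (hypothetical device ID)
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx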

datastore sharing between clusters within same vCenter


I have read several articles stating this is possible, such as "With vSphere 5.0 and HA can I share datastores across clusters?"

 

However, we presented a 100GB datastore to multiple clusters under one vCenter and only the first cluster is allowed to mount the datastore.

The other cluster wants to reformat it when attempting to add the datastore.

 

Is this no longer possible on vSphere 6?
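
One thing that might be worth checking on a host in the second cluster is whether it sees the LUN as an unresolved VMFS copy (a sketch):

# List VMFS volumes the host treats as snapshots/copies
esxcli storage vmfs snapshot list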

thanks

Using Windows Storage Server 2012 as iSCSI Target in Production Environment

$
0
0

My searches in this forum haven't yielded discussions on this.  Please forgive me if I missed previous posts.

Would you use a Windows Storage Server 2012 R2 box as your (only) iSCSI target / shared storage for your 3 hosts in a production system? We are looking at a new system with 3 Lenovo RD550 servers, each with 16 cores and 128GB RAM, and 1 Windows Storage Server 2012 R2 box with lots of SSD drives, connecting it all with dual 10G Ethernet for the iSCSI network.

The cost savings over even an inexpensive EMC array are pretty substantial. My concern is that Windows servers need care and feeding, while storage systems need to just run (and run, and run, and...). What do you think?

Thanks

Can someone explain VVOLs to me?


I've read a number of articles and watched some YouTube videos of people trying to explain VVOLs but they still are not clear to me.  All of the explanations I've seen seem to start with the premise that I know how to configure the storage array or that the storage array is very homogeneous.

 

Let's say you have a storage array with a mix of SSD, 15k, and SATA drives. Usually these would be associated with Gold, Silver, and Bronze storage or a similar scheme. How would you configure VVOLs for this scenario? From everything I've seen, there would be just one Storage Container and one Virtual Datastore, and there would be three Storage Policies, each associated with Gold, Silver, or Bronze. Is this correct? If it is, what gets configured in the Gold policy that makes it use only the SSDs? And if everything is in just one datastore, how will I know what percentage of my SSD space has been filled up?

 

https://ha.yellow-bricks.com/vvol%20diagram1.png
