Channel: VMware Communities : Discussion List - vSphere™ Storage
Viewing all 2328 articles

Missing target and paths


Hi!

 

I recently installed ESXi 6.0 on an IBM x3550 M4. The host was connected to an IBM V3700 storage array with a single SAS cable (to controller A).

Yesterday I added a second cable from the host's second port to controller B.

 

In the V3700 I assigned the newly appeared port to the ESXi host.

So now I have two ports in the V3700 assigned to the ESXi host (.....D73710 and .....D73711). The first shows as active, and the second port is inactive. "# nodes logged in" is 1 for both.

 

 

In the vSphere Client, under Configuration -> Storage Adapters, I have LSI2308_2 and it has two devices (both LUNs from the V3700), but only one target and two paths. Shouldn't I have two targets and four paths?

 

Running esxcli storage core path list gives me:

sas.500605b006d73710-sas.500507680305b09f-naa.60050763008086c28000000000000001
   UID: sas.500605b006d73710-sas.500507680305b09f-naa.60050763008086c28000000000000001
   Runtime Name: vmhba1:C0:T0:L0
   Device: naa.60050763008086c28000000000000001
   Device Display Name: IBM Serial Attached SCSI Disk (naa.60050763008086c28000000000000001)
   Adapter: vmhba1
   Channel: 0
   Target: 0
   LUN: 0
   Plugin: NMP
   State: active
   Transport: sas
   Adapter Identifier: sas.500605b006d73710
   Target Identifier: sas.500507680305b09f
   Adapter Transport Details: 500605b006d73710
   Target Transport Details: 500507680305b09f
   Maximum IO Size: 4194304

sas.500605b006d73710-sas.500507680305b09f-naa.60050763008086c28000000000000000
   UID: sas.500605b006d73710-sas.500507680305b09f-naa.60050763008086c28000000000000000
   Runtime Name: vmhba1:C0:T0:L1
   Device: naa.60050763008086c28000000000000000
   Device Display Name: IBM Serial Attached SCSI Disk (naa.60050763008086c28000000000000000)
   Adapter: vmhba1
   Channel: 0
   Target: 0
   LUN: 1
   Plugin: NMP
   State: active
   Transport: sas
   Adapter Identifier: sas.500605b006d73710
   Target Identifier: sas.500507680305b09f
   Adapter Transport Details: 500605b006d73710
   Target Transport Details: 500507680305b09f
   Maximum IO Size: 4194304

 

So it seems that I'm missing the second adapter (...........d73711). Any idea how to troubleshoot or fix this?
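As a first troubleshooting step, it may help to confirm whether ESXi sees the second HBA port at all and then force a rescan. A hedged sketch (the port IDs referenced in the comments are taken from the output above):

```shell
# List all storage adapters -- both SAS ports (...D73710 and ...D73711)
# should appear here if the driver has enumerated them.
esxcli storage core adapter list

# Force a rescan of all adapters so newly cabled targets/paths are discovered.
esxcli storage core adapter rescan --all

# Re-check the paths; with both ports cabled and mapped you would expect
# four paths (2 adapter ports x 2 LUNs) instead of two.
esxcli storage core path list | grep "Runtime Name"
```

If the second adapter port never shows up in the adapter list, the issue is likely below the ESXi layer: cabling, host mapping on the V3700, or HBA firmware.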


Got myself into problems with Extents


We have two RAID arrays on shared storage, using a P2000 on SAS.

RAID 6 being the main one with our vmware servers hosted on two physical servers.

There's a second RAID 5 array which is used by a 3rd physical server (Windows) which runs Veeam.

 

I recently added 3 disks:

2 added to the RAID 6.

1 added to the RAID 5.

 

I started with VMware, expanding the datastore and letting it take all the free space from that array.

Only that caused a problem: it didn't just take space from that array, it also created an extent and took the space from all 3 disks! (I should have read what it was doing, but at the time I had no idea that was even possible.)

 

So OK, I've found I can't undo that or remove the extent without recreating the datastore from scratch.

 

Solution underway:

I've shrunk the partition the Veeam server uses on the RAID 5 array so there is enough space left over to create a new VMFS partition, onto which I can temporarily move ALL our VMware servers before deleting the original datastore and recreating it.

However, it won't let me create the VMFS partition on the RAID 5 disk; I can only assume that's because it's already in use via the extent!


Is there any way of manually adding a VMFS partition to a disk that already has an extent on it?

I understand it's not supported to have multiple VMFS volumes on one LUN/disk, but it's only temporary.

The space is available, I just can't use it..
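It may be possible to carve the second partition by hand with partedUtil and then format it with vmkfstools; this is unsupported and risky, so treat the following only as a sketch. All device IDs and sector numbers below are placeholders that must be computed from your own getptbl output:

```shell
# Inspect the current partition table on the RAID 5 device
# (naa.xxxxxxxx is a placeholder for your device ID).
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx

# Rewrite the table with a second partition in the free space.
# CAUTION: setptbl replaces the WHOLE table, so the existing extent
# partition must be re-specified exactly as reported by getptbl;
# the start/end sectors here are made-up examples.
partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxx gpt \
  "1 2048 1000000000 AA31E02A400F11DB9590000C2911D1B8 0" \
  "2 1000002048 1500000000 AA31E02A400F11DB9590000C2911D1B8 0"

# Create a VMFS5 filesystem on the new second partition.
vmkfstools -C vmfs5 -S TempDatastore /vmfs/devices/disks/naa.xxxxxxxx:2
```

A mistake in the first partition line destroys the extent and everything on it, so make sure backups exist before attempting this.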

Server 2012 Storage Question (I'm thoroughly confused)


Hello everyone, and thank you in advance for any help or suggestions you can provide. I'll make this as brief as possible. I have a simple VM environment: 2 ESXi hosts, both running 5.5. I also have a Dell SAN with about 30TB of space. We currently have a physical Dell file server with about 5TB of on-board storage; we use it as a file server with many layers of NTFS security. We want to virtualize this server and do away with the physical one.

My plan is to build a 2012 R2 Datacenter Server VM and then migrate the data over manually. My problem is that I cannot get my head around how to configure the "storage" part of the 2012 server. Since the 2012 VM is going to live in a datastore on the SAN, do I just add a virtual disk to the server itself and make it 5TB or so? Or do I create a 5TB volume on the SAN, mark it as accessible from a single source only to avoid corruption, and then use the in-guest iSCSI initiator to mount the volume? Are there any vMotion issues with having a drive that big as part of a VM?

 

Thank you all!

Cannot unmount datastore in vSphere Web Client


I have a datastore that was removed using the vSphere Client, but for some reason it still shows in the vSphere Web Client. I can rename the datastore, but I cannot unmount it; that's not a listed option. How can I remove this datastore?
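If the entry on the host itself is real, it can usually be unmounted from the host's shell; a hedged sketch (the label "OldDatastore" is a placeholder):

```shell
# List mounted VMFS volumes to find the exact label/UUID.
esxcli storage filesystem list

# Unmount by label (or by UUID with -u instead of -l).
esxcli storage filesystem unmount -l OldDatastore
```

If the datastore no longer exists on any host, the Web Client may simply be showing a stale inventory entry; restarting the vCenter Web Client / inventory services is sometimes enough to clear it.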

vMotion => Shared storage corruption. Wrong setup?


Hi,

 

We have tried setting up an MSSQL 2008 R2 cluster on vSphere 5.0. We did this using VMDKs as shared storage.

All the ESXi hosts have access to the same software iSCSI target and thus to the same VMFS5 datastore. In that datastore there are 2 VMDKs for the boot drives of the VMs SQL-A and SQL-B (both VM hardware v8, btw). Those VMDKs are mounted at SCSI(0:0), which is an LSI Logic SAS controller without any SCSI bus sharing. Just two normal VMs like any other, and this works fine.

What is different is that we add another 4 shared VMDKs to both VMs and run SQL in a cluster. The 4 disks (Quorum, DTC, SystemDB, UserDB) were added to both VMs on SCSI(1:0) through SCSI(1:3), with their LSI Logic SAS controller set to physical bus sharing, because the two cluster nodes SQL-A and SQL-B will reside on different ESXi hosts.
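For reference, the cluster-across-boxes layout described above corresponds to vmx entries like the following. This is a sketch with example file names; the shared VMDKs also need to be eagerzeroedthick, and the shared disks are typically set independent-persistent:

```
scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1.sharedBus = "physical"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/shared-ds/quorum.vmdk"
scsi1:0.mode = "independent-persistent"
```

One thing worth checking against the MSCS support matrix for your version: as far as I recall, shared-disk clustering across ESXi 5.0 hosts was only supported on Fibre Channel storage, not on software iSCSI, which could be relevant to the corruption you are seeing.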

 

Now everything seems to run just fine until we do something seemingly unrelated: a vMotion of a completely independent VM to or from one of the hosts running a node in this setup. In fact, any vMotion of any VM will do. For some reason, the side effect of vMotion is that the shared disks go corrupt or offline for a second.

 

The SQL nodes will report lots of errors stating:

"Operating system error 170"

and

"The system failed to flush data to the transaction log. Corruption may occur."

 

The result: SQL going offline, and in our initial setup also vCenter, because it needed the DB. (We now run it separately.)

 

Our main question thus is: Why is this happening? Is this setup unsupported? Or are we dealing with a new combination of settings that just fails?

 

Our hosts are:

X8DTU Supermicro from the whitelist.

 

Our storage adapters are:

2x Intel 82398EB 10 Gbit AT CX4. Also from the whitelist.

 

Our normal network cards are:

4x Intel 82576 1 Gbit. Whitelisted.

 

Our storage is comprised of one large VMFS5 volume on an Open-E DSS v6 server, accessed through the software iSCSI adapter over the 10 Gbit cards.

 

vMotion, management, etc. run through a dedicated VLAN over the distributed switch, which spans 3 hosts and all four 1 Gbit cards, together with some other VLANs that some of the VMs are using.

 

None of this indicates any dependency or faulty setup. We are at our wits' end... Any suggestions are more than welcome!

3PAR Management Plug-in for vSphere


I am new to the world of 3PAR and VMware. I installed the 3PAR plug-in for vSphere, and I get a "Website cannot display the web page" error 500. Has anyone seen or come across this issue?

 

Thanks

ecb

Throttling storage vMotion


Hi all,

 

I thought I'd start a discussion about how people reined in svMotions in terms of IOps. I have seen big numbers when running only a couple of concurrent migrations on a backend array that doesn't have a huge amount to give. I've read an old-ish article (I believe on Yellow Bricks) that states that svMotion IO is attributed to its VM and so can be controlled by SIOC, but I am curious to see what other options (if any) there are.
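Beyond SIOC shares, one related knob is the per-virtual-disk IOPS limit. Since (per the article mentioned above) svMotion I/O is charged to the VM itself, capping the VM's disks should indirectly cap its migration I/O as well. If memory serves, the limit set in the client is stored in the vmx roughly like this; the disk ID and value are purely illustrative:

```
# Hypothetical per-disk IOPS cap in a VM's .vmx (example disk, example value)
sched.scsi0:0.throughputCap = "500"
```

The other common approach is simply limiting the number of concurrent svMotions, since the per-host/per-datastore migration cost limits effectively serialize migrations.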

 

Cheers

What is the Procedure to Reboot an iSCSI Target on a NAS


Hi, all

 

I am new to vSphere/iSCSI.

 

I have a setup with two datastores: a local datastore with the VMs, and a second "backup" datastore to which I back up the VMs.

The second datastore is connected via iSCSI to a NAS.

 

I need to reboot the NAS (the iSCSI target).

What do I need to do on the vSphere side? Do I need to unmount the datastore, or is it safe to "just" reboot?

There's only load on the backup datastore at night.
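The usual cautious procedure is to unmount the datastore (and optionally detach the device) before taking the target down, so the host does not see an unexpected all-paths-down. A hedged sketch; the label "Backup" and the naa ID are placeholders:

```shell
# 1. Make sure no VM or backup job is using the backup datastore, then
#    unmount it.
esxcli storage filesystem unmount -l Backup

# 2. Optionally detach the underlying iSCSI device so the host treats
#    the outage as planned.
esxcli storage core device set -d naa.xxxxxxxx --state=off

# 3. Reboot the NAS, then bring everything back:
esxcli storage core device set -d naa.xxxxxxxx --state=on
esxcli storage core adapter rescan --all
esxcli storage filesystem mount -l Backup
```

If you skip the unmount and "just" reboot while the datastore is idle, the host will typically log APD warnings and reconnect on its own, but the unmount route is the clean one.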

 

Thanks


Cannot increase datastore size in vSphere


Hello,


I have the issue as described in "Cannot grow VMFS datastore, the Increase button is disabled (1035285)".


After checking all the possible reasons described in the KB 1035285 article, I find that none of them apply in my case.

So, are there any other possible reasons that could cause the "Increase button is disabled" issue?
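A couple of shell-side checks can narrow this down; device IDs below are placeholders:

```shell
# Confirm that ESXi actually sees the grown LUN size.
esxcli storage core device list -d naa.xxxxxxxx | grep -i size

# List the datastore's extents and inspect the partition table.
# The Increase option can stay greyed out if, for example, the VMFS
# partition is not the last partition on the device, so there is no
# contiguous free space behind it.
esxcli storage vmfs extent list
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxx
```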


If you have any ideas, your responses are welcome.

 

PS:

My versions of the product are

vSphere Client v5.1,

ESXi v5.1


Thank you,

KuoHsin

VMDK size differs from what the VM's OS reports


Hi, I have the following issue:

  • The size of the LUN for the datastore is 600GB
  • The VM only shows 392GB; does anyone know why?

Host Cache Configuration and Virtual Flash


Hi Community,

 

I'm trying to understand the difference between host cache configuration and virtual flash, and what benefits either of them has. I tried reading through the docs, but I still don't get it. I have deployed 3 new servers, each with 2 SSD drives in them, configured as RAID1. In the vSphere Client (software client), under Configuration > Software, I see "Host Cache Configuration" and I see my local SSDs there. I have not checked the box for "Allocate space for host cache" yet. If I go into the Web Client and look under "Virtual Flash Resource Management", I don't see my SSD drives when I choose Add Capacity. I'm assuming maybe it's because my SSDs are in a RAID, but I'm not 100% sure.

 

I don't understand which one I should be using, or whether I should use both? It's for a VDI environment.
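Broadly: Host Cache Configuration uses the SSD for VM swap, while Virtual Flash adds a read cache that can be allocated per VMDK. On the missing-SSD symptom: a RAID controller often hides the media type, so ESXi does not report the volume as SSD. On 5.5 you can tag the device as SSD with a PSA rule; the naa ID below is a placeholder for your local RAID1 volume:

```shell
# Check how the device is currently detected ("Is SSD:" line).
esxcli storage core device list -d naa.xxxxxxxx | grep "Is SSD"

# Tag the device as SSD so host cache / vFlash will accept it.
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL -d naa.xxxxxxxx -o enable_ssd
esxcli storage core claiming reclaim -d naa.xxxxxxxx
```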

 

Thanks

 

Mark

Latency spikes with MSA 2040 and HP ProLiant DL360 Gen9 (5.5u3 and 6.0)


Hello everybody.

 

2 HP ProLiant DL360 Gen9 servers with H241 SAS HBAs (everything up to date, including firmware).

1 MSA 2040 SAS 12G with two RAID groups (10k rpm and 7k rpm), two LUNs and two datastores.

ESXi 6.0 HP customized ISO, and 5.5 HP customized ISO (I tested both versions).

Sometimes I'm getting spikes of 200-400 ms at the datastore or disk level (roughly every 1-2 hours).

The MSA 2040 does not report these values.

Staring at esxtop all day would drive me blind, and in the time I have been watching it, nothing happens.

The spike is normally not reflected in the events (the famous "device naa.xxxx ... performance has deteriorated" event). Sometimes I do get that event, followed within a minute by "performance has improved", but only during Veeam backup hours. The spikes themselves are not reflected in the events.

 

Any suggestions or comments are welcome.

Poor man's cluster-in-a-box with RAID1 across NFS


I have an idea in mind I'd like to share before starting a test setup (a sort of "cluster in a box" or "shared-nothing SAN").

 

scenario:

- 2 ESX boxes with local storage only

- each of those boxes hosting a Linux VM which exports the local storage as an NFS datastore, which is then mounted on both ESX servers.

 

Now create a VM which has its two identical system disks placed on nfsdatastore1 and nfsdatastore2, and create a software RAID1 inside that VM (Linux mdraid, for example).

 

So each write inside that VM should go to both ESX servers, travelling through both "Linux NFS VSAs".
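The in-guest mirror step can be sketched like this; device names are examples and depend on how the two virtual disks enumerate in the guest:

```shell
# Inside the guest: build a RAID1 across the two virtual disks, one
# backed by each NFS datastore.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Watch resync progress and array health:
cat /proc/mdstat
mdadm --detail /dev/md0
```

Note that every guest write will traverse both NFS VSAs, so write latency is bounded by the slower of the two paths.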

 

If I create a copy of the VM's configuration files (so it exists on both nodes), in theory I should be able to manually fail a VM over to the surviving node.

 

I'm curious whether I should spend time giving this a try, or whether there is some gotcha I missed. So, what do you think?

Would this work?

 

 


Misc.APDHandlingEnable, preventing hostd failures and impact on VMs


So ESXi 5.1 provides some enhancements on handling APD conditions as described magnificently by our favorite VMware storage guy here:

http://cormachogan.com/2012/09/07/vsphere-5-1-storage-enhancements-part-4-all-paths-down-apd/

 

We actually had two occurrences of about 10-15 minutes long complete outages of our iSCSI storage some time ago. All hosts disconnected due to hostd getting hosed, making the mess even worse because we had no way of doing any troubleshooting or assessing the scope of the problem (virtual vCenter and vMA dead too).

However, the good news in these cases was that all VMs were still up and happily running when the hosts were able to reach the SAN again. No bluescreens, no kernel panics. Only one or two Linux VMs mounted their filesystems read-only and needed a reboot; apart from that, most VMs didn't even display errors in their (Windows) event logs. And that even though the 10-15 minute outage lasted way longer than the guest OS SCSI timeout (60-180 s), by the way.

 

So, from the sound of the post by Cormac above, I'm quite eager to implement Misc.APDHandlingEnable in order to prevent hostd from crapping out globally due to storage issues again, which cost us any visibility into what was actually going on during an emergency.

 

On the other hand, I have to wonder about the practical implications this setting will have on running VMs if such problems do happen again. As I understand it, actively fast-failing Guest IOs after ~140 seconds instead of infinitely retrying them and leaving the Guests hanging with no response at all doesn't sound too bad or different at first. But I'm not sure if things will end as "graceful" as in our previous cases where everything got back to normal quickly after the storage recovered.

 

I might test this setting myself when I get around to play with 5.1, but for now I appreciate any input, opinions, experiences or test results anyone can give regarding this setting and how it impacts live guests.
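For reference, the setting (and its companion timeout) can be flipped from the shell on 5.1; the values shown are the documented defaults for the new behavior:

```shell
# Enable the 5.1 APD handling (1 = fast-fail guest I/O after the timeout).
esxcli system settings advanced set -o /Misc/APDHandlingEnable -i 1

# The companion timeout (default 140 s) before I/O is fast-failed:
esxcli system settings advanced set -o /Misc/APDTimeout -i 140

# Verify:
esxcli system settings advanced list -o /Misc/APDHandlingEnable
```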

Please explain Consolidate and Delete All Snapshots


I have 15 snapshots for a VM

 

I want to understand the background process for Consolidate and Delete All.

 

Which one should I prefer, and which one impacts the performance of the VM?

 

Appreciate your help
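In short: "Delete All" commits every snapshot's deltas into the base disks and removes the whole chain, while "Consolidate" merges leftover delta files that a previous (failed or incomplete) delete left on disk without cleaning up Snapshot Manager entries. Both are I/O-heavy and both impact the VM while running; with 15 snapshots, Delete All is the one that will take a long time. From the host shell this can be driven like so (the vmid 42 is an example):

```shell
# List VMs with their IDs on this host:
vim-cmd vmsvc/getallvms

# "Delete All" from the shell: commits every delta into the base disks
# and removes the whole snapshot chain.
vim-cmd vmsvc/snapshot.removeall 42
```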


Correct way to set up an MSFC for file shares with shared storage


So I'm a bit new to RDMs. However, the previous guy in my position set up a cluster on 2008 with 2 nodes that have 3 RDM drives (2 disks, both 1.92 TB, and a third that's 5 GB, used as quorum). The RDMs are in virtual compatibility mode and use FC.

 

Our other VMs use only VMDKs, and they have no problem vMotioning to different hosts. The problem I'm having is that the current cluster does not vMotion; to move it, I basically migrate the VM that's not active and then, using Cluster Manager, fail over to the other node. I'm guessing there's something wrong with the current config, because my guess would be that it shouldn't need to be done this way.

 

Since we need to add more storage anyway, and I'm unsure how to expand the current storage (if that's even possible with RDMs, or from inside the VM), I would like to set up a new cluster, mirror the data using robocopy, and then change the mappings or DNS to point to the new location.

 

My main question is: what steps do I need to take so vMotion works correctly with shared storage using RDMs?
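For what it's worth, vMotion of MSCS cluster nodes that share RDMs across hosts was not supported on the classic vSphere releases, so the "move the passive node, then fail over" procedure may actually be the expected one rather than a misconfiguration. One related best practice for clustered RDM LUNs is to mark them perennially reserved on every host, which avoids long rescans/boots caused by the SCSI reservations the cluster holds. A sketch with a placeholder device ID:

```shell
# Mark a clustered RDM LUN as perennially reserved (repeat per host).
esxcli storage core device setconfig -d naa.xxxxxxxx --perennially-reserved=true

# Verify:
esxcli storage core device list -d naa.xxxxxxxx | grep -i perennially
```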

 

Thanks!!!!

ESXi 5.0 - Cisco UCS and Oracle StorageTek Tape Compatibility


Hi everybody,

My system has a configuration like below:

  • Blade Server: UCS B200 M2 (Model: N20-B6625-1) with mezzanine card M81KR
  • Host OS: VMware ESXi 5.0.0 - 623680
  • VM-to-Tape Connection: (Windows Server 2008 R2 SP1 VM) on an ESXi 5.0 Blade Server --> Chassis --> Fabric Extender (FEX) --> Fabric Interconnect (UCS 6248UP) --> SAN Switch (Cisco MDS 9100) --> StorageTek SL150 tape library


I searched Google for any possible solution and have narrowed it down to 2 methods:

 

Method 1: Add the tape drive as a SCSI device on the VM


I followed this guide http://www.boche.net/blog/index.php/2008/10/27/connect-a-fibre-attached-tape-device-to-a-vm-on-esx/

But the diffrerence between the above guide and my real testing case is:

    • the guide: Perform a scan on the fibre HBA cards on host (it is an physical host with QLogic HBA Driver) to discover the fibre tape device. In this case, I’ve got an HP MSL5026 autoloader containing a robotic library and two tape drives
    • my real testing case: scan the fibre HBA cards on host (it is an Cisco UCS B200 M2 with Cisco VIC FCoE HBA Driver), i've got only 2 Tape HP Driver (cannot discover the Robotic Library-Media Changer) like images below:

 

(Screenshots: 1.jpg, 2.jpg, 3.jpg)

 

An additional symptom is the log on the ESXi host when the robotic device is unclaimed; the ESXi host only recognizes the 2 HP Fibre Channel tape drives. Below is the output of the esxcli storage command:

 

fc.20000025b500006f:20a10025b500006f-fc.500104f000d80990:500104f000d80991-naa.500104f000d80990
  UID: fc.20000025b500006f:20a10025b500006f-fc.500104f000d80990:500104f000d80991-naa.500104f000d80990
  Runtime Name: vmhba1:C0:T4:L0
  Device: naa.500104f000d80990
  Device Display Name: HP Fibre Channel Tape (naa.500104f000d80990)
  Adapter: vmhba1
  Channel: 0
  Target: 4
  LUN: 0
  Plugin: NMP
  State: active
  Transport: fc
  Adapter Identifier: fc.20000025b500006f:20a10025b500006f
  Target Identifier: fc.500104f000d80990:500104f000d80991
  Adapter Transport Details: WWNN: 20:00:00:25:b5:00:00:6f WWPN: 20:a1:00:25:b5:00:00:6f
  Target Transport Details: WWNN: 50:01:04:f0:00:d8:09:90 WWPN: 50:01:04:f0:00:d8:09:91

fc.20000025b500006f:20a10025b500006f-fc.500104f000d80990:500104f000d80991-
  UID: fc.20000025b500006f:20a10025b500006f-fc.500104f000d80990:500104f000d80991-
  Runtime Name: vmhba1:C0:T4:L1
  Device: No associated device
  Device Display Name: No associated device
  Adapter: vmhba1
  Channel: 0
  Target: 4
  LUN: 1
  Plugin: (unclaimed)
  State: dead
  Transport: fc
  Adapter Identifier: fc.20000025b500006f:20a10025b500006f
  Target Identifier: fc.500104f000d80990:500104f000d80991
  Adapter Transport Details: Unavailable or path is unclaimed
  Target Transport Details: Unavailable or path is unclaimed

fc.20000025b500006f:20b10025b500006f-fc.500104f000d80993:500104f000d80994-naa.500104f000d80993
  UID: fc.20000025b500006f:20b10025b500006f-fc.500104f000d80993:500104f000d80994-naa.500104f000d80993
  Runtime Name: vmhba2:C0:T4:L0
  Device: naa.500104f000d80993
  Device Display Name: HP Fibre Channel Tape (naa.500104f000d80993)
  Adapter: vmhba2
  Channel: 0
  Target: 4
  LUN: 0
  Plugin: NMP
  State: active
  Transport: fc
  Adapter Identifier: fc.20000025b500006f:20b10025b500006f
  Target Identifier: fc.500104f000d80993:500104f000d80994
  Adapter Transport Details: WWNN: 20:00:00:25:b5:00:00:6f WWPN: 20:b1:00:25:b5:00:00:6f
  Target Transport Details: WWNN: 50:01:04:f0:00:d8:09:93 WWPN: 50:01:04:f0:00:d8:09:94

2015-09-30T05:28:32.892Z cpu13:21107491)WARNING: VMW_SATP_ALUA: satp_alua_getTargetPortInfo:91:Could not find target port group ID for path "vmhba1:C0:T4:L1" - Not found (195887107)
2015-09-30T05:28:32.892Z cpu13:21107491)WARNING: NMP: nmp_SatpClaimPath:2093:SATP "VMW_SATP_ALUA" could not add  path "vmhba1:C0:T4:L1" for device "naa.500104f000d80990". Error Not found
2015-09-30T05:28:32.892Z cpu13:21107491)WARNING: ScsiPath: 4561: Plugin 'NMP' had an error (Not found) while claiming path 'vmhba1:C0:T4:L1'.Skipping the path.
2015-09-30T05:28:32.892Z cpu13:21107491)ScsiClaimrule: 1329: Plugin NMP specified by claimrule 65535 was not able to claim path vmhba1:C0:T4:L1. Busy
2015-09-30T05:28:32.892Z cpu13:21107491)ScsiClaimrule: 1554: Error claiming path vmhba1:C0:T4:L1. Busy.

 

esxcli storage core device list

 

naa.500104f000d80990
  Display Name: HP Fibre Channel Tape (naa.500104f000d80990)
  Has Settable Display Name: true
  Size: 0
  Device Type: Sequential-Access
  Multipath Plugin: NMP
  Devfs Path: /vmfs/devices/genscsi/naa.500104f000d80990
  Vendor: HP
  Model: Ultrium 5-SCSI
  Revision: Y5BS
  SCSI Level: 6
  Is Pseudo: false
  Status: on
  Is RDM Capable: true
  Is Local: false
  Is Removable: true
  Is SSD: false
  Is Offline: false
  Is Perennially Reserved: false
  Thin Provisioning Status: unknown
  Attached Filters:
  VAAI Status: unknown
  Other UIDs: vml.0201000000500104f000d80990556c74726975

naa.500104f000d80993
  Display Name: HP Fibre Channel Tape (naa.500104f000d80993)
  Has Settable Display Name: true
  Size: 0
  Device Type: Sequential-Access
  Multipath Plugin: NMP
  Devfs Path: /vmfs/devices/genscsi/naa.500104f000d80993
  Vendor: HP
  Model: Ultrium 5-SCSI
  Revision: Y5BS
  SCSI Level: 6
  Is Pseudo: false
  Status: on
  Is RDM Capable: true
  Is Local: false
  Is Removable: true
  Is SSD: false
  Is Offline: false
  Is Perennially Reserved: false
  Thin Provisioning Status: unknown
  Attached Filters:
  VAAI Status: unknown
  Other UIDs: vml.0201000000500104f000d80993556c74726975

 

Reference URL:

  http://www.gabesvirtualworld.com/changing-satp-claimrules-specific-storage-configurations/

  https://nordischit.wordpress.com/2012/05/31/sxi-5-hp-p212-and-lto-5-tape-drive-goes-offline/
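Following the claim-rule approach in the references above, one avenue for the unclaimed media changer at L1 is to steer it away from VMW_SATP_ALUA (which is failing with "Could not find target port group ID") toward a simpler SATP. The vendor/model strings below are placeholders; the real values must be taken from the changer's entry in esxcli storage core device list:

```shell
# Hypothetical SATP rule: claim the media changer with a non-ALUA SATP.
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL \
  --vendor "STK" --model "SL150" --description "SL150 media changer"

# Rescan so the new rule is evaluated against the dead path.
esxcli storage core adapter rescan --all
```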

 

 

Method 2: Configure VMDirectPath I/O passthrough to pass the FC device through to the VM

 

I followed this guide:

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1010789

I created 2 new vHBA cards in Cisco UCS Manager and followed the guide:

(Screenshots: 41.jpg, 42.jpg, 43.jpg)

 

But the strange thing is at this step: "When the devices are selected, they are marked with an orange icon. Reboot for the change to take effect. After rebooting, the devices are marked with a green icon and are enabled." I reboot the ESXi host, but when I log in again, nothing has changed: the orange icon and the warning "This device is passthrough capable but not running in passthrough mode" still appear (just as if the ESXi host was never rebooted!).

(Screenshot: 4.jpg)

So I cannot add a PCI device on the VM to pass through the new HBA.

 

In conclusion:

Would you give me some clues or ideas to connect my Oracle StorageTek SL150 tape library to my Windows Server 2008 virtual machine:

  • Is Method 1 or Method 2 supported for my case? Is it possible to deploy and make it work (with an infrastructure that includes ESXi 5.0, Cisco UCS B200 M2, and the Oracle StorageTek SL150)?
  • If Method 1 or Method 2 is possible and the right choice, is there any misconfiguration in my case that I can fix to make everything work correctly?

 

I would very much appreciate your help.

Sincerely,

Mikel.

Moving a VM to a new datastore, where the VM contains Exchange data


Hi

 

We are migrating a virtual machine to another datastore; this virtual machine runs an Exchange application.

 

We are not moving the host, and we are fine with shutting down the VM and doing the move.

 

We need advice on any precautions to be taken with respect to the Exchange application.

 

Regards

Kamal

Extend storage recommendation.


Good day everyone.

 

We have one Dell PowerVault MD3220i with 18x 900GB 10k rpm drives that is currently running out of space. We are going to attach an MD1220 to it with a 6Gb SAS cable, holding 18x 600GB 15k rpm drives and 4x 800GB SSDs.

 

We intend to move some of the most I/O-intensive servers (at least one app server and one Exchange server) to the SSD volume, and to use this SSD volume for VM snapshots, so it can improve the time it takes the backup software to create/remove snapshots. That currently limits us to not running backups/replicas during production hours, so our RPO is about 24 hours, and we are looking to lower that.

 

Please let me know your thoughts/recommendations on this, or better ways to improve it.

 

Thanks in advance

Unmap clarification


Hi

 

I need some confirmation of/details on the UNMAP/reclaim feature in 5.5.

 

- While running unmap we see high read rates and low/zero write rates on the storage adapters. One would expect high write rates, because the unmap command writes zeroes to storage, wouldn't one?

 

- After unmap finishes, it seems the host runs a rescan of VMFS volumes. Is this by design of the unmap command? I found this in hostd.log very soon after completion of unmap:

2015-10-21T09:46:37.628Z [26BC1B70 verbose 'Hostsvc.FSVolumeProvider'] RefreshVMFSVolumes called

2015-10-21T09:46:37.629Z [26BC1B70 verbose 'Hostsvc.FSVolumeProvider'] RescanVmfs called

 

We suspect that rescan causes a glitch in the performance graphs, as shown here (a snip of the storage adapters; it happens on the other graphs as well):

(Screenshot: perfgraph.PNG)

As the graph shows, it doesn't happen after every unmap run, and I have experimented with different block values when running the unmap command, but I can't find any consistency in it...

 

The commands have been run on an ESXi host (5.5 U2) running on an HP BL460 Gen9, against HP 3PAR storage (VMFS5) over FCoE.
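For reference, this is the 5.5 per-datastore reclaim invocation (label and reclaim unit are examples):

```shell
# Reclaim dead space on a VMFS5 datastore; -n is the number of VMFS
# blocks to unmap per iteration.
esxcli storage vmfs unmap -l MyDatastore -n 200
```

On the read-rates observation: as far as I know, UNMAP on 5.5 issues SCSI UNMAP commands rather than writing zeroes (it creates a temporary balloon file and unmaps its blocks), so heavy writes would not be expected; the reads are plausibly VMFS metadata traffic.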

 

Any clarification on this is much appreciated!

 

Regards,

Rudi
