Channel: VMware Communities : Discussion List - vSphere™ Storage

Number of paths to a datastore


VMware vSphere 6.0

 

I might be light on details, but I don't quite know where to start. I'm a "SAN beginner", but I have been handed an already-built system and am expected to support it.

 

I have a NetApp SAN that presents its LUNs to a VMware environment. I noticed an existing datastore only shows two paths when I check "Datastores" > "Datastore Details":

 

one.png

However, a new LUN that already existed on the SAN, and that I just configured in NetApp to present to the same ESXi hosts, is showing 8 paths:

eight.png

 

I'm not quite sure where to start to figure out the difference.  Both LUNs are being presented to the same number of hosts.

 

Where can I start to figure out if this is a "problem"?
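In case it helps anyone comparing path counts: the path count per LUN is usually (number of initiator ports/HBAs on the host) x (number of target ports presented for that LUN), so a difference in zoning or igroup/portset configuration between the two LUNs is a likely explanation. A sketch of where I would start from an SSH session on a host (the naa. device ID below is a placeholder; use the one shown as the datastore's backing device):

~ # esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx

~ # esxcli storage nmp device list -d naa.xxxxxxxxxxxxxxxx

The first command lists every path to that device (adapter, target, and path state); the second shows which multipathing policy is applied. Comparing the two LUNs side by side should show where the extra 6 paths come from.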


Cannot attach VMkernel adapter to vmhba for iSCSI port binding


Hi Gurus, I have a strange problem.

 

I am trying to bind a VMkernel adapter to an iSCSI storage adapter, but the VMkernel adapter that I want to use is not shown in the list. Let me attach the screenshots for better understanding.

 

The physical adapter in question is vmnic5

 

I first created a virtual switch, vSwitch1, and assigned it a static IP of 10.0.0.41.

Now, when I try to bind a vmhba to that VMkernel adapter, the adapter (the one backed by vmnic5) does not appear in the list.

Am I missing something here?

 

None of the vmhbas (33, 34, 36) can see vmnic5.

 

Although vmnic2 works fine

 

 

Is it because vmnic5 is 10000 Mb/s and vmnic2 is 1000 Mb/s?

 

Any advice would be really appreciated.
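A sketch of what I would check, not a definitive fix: in my understanding, a VMkernel adapter only becomes eligible for port binding when its port group has exactly one active uplink and no standby uplinks, and it is the vmk interface that gets bound, never the physical vmnic itself. So it is worth confirming that a VMkernel interface actually exists on the vSwitch that uses vmnic5, and that its teaming policy lists vmnic5 as the only active adapter. From the CLI (vmhba33 and vmk2 below are placeholders for your adapter and VMkernel interface):

~ # esxcli network ip interface list

~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

~ # esxcli iscsi networkportal list --adapter=vmhba33

The first command confirms which vmk interfaces exist and which port group/switch each sits on; the add command attempts the binding directly and will print a reason if the interface is not compliant.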

 

Thanks & Regards,

 

Siddhesh

vSAN Design and Sizing Guide 6.5 miscalculation?


I'm going through the VMware vSAN Design and Sizing Guide 6.5 trying to get a better grasp on vSAN.  Going through "14.2 Capacity Sizing Example II" I want to verify that the calculations done for "option 3 All Flash" are correct.  After calculating the Snapshot Storage Consumption the following calculation for Raw Storage Requirements (VMs + Snapshots) is listed as 49.87 TB + 2.5 TB.  The 49.87 TB is pulled from the cache size calculation which in previous examples was only used to get the Snapshot size.  I think that in place of 49.87 TB it is supposed to use 66.5 TB from the Raw Storage Requirements calculation.

 

I am very new to vSAN so I can't say for sure that this is a miscalculation but I can't find any instructions in the document that would explain why All Flash is calculated differently on these lines compared to Hybrid.  I'd appreciate it if someone with more experience with vSAN could let me know what is correct.  Thanks!

 

Also worth noting is that the lines "Raw Formatting Storage Capacity = 261.6/0.7" and "Raw Unformatted Storage Capacity = 373/0.99" appear to be copied and pasted from a previous example and are not applicable to the All Flash calculation.
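For reference, the arithmetic I would have expected (this is just my reading of the example, not an official correction):

Raw Storage Requirements (VMs + Snapshots) = 66.5 TB + 2.5 TB = 69 TB

whereas the 49.87 TB + 2.5 TB printed in the guide works out to only 52.37 TB.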

 

https://storagehub.vmware.com/export_to_pdf/vmware-r-virtual-san-tm-design-and-sizing-guide

Increasing datastore capacity: how long?


I "inherited" an older 5.5 environment. It has several 2TB datastores attached to each host (HITACHI Fibre Channel Disk) that are all pretty much filled up. There's also a 20TB datastore connected via iscsi that isn't being used at all. Because a couple VMs had ground to a halt due to their datastores running out of space, I migrated storage on 3 of them to the huge datastore...and *then* started investigating what exactly this thing is!

 

I found that the 20 TB datastore is attached to each host (MSFT iSCSI Disk) and points to a Windows 2012 R2 server that has a 32 TB iSCSI virtual disk. For some reason, only 20 of the 32 TB was used for this (virtual) disk, so I expanded it to the full 32 TB. It took almost 3 days. :-)

 

Now, in the vSphere Web Client, if I go to the datastore, then Manage > Settings > General > Capacity > Increase, I can click the LUN, specify the configuration, open the datastore details, drop down the partition configuration, and "Use 'Free space 12.70 TB' to expand the datastore" is offered as a choice.

 

My questions are:

1. How long is it going to take to expand the datastore from 20TB to 32.7TB?

2. Will this impact performance at all?

 

Unfortunately, the VMs now using storage on the 20 TB datastore are critical and can't be down for more than 4 hours, between 9pm and 1am, so I do *not* want to inadvertently take any of them down during this process.
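For anyone else reading: as far as I understand it, growing a VMFS datastore into adjacent free space is a metadata-only operation that normally completes in seconds and does not copy any data, so the 3-day virtual-disk expansion on the Windows side is not a guide to how long this step takes. That is my expectation, not a guarantee. The equivalent CLI step (the device path is a placeholder and the partition number must match the datastore's partition) would be something like:

~ # vmkfstools --growfs "/vmfs/devices/disks/naa.xxxxxxxxxxxx:1" "/vmfs/devices/disks/naa.xxxxxxxxxxxx:1"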

 

Thanks!

Datastore has a folder with *.rdmp and *.vmdk files that I cannot delete


I am trying to remove a datastore, but it has a folder from a VM that contains a bunch of .rdmp and .vmdk files.

 

I cannot find what VM is using it, or even whether it is in use. I tried moving the folder, but I cannot move it either.

 

Any idea?

Screen Shot 2017-10-13 at 8.05.35 AM.png
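A sketch of how I would check whether anything still references those files (the datastore, folder, and disk names below are placeholders). The *-rdmp.vmdk files are, as far as I know, raw device mapping pointer files, so some VM probably had RDMs attached at some point. From an SSH session:

~ # find /vmfs/volumes -name "*.vmx" -exec grep -l "NameOfDisk.vmdk" {} \;

~ # vmkfstools -D "/vmfs/volumes/DatastoreName/FolderName/NameOfDisk-flat.vmdk"

The first command searches every VM configuration file on every datastore for a reference to the disk; the second dumps lock information for the file, and an owner MAC of all zeros generally means no host currently holds a lock on it.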

RAID 10 vs RAID 5 read performance


We are using VMware and have an HPE 3PAR array as our primary datastore.

We just bought 2 new ESXi hosts (HPE DL380 Gen10) with local SSDs (6x 400 GB) so we can create a local datastore to reduce the load on the 3PAR storage. Every ESXi host is connected with 2x 10 Gbit/s networking.

We are hosting customers' SCCM servers, so we need great read performance because we are deploying 30-60 clients simultaneously from every VM. Our goal is to keep 2-3 VMs on the local datastore; when we don't use them, we move them back to the primary datastore.

The question is: should we use RAID 5 or RAID 10 (1+0) for the local disks, or doesn't it matter in this case? If we use RAID 5, it can read from 5 disks simultaneously. Can it read from 3 or 6 disks simultaneously with RAID 10?

With the Smart Array configuration and RAID 5, if we for example choose 512 KiB as the strip size, the full stripe size will be 2.5 MiB. With RAID 10 the full stripe size will be 1.5 MiB. The explanation for full stripe size is: "The full stripe size is the amount of data that the controller can read or write simultaneously on all the drives in the array."
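For what it's worth, my reading of the arithmetic (a sketch, not a definitive answer):

RAID 5 across 6 disks: 5 data strips + 1 parity strip per stripe, so 5 x 512 KiB = 2.5 MiB full stripe; because parity rotates, reads can be serviced by all 6 drives.

RAID 10 across 6 disks: 3 mirrored pairs striped together, so 3 x 512 KiB = 1.5 MiB full stripe; a single large sequential read streams from the 3 data columns, but random or concurrent reads can be balanced across both members of each mirror, i.e. potentially all 6 drives.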

Obviously we need the best read performance we can get; write performance isn't that important. If a disk fails, we just move the VMs back to the primary datastore to be safe, so rebuild time isn't a big factor either.

So which RAID level delivers the best read performance with 6 disks?

vSphere 6.0U1b can't connect to newly created vVOL ProtocolEndPoint


I downloaded the latest vVNX to test VVols. My lab has 3 hosts with ESXi 6.0 U1b and vCSA 6.0.0.10200. The vVNX is deployed to another ESXi 6 host, and one virtual HDD (carved from a 240 GB SSD drive using thick provision eager zeroed) is used to create a single storage pool. I strictly followed the instructions demonstrated in the video linked below, and I did not create any extra storage profiles beyond those required for setting up VVols. When the VVol datastore is added to the three hosts, everything seems to be fine, but when I look at the storage size it is 0 B, and when I look at the storage devices attached to the esx1 host, I notice it says "inaccessible". I also checked the time: my vCenter, ESXi hosts, and vVNX are all NTP-synced to the same server, and their time skew is less than 1 minute.

 

For this small 3-host lab, I did not configure anything such as LUN masking or path selection rules. Also, the protocol endpoint IP address can be seen from the hosts and vCenter. When I SSH to any host and type

"esxcli storage vvol protocolendpoint list", the result is blank, so it appears that the protocol endpoint has some issues. Yesterday, though, when I had two storage pools, everything worked for 30 minutes (I later destroyed that VVol datastore due to other mistakes).

 

Does anyone have a clue? I mean, VVols 1.0 does seem to have some rough edges.
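A couple of additional checks I would run from a host, in case they help narrow down whether the VASA provider or the storage container is the missing piece (just a suggestion, not a known fix for vVNX specifically):

~ # esxcli storage vvol vasaprovider list

~ # esxcli storage vvol storagecontainer list

~ # esxcli storage vvol protocolendpoint list

If the VASA provider shows as offline or disconnected, the storage container and protocol endpoints will not populate, which would be consistent with the 0 B datastore size you are seeing.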

 

q.jpg

q2.jpg

 

EMC World 2015: vVNX VVOL Demonstration - YouTube

[solved] Intel rs3wc080


Hi,

I have purchased a vSphere 6.5 Essentials bundle to evaluate some datacenter ideas.

 

I have installed a Ryzen server, which works fine with ESXi 6.5u1.

 

The only problem I have is the RAID controller.

 

I ordered a controller that is listed in the VMware Compatibility Guide.

 

It is the Intel RS3WC080 (LSI 3008 based).

 

I upgraded the controller firmware to 4.680.01-8248 as listed in the VMware list (VMware Compatibility Guide - I/O Device Search).

I downloaded the driver (lsi-mr3 version 7.702.14.00-1OEM) and installed it via esxcfg.

 

After booting, no controller or drives are shown in the client.

 

Via SSH I inspected the kernel buffer:

 

2017-10-16T18:42:18.945Z cpu18:66103)Loading module lsi_mr3 ...

2017-10-16T18:42:18.946Z cpu18:66103)Elf: 2043: module lsi_mr3 has license ThirdParty

2017-10-16T18:42:18.949Z cpu18:66103)lsi_mr3: 7.702.14.00

2017-10-16T18:42:18.950Z cpu18:66103)Device: 191: Registered driver 'lsi_mr3' from 18

2017-10-16T18:42:18.950Z cpu18:66103)Mod: 4968: Initialization of lsi_mr3 succeeded with module ID 18.

2017-10-16T18:42:18.950Z cpu18:66103)lsi_mr3 loaded successfully.

2017-10-16T18:42:18.951Z cpu13:66068)lsi_mr3: mfi_AttachDevice:647: mfi: Attach Device.

2017-10-16T18:42:18.951Z cpu13:66068)lsi_mr3: mfi_AttachDevice:655: mfi: mfiAdapter Instance Created(Instance Struct Base_Address): 0x4304cb76d0d0

2017-10-16T18:42:18.951Z cpu13:66068)lsi_mr3: mfi_SetupIOResource:338: mfi bar: 1.

2017-10-16T18:42:18.951Z cpu13:66068)VMK_PCI: 915: device 0000:0b:00.0 pciBar 1 bus_addr 0xef800000 size 0x10000

2017-10-16T18:42:18.951Z cpu13:66068)DMA: 646: DMA Engine 'mfi00110000-dmaEngine' created using mapper 'DMANull'.

2017-10-16T18:42:18.951Z cpu13:66068)DMA: 646: DMA Engine 'mfi00110000-dmaEngine64' created using mapper 'DMANull'.

2017-10-16T18:42:18.951Z cpu13:66068)lsi_mr3: fusion_init:1535: RDPQ mode not supported

2017-10-16T18:42:18.951Z cpu13:66068)lsi_mr3: fusion_init:1547: fusion_init Allocated MSIx count 4 MaxNumCompletionQueues 4

2017-10-16T18:42:18.952Z cpu13:66068)VMK_PCI: 765: device 0000:0b:00.0 allocated 4 MSIX interrupts

2017-10-16T18:42:18.952Z cpu13:66068)lsi_mr3: fusion_init:1576: Dual QD exposed

2017-10-16T18:42:18.952Z cpu13:66068)lsi_mr3: fusion_init:1614: Extended IO not exposed:disable_1MB_IO=0

2017-10-16T18:42:18.952Z cpu13:66068)lsi_mr3: fusion_init:1626: maxSGElems 64 max_sge_in_main_msg 8 max_sge_in_chain 64

2017-10-16T18:42:18.952Z cpu13:66068)lsi_mr3: fusion_init:1678: fw_support_ieee = 67108864.

2017-10-16T18:42:20.820Z cpu13:66068)WARNING: lsi_mr3: fusion_init:1685: Failed to Initialise IOC

2017-10-16T18:42:20.820Z cpu13:66068)lsi_mr3: fusion_cleanup:1771: mfi: cleanup fusion.

2017-10-16T18:42:20.820Z cpu13:66068)WARNING: lsi_mr3: mfi_FirmwareInit:1972: adapter init failed.

2017-10-16T18:42:20.820Z cpu13:66068)WARNING: lsi_mr3: mfi_AttachDevice:687: mfi: failed to init firmware.

2017-10-16T18:42:20.820Z cpu13:66068)lsi_mr3: mfi_FreeAdapterResources:612: mfi: destroying timer queue.

2017-10-16T18:42:20.820Z cpu13:66068)lsi_mr3: mfi_FreeAdapterResources:616: mfi: destroying locks.

2017-10-16T18:42:20.820Z cpu13:66068)DMA: 691: DMA Engine 'mfi00110000-dmaEngine' destroyed.

2017-10-16T18:42:20.820Z cpu13:66068)DMA: 691: DMA Engine 'mfi00110000-dmaEngine64' destroyed.

2017-10-16T18:42:20.820Z cpu13:66068)WARNING: lsi_mr3: mfi_AttachDevice:715: Failed - Failure

2017-10-16T18:42:20.820Z cpu13:66068)Device: 2482: Module 18 did not claim device 0x4fc94303ec1d04c9.

2017-10-16T18:42:20.832Z cpu4:66106)Loading module iscsi_trans ...

 

any ideas?
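A few things I would check from the shell, offered as a sketch rather than a known fix. The "Failed to Initialise IOC" during fusion_init in your log suggests the driver loads but the firmware handshake fails, which in my experience tends to point at a firmware/driver mismatch, so confirming exactly which VIB and firmware are active seems like the first step:

~ # esxcli software vib list | grep -i lsi

~ # esxcli system module get -m lsi_mr3

~ # lspci | grep -i lsi

The first command shows which lsi-mr3 VIB actually installed (the inbox driver may still be winning), the second shows the loaded module's version, and the third confirms the controller is visible on the PCI bus at all.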

 

greetings Andreas


Advice on disk setup and how to access it across multiple VMs


I am really confused with this.

 

I have a Home Lab on a HP Gen8 MicroServer running ESXi 6.5u1.

1 Datastore 400GB with some Server/Client VMs

The datastore is part of a 2 TB SATA disk. I have 1.43 TB available, which is visible under Storage > Devices.

I use the same IP Range across all devices 192.168.0.x

 

I want to use the 1.4 TB of free space as a shared drive that I can access from my client and server VMs, and also from the laptop I use to RDP into those VMs on my ESXi host.

The main idea is to copy and store files I download and need to install etc. This is so I do not have to upload each file individually to the datastore via the datastore browser web interface.

 

I currently do this with a very old NAS disk on the network but this is slow and not working well as the disk goes into power save mode and it's a pain to wake it back up.

 

It is also not letting me turn that free space into a new datastore.

I can only expand the existing 400 GB VMFS datastore to consume the full 1.43 TB of free space on the disk.
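In case it is useful, a read-only way to see how the disk is currently partitioned (the device identifier below is a placeholder; yours is shown under Storage > Devices). The free space can in principle be turned into a second VMFS partition with partedUtil setptbl followed by vmkfstools -C, but the exact sector numbers depend entirely on your existing layout, so I am only sketching the inspection step here:

~ # partedUtil getptbl /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID

~ # partedUtil getUsableSectors /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID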

Remove inactive datastore


Hello to everyone reading this post. I'm new to the great world that is virtualization.

 

Recently I had to reinstall ESXi on a host because it had problems booting the OS, and I could not manage its datastore or its contents from vCenter.

 

The host is now fully functional, but in vCenter the old datastore shows up as inactive and its contents as orphaned. I would like to know whether there is a way to remove that inactive datastore and its contents. I would also like to know whether it would be fine to enable Storage I/O Control on the new datastore, or whether it is better to leave it disabled.

 

I'm attaching some screenshots to better illustrate my problem.

 

Awaiting your replies... regards and thanks.

Moving large VMDK files to new VM container


Hi All

 

Hoping someone can help

 

We have just migrated an existing service from one VM to another using vSphere 6.0. As part of this I needed to unmount the drives which were connected to the old VM and reconnect them to the new VM.

 

I created the new VM in the same datastore as the old VM, and during a quieter period I unmounted the drives and moved the VMDK files over to the new VM's folder, but it ran out of space. It seems that when copying, it creates a duplicate and only recovers the space once the copy has fully completed. My issue is that the VMDK files are nearly 4 TB, and I can't afford to give the datastore another 4 TB of space to work with.

 

These datastores are Fibre Channel LUNs on our NetApp array, so it would be difficult for me to shrink the datastore, and I can't create a new datastore because there is a replication job that runs; if I did this it would need to resync the whole volume, which would cause further issues.

 

Is there any suggestion on how I can move the VMDKs to the new VM container?
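Two approaches that might avoid the double-space problem (treat this as a sketch; the paths below are placeholders, and I am assuming both VMs live on the same VMFS datastore as described). First, you may not need to move the files at all: adding the disk to the new VM as an "existing hard disk" pointing at its current location works regardless of which folder it sits in. Second, if you do want the files inside the new VM's folder, vmkfstools -E renames a virtual disk, and within the same datastore a rename is effectively a metadata move rather than a copy, so it should not need another 4 TB:

~ # vmkfstools -E "/vmfs/volumes/DatastoreName/OldVM/disk1.vmdk" "/vmfs/volumes/DatastoreName/NewVM/disk1.vmdk"

Make sure the disk is removed from the old VM's configuration (without deleting the files) and the VMs are powered off before renaming, then re-add the disk to the new VM from its new path.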

 

Many Thanks

Ranj

ESXi 6.5 unable to add new datastore


Hi,

 

I have just installed a new Dell PowerEdge R640 with VMware ESXi 6.5. I have two RAID virtual disks: 2x 600 GB SAS in RAID 1 and 2x 2 TB SAS in RAID 1; ESXi is installed on the 600 GB disk. I have tried creating a datastore on the 2 TB disk, but it never appears as a storage device. I have rescanned the controller and the devices multiple times, but it never shows up. Is there something else I need to do?
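A quick sketch of what I would check from the CLI (nothing here is specific to the R640): whether the second virtual disk is visible to the host at all. If it does not appear in the device list even after a rescan, the likely causes are that the virtual disk was never actually created and initialized in the PERC configuration, or that the controller is not presenting it to the host:

~ # esxcli storage core adapter list

~ # esxcli storage core adapter rescan --all

~ # esxcli storage core device list

The last command should show two local disks; if only the 600 GB device is listed, the problem is below ESXi (controller configuration or driver) rather than in datastore creation.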

 

Thanks in advance,

Andrew

Change DNS for a mounted NFS volume


Hi, our topology is as follows:

 

We have a NetApp storage controller with two nodes. Each node has its own IP address and hosts different datastores (essentially, each aggregate is hosted by a separate node). The datastores are NFS (NFS version 3).

 

 

We've mounted our datastores on all ESXi servers using a DNS name.

 

 

We would like to move a datastore from one node to the other (using vol move, which will copy it to another aggregate). We've figured out the storage side of things, so I won't elaborate on this. This could be done on the fly, but it would entail performance issues on the volume, because traffic would have to traverse between nodes on the NetApp controller.

 

To solve these performance issues, we would need to change the DNS record on the DNS server so that the DNS name points to the new node's IP address, and then somehow force the ESXi servers to remount the NFS volume using the new IP address.

 

 

 

From the output of "esxcli network ip connection list", I see that the ESXi server has one TCP connection for each NFS datastore.

Is there a command that issues a remount? Or does the ESXi server have some kind of mechanism that refreshes the DNS cache and performs a remount?

I would also like to know what happens once a remount takes place. Does the NFS protocol tolerate the downtime?
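As far as I know, ESXi resolves the NFS server's hostname when the datastore is mounted and keeps using that address, so there is no automatic re-resolution; the usual approach is to evacuate or power off the VMs on that datastore, unmount it, and mount it again after the DNS record has been changed. A sketch of the CLI side (the label and export path are placeholders):

~ # esxcli storage nfs list

~ # esxcli storage nfs remove -v Datastore_Label

~ # esxcli storage nfs add --host=nfs-dns-name --share=/vol/volume_name --volume-name=Datastore_Label

The remove step will refuse to proceed while VMs on that datastore are still registered and running, which is a useful safety net, but it does mean the remount is not transparent to running workloads.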

QLOGIC or EMULEX HBA for ESXi 6.5


Hi all,

 

We are planning a new hardware purchase to replace our current ESXi hosts running on HP DL380 G9 servers. So far we have had some issues with the 8 Gb PCIe HBA cards (QLOGIC and some EMULEX), like servers rebooting suddenly due to PCIe NME errors, running out of MSI-X vectors, and difficulty finding the appropriate firmware and upgrade procedure for the HBA BIOS, etc.

 

If you could share your experience with the new 16 Gb HBA models from QLOGIC or EMULEX and how they work with ESXi 5.5 and 6.5, it would be much appreciated.

 

So, the question is: QLOGIC or EMULEX?

Which RAID is best for ESXi vSphere running Symantec Messaging Gateway?


Hello,

 

My organization is currently running Symantec Messaging Gateway on an ESXi hypervisor. We want to upgrade the hardware to a new Dell PowerEdge T130 server so we can utilize a RAID array for redundancy. Are there any recommendations for which RAID level to use? I am leaning towards either a RAID 5 or RAID 10 configuration. The PowerEdge has 4 available drive bays, so we will have a maximum of 4 drives total. We only have around 60 employees, so the Symantec Messaging Gateway usually doesn't have an extremely heavy load.

 

Currently, our Symantec Messaging Gateway is running on ESXi with a single disk, and it runs well, but our main goal is to have some kind of disk redundancy in case a drive fails. As long as whatever RAID array we use has better performance than a single disk, we should be fine. Would RAID 5 or 10 be best for this scenario?

 

Any recommendations are appreciated!

 

Thanks


Storage oddities - Corrupted Partition Table resulting in Vanishing VMFS5 and VMFS6 Datastores - Why?


I've had something odd occur twice now in our environment, each occurrence in a different datacenter.  I'm a bit stumped, and a little concerned that I can't determine the root cause for this.

 

What has happened is:

 

1.  A datastore on an FC-attached EMC VMAX250F LUN becomes unmounted from all attached hosts.

2.  The datastore cannot be re-mounted.

3.  A rescan of storage results in the datastore being removed entirely from the list of datastores. The underlying LUN for this datastore is now listed as available for creation of a new datastore, and vSphere does not recognize that there is any existing datastore on the LUN.

4.  VOMA recognizes that there is a VMFS file system on the LUN, and shows 0 errors.

5.  "partedUtil getptbl" shows that there are no partitions defined on the LUN.

6.  Following KB2046610, I was able to recover both datastores by re-creating the VMFS entry in the partition table (a rough sketch of the commands is below), and I was then able to remount the LUN on all the hosts.
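For reference, a rough sketch of the sequence that KB-style recovery follows, as I understand it (the device ID and end sector are placeholders; the start sector must match whatever the datastore was originally created with, the end sector comes from getUsableSectors, and the long GUID is the standard VMFS partition type, so do not copy these numbers blindly):

~ # partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxx

~ # partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxx

~ # partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxx gpt "1 2048 <endSector> AA31E02A400F11DB9590000C2911D1B8 0"

~ # esxcli storage core adapter rescan --all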

 

So, this has happened twice now, each time in a different datacenter and thus on a different storage frame. The commonality, however, is that these LUNs were both provisioned on the same model of storage frame, an EMC VMAX250F. These frames are roughly 6 months old and are the first of this model placed into service in our datacenters.

 

In datacenter 1, our vSphere hosts are ESXi 6.5U1 with vCenter 6.5U1. In datacenter 2, our hosts are ESXi 6.0d with vCenter 6.5U1.

 

I'm at a loss trying to determine why this may be happening. Is there anything configuration-wise on the SAN side that could possibly explain this issue?

chkdsk within a VM


Good Morning Storage VM people,

 

I have a SQL DBA who insists on occasionally doing a scheduled reboot to run a chkdsk on the NTFS filesystem inside a VM. This takes a while and causes downtime for that VM, so, as a good infrastructure engineer, I am trying to work out whether this is really necessary. A lot of the time I find people do things simply because that's how they did them in the physical world, and they never stopped to think (or had the in-depth knowledge to consider) whether it is beneficial in a virtual world.

 

My thinking is: the NTFS file system is written to a VMDK file sitting on a VMFS file system, which in turn sits on a storage array with its own file system. The array will take care of any bad sectors on the physical disks. VMFS will do its thing for the datastores, and vmkfstools can fix any issues there. The NTFS filesystem is so far removed, with so many layers of abstraction between it and the physical disks, that I question whether the full check disk is required. Sure, there can be issues with the NTFS file system that do not involve the physical disks, which chkdsk can look for and correct, but that is a subset of what the utility scans for.

 

Does anyone have an intelligent commentary on whether a full chkdsk is beneficial for a VM using NTFS on a VMFS datastore that is stored on a storage array? Or is there an option that is a more efficient check for just the NTFS issues we might see in this situation?

 

Thanks.

Best way to use local disks


Hi,

 

I am not a vSphere expert and have inherited a VMware estate with 16 Mac Pros (the cylinder ones) running ESXi with VCSA, all hooked up to an iSCSI LUN over 2x 10G Ethernet.

We use this to run our CI build agent VMs; these are linked clones of a VM, and hence we get the ability to scale up and down very quickly. All of this is OK.

 

Now the question is: each of these Mac Pros has 1 TB of SSD storage, which is very fast and performs better than iSCSI, so how can I best use it, considering:

 

- I don't want to lose the ability to create linked clones for quick scale up / down

- I want to use vMotion at least across some hosts, if not the whole cluster

 

I looked at vSAN, but it needs a minimum of 2 disks per host, and that's not a possibility on the Mac Pros.

 

What are my best options for using this fast 16 TB of total storage?

 

Regards,

Shantur

Best Practices for NFS Datastore


I'm trying to get some advice on how to best set up an NFS server to use with ESXi as a datastore. I took a stab at it with CentOS 7, but the performance is abysmal. I'm hoping someone can point out some optimization that I've overlooked, but I'm open to trying another free OS as well.

 

I have an old Dell PowerEdge T310 with a SAS 6i/r hard drive controller. I have two 2 TB hard drives and two 1 TB hard drives. Due to the limitations of the SAS 6i/r controller, I have left the drives independent and gone with software RAID 1 + LVM to get 3 TB of usable space, like this:

 

# mdadm --create /dev/md0 --run --level=1 --raid-devices=2 /dev/sdd /dev/sde

# mdadm --create /dev/md1 --run --level=1 --raid-devices=2 /dev/sdf /dev/sdg

# vgcreate vg0 /dev/md0 /dev/md1

# lvcreate -l 100%VG -n lv0 vg0

 

Then I formatted the new LVM partition with XFS:

 

# mkfs.xfs /dev/vg0/lv0

 

I mounted this at /var/nfs and exported it with the following options:

 

# cat /etc/exports

/var/nfs        192.168.10.3(rw,no_root_squash,sync)


I was able to add this to my ESXi host using the vSphere Client as a new datastore called nfs01.


I then edited my VM through the vCenter web interface, adding a new 2.73 TB disk.


The guest OS is Windows Server 2012. Through the Disk Management interface, I initialized the disk as GPT and created a new volume. This took several minutes. Then I tried quick-formatting the volume with NTFS. I cancelled this after about 4 hours. I then shrunk the volume to 100 MB and formatted that instead. That succeeded after several minutes, but just creating a blank text document on this drive takes about 8 seconds.


The NFS server is plugged into the same gigabit switch as the ESXi server. Here are the ping times:


~ # vmkping nfs.qc.local

PING nfs.qc.local (192.168.10.20): 56 data bytes

64 bytes from 192.168.10.20: icmp_seq=0 ttl=64 time=0.269 ms

64 bytes from 192.168.10.20: icmp_seq=1 ttl=64 time=0.407 ms

64 bytes from 192.168.10.20: icmp_seq=2 ttl=64 time=0.347 ms


I ran an I/O benchmark tool and got these results: [Imgur link]


At the same time, vCenter showed this performance data for the datastore: [Imgur link]

 

I noticed that some I/O operations done locally on the NFS server are also slow. For example I can run "touch x" and it completes instantly, but if I run "echo 'Hello World' > x" it can take anywhere from 0 to 8 seconds to complete.

 

This is my first attempt at using NFS (my two ESXi hosts use local storage) so I'm not sure if any of this is normal.
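One observation, offered as a guess rather than a diagnosis: the "sync" option in your /etc/exports forces the server to commit every write before replying, and ESXi already issues its NFSv3 writes synchronously, so on a box without a battery-backed write cache this combination is often very slow. The fact that "echo 'Hello World' > x" is sometimes slow locally also suggests the md/LVM/XFS stack itself is struggling, so it is worth testing both layers. As a quick experiment only (async trades data safety on power loss for speed), you could try:

# cat /etc/exports

/var/nfs        192.168.10.3(rw,no_root_squash,async)

# exportfs -ra

If the datastore suddenly performs reasonably with async, the bottleneck is synchronous commits on the server's disks rather than the network or ESXi itself.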

Use HPE D3600 storage as shared storage with two ESXi hosts


Hello there,

I'm not very fluent with this kind of hardware, so I need some assistance. I own an HPE D3600 12Gb SAS disk enclosure with two separate I/O modules. I want to connect this hardware to two different ESXi 6.5 hosts with HPE Smart Array P441 controllers. Basically, this works on one ESXi host and I see the full disk storage on it, but when I try to connect this storage to the other host I get an error:

conflicts with an existing datastore in the datacenter that has the same URL but is backed by different physical storage.

The VMware support team tells me that this storage is presented to ESXi as a local disk, so I can't use this configuration as shared storage with 2 ESXi hosts. I would need to change the enclosure or controller configuration to a different one, but because of my lack of knowledge I don't know how to do this. My config looks like this:

  • One Lenovo x3560 server with an HPE P441 connected to port no. 1 of the first I/O module in the HPE D3600
  • A second Fujitsu Primergy S8 server with an HPE P441 connected to port no. 1 of the second I/O module in the HPE D3600

What am I doing wrong, or what modifications do I need to make, to achieve the intended effect so that the disk enclosure functions as shared storage?
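A hedged thought, in case it helps: that particular error usually means the second host does see the same VMFS volume but under a different device identity, so ESXi treats it as a snapshot/unresolved copy rather than the same datastore. Whether two independent Smart Array P441 controllers can ever present the LUN with a consistent identity is a separate question (each controller normally fronts the disks with its own host-local logical volumes), but you can at least check what the second host sees (the label is a placeholder):

~ # esxcli storage vmfs snapshot list

~ # esxcli storage vmfs snapshot mount -l Datastore_Label

Do not force-mount or resignature while the first host still has the datastore mounted and in use; having the same VMFS volume written through two controllers that are unaware of each other is a recipe for corruption.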

 

Thank you for your assistance, if any


