Channel: VMware Communities : Discussion List - vSphere™ Storage

Disk device path not specified for RDM backing


Hi,

 

     I am currently trying to create shared storage for a Microsoft cluster in an ESXi 5.1 environment. We are using a NetApp SAN (V3160 series).

     Once the LUN is created and presented to the HBA WWNs on the ESXi hosts, I try to create an RDM disk and get the following error when I press Finish:

 

A general system error occurred: Disk device path not specified for RDM backing
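
In case it helps narrow things down, the CLI equivalent of what I am trying to do would, as far as I understand it, look roughly like this (the NAA device ID and paths are placeholders, not taken from my hosts):

ls /vmfs/devices/disks/    # identify the LUN's NAA device ID
vmkfstools -z /vmfs/devices/disks/naa.60a98000xxxxxxxx /vmfs/volumes/Datastore1/ClusterNode1/quorum_rdm.vmdk    # physical-compatibility RDM pointer (-r would create a virtual-compatibility RDM instead)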

 

Any help would be appreciated.

 

Thanks


Impact of lost storage connection


I have servers with only one Fibre Channel HBA.  If that two-port HBA adapter fails, all VMs on the server lose connectivity to storage.  How can I estimate the risk of data loss if a VM is in the middle of a write to storage when the HBA fails?  Due to HA the VM would be restarted, and backups occur nightly.  I'm concerned about data corruption or loss of the current day's transactions, and how likely that is if the HBA fails.

What is wrong with my infrastructure?


Hello everybody.

 

I am currently building the following infrastructure on VMware vSphere 5.1 Enterprise. This is what is given:

 

Hardware:

  • 7x HP DL380 G7 servers, each with 2 CPU sockets and 192 GB RAM.
  • Every server is equipped with 12 network cards.
  • Storage is a NetApp 2240-2 with 2 controllers and 24 hard disks.
  • The NetApp has a mezzanine card per controller, providing 2x 10 Gb NICs per controller.
  • The 10 Gb NICs on the NetApp are bundled into a virtual interface on each controller.
  • The switches are Cisco WS-C3750X-24, also equipped with 10 Gb modules.
  • The Cisco switch model is certified by NetApp; the cabling from storage to switch was supplied directly by NetApp.

 

Software:

  • Each HP server runs VMware ESXi 5.1, build 799733.
  • The ESXi installation image is the HP-customized one with the server drivers included.
  • The NetApp runs Data ONTAP 8.1.1P1 in 7-Mode.
  • We use the software iSCSI HBA on the ESXi servers.

 

Now the big problem:

  • I converted some physical servers to the VM infrastructure with VMware Converter.
  • Some of those servers run MS SQL Server and Oracle.
  • The problem is that storage I/O over iSCSI is now slower than it was before.
  • I am posting Iometer screenshots in this post.

 

Virtual networking in vCenter is set up as follows:

  • Four NICs per server in the VM cluster send the storage traffic to the Cisco switch.
  • The Cisco forwards the traffic over its 10 Gb modules to the NetApp.
  • I followed the how-tos to build iSCSI multipathing on the vSwitch that sends the storage traffic to the NetApp (see the command sketch after this list).
  • The virtual machine VMDKs are stored in one big aggregate on the NetApp.
  • The aggregate is split into three volumes (one for the NetApp WAFL system and two for VM data).
  • Each of the two data volumes contains one LUN, where the VMDKs are stored.
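
For completeness, the port binding and path policy were set roughly like this from the CLI; the adapter, VMkernel and device names below are placeholders for my hosts, not copied output:

esxcli iscsi networkportal add -A vmhba37 -n vmk1    # bind first iSCSI VMkernel port
esxcli iscsi networkportal add -A vmhba37 -n vmk2    # bind second iSCSI VMkernel port
esxcli storage nmp device set -d naa.60a98000xxxxxxxx -P VMW_PSP_RR    # Round Robin on the NetApp LUN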

 

I am now experimenting with the VMware best-practice guide "Oracle Databases on VMware". The interesting thing is that the performance is equally bad on every server and virtual machine.

It does not matter whether the virtual machine runs MS SQL, Oracle, or is just a plain Windows 2008 server with nothing on it; Iometer still shows bad results for 4K block reads and writes.

 

So the question here is: what is wrong?

 

Many thanks for any solution.

Marc

VAAI - What Am I Getting... and what am I missing?


So I'm running vSphere 5.5, and my datastores reside on an HP 3PAR 7200, using 10Gb iSCSI. In order to reclaim space on my thinly provisioned volumes, I run "esxcli storage vmfs unmap -l <name>". This seems to free storage to allow me to compact my CPGs and reclaim space from my 3PAR. Looking at vSphere 5.5 STANDARD, which I have... it seems like my license doesn't include VAAI.... so am I running a command on my ESXi hosts that is in reality not doing anything? What exactly do I not get as far as storage integration with only standard licensing, and if I don't get unmap, why does it accept the command... and seem to free space on my Thin provisioned virtual volumes? I'm confused. Also, if I do not receive the ability to send UNMAP as part of my licensing, how am I supposed to reclaim storage?
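
For reference, these are the commands I am talking about, roughly as I run them (the datastore name and device ID are placeholders):

esxcli storage vmfs unmap -l MyDatastore -n 200    # reclaim free blocks, 200 VMFS blocks per pass
esxcli storage core device vaai status get -d naa.60002ac0xxxxxxxx    # "Delete Status" shows whether the array reports UNMAP support for the device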

3PAR Peer Persistence (Metro Storage Cluster) needs reboot?


Hi there,

 

I installed two StoreServ 7200 arrays with HP this week, and Peer Persistence is working, but only under certain circumstances:

 

During our tests we have been able to make a transparent switchover, but it only worked on the second try, after an ESXi reboot (ESXi 5.1 Patch 1, build 914609). The first time, it just freezes and never reconnects.

 

HP said it is a known issue, that we only have to reboot the ESXi host once, and that all subsequently exported 3PAR volumes will be able to switch over afterwards.

 

I can't find any explanation of why a reboot is needed, how I can check whether it's OK, and whether it WILL be able to switch over.

 

What I see from a VMware point of view is that during a switchover, the old optimized paths stay Active/Optimized (AO) instead of transitioning to standby (the command I use to read these states is sketched after the list):

 

  • Normal operation : {TPG_id=258,TPG_state=STBY}{TPG_id=256,TPG_state=AO}
  • During freeze : {TPG_id=258,TPG_state=AO}{TPG_id=256,TPG_state=AO}
  • Switched over : {TPG_id=258,TPG_state=AO}{TPG_id=256,TPG_state=STDBY}
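
For reference, I read those target port group states from the host roughly like this (the device ID is a placeholder):

esxcli storage nmp device list -d naa.60002ac0xxxxxxxx    # "Storage Array Type Device Config" lists the TPG_id/TPG_state pairs
esxcli storage nmp path list -d naa.60002ac0xxxxxxxx    # per-path group state (active / standby)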

 

All prerequisites are met (vSphere 5.1 Patch 1, InForm OS 3.1.2 MU3, same WWN and LUN ID for the volumes, Persona 11, QFull, custom claim rule, …).

 

I hope that someone will be able to help me.

 

Thank you in advance.

Shared hard drive in guest VMs


Hi,

 

I have absolutely no background in VMware technologies, so please bear with me. I'm just a user of a shared service inside the company where I work.

 

We have quite a few Linux boxes on which build jobs are executed from a central master (jenkins-ci.org).

Currently we are using a Samba share where each build job has its own workspace folder. There is absolutely no interaction between the folders; each job (VM) reads from and writes to its own directory structure.

 

Unfortunately Samba comes with a performance penalty of almost 30%. The idea was to provide a shared virtual disk from the host and make it accessible to the guest VMs. The VMware KB article "Sharing a virtual disk between multiple virtual machines" explains some of the issues with such an approach.
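
As far as I understand the approach from that KB article, the shared VMDK would have to be eager-zeroed thick and attached to every VM with the multi-writer flag, i.e. .vmx entries roughly like the following (controller/slot numbers and paths are only examples, not from a working setup):

scsi1.present = "TRUE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/Datastore1/shared/jobdata.vmdk"
scsi1:0.sharing = "multi-writer"

My worry is whether the guests would also need a cluster-aware filesystem on top of such a disk, which is part of what I am asking here.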

 

The question is, are there any other ways to achieve what I explained above without using Samba or NFS? Data corruption would not be an issue as the read/write operations would be on separate files. Even the fact that the changes in one VM are not visible to the other VM is OK as the jobs work on completely different folders and files.

 

Thanks,

    Michael

LTO4 Tape on Windows Host not working properly


Hi

 

I recently joined a company where I had to configure a new ESXi server with Windows and Linux guests and attach a tape drive and SAS adapter they had been using in an old server. I'm new to ESXi and have been learning as I set this up, so I'm not sure whether the problem is the tape drive, the new media they got just after I arrived, or a misconfiguration of ESXi on my part.

 

The Hardware is:

Dell T420

Dell Powervault LTO4 external drive

PERC H310 adapter with 2 HDD in RAID 1

Dell SAS 5/E Adapter

VMware ESXi 5.5

 

The tape drive is detected at path vmhba1:C0:T0:L0, and I assigned it only to a Windows 2k8 guest on virtual device node SCSI (1:0), with the SCSI controller set to LSI Logic SAS. I have not configured the SAS adapter for DirectPath I/O yet, although I've read that it would improve performance.
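
If it helps, this is how I understand the resulting .vmx entries for the tape mapping (the device path is a placeholder, not copied from my host):

scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"
scsi1:0.present = "TRUE"
scsi1:0.deviceType = "scsi-passthru"
scsi1:0.fileName = "/vmfs/devices/genscsi/naa.xxxxxxxxxxxxxxxx"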

The guest has two virtual HDDs on virtual device nodes (0:0) and (0:3), and the physical disk is detected by ESXi as vmhba2:C2:T0:L0.

Are the virtual nodes assigned properly?

 

 

My problem is that backups using Symantec Backup Exec fail very often on this new server, and I want to make sure my VMware configuration is not the cause.

 

Thanks!

VMware vSphere multipath


Hello all ..

 

I would like to ask a few questions regarding the iSCSI multipathing option in ESXi 5.5.

 

I set up my two VMkernel ports and bound them to the software iSCSI adapter:

 

Mltipath- 1.png

 

I followed this blog : http://blogs.vmware.com/vsphere/2011/08/vsphere-50-storage-features-part-12-iscsi-multipathing-enhancements.html

 

 

The problem is that I do not have two paths to the same iSCSI target (I hope I understand this correctly).

And one of the VMkernel adapters is shown as "not used".

 

I set up only one active NIC per VMkernel port group.
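
For context, the binding itself was done roughly like this from the CLI (the vmhba and vmk numbers are placeholders for my host):

esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli iscsi networkportal list -A vmhba33    # shows the compliance status of each bound port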

 

Is there something I am not understanding correctly about this setup scenario?

 

Pictures >

 

Mltipath- 2.png

 

 

Mltipath- 3.png


Array Controller Unmounted


Hello,

I have one of the array controllers (not a LUN) showing Operational State "Unmounted".

Even an ESXi host reboot didn't solve the problem.

It looks like a storage problem, but...

1. All paths via this controller are OK (please see the picture).

2. This storage array (with the same controller) is presented to other clusters, with no problems there.

So the above two points suggest an ESXi issue, but I can't figure it out.

VMware KB doesn't have any related articles.
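
I assume the CLI view of the same state is the device and path listing; this is roughly what I am checking (the device identifier is a placeholder):

esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx    # "Status:" line for the controller device
esxcli storage core path list -d naa.xxxxxxxxxxxxxxxx    # paths running via the affected controller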

 

Image 103.jpg

High Availability without Shared Storage


I'm guessing the answer to my question might be VSA or possibly vSAN, but what is the simplest way to implement high availability for VMs using DAS on the vSphere 5.5 hosts?  I need to support (3) VMs on (2) hosts, but don't see the logic in purchasing the P4000 storage array that was proposed.  Can I make the VMs highly available without shared storage?  The "enhanced" vMotion sounds great, but if the host supporting the VMs fails, I'm down.  Maybe vSphere Replication is the answer, but I thought I would throw this out to the community.  I'm using ProLiant DL380p G8 servers and have plenty of DAS capacity available.

 

Cheers

 

Scott

Getting error in VAAI NAS cross volume test case


hi,

 

I was running the VAAI NAS cross-volume test cases (using the same volume as source and destination). In the NegLazyZTCV test case I got the error message below:

 

UTC [ MAIN      ] [0] ERROR: Argument "AtlantisNasPlugin: AtlantisNAS_StartSession(): Start\nAt..." isn't numeric in numeric ne (!=) at /opt/vmware/VTAF/vaainas50-cert/VTAF/Test/Storage/StorageCert/FVT/VAAINas/NegLazyZTCV.pm line 174.

 

(NegLazyZTCV verifies that native lazy file cloning of a zeroed-thick file fails if the destination is another NAS volume.)

 

Can anybody suggest how to handle this error? I have attached my run log file for reference.

iSCSI Port Binding Question


Hi,

 

Could do with a little help getting my head round the following:

 

I had a problem with a newly configured ESXi 5.1 host: at reboot it hung at "vmw_satp_lsi successfully loaded".  I linked this to http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2017084 and the fact that I had iSCSI port binding in place, but I'm not so sure.  The host and SAN are configured as follows:

 

ESXi host:

 

vSwitch1

vmk1 IP: 192.168.132.105              (Port Binding vmnic 1)

vmk2 IP: 192.168.133.105              (Port Binding vmnic 2)

 

Storage array:

 

Controller 0/1: 192.168.130.100

Controller 0/2: 192.168.131.100

Controller 0/3: 192.168.132.100

Controller 0/4: 192.168.133.100

 

Controller 1/1: 192.168.130.101

Controller 1/2: 192.168.131.101

Controller 1/3: 192.168.132.101

Controller 1/4: 192.168.133.101

 

What I'm finding a little unclear is the Dell MD3220i documentation regarding this, quoted below:

 

 

"In a configuration assign one VMkernel port for each physical NIC in the system. So if there are 3 NICs, assign 3 VMkernel Ports. This is referred to in VMware’s iSCSI SAN Configuration Guide as 1:1 port binding.

 

- Note: Port binding requires that all target ports of the storage array must reside on the same broadcast domain as the VMkernel ports because routing is not supported with port binding. See VMware KB #2017084 here. "

 

 

So the above advises 1:1 port binding for multiple NICs, and then notes that all target ports must be on the same broadcast domain as the VMkernel ports?
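
For what it's worth, if port binding turns out to be inappropriate here (since the array targets sit on four different subnets), my understanding is that the existing bindings could be listed and removed like this (adapter and vmk names are placeholders):

esxcli iscsi networkportal list -A vmhba41
esxcli iscsi networkportal remove -A vmhba41 -n vmk1
esxcli iscsi networkportal remove -A vmhba41 -n vmk2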

 

Any help on understanding this appreciated.

 

Thanks

VMFS on iSCSI not available, still reported as busy


Hi all,

 

Last week I experienced an APD on a LUN backing a VMFS datastore that contains the configuration files of all my VMs.

We run the license-free version of ESXi 5.1.0.

The VMs that had their configuration files on that storage kept running without any problem. I thought I would just unmount the datastore and mount it again, but I got a "resource is busy" error.
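
I tried the unmount from the client; the CLI equivalent, as I understand it, would have been roughly this (the datastore label is a placeholder):

esxcli storage filesystem list    # shows mounted VMFS volumes and their labels
esxcli storage filesystem unmount -l ConfigDatastore    # this is where I got the same "busy" error while VMs still referenced the volume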

It seemed quite weird to me, though I suspected it was related to the VMX configuration files.

 

As soon as I shut the machines down, they became inaccessible in the left panel of the vSphere Client, and as soon as I shut the last one down the storage suddenly became available again (I think there was a pending unmount operation on it).

 

I wonder whether there would have been another way to recover from such an APD without having to stop the VMs.

 

Thanks,

 

Alex

Acceptable iSCSI target names.


Does ESXi limit the iSCSI target names that can be used to only IQN and EUI formats?

 

When I try to add a static iSCSI target with a name like this:

ny-datastore-3

 

I get the following error in vCenter (ver 5.5.0, build 1476327):

 

Operation failed, diagnostics report: iScsiLibException: status(c0000000): Invalid parameter; Message= IMA_AddStaticDiscoveryTarget


The HBA is a Qlogic QMH4062 in an HP ProLiant blade (running VMware ESXi, 5.5.0, 1331820)
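
For comparison, a target name in IQN format is accepted without complaint; what I mean is something like this (the name and address are placeholders), whether added in the client or via esxcli:

esxcli iscsi adapter discovery statictarget add -A vmhba33 -a 192.168.1.50:3260 -n iqn.2001-04.com.example:ny-datastore-3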

Expanding a datastore: what is the best way?


Hello All,

I would like your help with expanding a datastore. I have an HA cluster with three hosts (ESXi 5.5). All the hosts currently share a single datastore where all the VMs run. The datastore is created on an IBM Storwize V3700 array, and the hosts are connected over FC. I want to extend the capacity of the datastore, and I think it is possible in two ways:

-- creating a new volume on the IBM Storwize, mapping it to all hosts, and then extending the datastore onto the new LUN (adding an extent);

-- expanding the existing volume directly on the IBM Storwize, so that the datastore capacity can then be increased on the same LUN.

I want your help because this cluster runs mission-critical VMs. I have also read in the VMware documentation that it is not recommended to use more than one LUN per datastore; each datastore should reside on a single LUN.
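
In case it matters for either option, my understanding is that after the volume is created or grown on the array, the hosts need a rescan before the datastore can actually be extended, roughly:

esxcli storage core adapter rescan --all    # rescan all HBAs so the hosts see the new or resized LUN
vmkfstools -V    # refresh VMFS volume information

The extend/grow itself would then be done from the vSphere Client. I would expect the rescan not to disturb running VMs, but that is exactly what I would like confirmed.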

 

Which way would you suggest?

Can the operation be done without stopping the VMs?


How to move VMs to new storage AND new hosts


Hi,

I'm having a hard time figuring out how I am going to move all VMs from an NFS storage to a new iSCSI storage (HP SAN). We are also introducing new hardware for the hosts, so all in all this will be a whole new vCenter environment (we are also going from vSphere 4.1 to 5.5). My first thought was to attach the new storage to the old hosts, then do a storage migration one by one over a period of time (with the machines powered off), remove each VM from the old vCenter, and add it to the new vCenter environment. But we have exhausted all the NICs on the old hosts, so I cannot attach the new storage, and I cannot share a VMkernel port between iSCSI and NFS.

 

Do you have any suggestions how I can approach this?

Converting thick to thin doesn't work


Hello,

 

Please help me find out why, when I convert a thick lazy-zeroed disk to thin via vmkfstools -i ./disk.vmdk  ./thin-disk.vmdk -d thin, I still get a thick lazy-zeroed disk, even though the cloning process itself completes without errors.
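
In case it helps, this is how I check whether the clone actually came out thin (the paths are placeholders); on VMFS, ls shows the provisioned size while du shows the blocks actually allocated:

ls -lh /vmfs/volumes/Datastore1/vm/thin-disk-flat.vmdk    # provisioned size
du -h /vmfs/volumes/Datastore1/vm/thin-disk-flat.vmdk    # actually allocated blocks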

Independent disks from other VMs. Safe removal?


We're using EMC Avamar for backing up VMs.

 

The VM avamar server add's the disks of the VM's it taking backup off, as Independent and nonpersistent disks.

Avamar failed the backup job and the server still has the disks mounted. We've shut it down.

 

We're afraid to remove the disks from the Avamar server; we're not sure whether that would also remove the disks completely from the server that actually owns them.

Does anyone know if it's safe to remove them? We can't start the Avamar server up, because the disks it has attached are in use.

 

This is how it looks on the Avamar server, which has the disks of the backed-up server attached:

dump.jpg

 

This is how it looks on the backed-up server; you can see it's the same disk (DNFILXX_3.vmdk):

dump2.png

 

Can we safely remove the disks from the Avamar server without them also being deleted from the server they were backed up from?

 

Thank you in advance.

 

Regards
Pete

 


Difference Between Provision and Used Space


Dear Team,

 

 

Can someone please explain to me, in simple words, the difference between provisioned and used disk space?

 

regards

Mr VMware

VSAN: cannot add storage policy, VSAN option missing under "rule based on vendor specific capabilities"


Hey,

 

I have a problem with the final version of VSAN.

 

I have upgraded all my hosts to 5.5 Update 1 and I have configured the VSAN cluster successfully.

Unfortunately, when I try to add a storage policy, there is no "VSAN" option selectable under "rule based on vendor specific capabilities". It only shows "None".

 

Does anybody have a solution for this problem?

 

I did not have this issue with the beta version of VSAN.

 

Johannes

 

Update: what I have found is that when I try to resync the storage providers, the following error message appears:

 


