Channel: VMware Communities : Discussion List - vSphere™ Storage

Virtual machine with a 512 GB disk provisions 1 TB of storage


I created a VM on vSphere ESXi with a 512 GB virtual disk, but the VM is provisioning about 1 TB of storage.

The VMDK size is about 1 TB.

Can someone help me, please?
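For anyone hitting the same thing, a quick way to see where the space went (a generic sketch; the datastore and VM paths are placeholders, not from this post) is to compare the provisioned size of the flat VMDK with the blocks actually allocated, and to look for leftover snapshot disks:

ls -lh /vmfs/volumes/DATASTORE/VM/VM-flat.vmdk                 # provisioned (logical) size
du -h /vmfs/volumes/DATASTORE/VM/VM-flat.vmdk                  # blocks actually allocated on VMFS
ls -lh /vmfs/volumes/DATASTORE/VM/ | grep -E 'delta|sesparse'  # leftover snapshot disks

If the -flat file itself reports about 1 TB, the disk was most likely created or later extended to that size; if -delta/-sesparse files are present, snapshots are consuming the difference.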


vSphere 6.5 and external storage array with dedupe


Hello, I have an all-flash array with deduplication and compression enabled, connected via 8 Gb FC to two ESXi 6.5 hosts. I want to know if there is any way of presenting the space actually used on the array, after dedupe and compression, to the ESXi hosts. Currently I have three 3 TB datastores, and in vCenter I can see that used space on every one of them is around 80%, but on the storage array the utilization of the LUNs (with the datastores created on them) is around 30%.

Is there any way to present the actual utilization of the storage array LUNs to the vCenter datastores, through some API such as VAAI or VASA?
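Worth noting: the two numbers measure different things. vCenter reports logical space used on the VMFS volume, while the array reports physical space after dedupe and compression, so a gap like 80% vs 30% is expected. As a hedged check (the naa. identifier is a placeholder), you can at least confirm what the host knows about the LUN's thin-provisioning and VAAI capabilities:

esxcli storage core device list -d naa.xxxxxxxxxxxxxxxx | grep -i thin
esxcli storage core device vaai status get -d naa.xxxxxxxxxxxxxxxx

Array-side savings are normally surfaced through the vendor's VASA provider or vCenter plugin rather than through the datastore capacity figures themselves.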

VMware Disk Provisioning with SSD Flash Storage


I'm building a new VM that will be used as a SQL Server 2019 Standard server running Windows Server 2016 Standard. It is a very small environment, with 5 or fewer users connecting to the SQL Server for an accounting application and small databases. My single ESXi host runs 6.7 with all-flash SSD storage on a Dell PowerEdge R740: RAID 6 with 6 SSDs on a PERC H740P controller.

My question is about disk provisioning and best performance (Thick Provision Lazy Zeroed, Thick Provision Eager Zeroed, Thin Provision). I know that in the past, for best performance with SQL and Exchange setups, Thick Provision Eager Zeroed was considered best despite the extra space required. However, reading about the different provisioning types on all-flash SSD storage, the performance differences now appear to be largely negligible thanks to the SSDs.
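If eager zeroing is still preferred for the database disks, it can be chosen at creation time or retrofitted later; a minimal vmkfstools sketch, assuming hypothetical datastore and VM paths (not from this post):

vmkfstools -c 100G -d eagerzeroedthick /vmfs/volumes/datastore1/SQL01/SQL01_data.vmdk   # create eager-zeroed thick
vmkfstools -j /vmfs/volumes/datastore1/SQL01/SQL01_data.vmdk                            # inflate an existing thin disk to eager-zeroed thick

On VAAI-capable storage the zeroing is offloaded to the array, so even eager zeroing is usually quick.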

 

Any input is appreciated. 

 

Thanks.
Ken

What is best practice for large disks?


Hi,

 

We are looking to set up a video archival volume and are purchasing a new SAN for the purpose (iSCSI).

 

I understand the maximum virtual disk size is 62TB but the usable capacity on the SAN will be about 80TB.

 

I'm wondering how to best set things up to fully utilize the SAN and allow any future expansion.

 

Do I provision multiple LUNs and present them to vSphere?

Each LUN would have a datastore, which would then hold a single virtual disk.

Windows would then see two disks, which would be combined into a single volume with mount points or Storage Spaces.

 

or

 

Do I present a single storage LUN directly to the guest operating system, bypassing vSphere?
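(For context, the second option is usually implemented either as a raw device mapping or with the guest's own iSCSI initiator. A minimal RDM sketch, with a placeholder device ID and datastore path that are not from this post:

vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/datastore1/ArchiveVM/archive-rdm.vmdk   # physical-mode RDM pointer

The pointer VMDK lives on a normal datastore while I/O goes straight to the LUN.)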

 

Looking for any suggestions from real world experience.

 

Thanks

 

Damien

VMware expanded extent over 2 disks - Invalid argument on some files


Hi everyone

I have an ESXi 6.7 host with a datastore named Vira, which was originally built on a 1 TB Samsung SSD. After a while we needed to expand it, so we added another 1 TB SSD as an extent, growing it to about 2 TB. Last week we had a bad hardware restart, and afterwards the Vira datastore had some issues. The first problem was its naming, and I figured out that the datastore was not available correctly. I tried to power on a virtual machine that lives on this datastore and it didn't come up. After some investigation I found that one of the disks in the 2 TB extent was not available. I changed the missing disk's SATA port and it became visible to the system again. After starting the ESXi host I could see the datastore, but since that change and restart I cannot power on the machine, and the flat VMDK file is locked. Every time I try to clone it, touch it, or move it, I get an "Invalid argument" error.

I searched a lot in VMware KBs and forums, found that it might be a process holding a lock, and checked with vmkfstools -D. The file has a lock that belongs to the ESXi host itself; restarting services, or even the host, doesn't remove the lock.

 

vmkfstools -D W-Sharing\ SRV_2-flat.vmdk

Lock [type 10c00001 offset 134742016 v 8, hb offset 3702784

gen 15, mode 1, owner 57a3177f-dbdbe28c-4845-002590cb29c2 mtime 82378

num 0 gblnum 0 gblgen 0 gblbrk 0]

Addr <4, 52, 0>, gen 1, links 1, type reg, flags 0x9, uid 0, gid 0, mode 600

len 1869169766400, nb 1299233 tbz 0, cow 0, newSinceEpoch 1299233, zla 3, bs 1048576

affinityFD <4,52,0>, parentFD <4,35,0>, tbzGranularityShift 20, numLFB 0

lastSFBClusterNum 2364, numPreAllocBlocks 0, numPointerBlocks 218
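One detail worth checking in that output: the last six bytes of the lock owner UUID (002590cb29c2) are the MAC address of the host that holds the lock, i.e. 00:25:90:cb:29:c2, and mode 1 means an exclusive lock. A quick check of whether that is really this host:

esxcli network nic list    # compare the MAC Address column against 00:25:90:cb:29:c2

If the owner MAC matches this host and a reboot still doesn't clear the lock, the on-disk lock metadata itself is likely damaged rather than merely held.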

 

Trying to touch the file:

touch: W-Sharing SRV_2-flat.vmdk: Invalid argument

 

I unmounted the datastore and checked it using VOMA, with the results below.

Here is the extent list when datastore Vira is mounted:

esxcli storage vmfs extent list

Volume Name  VMFS UUID                            Extent Number  Device Name                                                               Partition

-----------  -----------------------------------  -------------  ------------------------------------------------------------------------  ---------

All OS       57892a55-88ac4e8a-d55d-002590cb29c2              0  t10.ATA_____Samsung_SSD_860_EVO_1TB_________________S4BDNG0M104285X_____          3

Kiantech     57a5fb65-e2d18e7a-36f6-002590cb29c2              0  t10.ATA_____Samsung_SSD_840_EVO_1TB_________________S1D9NEADA02146Y_____          1

Wiera1       5849d59d-f80d0f04-5805-002590cb29c2              0  t10.ATA_____Samsung_SSD_860_QVO_2TB_________________S4CYNF0M207684P_____          1

Vira         57a5fac4-a0e012b6-91f4-002590cb29c2              0  t10.ATA_____Samsung_SSD_840_EVO_1TB_________________S1D9NEADA02133Y_____          1

Vira         57a5fac4-a0e012b6-91f4-002590cb29c2              0  t10.ATA_____Samsung_SSD_840_EVO_1TB_________________S1D9NEADB01872B_____          1

 

and here is the result when Vira is not mounted:

esxcli storage vmfs extent list

Volume Name  VMFS UUID                            Extent Number  Device Name                                                               Partition

-----------  -----------------------------------  -------------  ------------------------------------------------------------------------  ---------

All OS       57892a55-88ac4e8a-d55d-002590cb29c2              0  t10.ATA_____Samsung_SSD_860_EVO_1TB_________________S4BDNG0M104285X_____          3

Kiantech     57a5fb65-e2d18e7a-36f6-002590cb29c2              0  t10.ATA_____Samsung_SSD_840_EVO_1TB_________________S1D9NEADA02146Y_____          1

Wiera1       5849d59d-f80d0f04-5805-002590cb29c2              0  t10.ATA_____Samsung_SSD_860_QVO_2TB_________________S4CYNF0M207684P_____          1

Vira         57a5fac4-a0e012b6-91f4-002590cb29c2              0  t10.ATA_____Samsung_SSD_840_EVO_1TB_________________S1D9NEADA02133Y_____          1

 

and here is the VOMA result itself:

 

voma -m vmfs -f check -d /vmfs/devices/disks/t10.ATA_____Samsung_SSD_840_EVO_1TB_________________S1D9NEADA02133Y_____:1

Running VMFS Checker version 2.1 in check mode

Initializing LVM metadata, Basic Checks will be done

 

Checking for filesystem activity

Performing filesystem liveness check..|Scanning for VMFS-6 host activity (4096 bytes/HB, 1024 HBs).

Phase 1: Checking VMFS header and resource files

   Detected VMFS-6 file system (labeled:'Vira') with UUID:57a5fac4-a0e012b6-91f4-002590cb29c2, Version 6:82

         ERROR: Trying to do IO beyond device Size

         ERROR: Failed to check fbb.sf.

   VOMA failed to check device : Limit exceeded

 

Total Errors Found:           0

   Kindly Consult VMware Support for further assistance

 

Can you please help me investigate this problem?

Thanks, everyone

Mohammad

Deleted Partition Table


Hello, I have done multiple searches and have not come across the situation I have put myself in, so I will ask.
I have ESXi 6.5 connected to a NAS unit via iSCSI.
While re-configuring my host's local storage, I accidentally hit delete on the wrong storage device, deleting the NAS LUN's partition table.

 

This was done from the GUI in 6.5.

Storage - Devices - right click on the device - Clear partition table

I know. Click happy. Shouldn't have happened.

 

Looking at my NAS, it says I'm still using 2 TB of space in the LUN, and my total consumption hasn't changed.
This suggests the data is still there (after all, I only deleted the partition table).

 

Is my recovery possible and if so, what is my best option?
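For reference, the usual recovery path (VMware has a KB on recreating a lost VMFS partition table) is to recreate just the partition entry without touching the data. A hedged sketch, assuming a placeholder device ID and that the original partition used the default VMFS5 start sector of 2048:

partedUtil getUsableSectors /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
partedUtil setptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx gpt "1 2048 <last-usable-sector> AA31E02A400F11DB9590000C2911D1B8 0"

AA31E02A... is the standard VMFS partition type GUID. Don't write anything else to the LUN first, and verify the start sector against the KB, since a wrong offset will keep the volume from mounting.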

 

I thank you in advance

NFS Configuration questions


We are migrating to new NetApp MetroCluster storage. All storage is NFS.

In the past our storage targets had only one IP address, so there was never a question about which address to connect to.

The storage targets on the new system have two IP addresses. My questions are:

 

1. Is it necessary to mount volumes using the same IP address across hosts for vMotion compatibility, or does vMotion just look at the datastore name?

2. Is it best practice to add both IP addresses when you mount a volume to a cluster?
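(Context for question 1: an NFSv3 datastore's identity is derived from the server address plus the export path, not just the display name, so hosts that mount the same export through different IPs end up with what vCenter treats as different datastores. A hedged sketch of mounting consistently across hosts, with placeholder IP, export, and name:

esxcli storage nfs add -H 192.168.10.11 -s /vol/datastore01 -v NetApp_DS01

Run the same command, with the same -H value, on every host in the cluster.)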

 

I opened a call with VMware but stupidly made it low priority, so who knows when I'll hear from them.

 

Thanks

Disk array failure and recovery leaves a spanned datastore degraded and unusable


I have a spanned datastore volume comprised of 4 extents. A disk array failure forced me to re-import the disk volume on a very old PERC 5/i controller. I was able to recover the original disk array, but it was assigned a new NAA number, which is causing the VMFS-5 datastore not to recognize it. I am not sure what tools to use to update the datastore so that the new NAA disk replaces the old one and provides the missing extent.

 

[root@Pegasus:~] vmkfstools -Ph /vmfs/volumes/datastore1\ \(1\)/

VMFS-5.54 file system spanning 4 partitions.

File system label (if any): datastore1 (1)

Mode: public

Capacity 7.3 TB, 917.9 GB available, file block size 1 MB, max supported file size 62.9 TB

UUID: 51f563b0-25ee3451-895e-00188b440eff

Partitions spanned (on "lvm"):

        naa.600188b0436047001987f1834fc6754b:3

        naa.600188b0436047001987f1dce0ed5c8c:1

        (device naa.600188b0436047001987f219956c8e6e:1 might be offline)

        naa.600188b04360470019997bb04b041de4:1

        (One or more partitions spanned by this volume may be offline)

Is Native Snapshot Capable: YES

 

The drive that contained the extent "naa.600188b0436047001987f219956c8e6e:1" is the old disk array; the new disk array was assigned "naa.600188b043604700256a4374a7dd9376:1".

 

What are the steps to update the datastore to replace the old NAA disk with the new NAA disk?
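One hedged diagnostic to run first: when an array is re-imported and a LUN comes back under a new NAA ID, ESXi often treats the VMFS metadata on it as an unresolved copy (snapshot) of the original volume. This lists any such volumes:

esxcli storage vmfs snapshot list

If the new device shows up there, VMware's resignature/force-mount documentation covers the next steps; I'm offering this as a first check, not as the complete fix for a spanned volume.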


Intel Optane 905P vs. Samsung 983 ZET: best performance for running many virtual machines in vSphere simultaneously


I am looking to purchase two premium SSDs that will give the best performance for running many virtual machines simultaneously on vSphere. The two I am considering are the Intel Optane 905P and the Samsung 983 ZET. I have seen conflicting information on comparison websites, and cannot find any site that compares them specifically for running virtual machines.

 

I will be running as many as 50 virtual machines simultaneously on vSphere, all with Windows installed, and the applications consuming the most resources on all of them will be Chrome and other browsers.

 

Also, is it true that two identical 1 TB SSDs would provide better performance than one 2 TB SSD, given that most of the disk space will be occupied by the VMs running Windows and I want to run as many VMs as possible simultaneously?

 

Or are there other SSDs better suited for running many Windows VMs simultaneously that I should be looking at instead?

 

I would appreciate any advice. I am willing to pay a premium for the best performance.

Performance boost with iSCSI


 

 

 

Gentlemen, I have 2 servers with 8 virtual machines each. All machines are stored on IBM DS3512 storage, communicating via iSCSI, and I have 2 NICs dedicated exclusively to storage traffic. What should I do to get better performance in communication with the storage?
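A hedged starting point for the two dedicated NICs (the adapter, vmk, and device names are placeholders): bind both VMkernel ports to the software iSCSI adapter so both paths carry traffic, then set round-robin pathing on the LUNs:

esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2
esxcli storage nmp device set -d naa.xxxxxxxxxxxxxxxx -P VMW_PSP_RR

Each bound VMkernel port needs exactly one active uplink in its teaming policy for port binding to work as intended.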

 

Error Removing Datastore


I have two clusters (Servers and VDI) that share the same storage array. I removed everything from the datastore and tried to remove the datastore from the system. Out of 7 hosts, 5 of them removed the datastore and 2 have not. Those two now show it as inaccessible, and no matter what I try it will not go away. Any thoughts on how to remove it from those two hosts? Any help is appreciated.
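A hedged cleanup sketch for the two stuck hosts (the label and device ID are placeholders): unmount the filesystem, detach the device, then rescan:

esxcli storage filesystem list
esxcli storage filesystem unmount -l OLD_DATASTORE
esxcli storage core device set -d naa.xxxxxxxxxxxxxxxx --state=off
esxcli storage core adapter rescan --all

If the LUN has already been unpresented on the array side, the unmount step may fail, and the rescan alone may clear the inaccessible entry.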


Help Removing Datastore


I have a datastore that I am trying to remove, and it tells me there is a virtual machine referencing this datastore. However, when I look at the datastore there are NO files listed. Any help is appreciated. Thanks
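In cases like this the culprit is often a VM with an ISO still mounted from that datastore, or a .vmx still pointing at it, even though the datastore browser shows no files. A hedged way to find the reference from the host shell (the grep pattern is a placeholder for the datastore name):

vim-cmd vmsvc/getallvms                              # lists each VM and its home datastore
grep -l "DATASTORE_NAME" /vmfs/volumes/*/*/*.vmx     # any .vmx still referencing it (ISOs, disks)

Detaching the ISO or repointing the stray path usually lets the removal proceed.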

iSCSI APD timeout issue in ESXi 6.7.0


Issue: APD timeout in ESXi 6.7.0

 

Our setup:

 

This is a pre-production setup with 4 ESXi servers backed by Windows iSCSI storage.

The iSCSI datastore is 10 TB (presented from a physical Windows iSCSI server), and the same LUN is presented to all 4 ESXi servers.

 

When I try to storage-vMotion any large VM (a 600 GB VM) from the local datastore to the iSCSI datastore, it gets interrupted partway through: after about 60% of the storage vMotion, the ESXi servers go into a not-responding state one by one because of an APD situation. To recover, we have to reboot the ESXi hosts one by one, and this happens frequently, whenever we do a storage vMotion.

 

See the logs below:

 

2019-12-03T21:36:32.018Z: [APDCorrelator] 608631351756us: [vob.storage.apd.timeout] Device or filesystem with identifier [naa.604b53240003606301d5aa1ee87489b0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.

2019-12-03T21:36:32.018Z: [APDCorrelator] 608633935712us: [esx.problem.storage.apd.timeout] Device or filesystem with identifier [naa.604b53240003606301d5aa1ee87489b0] has entered the All Paths Down Timeout state after being in the All Paths Down state for 140 seconds. I/Os will now be fast failed.

2019-12-03T21:36:47.993Z: [iscsiCorrelator] 608647326769us: [vob.iscsi.target.async.event] The target iqn.1991-05.com.microsoft:-mts-iscsi-win-lun3-target issued an async event for vmhba64 @ vmk1 with reason code 0000

 

To solve this issue,

 

1) We increased the APD timeout from 140 to 300 seconds on all the ESXi hosts (see "Change Timeout Limits for Storage APD").

2) We disabled "Delayed ACK" on all 4 ESXi hosts (see the VMware Knowledge Base).
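For reference, a hedged sketch of the same two changes from the CLI (vmhba64 is the adapter from the log above; I'd verify the DelayedAck key name with 'esxcli iscsi adapter param get' first, as it is from memory):

esxcli system settings advanced set -o /Misc/APDTimeout -i 300
esxcli iscsi adapter param set -A vmhba64 -k DelayedAck -v false

Apply on each host; the Delayed ACK change typically requires a host reboot to take effect.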

 

After these two parameter changes, the environment has been stable for the last 12 hours, and a storage vMotion completed without any issues.

 

Are there any other recommendations (with respect to the APD timeout value) from the ESXi/iSCSI side? Please suggest.

 

Thanks,

Manivel R

Improving Stun Times


VMware 6.0

Compellent SC5020 SAN connected at 10Gb

VM Size 1.6TB

VM Workload: SQL

12 disks split among 3 paravirtual controllers

OS disk using LSI Logic SAS

 

Does anyone know if there is anything else we can do to improve stun times? We are getting really good stun times, but sometimes they go up.

I pulled the stun times for the last 30 days for this VM, and for the most part they are a tenth of a second to half a second, but sometimes they increase to 1.5 seconds, with the highest at 1.9 seconds.

The average for the month is 0.59 seconds. I know these stun times are probably already in the "excellent" category, but I would like to see if there are any tweaks to bring them down even further, especially the ones that creep up over a second.
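One hedged measurement note: the hypervisor logs each stun/unstun pair in the VM's vmware.log, so exact durations can be pulled without a reporting tool (the path is a placeholder):

grep -i stun /vmfs/volumes/DATASTORE/VM/vmware.log

Since stun length largely tracks how much outstanding state has to be quiesced at the handoff, spikes usually line up with snapshot consolidation or heavy write bursts at that moment.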

 

I attached our report of stun times for this VM for the last 30 days in case anyone's interested.


Cannot reclaim space on VMFS store


Hey, hoping someone can point me in the right direction.

 

I recently took over an ESXi instance, and have run into space issues on the box.

 

Physical Server: HP Proliant DL380p Gen 8

OS: ESXi 6.5 U2 Sept 2018 – Last Pre-Gen9 custom image

License: VMware vSphere 6 Enterprise Plus

 

I have a datastore which is 2.73 TB in size and is VMFS5; thin provisioning is supported.

All the VMs are thin provisioned, but ESXi is not reclaiming space from deleted VMs, from deleted files inside the VMs, or from deleted snapshots.

 

Checking the disk information:

 

[root@esxi01:~] esxcli storage core device list -d naa.600508b1001caf2ad46535555b3e0206

naa.600508b1001caf2ad46535555b3e0206

   Display Name: Local HP Disk (naa.600508b1001caf2ad46535555b3e0206)

   Has Settable Display Name: true

   Size: 2861511

   Device Type: Direct-Access

   Multipath Plugin: NMP

   Devfs Path: /vmfs/devices/disks/naa.600508b1001caf2ad46535555b3e0206

   Vendor: HP     

   Model: LOGICAL VOLUME 

   Revision: 5.42

   SCSI Level: 5

   Is Pseudo: false

   Status: on

   Is RDM Capable: true

   Is Local: true

   Is Removable: false

   Is SSD: false

   Is VVOL PE: false

   Is Offline: false

   Is Perennially Reserved: false

   Queue Full Sample Size: 0

   Queue Full Threshold: 0

   Thin Provisioning Status: unknown

   Attached Filters:

  VAAI Status: unsupported

   Other UIDs: vml.0200020000600508b1001caf2ad46535555b3e02064c4f47494341

   Is Shared Clusterwide: false

   Is Local SAS Device: true

   Is SAS: true

   Is USB: false

   Is Boot USB Device: false

   Is Boot Device: false

   Device Max Queue Depth: 1024

   No of outstanding IOs with competing worlds: 32

   Drive Type: unknown

   RAID Level: unknown

   Number of Physical Drives: unknown

   Protection Enabled: false

   PI Activated: false

   PI Type: 0

   PI Protection Mask: NO PROTECTION

   Supported Guard Types: NO GUARD SUPPORT

   DIX Enabled: false

   DIX Guard Type: NO GUARD SUPPORT

   Emulated DIX/DIF Enabled: false

 

[root@esxi01:~] esxcli storage core device vaai status get -d naa.600508b1001caf2ad46535555b3e0206

naa.600508b1001caf2ad46535555b3e0206

   VAAI Plugin Name:

   ATS Status: unsupported

   Clone Status: unsupported

   Zero Status: unsupported

   Delete Status: unsupported

 

And I've checked the related configuration in ESXi:

Key                                Value  Default  Overridden
---------------------------------  -----  -------  ----------
DataMover.HardwareAcceleratedInit      1        1          No
  (Enable hardware accelerated VMFS data initialization; requires compliant hardware)
DataMover.HardwareAcceleratedMove      1        1          No
  (Enable hardware accelerated VMFS data movement; requires compliant hardware)
DataMover.MaxHeapSize                 64       64          No
  (Maximum size of the heap in MB used for data movement)
VMFS3.HardwareAcceleratedLocking       1        1          No
  (Enable hardware accelerated VMFS locking; requires compliant hardware. Please see http://kb.vmware.com/kb/2094604 before disabling this option)

 

Trying a manual unmap also doesn't work:

[root@esxi01:~] vmkfstools -y /vmfs/volumes/datastore2/

Volume '/vmfs/volumes/datastore2/' spans device 'naa.600508b1001caf2ad46535555b3e0206:1' that does not support unmap.

Devices backing volume /vmfs/volumes/datastore2/ do not support UNMAP.

 

So really I'm at a loss as to where the problem is: the physical disks, or the LUN?
Is there any way to enable this unmap command without having to rebuild the entire datastore?
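For reference, on 6.5 the supported manual-reclaim command for VMFS5 is the esxcli form rather than the older vmkfstools -y:

esxcli storage vmfs unmap -l datastore2

but given the device reports "Delete Status: unsupported", I would expect the same failure. HP Smart Array logical volumes generally don't pass UNMAP through to the disks, so no host-side setting will enable it; the freed blocks remain reusable by new allocations inside the datastore, they just can't be handed back to the device.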

 

Can anyone point me in the right direction?

 

Thanks!

 

 

 

ESXi 6.5 storage share presentation and vVols


Hi all

I would like to ask for official documentation on VMware best practices for mounting and presenting volume shares, and thus vVols, from a NetApp storage system to an ESXi 6.5 solution designed for a private cloud.

Thank you,

Rafael

VMFS Problem


Hi, I have vSphere 6.5 with 1 VM. After a crash in the VM, I rebooted the machine, and after that my datastore doesn't show up any more. Can someone help me restore the VMFS partition? Here are the logs:

 

vmkernel:

2019-12-20T17:50:35.156Z cpu5:2097881)NMP: nmp_ThrottleLogForDevice:3802: Cmd 0x28 (0x459a40bcac00, 2098547) to dev "naa.5000c500b702bf3b" on path "vmhba0:C0:T0:L0" Failed: H:0x0 D:0x2 P:0x0 Va

2019-12-20T17:50:35.156Z cpu5:2097881)ScsiDeviceIO: 3449: Cmd(0x459a40bcac00) 0x28, CmdSN 0x1 from world 2098547 to dev "naa.5000c500b702bf3b" failed H:0x0 D:0x2 P:0x0 Valid sense data: 0x3 0x1

2019-12-20T17:50:35.749Z cpu3:2097419)WARNING: NFS: 1227: Invalid volume UUID mpx.vmhba1:C0:T4:L0

2019-12-20T17:50:35.749Z cpu1:2097419)FSS: 6092: No FS driver claimed device 'mpx.vmhba1:C0:T4:L0': No filesystem on the device

2019-12-20T17:50:35.802Z cpu8:2097412)WARNING: NFS: 1227: Invalid volume UUID naa.5000c500b702bf3b:3

2019-12-20T17:50:35.820Z cpu8:2097412)FSS: 6092: No FS driver claimed device 'naa.5000c500b702bf3b:3': No filesystem on the device

______________________________________________________________________________________________________________________________________________

[root@vmware04:/dev/disks] fdisk -l /vmfs/devices/disks/naa.5000c500b702bf3b

 

*

* The fdisk command is deprecated: fdisk does not handle GPT partitions.  Please use partedUtil

*

 

Found valid GPT with protective MBR; using GPT

 

 

Disk /vmfs/devices/disks/naa.5000c500b702bf3b: 1172123568 sectors, 2981M

Logical sector size: 512

Disk identifier (GUID): cb59f4eb-4a6a-4ff2-8a3b-e0a95a604c22

Partition table holds up to 128 entries

First usable sector is 34, last usable sector is 1172123534

 

 

Number  Start (sector)    End (sector)  Size Name

     1              64            8191 4064K

     2         7086080        15472639 4095M

     3        15472640      1170997214  550G

     4            8224          520191  249M

     5          520224         1032191  249M

     6         1032224         1257471  109M

     7         1257504         1843199  285M

     8         1843200         7086079 2560M

___________________________________________________________________________________________________________________________

[root@vmware04:/dev/disks] partedUtil getptbl /vmfs/devices/disks/naa.5000c500b702bf3b

gpt

72961 255 63 1172123568

1 64 8191 C12A7328F81F11D2BA4B00A0C93EC93B systemPartition 128

4 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

5 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

6 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

7 1257504 1843199 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

8 1843200 7086079 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0

2 7086080 15472639 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0

3 15472640 1170997214 AA31E02A400F11DB9590000C2911D1B8 vmfs 0

________________________________________________________________________________________________________________________

Mount Point                                        Volume Name  UUID                                 Mounted  Type        Size        Free

-------------------------------------------------  -----------  -----------------------------------  -------  ----  ----------  ----------

/vmfs/volumes/d764097a-908b92d4-180b-7d7633d7d443               d764097a-908b92d4-180b-7d7633d7d443     true  vfat   261853184   106246144

/vmfs/volumes/5da1138f-8c37d4c0-f8a6-842b2b688783               5da1138f-8c37d4c0-f8a6-842b2b688783     true  vfat  4293591040  4273799168

/vmfs/volumes/6132ef79-569f72e4-1515-4428b45bee3e               6132ef79-569f72e4-1515-4428b45bee3e     true  vfat   261853184   261840896

/vmfs/volumes/5da11380-395afa60-2963-842b2b688783               5da11380-395afa60-2963-842b2b688783     true  vfat   299712512   117432320

___________________________________________________________________________________________________________________________

[root@vmware04:/dev/disks] esxcli storage core device smart get -d naa.5000c500b702bf3b

Parameter                     Value              Threshold  Worst

----------------------------  -----------------  ---------  -----

Health Status                 IMPENDING FAILURE  N/A        N/A

Media Wearout Indicator       N/A                N/A        N/A

Write Error Count             1228               N/A        N/A

Read Error Count              20226778           N/A        N/A

Power-on Hours                N/A                N/A        N/A

Power Cycle Count             N/A                N/A        N/A

Reallocated Sector Count      N/A                N/A        N/A

Raw Read Error Rate           N/A                N/A        N/A

Drive Temperature             46                 N/A        N/A

Driver Rated Max Temperature  N/A                N/A        N/A

Write Sectors TOT Count       N/A                N/A        N/A

Read Sectors TOT Count        N/A                N/A        N/A

Initial Bad Block Count       N/A                N/A        N/A
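Before any repair attempt, a hedged caution: the SMART output above reports Health Status IMPENDING FAILURE with large read/write error counts, so the physical disk itself is dying, and any recovery should start by cloning it to healthy media. After imaging, the VMFS partition can be checked with VOMA (the device ID is taken from the output above):

voma -m vmfs -f check -d /vmfs/devices/disks/naa.5000c500b702bf3b:3

If VOMA can read the metadata, the "No FS driver claimed device" message more likely reflects corruption from the failing media than a lost partition, since the partition table still shows partition 3 with the VMFS type GUID.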

SD clone


Hi, I'm trying to upgrade from VMware ESXi 5 to 6.7.

My servers boot ESXi from an SD card, and I think it is a good idea to back it up or clone it before the upgrade.

 

I found this article on the Internet: https://www.virten.net/2014/12/clone-esxi-installations-on-sd-cards-or-usb-flash-drives/

 

but when I tried to clone the SD card I got this error:

dd: /dev/disks/mpx.vmhba32:C0:T0:L0: Function not implemented

 

Another option is to remove the SD card from the server and clone it with software on Windows 10, but I don't know if that is a good way to do it.

 

Could anyone help me understand what is happening with the dd command?
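One hedged guess, offered only as an assumption: if ESXi is currently booted from that SD card, the raw device is in active use, which can produce errors like this from dd; cloning it offline (booted from other media, or in a card reader on another machine) avoids the problem. The dd invocation itself would look like this, with a placeholder output path:

dd if=/dev/disks/mpx.vmhba32:C0:T0:L0 of=/vmfs/volumes/datastore1/esxi-sd.img bs=1M

Cloning the card under Windows with an imaging tool, as you suggest, is a reasonable alternative since the copy is block-for-block either way.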

 

Kind regards

RAID card for Mac Pro 5.1 internal hard drives.


I want to install ESXi on a Mac Pro 5.1 using the internal drives (DAS).

This works just fine, but when it comes to creating a RAID on the internal drives there are some issues.

 

I need help finding a RAID card that works with the internal hard drives of the Mac Pro 5.1 and meets the compatibility requirements of VMware ESXi 5.1 and newer.

 

Please let me know if such a RAID card exists.
