If ransomware hit, for example, it could encrypt the disk-based Veeam stores (granted, I've put things in place to hopefully stop that), but the snapshots on the Nimble should be unaffected as an additional recovery point. Back in the day, Nimble recommended using in-guest iSCSI volumes, but I think they have since moved more toward a VMDK approach.
In the case of a large file server, utilizing DFS and splitting the shares up into multiple VMs could provide additional operational agility. While most data restores are small in nature (accidentally deleted files), an operator error that deletes a partition, or some other larger "blast radius" failure, could lead to a longer restore time.
This challenge can largely be mitigated by keeping full replicas of the VMs and activating a replica, or by using "instant recovery" style backup solutions (which often present the backup data back over NFS). There are practical limits on how many IOPS, and how much throughput at a given latency, can be achieved this way, and in some rare cases more virtual SCSI HBAs (and volumes, as they cannot be multiplexed) may be necessary.
Long term, end-to-end NVMe with multiple queued IO paths will mitigate this. Thin provisioning is a powerful technology that allows you to over-provision up front and avoid the hassle of constantly expanding guest partitions and file systems as VMDKs need to grow.
Unfortunately, many modern file systems are "thin unfriendly" and will gradually redirect writes into free space on the partition until the VMDK becomes effectively "thick" and inflated. For more information, check out this video: #StorageMinute: SAN Space Reclamation, and read this guide to using the feature.
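To make that inflation mechanism concrete, here is a minimal sketch. The block counts and the always-pick-fresh-space allocator are simplifying assumptions, but they capture why a thin disk's on-array footprint converges to the full provisioned size under steady rewrites:

```python
def thin_footprint(total_blocks, live_blocks, rewrites):
    """Blocks ever written after `rewrites` rewrites, assuming a
    'thin-unfriendly' guest file system that steers every rewrite
    into never-before-used free space. The thin VMDK's allocated
    footprint grows toward the full provisioned size even though
    the guest's live data set never changes."""
    return min(total_blocks, live_blocks + rewrites)

# 100 blocks provisioned, 10 blocks of live data, long-running churn:
# the thin disk ends up effectively thick.
print(thin_footprint(total_blocks=100, live_blocks=10, rewrites=500))  # 100
```

Without in-guest TRIM/UNMAP or periodic space reclamation, none of those previously touched blocks are ever handed back to the array.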
While some modern file systems (ReFS and ZFS, for example) don't have this issue, it is a concern to be aware of. In vSphere 5.5.x and 6.0.x, large capacity virtual disks have these conditions and limitations: An ESXi 5.5 or later host is required.
A maximum of 62 TB is enforced, even if the underlying NFS file system supports a greater size. Virtual machines with large capacity disks have these conditions and limitations: The guest operating system must support large capacity virtual hard disks.
The datastore format must be VMFS-5 or later, or an NFS volume on a Network Attached Storage (NAS) server. vSphere Flash Read Cache supports a maximum hard disk size of 16 TB.
When you add or configure virtual disks, always leave a small amount of overhead. These operations cannot finish when the maximum amount of disk space is allocated.
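A small sketch of the checks implied above; the 62 TB ceiling comes from the text, while the overhead reserve value is an arbitrary illustration:

```python
def can_provision(requested_tb, free_tb, max_disk_tb=62.0, overhead_tb=0.5):
    """Validate a requested virtual disk size against the limits above:
    stay under the 62 TB maximum and leave some datastore overhead.
    The 0.5 TB reserve here is an illustrative placeholder, not a
    documented VMware value."""
    if requested_tb > max_disk_tb:
        return (False, "exceeds the 62 TB virtual disk maximum")
    if requested_tb + overhead_tb > free_tb:
        return (False, "would not leave enough overhead on the datastore")
    return (True, "ok")
```

The point of the overhead check is the sentence above: operations such as snapshot and expansion cannot finish if every last byte of the datastore is already allocated.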
You cannot relocate virtual disks larger than 2 TB to datastores other than VMFS-5, or to hosts older than ESXi 5.5. To enable the Microsoft Windows operating system to address a maximum storage capacity for a device greater than 2 TB, the disk must be initialized using the GUID Partition Table (GPT) partitioning scheme.
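The 2 TB boundary falls straight out of the MBR partition format's 32-bit sector addressing; GPT's 64-bit addressing removes it:

```python
SECTOR_BYTES = 512

# MBR stores a 32-bit LBA, so it can address at most 2^32 sectors.
mbr_max_bytes = (2**32) * SECTOR_BYTES
print(mbr_max_bytes // 2**40)  # 2 (TiB)

# GPT stores a 64-bit LBA, lifting the limit far beyond practical sizes.
gpt_max_bytes = (2**64) * SECTOR_BYTES
```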
Changes in virtual machine snapshots for VMDKs larger than 2 TB: Snapshots taken on VMDKs larger than 2 TB are now in Space-Efficient Virtual Disk (SEsparse) format. The redo logs are automatically created as SEsparse instead of VMFSsparse (delta) when the base flat VMDK is larger than 2 TB.
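That format selection can be summarized in a couple of lines (the function name is mine, not a VMware API):

```python
TWO_TB = 2 * 2**40  # bytes

def redo_log_format(base_vmdk_bytes):
    """Snapshot redo-log format chosen by ESXi 5.5+ for a given base disk:
    SEsparse above 2 TB, classic VMFSsparse (delta) otherwise."""
    return "sesparse" if base_vmdk_bytes > TWO_TB else "vmfssparse"
```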
Linked clones are SEsparse if the parent disk is larger than 2 TB. Troubleshooting: When you attempt to create a large virtual disk on a VMFS-3 datastore or on NFS using ext3, you see this error in the vSphere Client or when using vmkfstools: Failed to create virtual disk: The destination file system does not support large files (12).
When you attempt to create a large VMDK using the vSphere Client, you see the error: The disk capacity entered was not a properly formed number or was out of range. Checking the size of the newly created or expanded VMDK, you find that it is 4 TB.
VMware ESXi 5.5 introduces support for virtual machine disks (VMDKs) larger than 2 TB. This article provides information on the conditions and limitations of large capacity virtual disks in ESXi 5.5.x and 6.0.x. Back when we were running VMware Server, we were instructed to keep the virtual disks on our guests relatively small (approx. …).
The reason being that larger VMDKs were unwieldy and often caused performance issues. If you have concerns about large, unwieldy VMDKs, you can always use RDMs or store the data on a NAS share.
AntonVZhbankov Mar 25, 2009 12:52 PM (in response to sant0sk1) If you're using Windows guests, keep VMDKs as small as possible. If you need temporary space, do not expand disks; create new ones and delete them after use.
Vmmeup Mar 26, 2009 6:24 AM (in response to jasoncllsystems) I am curious as to where the 300 GB number comes from? Granted, fewer VMs are better, but datastores that are too small create other issues, such as the overhead of more LUNs to manage within the environment.
Depending on how the storage array is configured and how the disks are carved up, it can become irrelevant on the SAN side how many VMs are on each LUN if they are reading and writing to the same spindles. There are still other factors, such as the queue depths, etc., but I think 300 GB is too small for a cap; 500 GB is a more realistic number with the sizes of the VMs out there today.
VCP, TSP, CCNA, CCA (XenServer), MCTS Hyper-V & SCVMM08 Uushaggy Sep 24, 2009 4:39 AM (in response to vmmeup) I can see some logic in the 300 GB LUN with a maximum file size (VMDK, not VMFS) of 256 GB (which is the default).
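As a back-of-the-envelope companion to the thread above, here is a rough fit calculation; the snapshot overhead and slack figures are my assumptions, not values from the posts:

```python
def vms_per_datastore(lun_gb, avg_vmdk_gb, snapshot_overhead=0.2, slack_gb=20):
    """Rough count of VMs of a given average VMDK size that fit on a LUN,
    padding each VM for snapshots/swap and reserving fixed free slack.
    Illustrative only -- real sizing also weighs IOPS and queue depth."""
    usable_gb = lun_gb - slack_gb
    per_vm_gb = avg_vmdk_gb * (1 + snapshot_overhead)
    return max(0, int(usable_gb // per_vm_gb))

print(vms_per_datastore(300, 40))  # 5 -- the 300 GB cap fills fast
print(vms_per_datastore(500, 40))  # 10 -- the 500 GB figure argued above
```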
Based on these calculations, we could safely set the following size for the database files: VMs are distributed across 20 ESXi hosts, which use the Mount Local option.
Based on these calculations, we could safely set the following size for the database files: VMs are distributed across 100 ESXi hosts, which use the Mount Local option.
Based on these calculations, we could safely set the following size for the database files: Important: These examples provide a high-level estimate and are intended to give you a good idea of the approximate size of the database.
The actual size will vary based on the way SQL Server stores data. Sizing of the transaction log with a different database recovery model (such as full or bulk-logged) is much less precise and much more dependent on the environment.
Despite the relatively small size of the App Volumes database and the lack of critical customer data in it, availability of the database is crucial for App Volumes Manager performance. For production App Volumes environments, use an Enterprise or Standard edition of Microsoft SQL Server.
When designing the SQL Server environment that supports App Volumes, be sure to follow Microsoft best practices. VMware recommends that if you use the full recovery model, set the size of the transaction log large enough so that the auto-grow option is used only as a contingency for unexpected growth, or set the transaction log to a fixed size.
Figure 2: Transaction Log Set to a Fixed Maximum Size. This strategy maintains the transaction log at a reasonable size without impacting SQL Server performance.
If auditing data is not required, consider pruning the VMware App Volumes SQL database. In most environments, multiple App Volumes Manager servers are deployed.
You would see this increase in the number of background jobs only on rare occasions, when the rate of change in the environment is high. Contact VMware Technical Support to obtain new interval values.
For better performance and reliability, consider sizing and configuring App Volumes Manager servers as described in the VMware Knowledge Base article VMware App Volumes Sizing Limits and Recommendations (67354). In production environments, it is a good practice to combine multiple applications into each AppStack/package.
Configure the highly available database by following the Microsoft SQL Server documentation. Using the ODBC control panel, configure the new system DSN to use the SQL Server Native Client and point to a primary and a failover SQL Server.
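Those DSN settings map onto an ODBC connection string like the following sketch; the server and database names are placeholders, and `Failover_Partner` is the SQL Server Native Client keyword for the mirror server:

```python
def app_volumes_conn_string(primary, failover, database="AppVolumes"):
    """Build an ODBC connection string pointing at a primary SQL Server
    with a database-mirroring failover partner, as described above.
    Server/database names are examples, not required values."""
    return (
        "DRIVER={SQL Server Native Client 11.0};"
        f"SERVER={primary};Failover_Partner={failover};"
        f"DATABASE={database};Trusted_Connection=yes;"
    )

print(app_volumes_conn_string("sql01", "sql02"))
```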
Verify login access to the App Volumes Manager UI. Verify that log entries appear on the page, as shown in the following figure.
The SQL Server mirroring and Always On availability-groups options require that the database use the full recovery model. The transaction log must be backed up to prevent excessive growth and fragmentation.
App Volumes Manager requires a reliable and constant connection to the SQL database. Any delays or loss of communication between App Volumes Manager and its SQL database will cause performance and stability issues.
Symptoms include slower user logins and logouts, delays in AppStack/package attachment, and duplicate jobs executed by multiple managers. VMware recommends that an App Volumes deployment not span data centers with a network latency that can affect communications between App Volumes Manager and SQL Server.