
Bringing SMB back to Windows – A new recipe for File Sharing

“A lot of customers we’ve spoken to recently are considering moving their file servers away from unified storage systems and back onto Windows Server.”

Before we go into why they are thinking about this, let’s reminisce about the way we used to do file sharing.

Depending on how far back we go, typically we would set up a physical Windows Server (NT/2000/2003), assign a number of RAID-protected local disks, create folders and share them out.

But how did this scale?

1. Just add disks and move

If we ran out of space, we would add more disks to the server, assign a new drive and then create a new folder and share it out. Because we could not easily expand the original drive the folder resided on, we would probably then migrate a bunch of folders to the newly created drive.

2. Just add a server and move

If we ran out of disk slots on the server, we would probably buy a new server with its own set of disks and create new shares there. As before, we might migrate some of the data onto the new server, freeing up space on the old server so that the shares remaining on it had room to grow.

You can see the amount of data movement that was required to manage capacity.

3. Just add SAN (and maybe move)

We could now assign storage (a volume or LUN) from a centralised storage array and present it to Windows as additional drives, regardless of whether we had any free disk slots in the physical server.

With a traditional array with fixed RAID groups that did not allow expansion, we still eventually faced the issue of migrating data between drives to balance capacity.

4. Just add Windows clustering

A SAN also gave us the ability to use Windows Clustering to provide high availability of file shares, so that in the event of a physical Windows server failure, the resources would automatically fail over to the cluster partner(s) and file serving could continue.

5. Just add complexity

As you can see, file server design started to become more complicated, and we still had the issue of backing up all this file data quickly and efficiently on a regular basis.

6. Just add NAS

Then along came the NAS appliance or unified storage array, a device that served files directly, without the need for physical Windows servers. It had the ability to emulate a Windows file server in Active Directory, along with some of its management capability.

An enterprise-class device would provide a dynamically scalable file system, high availability by default in most cases, and some form of snapshot capability to allow for rapid backup and recovery of files.

If you had multiple arrays, you also had a great way of replicating that data to a secondary site for disaster recovery.

It was great except for a few things:

  1. You had to have a bit of money – an enterprise-class unified array would typically cost more than the equivalent server-based solution, especially once you considered all of the licences for things like snapshotting and replication.
  2. Whilst scalability was good while you had space within the array, once you ran out, adding more storage could come at a significant cost.
  3. It demanded a higher skill level, so you would need to hire a skilled storage administrator if you didn't already have one, or retrain your existing staff.
  4. The concept of a unified array was great; it could serve multiple protocols including FC, iSCSI, NFS and CIFS. The downside was that if the solution was not managed properly and all these protocols were in use simultaneously, complexity and system utilisation increased, which could in turn affect the performance and stability of ALL services, including the file serving capability.
  5. Advanced features such as quotas were either missing, limited, or did not scale well in larger environments; third-party products were often required to fill the gaps.
  6. Anti-virus protection of these files often added cost and/or complexity.

Nevertheless, if you designed and managed your solution properly, you had a great system for file sharing.

Soon after SANs became more affordable and more commonplace, VMware revolutionised the industry by introducing VMware ESX. Now, almost a decade later, the vast majority of customers have some sort of virtualisation in place (be it VMware, Hyper-V or even Xen).

So how did this change our recipe for File Sharing success?

7. Just add Server Virtualisation

Things started changing when virtualisation became more prominent. When we started virtualising Windows servers, many of the limitations we experienced with physical servers no longer affected us.

Scalability is one example. A virtual server uses a virtual disk file, which we can expand to grow the Windows drive very quickly and easily. From Server 2008 onwards we could do this on the fly, without third-party partitioning tools or taking services offline. We can also add drives just as quickly by provisioning a new virtual disk file.
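To make this concrete, here is a minimal PowerShell sketch of the guest-side step, assuming the virtual disk behind drive E: has already been grown at the hypervisor layer. It uses the in-box Storage cmdlets that ship with Server 2012 and later; on Server 2008 the same result needed diskpart's extend command.

```powershell
# Minimal sketch: extend drive E: into the new unallocated space after
# the underlying virtual disk has been expanded (Server 2012 or later).

# Make Windows notice that the disk is now larger.
Update-HostStorageCache

# Find out how far the partition can grow.
$size = Get-PartitionSupportedSize -DriveLetter E

# Grow the partition (and its NTFS volume) online, with no downtime.
Resize-Partition -DriveLetter E -Size $size.SizeMax
```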

High availability was catered for by the underlying hypervisor. In the case of VMware, VMware HA ensured that when a VMware host failed, the VMs running on that host would be powered up on any one of the remaining hosts.

We can even cluster virtualised Windows servers together for increased HA.

8. Just add SAN (again)

Since virtualised platforms in most cases use some form of centralised storage array, we could now leverage the features of the array for backup and replication, protecting the Windows file servers just as we do all of the other virtual machines. This allowed rapid recovery not only of the file data but of the file server itself.

9. Just add Windows Server 2012 R2

So what we are seeing is a shift of file sharing back to Windows. This is not only because the platform on which it runs has improved, largely thanks to the features of virtualisation, but also because of the vast number of new and improved features in Windows Server 2012 R2.

These include the following (a short sketch of enabling a few of them from PowerShell appears after the list):

  • SMBv3 – a massive improvement over SMBv1/SMBv2, with transparent failover for uninterrupted access to file sessions and better performance
  • De-duplication – identifies duplicate blocks to reduce capacity utilisation
  • File Server Resource Manager – allows you to manage and classify data, including quotas, file screening and reporting
  • Encryption – provides end-to-end encryption of SMB data
  • Scale-out – a single namespace with the ability to serve shares from all nodes of a cluster
  • VSS – ensures consistent backups of the data
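As a hedged illustration of how a few of these features are switched on, the sketch below uses the in-box PowerShell cmdlets on Server 2012 R2; the volume, share name, path and quota size are illustrative assumptions, not recommendations.

```powershell
# Illustrative sketch only - volume, names, paths and sizes are assumptions.
# Run on Windows Server 2012 R2 with administrative rights.

# Install the de-duplication and File Server Resource Manager roles.
Install-WindowsFeature FS-Data-Deduplication, FS-Resource-Manager -IncludeManagementTools

# De-duplication: scan the data volume for duplicate blocks.
Enable-DedupVolume -Volume "E:" -UsageType Default

# Create a share with SMBv3 encryption required for all sessions.
New-SmbShare -Name "Finance" -Path "E:\Shares\Finance" -EncryptData $true

# FSRM: apply a quota to the shared folder.
New-FsrmQuota -Path "E:\Shares\Finance" -Size 500GB
```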

So what’s our new recipe for file sharing?

Mandatory Ingredients

  1. Use a virtualisation platform – most organisations are already using VMware or Hyper-V.
  2. Use Windows Server 2012 R2 – virtualised of course. Use as many as you want, but in most cases a single VM with sufficient resources allocated will do the job.
  3. Use a storage array – most organisations will use a device in one form or another, but we have found Nimble Storage to provide a good combination of high capacity and high performance at a very affordable price point.

What this combination gives you is a virtual machine that is served from a high-performance array. There is no compromise on file sharing performance from a disk I/O perspective, and server CPU and memory resources are not much of an issue these days. Scalability is simple: expand the virtual disk and expand datastores on the fly. Out of space on the Nimble? Expand that on the fly too. Protect the data using Nimble Snapshots and replication to a secondary array for DR.
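For the hypervisor side of that expansion, here is a minimal sketch using VMware PowerCLI, assuming a VMware environment; the vCenter address, VM name and target size are hypothetical. Once the virtual disk has grown, the guest-side Resize-Partition step from the earlier sketch brings the new space into the Windows volume.

```powershell
# Hypothetical sketch using VMware PowerCLI - the vCenter address,
# VM name and target size below are illustrative assumptions.
Connect-VIServer -Server "vcenter.example.local"

# Grow the file server's first virtual disk to 2 TB while the VM runs.
Get-VM -Name "FS01" | Get-HardDisk | Select-Object -First 1 |
    Set-HardDisk -CapacityGB 2048 -Confirm:$false
```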

Optional Ingredients

  1. Use Windows Failover Cluster – if you want additional resilience (most customers will find the HA capabilities of the hypervisor sufficient); see the sketch after this list for a continuously available share on such a cluster.
  2. Use ALL SMBv3 features – your unified/NAS device might not have them all.
  3. Encrypt and de-duplicate data – your unified/NAS device might not be able to.
  4. Use DFS – your NAS device might have limitations.
  5. No lock-in – it might be tricky migrating file sharing away from your NAS device in the future.
  6. Take advantage of new features/security fixes quicker – your NAS device may take longer to patch and fix bugs.
  7. Use hypervisor-level backup software (or snapshots) and enjoy more flexible data recovery – your NAS device may have supported NDMP but recovery of files may have been painfully slow – and data could only be restored to the same type of NAS device (not great in a DR situation).
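Picking up optional ingredient 1, here is a hedged sketch of creating a continuously available share on an existing Windows Server 2012 R2 failover cluster, so that SMBv3 transparent failover can keep client sessions alive across a node failure. The share name, Cluster Shared Volume path and access group are assumptions.

```powershell
# Illustrative sketch - share name, CSV path and access group are assumptions.
# Run on a node of an existing Windows Server 2012 R2 failover cluster.
New-SmbShare -Name "TeamData" `
             -Path "C:\ClusterStorage\Volume1\TeamData" `
             -ContinuouslyAvailable $true `
             -FullAccess "CONTOSO\File Users"
```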

Don’t like cooking?

Then treat yourself to a ready-made meal. The Nimble SmartStack is a complete datacentre converged infrastructure solution that blends servers, networking and storage in a dense and efficient package.

NG-IT will design and install the solution, build the virtualisation platform and deploy a virtualised Windows Server 2012 R2, ready for you to start migrating your file services onto. Need help with migration? We can do that too.

Still not sure and need to speak to a Master Chef? Then please do not hesitate to contact NG-IT. We can arrange for one of our technical consultants to discuss your environment, relevant options and how to facilitate the transition to a new solution.

Blog written by

Amirul Islam, Technical Director

https://www.linkedin.com/in/amirulislam/
