Convergence Channel: Storage Strategies, the Sequel
The second installment in a three-part series on storing video data covers distributed storage as a design solution for large-scale projects. Specifically, learn the differences between network-attached storage and storage area networks.
When most of us think about hard drives and storage, we think about the built-in storage we’re used to in our own PCs. As digital video security systems have grown larger, so have the requirements for video storage, far outstripping the ability of that storage to be contained in the video server itself.
Digital, IP network-based systems have also evolved to the point that it is no longer desirable, or practical, in some cases to have the storage centrally located inside the recording servers. “Distributed” is the new battle cry. There are many technologies available now that allow us to locate our storage arrays anywhere there is a connection to the network.
In last month’s “Convergence Channel,” Part 1 of a three-part discussion on video storage served as a refresher course by covering some of the foundational principles of hard drives. This time, we’ll take a look at some of the things you need to know to build larger storage systems.
There are two main technologies in use today for large, network-based storage systems: network-attached storage (NAS) and storage area networks (SAN). While similar, there are some important distinctions.
NAS devices, which are the most common, are being used in even small home networks to store media files such as music, photos and movies. A NAS box is essentially a file server. Instead of a Windows or other operating system (OS) on a dedicated server with lots of storage inside of it, NAS is self-contained, usually running a slimmed-down OS such as Linux. It sits on the network and presents itself as a storage share. To access that share, you would go to a UNC address like \\NASbox\ShareName (note the leading double backslash). The share can then be mapped to a drive letter for Windows-based devices.
Inside the NAS box, some of the principles we looked at in Part 1 of this series are being put to good use. Most mid- to high-end NAS devices use some level of RAID for redundancy (critical to video security applications), for performance, or most likely for both.
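As a rough sizing aid when specifying such an array, usable capacity depends on the RAID level. A minimal Python sketch; the helper name `usable_tb` is ours, and real arrays lose a bit more capacity to formatting overhead:

```python
def usable_tb(drives: int, drive_tb: float, level: int) -> float:
    """Approximate usable capacity of a RAID array, in terabytes."""
    if level == 0:
        return drives * drive_tb        # striping: no redundancy, full capacity
    if level == 1:
        return drives * drive_tb / 2    # mirroring: half the raw capacity
    if level == 5:
        return (drives - 1) * drive_tb  # one drive's worth of parity
    if level == 6:
        return (drives - 2) * drive_tb  # two drives' worth of parity
    raise ValueError(f"unsupported RAID level: {level}")

print(usable_tb(8, 4, 5))   # eight 4 TB drives in RAID 5 -> 28 usable TB
```

The trade-off is visible in the arithmetic: RAID 6 survives two drive failures but gives up a second drive's worth of space compared to RAID 5.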
NAS boxes can be an effective entry-level way to expand storage options for some systems, but keep in mind the cautions we discussed in last month's column about low-cost hardware. Just like the drives themselves, off-the-shelf, consumer-level NAS devices may not be able to handle a 100-percent write cycle, for the same reasons consumer-grade drives can't. And remember that in a multidisk array, heat is a big issue, so it is very important to ensure your NAS device has sufficient cooling, or at least venting, to minimize drive failures.
NAS recording boxes also connect to standard TCP/IP networks using file protocols like NFS or SMB, and can be located anywhere a network drop is available. This gives NAS a great deal of flexibility for small to midsize applications.
Storage Area Network Systems
At the other end of the scale, SAN rules the roost. Most of you aren’t going to need to set up your own SAN, but it’s important to understand what it is and some foundational concepts, as you may run into it in larger, campus-sized projects.
As opposed to a NAS device, a SAN is an entire network segment dedicated to storage arrays. Without getting into all the complexities of SAN, and there are many, let’s look at some of the key differences between a SAN and NAS.
A NAS box, as mentioned above, works like any other external hard drive setup, using standard file-system protocols; it is file based. A SAN, by comparison, works at the drive block level, so its storage actually looks like a local drive to your computer.
While the storage volume in a NAS can serve up files directly to multiple users, each logical volume (identified by a Logical Unit Number or LUN) in a SAN can be dedicated to an individual server. It’s not really designed for direct client or user access. It’s used more as a way to expand the storage on a server.
One of the greatest benefits of a SAN is that since the volumes are logical, they can be moved and/or allocated as necessary. For instance, if you have multiple NVRs feeding into a SAN, and one NVR has more activity or higher image resolution than the others, you can allot more storage to the busier NVRs than the others — all via software and all on the fly.
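That kind of on-the-fly allocation is, at bottom, proportional arithmetic. A hypothetical Python sketch, splitting SAN capacity according to each NVR's average bitrate (the NVR names and numbers are made up):

```python
def allocate(total_tb: float, mbps_by_nvr: dict) -> dict:
    """Split SAN capacity in proportion to each NVR's average bitrate."""
    total_mbps = sum(mbps_by_nvr.values())
    return {nvr: round(total_tb * mbps / total_mbps, 1)
            for nvr, mbps in mbps_by_nvr.items()}

# Hypothetical site: NVR-A records at twice the bitrate of the others
plan = allocate(100, {"NVR-A": 40, "NVR-B": 20, "NVR-C": 20})
print(plan)   # {'NVR-A': 50.0, 'NVR-B': 25.0, 'NVR-C': 25.0}
```

In a real SAN the administrator would resize the LUNs through the array's management software; the sketch only shows the sizing logic.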
While a NAS device sits directly on the network via standard Ethernet switches, a SAN runs on a dedicated storage transport such as iSCSI or FibreChannel. FibreChannel requires specially designed switches and interface cards in the servers; iSCSI can travel over standard Ethernet gear, although dedicated switches and adapters are often used to guarantee performance.
iSCSI is an Ethernet-based technology that allows very fast block-level data transfers and can utilize the standard network infrastructure in most facilities. iSCSI takes standard SCSI commands and wraps them in TCP/IP packets, allowing it to use those standard networks. It allows storage management and transfer over long distances, basically anywhere the network can reach.
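Conceptually, that wrapping looks something like the purely illustrative Python sketch below. It does not implement iSCSI; the IP addresses are made up, while 3260 is the standard iSCSI TCP port and 0x28 is the SCSI READ(10) opcode:

```python
def encapsulate(scsi_cdb: bytes) -> dict:
    """Illustrate (not implement) how iSCSI nests a SCSI command in TCP/IP."""
    return {
        "ip_header":  {"src": "10.0.0.5", "dst": "10.0.0.9"},  # made-up addresses
        "tcp_header": {"dst_port": 3260},                      # standard iSCSI port
        "payload":    scsi_cdb.hex(),                          # the SCSI command itself
    }

pdu = encapsulate(bytes([0x28]))   # 0x28 is the SCSI READ(10) opcode
print(pdu["tcp_header"]["dst_port"], pdu["payload"])   # 3260 28
```

The point of the layering is that any network capable of carrying TCP/IP can carry the storage traffic, which is exactly what gives iSCSI its reach.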
FibreChannel is a high-speed, gigabit-class transmission protocol. It was originally used for communication between supercomputers but has since been adapted for use in SANs. In spite of the name (and European spelling), FibreChannel can actually be run over twisted-pair copper wire or fiber-optic cable. Some implementations, like FibreChannel over Ethernet, can be run on 10Gb Ethernet networks.
One of the reasons FibreChannel has been kept at the high end of the market is cost. It requires special FibreChannel-capable switching hardware and interface cards for the servers.
Both SANs and NAS devices have their applications for video security. Some systems manufacturers are starting to include support for iSCSI and FibreChannel in their enterprise-level products. At the lower end, support for RAID NAS devices is also becoming commonplace, built right into the product’s operating system.
Applying the Right Solution
As I mentioned back at the beginning of the last section, you may not run into a need to build a SAN yourself, although if you deal primarily in large, enterprise-class systems you might be called on to do just that. More than likely, your contact with SAN systems is going to come in the form of a customer like a hospital or college that already has a SAN in place, and wants to utilize it to store your video.
If this is the case, then you may need to look to the manufacturer of your particular system to determine its compatibility with either NAS or SAN devices. In many cases, the NAS box will be relatively easy to implement. Using technologies such as iSCSI or FibreChannel with video security systems could be more of a challenge.
In the final entry in this series, we will look at Flash recording technology and see where the storage industry just might be taking us.
MCSE- and CCNA-certified Steve Payne has more than 15 years of industry experience and heads Convergence Consulting, an IP and security solutions consulting firm. Be sure to also read his Integrated Thoughts blog.