Storage Area Networks (SAN): background


Data centers have the infrastructure to hold and support vast numbers of servers, providing connectivity, redundancy, and protections such as temperature control, fire safety, and physical security.

The classic design has three layers: the access (or edge) layer, closest to the servers themselves; the distribution layer, whose switches aggregate those access-layer switches; and finally the motherlode of them all, the core layer, which provides the direct connection out to the ISP and the internet.
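
To make the hierarchy concrete, here is a toy sketch (hypothetical switch names, not any vendor's tooling) showing how traffic in a three-tier design climbs only as high as it needs to: two servers on the same access switch never leave it, while servers in different distribution blocks have to cross the core.

```python
# Toy model of the classic access/distribution/core hierarchy.
def three_tier_path(src, dst):
    """src and dst are (access_switch, distribution_switch) tuples."""
    src_access, src_dist = src
    dst_access, dst_dist = dst
    if src_access == dst_access:          # same edge switch: stay local
        return [src_access]
    if src_dist == dst_dist:              # same distribution block
        return [src_access, src_dist, dst_access]
    # different blocks: traffic has to cross the core
    return [src_access, src_dist, "core", dst_dist, dst_access]

# Two servers hanging off different distribution blocks:
print(three_tier_path(("access-1", "dist-A"), ("access-7", "dist-C")))
# ['access-1', 'dist-A', 'core', 'dist-C', 'access-7']
```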

Where to store data for all those servers? A SAN is one option: a pool of mass storage devices that presents itself as a virtual disk to the devices that connect to it. The fabric connecting to a SAN can be either Fibre Channel (FC), which is pricey and clumsy, or Internet Small Computer System Interface (iSCSI), which runs over TCP/IP and so benefits from mass-produced (read: cheap) Ethernet gear. You can use jumbo frames here to help speed up data transfer, though I have read articles arguing that the added complexity isn't worth the marginal speed increase.
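
A bit of back-of-the-envelope arithmetic shows why the gain is marginal. This sketch only counts the standard Ethernet, IPv4, and TCP header sizes (iSCSI adds its own headers per PDU, but the overall picture is the same):

```python
# Per-frame overhead on the wire, assuming no IP/TCP options.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble/SFD + inter-frame gap
IP_TCP = 20 + 20                 # IPv4 header + TCP header

def wire_efficiency(mtu):
    payload = mtu - IP_TCP           # application data carried per frame
    on_wire = mtu + ETH_OVERHEAD     # bytes actually transmitted
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: ~{wire_efficiency(mtu):.1%} of the wire carries data")
# MTU 1500: ~94.9%   MTU 9000: ~99.1%  -- a few percent, not a revolution
```

So jumbo frames buy you a few percentage points of efficiency (plus fewer frames for the CPU to process), which is why the "is it worth the hassle" debate exists at all.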

There are levels to creating a data center (the classic progression): the founder of a start-up can run the server in the garage. Next step up: carve out a space, such as a closet, that can hold a few servers and mass storage devices. Need more storage and can afford it? Time to move up to the big leagues of a true data center. In the pre-cloud days this was very expensive: the facility/building, the outlay for the resources, the maintenance and upkeep. Co-location services began to appear, though, spurred on by virtualization and software-defined networking. And instead of the three-layer switch hierarchy described earlier, the modern data center uses a leaf layer connected to a spine layer; this is much more of a mesh approach, with the spine layer providing redundancy.
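
Here is a minimal sketch (hypothetical switch names) of what that leaf-spine mesh buys you: every leaf uplinks to every spine, so any two servers are a predictable leaf-spine-leaf hop apart, and losing a spine just removes one of several equal-cost paths rather than cutting anything off.

```python
# Toy leaf-spine fabric: full mesh between the two layers.
spines = ["spine-1", "spine-2", "spine-3", "spine-4"]
leaves = [f"leaf-{i}" for i in range(1, 9)]

# every leaf uplinks to every spine
links = {leaf: list(spines) for leaf in leaves}

def paths(src_leaf, dst_leaf):
    """All leaf -> spine -> leaf paths between two leaf switches."""
    return [(src_leaf, s, dst_leaf) for s in links[src_leaf] if s in links[dst_leaf]]

print(len(paths("leaf-1", "leaf-5")))   # 4 equal-cost paths, one per spine

# Pull a spine for maintenance and you lose one path, not connectivity:
links = {leaf: [s for s in ups if s != "spine-2"] for leaf, ups in links.items()}
print(len(paths("leaf-1", "leaf-5")))   # 3
```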

