There is a significant difference between Data ONTAP 7-Mode and clustered Data ONTAP (C-Mode) in how data is accessed. In 7-Mode, data is served by an HA pair of controllers; an incoming request goes to one of the two controllers in that pair. In C-Mode, multiple controllers are aggregated into a cluster, and a request can land on any controller, regardless of where the data physically resides. If the LIF that receives the request is not hosted on the node that owns the storage, the data travels over the cluster-interconnect. In this article, we will look at the clustered Data ONTAP architecture and its data access methods.
1. What does a typical six-node clustered Data ONTAP cluster look like?
In the following diagram, we can see three HA pairs of controllers forming a six-node cluster.
The NetApp storage system architecture includes multiple components: storage controllers, the high-availability interconnect, multipath high-availability storage connections, disk shelves, system memory, NVRAM, Flash Cache modules, solid-state drive (SSD) aggregates, hard-disk drive (HDD) aggregates, and Flash Pools. Storage systems that run the clustered Data ONTAP operating system also include the cluster-interconnect and multiple cluster nodes.
Note: HA pair = two controllers interconnected through the backplane in a single chassis.
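To make the topology concrete, here is a minimal Python sketch that models a six-node cluster as three HA pairs. The node and shelf names are invented for illustration only and are not ONTAP objects.

# A toy model (not NetApp code) of a six-node cluster built from three HA pairs.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    shelves: List[str] = field(default_factory=list)   # disk shelves owned by this node

@dataclass
class HAPair:
    node_a: Node      # partners joined over the HA interconnect (backplane)
    node_b: Node

@dataclass
class Cluster:
    ha_pairs: List[HAPair]   # all pairs joined together over the cluster-interconnect

cluster = Cluster(ha_pairs=[
    HAPair(Node("node1", ["shelf1"]), Node("node2", ["shelf2"])),
    HAPair(Node("node3", ["shelf3"]), Node("node4", ["shelf4"])),
    HAPair(Node("node5", ["shelf5"]), Node("node6", ["shelf6"])),
])
print(len(cluster.ha_pairs) * 2, "nodes in the cluster")   # prints: 6 nodes in the cluster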
2. What protocols are supported on NetApp controllers?
The Data ONTAP architecture consists of multiple layers built on top of the FreeBSD UNIX operating system. Above the FreeBSD kernel is the data layer, which includes the WAFL (Write Anywhere File Layout) file system, RAID, storage, failover, and the protocols for Data ONTAP operating in 7-Mode. Also above the FreeBSD kernel are the NVRAM driver and manager. Above these layers is the NAS and SAN networking layer, which includes protocol support for clustered Data ONTAP. Above the networking layer is the Data ONTAP management layer.
3. What are the two different types of data access in clustered Data ONTAP?
- Direct access (7-Mode and C-Mode)
- Indirect access (C-Mode only)
Both clustered Data ONTAP and Data ONTAP operating in 7-Mode support direct data access; however, only clustered Data ONTAP supports indirect data access.
Scenario 1
Direct access: the client accesses the data through a LIF hosted on the node that owns the disk shelf where the data resides.
Scenario 2
Indirect access: the client accesses the data through a LIF hosted on a node that does not own the disk shelf where the data resides. In this case, the data passes through the cluster-interconnect.
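To summarise the two scenarios, here is a minimal Python sketch (not ONTAP code) of how the access type is decided; the node names are hypothetical.

# Toy sketch: access type depends on whether the LIF that received the request
# is hosted on the node that owns the volume's disks.
def access_type(lif_node, volume_owner_node):
    if lif_node == volume_owner_node:
        return "direct"      # Scenario 1: served locally by the owning node
    return "indirect"        # Scenario 2: data crosses the cluster-interconnect

print(access_type("node1", "node1"))   # direct
print(access_type("node1", "node2"))   # indirect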
[box type=”info” align=”” class=”” width=””]Indirect data access enables you to scale workloads across multiple nodes. The latency between direct and indirect data access is negligible, provided that CPU headroom exists. Throughput can be affected by indirect data access, because additional processing might be required to move data over the cluster-interconnect.[/box]
The data access type is protocol dependent. SAN data access can be direct or indirect, depending on the path selected by Asymmetric Logical Unit Access (ALUA). NFS data access can be direct or indirect, except that pNFS is always direct. CIFS data access can be either direct or indirect.
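As a quick reference, the same statement can be written as a small lookup table; this is purely illustrative and not any NetApp API.

# Illustrative summary of the paragraph above; not an API.
PROTOCOL_ACCESS = {
    "FC/iSCSI (ALUA)": {"direct", "indirect"},   # path chosen by ALUA
    "NFS":             {"direct", "indirect"},
    "pNFS":            {"direct"},               # pNFS is always direct
    "CIFS":            {"direct", "indirect"},
}
print(PROTOCOL_ACCESS["pNFS"])    # -> {'direct'}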
Let's take a closer look at direct data access. Read operations for direct data access take the following path through the storage system (a simple sketch of this lookup order follows the steps):
1. The read request is sent from the host to the storage system through a network interface card (NIC) for iSCSI/NFS/CIFS or a host bus adapter (HBA) for FC SAN.
2. If the requested data is in system memory, it is sent to the host; otherwise, the system keeps looking for the data in storage.
3. Flash Cache (if present) is checked; if the blocks are there, they are brought into memory and then sent to the host; otherwise, the system keeps looking for the data in storage.
4. Finally, the block is read from storage, brought into memory, and then sent to the host.
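The lookup order above can be sketched as a small Python function. This is a toy model, not ONTAP internals; memory_cache, flash_cache and read_from_disk are hypothetical stand-ins for the buffer cache, Flash Cache and the disks.

# Toy sketch of the direct read path: memory first, then Flash Cache, then disk.
def direct_read(block_id, memory_cache, flash_cache, read_from_disk):
    if block_id in memory_cache:                  # step 2: hit in system memory
        return memory_cache[block_id]
    if flash_cache is not None and block_id in flash_cache:
        data = flash_cache[block_id]              # step 3: hit in Flash Cache
        memory_cache[block_id] = data             # bring the block into memory
        return data
    data = read_from_disk(block_id)               # step 4: read from storage
    memory_cache[block_id] = data
    return data

# Example: empty memory cache, no Flash Cache, a dummy disk reader.
print(direct_read("blk-7", {}, None, lambda blk: f"data-for-{blk}"))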
Let's take a closer look at indirect data access (a sketch of the extra forwarding hop follows the steps below).
Read operations for indirect data access take the following path through the storage system:
1. The read request is sent from the host to the storage system through a NIC (NFS/CIFS/iSCSI) or an HBA (FC SAN).
2. The read request is forwarded to the storage controller that owns the volume.
3. If the requested data is in system memory, it is sent to the host; otherwise, the owning controller keeps looking for the data.
4. Flash Cache (if present) is checked; if the blocks are there, they are brought into memory and then sent to the host; otherwise, the owning controller keeps looking for the data.
5. Finally, the block is read from storage, brought into memory, and then sent to the host.
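The extra hop can be sketched the same way. Again, this is a toy model rather than ONTAP internals; owning_node_lookup and send_over_interconnect are hypothetical names.

# Toy sketch of the indirect read path: the node hosting the LIF does not own the
# volume, so it ships the request over the cluster-interconnect to the owning node,
# which then performs the same memory / Flash Cache / disk lookup as in the direct case.
def indirect_read(block_id, owning_node_lookup, send_over_interconnect):
    # Step 2: forward the request to the controller that owns the volume.
    return send_over_interconnect(owning_node_lookup, block_id)

# In this toy model the "interconnect" is just a function call; on a real cluster
# it is a dedicated network between the nodes.
data = indirect_read(
    "blk-7",
    lambda blk: f"data-for-{blk}",        # owning node's lookup (steps 3-5)
    lambda lookup, blk: lookup(blk),      # the forwarding hop
)
print(data)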
Hope this article was informative. In the next article, we will look at write operations on NetApp clustered Data ONTAP.