


HBase is an open-source distributed Key/Value store based on the idea of BigTable. It is being used in many data-center applications (e.g., Facebook, Twitter) because of its portability and massive scalability. For this kind of system, low latency and high throughput are expected when supporting services for large-scale concurrent accesses. However, the existing HBase implementation is built on the Java Sockets Interface, which delivers sub-optimal performance because of the overhead of providing cross-platform portability. The byte-stream-oriented semantics of Java sockets also limit the ability to leverage new generations of network technologies, making it hard to provide high-performance services for data-intensive applications.

The High Performance Computing (HPC) domain has exploited high-performance, low-latency networks such as InfiniBand for many years. These interconnects provide advanced network features, such as Remote Direct Memory Access (RDMA), to achieve high throughput and low latency along with low CPU utilization. RDMA follows memory-block semantics, which can be adopted efficiently to satisfy the object transmission primitives used in HBase.

In this paper, we present a novel design of HBase for RDMA-capable networks via the Java Native Interface (JNI). Our design extends the existing open-source HBase software and makes it RDMA capable.
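To make the JNI-based approach concrete, the sketch below shows how a Java-side bridge to a native RDMA library might look. The class, method, and library names are illustrative assumptions rather than the implementation described in the paper, and running it would require a corresponding native library.

```java
import java.nio.ByteBuffer;

// Hypothetical JNI bridge between HBase's Java communication layer and a
// native RDMA library. All names here are assumptions for illustration.
public class RdmaBridge {
    static {
        // Load an assumed native wrapper (e.g., libhbase_rdma.so) built on
        // top of the RDMA verbs interface.
        System.loadLibrary("hbase_rdma");
    }

    // Register a direct buffer with the RDMA-capable NIC so it can be
    // transferred without intermediate copies; returns an opaque handle.
    public native long registerBuffer(ByteBuffer buffer, int length);

    // Post a send of the registered memory block to the peer identified by
    // connId; RDMA's memory-block semantics map naturally onto HBase's
    // object transmission primitives.
    public native int postSend(long connId, long bufferHandle, int length);

    // Poll the completion queue until the posted operation finishes.
    public native int pollCompletion(long connId);

    public static void demo() {
        RdmaBridge bridge = new RdmaBridge();
        // Direct buffers live outside the Java heap, so the native side can
        // pin and register them for zero-copy transfers.
        ByteBuffer payload = ByteBuffer.allocateDirect(1024);
        long handle = bridge.registerBuffer(payload, payload.capacity());
        // ... establish a connection, post the send, and poll for completion.
    }
}
```

The key design point is that data destined for the network sits in direct (off-heap) buffers, so the native layer can hand it to the RDMA hardware without the extra copies implied by the byte-stream socket path.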
Our performance evaluation reveals that the latency of HBase Get operations with a 1 KB message size can be reduced to 43.7 μs with the new design on a QDR platform (32 Gbps). This is about a factor of 3.5 improvement over a 10 Gigabit Ethernet (10 GigE) network with TCP Offload. Throughput evaluations using four HBase region servers and 64 clients indicate that the new design boosts throughput by about 3X over 1 GigE and 10 GigE networks. To the best of our knowledge, this is the first HBase design utilizing high-performance, RDMA-capable interconnects.
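For reference, here is a minimal sketch of how the latency of a single Get could be timed on the client side, assuming the standard HBase client API of that era; the table name, row key, and measurement setup are placeholders, not the paper's benchmark harness.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HTable;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.util.Bytes;

public class GetLatencyProbe {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        // Placeholder table name; the row is assumed to hold a 1 KB value.
        HTable table = new HTable(conf, "usertable");

        Get get = new Get(Bytes.toBytes("row-0001"));
        long start = System.nanoTime();
        Result result = table.get(get);
        long elapsedUs = (System.nanoTime() - start) / 1000;

        System.out.println("Get latency: " + elapsedUs
                + " us, empty=" + result.isEmpty());
        table.close();
    }
}
```

In practice such a measurement would be repeated many times and averaged, since a single Get is dominated by connection setup, warm-up, and JIT effects.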
