
Consistent Hashing - explanation and implementation


Consistent hashing is a hashing technique that performs very well in a dynamic environment where the distributed system scales up
and scales down frequently. The core concept of Consistent Hashing was introduced in the paper Consistent Hashing and Random Trees: Distributed
Caching Protocols for Relieving Hot Spots on the World Wide Web, but it gained popularity after the famous paper introducing Dynamo - Dynamo:
Amazon's Highly Available Key-value Store. Since then, consistent hashing has gained traction and found a ton of use cases in designing and scaling
distributed systems efficiently. Two famous examples that make extensive use of this technique are BitTorrent, for its peer-to-peer networks, and
Akamai, for its web caches. In this article we dive deep into the need for Consistent Hashing, its internals, and, more importantly, implement it
along the way using arrays and Binary Search.

Hash Functions
Before we jump into the core Consistent Hashing technique, we first clear up a few things, one of which is Hash Functions. Hash Functions are
functions that map values from an arbitrarily sized domain to another fixed-sized domain, usually called the Hash Space. For example, mapping URLs to
32-bit integers or web pages' HTML content to a 256-byte string. The values generated as an output of these hash functions are typically used as keys to
enable efficient lookups of the original entity.
An example of a simple hash function is one that maps a 32-bit integer into an 8-bit integer hash space. The function could be implemented using
the arithmetic modulo operator: taking the value modulo 256 yields numbers in the range [0, 255], which fit in 8 bits. A hash function that maps keys
to such an integer domain more often than not applies modulo N so as to restrict the values, i.e. the hash space, to the range [0, N-1].
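
As a concrete illustration, here is a minimal sketch of such a function (the function name is ours, for illustration only):

def hash_32_to_8(value: int) -> int:
    """Maps a 32-bit integer into the 8-bit hash space [0, 255]."""
    return value % 256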
A good hash function has the following properties:
The function is computationally efficient and the values generated are easy to look up
The function, for most general use cases, behaves like a pseudorandom generator that spreads data out evenly without any noticeable correlation
Now that we have seen what a hash function is, we take a look at how we could use one to build a somewhat scalable distributed system.

Building a distributed storage system


Say we are building a distributed storage system in which users can upload files and access them on demand. The service exposes the following APIs to
the users
upload to upload the file
fetch to fetch the file and return its content
Behind the scenes the system has Storage Nodes on which the files are stored and accessed. These nodes expose the functions put_file and
fetch_file that put and get the file content to/from the disk and send the response to the main API server, which in turn sends it back to the user.

To sustain the initial load, the system has 5 Storage Nodes that store the uploaded files in a distributed manner. Having multiple nodes ensures that
the system, as a whole, is not overwhelmed, and the storage is distributed almost evenly across them.
When the user invokes the upload function with the path of the file, the system first needs to identify the storage node that will be responsible for
holding the file. We do this by applying a hash function to the path, which in turn gives us the storage node index. Once we have the storage node,
we read the content of the file and put it on the node by invoking the node's put_file function.

# storage_nodes holding instances of actual storage node objects;
# the hostnames below are placeholders - the original values were lost
storage_nodes = [
    StorageNode(name='A', host='node-a.example.com'),
    StorageNode(name='B', host='node-b.example.com'),
    StorageNode(name='C', host='node-c.example.com'),
    StorageNode(name='D', host='node-d.example.com'),
    StorageNode(name='E', host='node-e.example.com'),
]

def hash_fn(key):
    """The function sums the bytes present in the `key` and then
    takes a mod with 5. This hash function thus generates output
    in the range [0, 4].
    """
    return sum(bytearray(key.encode('utf-8'))) % 5

def upload(path):
    # we use the hash function to get the index of the storage node
    # that would hold the file
    index = hash_fn(path)

    # we get the StorageNode instance
    node = storage_nodes[index]

    # we put the file on the node and return
    return node.put_file(path)

def fetch(path):
    # we use the hash function to get the index of the storage node
    # that would hold the file
    index = hash_fn(path)

    # we get the StorageNode instance
    node = storage_nodes[index]

    # we fetch the file from the node and return
    return node.fetch_file(path)

The hash function used here simply sums the bytes and takes the modulo by 5 (since there are 5 storage nodes in the system), thus
generating output in the hash space [0, 4]. This output value now represents the index of the storage node that will be responsible for holding
the file.
Say we have 5 files 'f1.txt', 'f2.txt', 'f3.txt', 'f4.txt', and 'f5.txt'. If we apply the hash function to these we find that they are stored on storage nodes
E, A, B, C, and D respectively.
Things become interesting when the system gains some traction and needs to be scaled to 7 nodes, which means the hash function should now do
mod 7 instead of mod 5. Changing the hash function implies changing the mapping and association of files with storage nodes. We first need to
work out the new associations and see which files need to be moved from one node to another.
With the new hash function the same 5 files 'f1.txt', 'f2.txt', 'f3.txt', 'f4.txt', and 'f5.txt' will now be associated with storage nodes D, E, F, G, and A.
Here we see that changing the hash function requires us to move every single one of the 5 files to a different node.
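
A quick sketch confirms this: re-running the byte-sum hash with both moduli shows every one of the five files landing on a different node.

def naive_hash(key: str, n: int) -> int:
    # sum the bytes of the key and map them into [0, n-1]
    return sum(bytearray(key.encode('utf-8'))) % n

paths = ['f1.txt', 'f2.txt', 'f3.txt', 'f4.txt', 'f5.txt']
moved = [p for p in paths if naive_hash(p, 5) != naive_hash(p, 7)]
print(f"{len(moved)} of {len(paths)} files must be moved")  # 5 of 5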

If we have to change the hash function every time we scale up or down, and if this requires us to move not all but even half of the data, the process
becomes super expensive and, in the long run, infeasible. So we need a way to minimize the data movement required during scale-ups or scale-downs,
and this is where Consistent Hashing fits in and minimizes the required data transfer.

Consistent Hashing
The major pain point of the above system is that it handles events like scale-ups and scale-downs poorly, as they require a lot of alterations in
associations. These associations are purely driven by the underlying Hash Function, and hence if we could somehow make this hash function
independent of the number of storage nodes in the system, we would address this flaw.
Consistent Hashing addresses this situation by keeping the Hash Space huge and constant, somewhere in the order of [0, 2^128 - 1], and both the
storage nodes and the objects map to one of the slots in this huge Hash Space. Unlike in the traditional system, where the file was associated with the
storage node at the index it hashed to, in this system the chances of a collision between a file and a storage node are infinitesimally small, and hence
we need a different way to define this association.
Instead of using a collision-based approach, we define the association as follows: the file is associated with the storage node present to the
immediate right of its hashed location. Defining the association this way helps us
keep the hash function independent of the number of storage nodes
keep associations relative and not driven by absolute collisions

Consistent Hashing on average requires only k/n units of data to be migrated during a scale-up or scale-down, where k is the total number of keys
and n is the number of nodes in the system.
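
As a quick sanity check on this bound: with k = 1,000,000 keys spread over n = 10 nodes, adding or removing one node moves about k/n = 100,000 keys, whereas the naive modulo scheme above can move nearly all one million.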

A very naive way to implement this is by allocating an array of size equal to the Hash Space and literally placing files and storage nodes in the array
at their hashed locations. To find an association, we iterate from the item's hashed location towards the right and find the first Storage Node. If we
reach the end of the array without finding a Storage Node, we circle back to index 0 and continue the search. The approach is very easy to implement
but suffers from the following limitations
it requires huge memory to hold such a large array
finding an association by iterating to the right every time is O(hash_space)
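
For reference, a minimal sketch of this naive linear scan, with a deliberately tiny hash space so it can actually run (the structure here is our illustration, not the article's implementation):

# a toy hash space; a real one would be astronomically large
slots = [None] * 16

def find_node_naive(position: int):
    # walk right from the hashed position, wrapping around,
    # until the first storage node is found - O(hash_space)
    for i in range(len(slots)):
        candidate = slots[(position + i) % len(slots)]
        if candidate is not None:
            return candidate
    return None  # no storage nodes present in the system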

A better way of implementing this is by using two arrays: one to hold the Storage Nodes, called nodes, and another to hold the positions of the
Storage Nodes in the hash space, called keys. There is a one-to-one correspondence between the two arrays - the Storage Node nodes[i] is present at
position keys[i] in the hash space. Both arrays are kept sorted according to the keys array.
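
To ground the snippets that follow, here is a minimal sketch of the surrounding structure they could live in. The ConsistentHash class name, the dataclass-based StorageNode, and the default slot count are our assumptions for illustration; the code below only relies on _keys, _nodes, and total_slots:

from bisect import bisect, bisect_left, bisect_right
from dataclasses import dataclass
import hashlib

@dataclass
class StorageNode:
    name: str
    host: str

class ConsistentHash:
    def __init__(self, total_slots: int = 2**32):
        self.total_slots = total_slots  # size of the hash space
        self._keys = []    # positions of nodes in the hash space, kept sorted
        self._nodes = []   # StorageNode objects, ordered to match self._keys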

Hash Function in Consistent Hashing


We define total_slots as the size of this entire hash space, typically of the order of 2^256, and the hash function can be implemented by taking a
SHA-256 hash followed by a mod total_slots. Since total_slots is huge and constant, the following hash function implementation is independent of the
actual number of Storage Nodes present in the system and hence remains unaffected by events like scale-ups and scale-downs.

def hash_fn(key: str, total_slots: int) -> int:
    """hash_fn creates an integer equivalent of a SHA256 hash and
    takes a modulo with the total number of slots in hash space.
    """
    hsh = hashlib.sha256()

    # converting data into bytes and passing it to the hash function
    hsh.update(bytes(key.encode('utf-8')))

    # converting the HEX digest into its equivalent integer value
    return int(hsh.hexdigest(), 16) % total_slots
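
As a quick check, the slot for a key depends only on the key and total_slots, never on the number of nodes (using one of the example files from earlier):

# deterministic: the same key always maps to the same slot
print(hash_fn('f1.txt', 2**32))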

Adding a new node to the system


When there is a need to scale up and add a new node to the system, in our case a new Storage Node, we
find the position where the node resides in the Hash Space
populate the new node with the data it is supposed to serve
add the node to the Hash Space
When a new node is added to the system, it only affects the files that hash between the node on its immediate left and the position the new node fits
in, i.e. the files that were associated with the node to the new node's right. All other files and associations remain unaffected, thus minimizing the
amount of data to be migrated and the mappings that need to change.

For example, when a new node K is added between nodes B and E, we change the associations of files present in the segment B-K
and assign them to node K. The data belonging to the segment B-K can be found at node E, to which it was previously associated. Thus the
only files affected, and the only ones that need migration, are in the segment B-K; their association changes from node E to node K.
In order to implement this at a low level using the nodes and keys arrays, we first get the position of the new node in the Hash Space using the hash
function. We then find the index of the smallest key greater than this position in the sorted keys array using binary search. This index is where the
key and the new Storage Node will be placed in the keys and nodes arrays respectively.

def add_node(self, node: StorageNode) -> int:
    """add_node adds a new node to the system and returns the key
    from the hash space where it was placed.
    """

    # handle the error case when the hash space is full
    if len(self._keys) == self.total_slots:
        raise Exception("hash space is full")

    # hashing the node's host here is our assumption; any stable,
    # unique attribute of the node would work
    key = hash_fn(node.host, self.total_slots)

    # find the index where the key should be inserted in the keys array.
    # this is also the index where the Storage Node will be added in the
    # nodes array.
    index = bisect(self._keys, key)

    # if we have already seen this key, i.e. a node is already present
    # at the same key, we raise a collision exception
    if index > 0 and self._keys[index - 1] == key:
        raise Exception("collision occurred")

    # perform data migration (elided here): copy the keys of the affected
    # segment over from the node to the right of `index`

    # insert the node and the key at the same `index` location.
    # this insertion keeps nodes and keys sorted w.r.t. keys.
    self._nodes.insert(index, node)
    self._keys.insert(index, key)

    return key

Removing a node from the system


When there is a need to scale down and remove an existing node from the system, we
find the position of the node to be removed from the Hash Space
populate the node to the right with data that was associated with the node to be removed
remove the node from the Hash Space
When a node is removed from the system it only affects the files associated with the node itself. All other files and associations remain unaffected, thus
minimizing the amount of data to be migrated and mapping required to be changed.

For example, when node K is removed from the system, we change the associations of files associated with node K to the node
that lies to its immediate right, i.e. node E. Thus the only files affected, and the only ones that need migration, are the ones associated with node K.
In order to implement this at a low level using the nodes and keys arrays, we get the index where node K lies in the keys array using binary search.
Once we have the index, we remove the key from the keys array and the Storage Node from the nodes array at that index.

def remove_node(self, node: StorageNode) -> int:
    """remove_node removes the node and returns the key
    from the hash space on which the node was placed.
    """

    # handle the error case when the hash space is empty
    if len(self._keys) == 0:
        raise Exception("hash space is empty")

    # hashing the node's host here is our assumption, mirroring add_node
    key = hash_fn(node.host, self.total_slots)

    # we find the index where the key would reside in the keys array
    index = bisect_left(self._keys, key)

    # if the key does not exist in the array we raise an exception
    if index >= len(self._keys) or self._keys[index] != key:
        raise Exception("node does not exist")

    # perform data migration (elided here): hand the departing node's
    # keys over to the node on its immediate right

    # now that all sanity checks are done, we pop the key and the node
    # at the index, thus removing the presence of the node.
    self._keys.pop(index)
    self._nodes.pop(index)

    return key

Associating an item with a node


Now that we have seen how consistent hashing keeps data migration during scale-ups and scale-downs to a bare minimum, it is time to see how we can
efficiently find the "node to the right" for a given item. The operation to find the association has to be super fast and efficient, as it is
invoked for every single read and write that happens on the system.
To implement this at a low level we again leverage binary search and perform this operation in O(log(n)). We first pass the item to the hash
function and fetch the position where the item hashes in the hash space. This position is then binary searched in the keys array to obtain the index
of the first key greater than the position (obtained from the hash function). If there are no keys greater than the position in the keys array, we
circle back and return the 0th index. The index thus obtained is the index of the storage node in the nodes array associated with the item.

def assign(self, item: str) -> StorageNode:
    """Given an item, the function returns the node it is associated with.
    """
    key = hash_fn(item, self.total_slots)

    # we find the first node to the right of this key.
    # if bisect_right returns an index that is out of bounds, the modulo
    # circles back to the first node in the array in a circular fashion.
    # note: this assumes at least one node is present in the system.
    index = bisect_right(self._keys, key) % len(self._keys)

    # return the node present at the index
    return self._nodes[index]
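
Putting the pieces together, here is a small usage sketch under the assumptions above (the ConsistentHash skeleton from earlier, placeholder hostnames, and hashing node.host):

ch = ConsistentHash(total_slots=2**32)

# register three nodes; the hostnames are placeholders
for name, host in [('A', 'node-a.example.com'),
                   ('B', 'node-b.example.com'),
                   ('C', 'node-c.example.com')]:
    ch.add_node(StorageNode(name=name, host=host))

# every read and write first resolves the owning node
node = ch.assign('f1.txt')
print(node.name)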

The source code with the implementation of Consistent Hashing in Python can be found at github.com/arpitbbhayani/consistent-hashing.

Conclusion
Consistent Hashing is one of the most important algorithms for horizontally scaling and managing any distributed system. The algorithm not
only works in sharded systems but also finds application in load balancing, data partitioning, managing server-based sticky sessions, routing
algorithms, and many more. A lot of databases owe their scale, performance, and ability to handle humongous load to Consistent Hashing.

References
Hash Functions - Wikipedia
Consistent Hashing - Wikipedia
Consistent Hashing - Stanford
Consistent Hashing and Random Trees
Dynamo: Amazon’s Highly Available Key-value Store

