A Binary Search Tree (BST) is a data structure that combines the hierarchical structure of a binary tree with the ordering of a sorted list and the efficiency of binary search.
This simple but powerful structure makes operations like searching, inserting, and deleting elements much more efficient, often in O(log n) time if the tree is balanced.
BSTs are widely used in real systems — from databases and compilers to in-memory indexes — wherever we need fast lookups while maintaining sorted order. They are also an important topic in coding interviews, especially in problems that involve dynamic insertions, deletions, and ordered traversal.
In this chapter, I’ll cover what a BST is and how its core operations — search, insertion, deletion, and traversal — work.
So, what exactly is a Binary Search Tree, or BST?
A Binary Search Tree is just a binary tree with one extra rule — the BST property:
For every node:

- All values in its left subtree are smaller than the node's value.
- All values in its right subtree are greater than the node's value.
This simple property makes BSTs incredibly powerful. It allows us to search, insert, or delete elements efficiently by cutting the search space in half at each step — very similar to how binary search works on a sorted array.
Imagine we insert the values: 50, 30, 70, 20, 40, 60, 80. A balanced BST might look like this:

        50
       /  \
     30    70
    /  \   /  \
  20   40 60   80

The left subtree of 50 contains values smaller than 50 → {20, 30, 40}. The right subtree of 50 contains values greater than 50 → {60, 70, 80}. So if we want to search for 60:

1. Start at 50 → since 60 is greater, move right.
2. At 70 → since 60 is smaller, move left.
3. Found 60.

Instead of scanning every node like in a linked list, we quickly narrowed it down, step by step.
Now that you know what a BST is, let's explore the key operations that make it so powerful: search, insert, delete, and traversal.
We'll use the balanced BST from above (built from 50, 30, 70, 20, 40, 60, 80) as our running example.
All code examples will be methods within this basic BST class structure:
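The original class isn't reproduced here, so below is a minimal sketch of what such a structure might look like in Java (the names `Node`, `BST`, `val`, `left`, and `right` are assumptions consistent with the rest of the chapter):

```java
// A node holds one value and references to its two children.
class Node {
    int val;
    Node left, right;

    Node(int val) {
        this.val = val; // children start out null
    }
}

// The tree itself just tracks the root; every operation starts here.
class BST {
    Node root;
}
```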
Searching in a BST is just like binary search on a sorted array.
Here’s the algorithm:
1. If root is null, we've hit a dead end. The target is not in the tree, so return false.
2. If root.val is equal to the target, we've found it! Return true.
3. If the target is less than root.val, it must be in the left subtree. Return the result of calling search on root.left.
4. Otherwise (target is greater than root.val), it must be in the right subtree. Return the result of calling search on root.right.

Example: Let's say you are searching for 60: start at 50 (60 is greater, go right), then 70 (60 is smaller, go left), and you land on 60.
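The steps above can be sketched as a recursive method (a sketch, assuming the `Node` class from the setup):

```java
class Node {
    int val;
    Node left, right;
    Node(int val) { this.val = val; }
}

class BSTSearch {
    // Returns true if target exists in the subtree rooted at root.
    static boolean search(Node root, int target) {
        if (root == null) return false;        // dead end: not in the tree
        if (root.val == target) return true;   // found it
        if (target < root.val)                 // must be in the left subtree
            return search(root.left, target);
        return search(root.right, target);     // must be in the right subtree
    }
}
```

Each recursive call descends one level, discarding the other half of the remaining nodes.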
👉 Average time complexity for search is O(log n) when the tree is balanced since each comparison eliminates roughly half of the remaining nodes.
But in the worst case it is O(n) — if the tree is unbalanced (skewed). For example, if you insert numbers in sorted order (1, 2, 3, 4, 5), the tree degenerates into a linked list.
Searching for 5 would require visiting every single node.
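A quick sketch illustrating this degeneration, using a standard recursive insert (covered in the next section) and a simple height helper (the helper name is an assumption):

```java
class Node {
    int val;
    Node left, right;
    Node(int val) { this.val = val; }
}

class SkewedDemo {
    // Standard recursive BST insert; returns the (possibly new) subtree root.
    static Node insert(Node root, int val) {
        if (root == null) return new Node(val);
        if (val < root.val) root.left = insert(root.left, val);
        else if (val > root.val) root.right = insert(root.right, val);
        return root;
    }

    // Longest root-to-leaf path, counted in nodes.
    static int height(Node root) {
        if (root == null) return 0;
        return 1 + Math.max(height(root.left), height(root.right));
    }
}
```

Inserting 1 through 5 in sorted order produces a tree of height 5 (one node per level), while the balanced insert order 50, 30, 70, 20, 40, 60, 80 produces a tree of height 3 for 7 nodes.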
Insertion works almost the same way as search.
We "search" for the value, and when we hit a null spot, we know that's where the new node belongs.
Here’s what it looks like in code:
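A possible implementation, assuming the `Node` class from the setup (a sketch, not necessarily the original code verbatim):

```java
class Node {
    int val;
    Node left, right;
    Node(int val) { this.val = val; }
}

class BSTInsert {
    // Inserts val and returns the (possibly new) root of this subtree.
    static Node insert(Node root, int val) {
        if (root == null)
            return new Node(val);               // found the empty spot
        if (val < root.val)
            root.left = insert(root.left, val);   // re-link the left child
        else if (val > root.val)
            root.right = insert(root.right, val); // re-link the right child
        // equal values are ignored here (no duplicates)
        return root;                              // keep existing links intact
    }
}
```

Note how each call assigns the recursive result back to `root.left` or `root.right` — this is what links the new node into the tree on the way back up the call stack.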
1. If the current node (root) is null, we've found the empty spot. Create a new Node(val) and return it. This returned node will be assigned as the child of the parent node (from the previous recursive call).
2. If val < root.val, recursively call insert on the left subtree, and set the current node's left child to the result of this recursive call (this handles the linking).
3. If val > root.val, recursively call insert on the right subtree, and set the current node's right child to the result of this recursive call.
4. Finally, return root. This passes the node's reference back up the call stack, ensuring all links remain intact.

Example:
Insert 25 into the earlier tree.
1. Start at 50 → 25 is smaller, go left.
2. At 30 → 25 is smaller, go left.
3. At 20 → 25 is greater, go right (to null).
4. insert(null, 25) hits the base case. It creates Node(25) and returns it.
5. The returned node is linked as the right child of 20.

The complexity is identical to search: O(log n) on average for a balanced tree, O(n) in the worst case.
Deletion is the most complex operation because we must remove a node while ensuring the BST property (left < parent < right) is maintained for the entire tree.
There are three distinct cases to handle.
Case 1: The node is a leaf (no children). This is the simplest case. We just remove it by setting its parent's pointer to null.
Example: Deleting 20. We just set 30.left = null.
This is also simple. We "bypass" the node by linking its parent directly to its only child.
Example: if 60 had a child 65, we'd delete 60 by setting 70.left = 65.
Case 3: The node has two children. This is the tricky one. We can't just remove the node, as that would leave two "orphan" subtrees.
The solution is to:

1. Find the node's in-order successor — the smallest value in its right subtree.
2. Copy the successor's value into the node we want to delete.
3. Recursively delete the successor from the right subtree (the successor has no left child, so that deletion falls into Case 1 or Case 2).
Here’s what the delete code looks like:
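A sketch of the delete routine and its `findMin` helper, assuming the `Node` class from the setup (names follow the chapter's conventions, not necessarily the original code verbatim):

```java
class Node {
    int val;
    Node left, right;
    Node(int val) { this.val = val; }
}

class BSTDelete {
    // Smallest value in a non-empty subtree: keep going left.
    static Node findMin(Node root) {
        while (root.left != null) root = root.left;
        return root;
    }

    // Deletes val from the subtree and returns its new root.
    static Node delete(Node root, int val) {
        if (root == null) return null;                 // value not found
        if (val < root.val) {
            root.left = delete(root.left, val);        // delete from left
        } else if (val > root.val) {
            root.right = delete(root.right, val);      // delete from right
        } else {
            // Cases 1 & 2: zero or one child -> bypass this node.
            if (root.left == null) return root.right;
            if (root.right == null) return root.left;
            // Case 3: two children -> copy the in-order successor's value,
            // then delete the successor from the right subtree.
            Node successor = findMin(root.right);
            root.val = successor.val;
            root.right = delete(root.right, successor.val);
        }
        return root;
    }
}
```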
1. If root is null, the value isn't here to delete. Return null.
2. If val < root.val, the node to delete is in the left subtree. Recursively call delete on root.left and assign the result back to root.left.
3. If val > root.val, the node is in the right subtree. Recursively call delete on root.right and assign the result back to root.right.
4. Otherwise (root.val == val), we've found the node. Now we handle the three cases:
   - If root.left is null, we bypass the current node by returning root.right. (This covers a leaf node, where root.right is also null, and a node with one right child.)
   - If root.right is null, we bypass the current node by returning root.left.
   - If both children exist: a. Find the in-order successor with the findMin(root.right) helper. b. Copy successor.val into the current node (root.val = successor.val). c. Recursively call delete on the right subtree to remove the original successor node (passing successor.val), and assign the result back to root.right.
5. Finally, return root (which may be unchanged, have a new value, or be replaced by a child) up the call stack.

Example: Delete 50
1. delete(root, 50) is called. The else block (Step 4) is hit.
2. root.left (30) and root.right (70) are not null. This is Case 3.
3. findMin(root.right) is called on node 70. It goes left and finds 60, so the successor is node 60.
4. root.val = successor.val; (the 50 node's value is changed to 60).
5. delete(root.right, 60) is called to remove the original 60.
6. That call reaches node 60. Since 60.left is null, it hits Case 1/2 and returns 60.right (which is null).
7. 70.left is set to null.
8. The root node, now holding 60 as its value, is returned.

The time complexity of delete is O(log n) on average if the tree is balanced and O(n) in the worst case for a skewed tree.