
Introduction to Arrays


Ashish Pratap Singh

Arrays are one of the most fundamental data structures in computer science. They are the building blocks that form the foundation for many other data structures and algorithms.

You’ll see arrays everywhere from basic loops and sorting to advanced interview techniques like two pointers, sliding window, and dynamic programming.

In this chapter, I’ll break down:

  • What an array is and how it works
  • What dynamic arrays are
  • Common array operations and their time complexities
  • and some of the most popular interview patterns that use arrays

Let's get started.

What is an Array?

So… what exactly is an array?

At its core, an array is a data structure used to store a collection of elements of the same type, arranged in a contiguous block of memory.

Integer array:

    Index:  0  1  2  3  4
    Value:  1  3  5  7  9

Boolean array:

    Index:  0     1      2      3     4
    Value:  true  false  false  true  true

String array:

    Index:  0    1    2    3    4
    Value:  abc  efg  uvw  pqr  xyz

Suppose you want to store 100 numbers. You wouldn’t create variables like num1, num2, ..., num100 — that would be impossible to manage.

Instead, you store them in one structure:
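
For example, in Python:

    # one list holding all 100 numbers instead of 100 separate variables
    arr = [0] * 100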

Now you can access any value using an index: arr[0], arr[1], arr[2], ...

Most programming languages use zero-based indexing, so the first element is at index 0.

Accessing an element in an array is extremely fast because elements are stored next to each other in memory.

We can compute the address of any index using simple math:

address of arr[i] = base address + (i × size of each element)

This allows you to retrieve any element in constant time, O(1), no matter how large the array is.
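
For example, if the array starts at base address 1000 and each element takes 4 bytes, arr[3] lives at 1000 + 3 × 4 = 1012.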

But arrays aren’t perfect.

In languages like C++ and Java, arrays have a fixed size. Once you allocate memory for 100 elements, you can’t expand it.

If you need more space, you’ll have to create a new, larger array and copy the data over, which takes O(n) time.

Most programming languages like Python, Java, C++, and JavaScript offer more flexible alternatives which are dynamic in nature:

  • Python: list
  • Java: ArrayList
  • C++: vector
  • JavaScript: Array

Dynamic arrays can resize automatically as new elements are added.

But how does that work?

Behind the scenes, dynamic arrays allocate more memory than needed. For example, if you insert 5 elements, it might allocate space for 8 or 10.

    Index:  0  1  2  3  4  5  6  7
    Value:  1  3  5  7  9  -  -  -

    (size = 5, capacity = 8)

This way, the array doesn't require resizing on every insertion.

When the array runs out of space, it typically doubles its capacity: allocating a bigger chunk of memory and copying all elements from the old array to the new one.

This operation is expensive: it takes O(n) time.

But because it doesn’t happen often, the average time for inserting at the end remains O(1).
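
Here's a minimal sketch in Python of how appending with doubling might work under the hood (Python's built-in list already handles this for you):

    class DynamicArray:
        def __init__(self):
            self.capacity = 8                    # allocate more than needed up front
            self.size = 0
            self.data = [None] * self.capacity

        def append(self, value):
            if self.size == self.capacity:
                self._resize(2 * self.capacity)  # double when full: O(n), but rare
            self.data[self.size] = value
            self.size += 1                       # amortized O(1)

        def _resize(self, new_capacity):
            new_data = [None] * new_capacity
            for i in range(self.size):           # copy every element over
                new_data[i] = self.data[i]
            self.data = new_data
            self.capacity = new_capacity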

Common Array Operations

Now that you understand what arrays are and how they’re stored in memory, let’s go over some of the most common operations you’ll perform on arrays and how efficient they are in terms of time complexity.

1. Accessing by Index

Since the elements are stored in contiguous memory layout, the address of any element can be calculated directly using a simple formula.

That’s why accessing an element by index always takes constant time, whether the array has 10 elements or 10 million.
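
For example, in Python:

    arr = [1, 3, 5, 7, 9]
    print(arr[2])  # prints 5; the address is computed directly, so O(1)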

2. Traversal

Traversal means going through the array, one element at a time — commonly used for printing values, calculating a sum, or searching for a specific item.

Since each element is visited exactly once, this takes O(n) time, where n is the number of elements in the array.
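
For example, summing the elements in Python:

    arr = [1, 3, 5, 7, 9]
    total = 0
    for x in arr:    # each element is visited exactly once: O(n)
        total += x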

3. Insertion

The performance of insertion depends entirely on where the new element is placed.

If you insert at the end, it’s fast. Just place the element at the next index, if there is free capacity.

This takes O(1) time in dynamic arrays.

If you insert at the beginning or the middle, it gets slower since all elements after the insertion point must be shifted one position to the right to make space. This shifting requires O(n) time in the worst case.

Even though languages like Python (list) or Java (ArrayList) handle the shifting internally, the time complexity does not change — it still costs O(n) for inserting anywhere except the end.
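
For example, with a Python list:

    arr = [1, 3, 5, 7, 9]
    arr.append(11)    # insert at the end: amortized O(1)
    arr.insert(0, 0)  # insert at the beginning: shifts all elements right, O(n)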

4. Deletion

Removing an element is similar to insertion — it depends on the position.

If you drop the last element, no shifting is required. It takes O(1) time.

But if you delete from the start or the middle, you’ll need to shift all following elements left to fill the gap. That takes O(n) time.

So deleting the first element in a 1,000-element array means 999 elements need to move one step left.
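
For example:

    arr = [1, 3, 5, 7, 9]
    arr.pop()   # remove the last element: O(1)
    arr.pop(0)  # remove the first element: shifts everything left, O(n)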

5. Searching

The performance of searching depends heavily on whether the array is sorted.

If the array is unsorted, you’ll have to scan through each element one by one, which takes O(n) time.

But if the array is sorted, you can use Binary Search, which divides the search space in half with each step — reducing the time to O(log n).
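
Here's a standard iterative binary search, sketched in Python:

    def binary_search(arr, target):
        lo, hi = 0, len(arr) - 1
        while lo <= hi:
            mid = (lo + hi) // 2      # halve the search space each step
            if arr[mid] == target:
                return mid
            elif arr[mid] < target:
                lo = mid + 1          # target must be in the right half
            else:
                hi = mid - 1          # target must be in the left half
        return -1                     # target not present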

Common Array Interview Patterns

Arrays show up in almost every coding interview. But the questions don’t just ask you to “loop and print”. Instead, they are often based on problem-solving patterns that use arrays in clever ways.

Here are the most common array patterns you should know.

1. Two Pointer Technique

This pattern works especially well for sorted arrays or problems where you compare values from both ends.

You place one pointer at the start and another at the end, then move them inward based on certain logic.

For example, with the array [1, 3, 5, 7, 9, 11], the left pointer starts at index 0 and the right pointer at index 5. At each step, exactly one pointer moves inward (left to 1, then 2; right to 4) based on the comparison logic.

This approach often turns an O(n²) brute-force solution into a clean O(n) solution.

Common use cases include:

  • Checking if a number pair sums to a target (see the sketch below)
  • Removing duplicates in a sorted array
  • Checking if a string is a palindrome
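
Here's a minimal Python sketch of the first use case (a hypothetical pair_with_sum helper; it assumes the input array is sorted):

    def pair_with_sum(arr, target):
        left, right = 0, len(arr) - 1
        while left < right:
            s = arr[left] + arr[right]
            if s == target:
                return left, right
            elif s < target:
                left += 1    # sum too small: move the left pointer inward
            else:
                right -= 1   # sum too large: move the right pointer inward
        return None          # no such pair

    pair_with_sum([1, 3, 5, 7, 9, 11], 14)  # (1, 5), since 3 + 11 = 14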

2. Sliding Window

The sliding window pattern is used for problems that involve contiguous subarrays or ranges.

Instead of recomputing from scratch each time, you “slide” a window over the array and update the answer efficiently.

For example, a window slides across the array [1, 3, 5, 7, 9, 11], moving one position to the right at each step while the answer is updated incrementally.

Used cleverly, the sliding window technique can reduce the time complexity of many brute-force solutions from O(n²) to O(n).

This technique is extremely useful for problems like:

  • Longest / shortest subarray with a condition
  • Maximum sum subarray (fixed or variable size), as sketched below
  • Number of substrings with certain properties
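
As an illustration, here's a Python sketch of the fixed-size variant of the maximum sum subarray problem:

    def max_sum_subarray(arr, k):
        window = sum(arr[:k])              # sum of the first window of size k
        best = window
        for i in range(k, len(arr)):
            window += arr[i] - arr[i - k]  # slide right: add the new element, drop the oldest
            best = max(best, window)
        return best

    max_sum_subarray([1, 3, 5, 7, 9, 11], 3)  # 27 (7 + 9 + 11)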

3. Prefix Sum

Prefix sums help you answer range queries in constant time.

You precompute a running sum such that: sum(L to R) = prefix[R] – prefix[L - 1], where prefix[L - 1] is treated as 0 when L = 0.

Input array:

    Index:  0  1  2  3  4  5
    Value:  1  3  5  7  9  11

Prefix Sum array:

    Index:  0  1  2  3   4   5
    Value:  1  4  9  16  25  36

This avoids recomputing sums for each query, which is critical in problems with many range checks.
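
Here's a Python sketch using the arrays above:

    def build_prefix(arr):
        prefix, running = [], 0
        for x in arr:
            running += x
            prefix.append(running)
        return prefix

    def range_sum(prefix, l, r):
        # sum(l to r) = prefix[r] - prefix[l - 1]; prefix[l - 1] is taken as 0 when l == 0
        return prefix[r] - (prefix[l - 1] if l > 0 else 0)

    prefix = build_prefix([1, 3, 5, 7, 9, 11])  # [1, 4, 9, 16, 25, 36]
    range_sum(prefix, 1, 3)                     # 15 (3 + 5 + 7), answered in O(1)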

I’ve created separate videos breaking down each of these three patterns with example LeetCode problem walkthroughs. You can find them in the LeetCode Patterns playlist on this channel.

4. Dynamic Programming (DP)

Arrays also serve as the foundation for many DP solutions.

They are used to store the results of smaller subproblems so you don’t recompute them.

Examples include:

  • Fibonacci sequence (iterative bottom-up), sketched below
  • Coin change
  • Longest Increasing Subsequence
  • and Knapsack variants
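
For example, here's a bottom-up Fibonacci in Python, where a plain array stores the answers to smaller subproblems:

    def fib(n):
        if n < 2:
            return n
        dp = [0] * (n + 1)                 # dp[i] holds the i-th Fibonacci number
        dp[1] = 1
        for i in range(2, n + 1):
            dp[i] = dp[i - 1] + dp[i - 2]  # reuse stored subproblem results
        return dp[n]

    fib(10)  # 55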