Last Updated: March 29, 2026
We are given an m x n matrix and need to produce its transpose, which is an n x m matrix where the element at position (i, j) in the original ends up at position (j, i) in the result.
Think of it this way: the first row of the original becomes the first column of the result, the second row becomes the second column, and so on. If the original matrix has dimensions 2x3 (2 rows, 3 columns), the transposed matrix will be 3x2 (3 rows, 2 columns).
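To make the shape change concrete, here is a small sketch in Python (the example values are my own illustration, not from a specific test case):

```python
# A 2x3 matrix: 2 rows, 3 columns.
matrix = [
    [1, 2, 3],
    [4, 5, 6],
]

# Its transpose is 3x2: each row of the original becomes a column.
transposed = [
    [1, 4],
    [2, 5],
    [3, 6],
]

# Element (i, j) of the original appears at (j, i) in the transpose.
assert matrix[0][2] == transposed[2][0]  # both are 3
```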
The key observation here is straightforward: for every element matrix[i][j], it maps to result[j][i]. The challenge is mostly about correctly creating the output matrix with swapped dimensions and iterating through the elements without mixing up indices.
- 1 <= m, n <= 1000 → The matrix dimensions can be up to 1000x1000, but the total element count is capped.
- 1 <= m * n <= 10^5 → At most 100,000 total elements. An O(m * n) solution that visits each element once is perfectly efficient.
- -10^9 <= matrix[i][j] <= 10^9 → Values can be large, but since we are just copying values without arithmetic, overflow is not a concern.

The transpose of a matrix swaps rows and columns. Every element at position (i, j) moves to position (j, i). So the most natural approach is to create a new matrix with swapped dimensions and copy each element to its transposed position.
If our original matrix is m rows by n columns, we create a result matrix that is n rows by m columns. Then we walk through every cell in the original and place it at the swapped position in the result.
This is the go-to approach and it is already optimal. We need to visit every element exactly once, and there is no way to produce the output without doing at least that much work.
The transpose operation is a one-to-one mapping: every element in the original has exactly one destination in the result, and every cell in the result is filled by exactly one element from the original. The rule is simple: (i, j) -> (j, i). Applying it to all cells produces the correct output.
This is already the optimal approach. We must read every element at least once to produce the output, so O(m * n) time is a lower bound. We also need O(m * n) space for the result matrix (you cannot transpose a non-square matrix in place, since the dimensions change).
The algorithm:

1. Read the number of rows m and columns n from the input matrix.
2. Create a result matrix with n rows and m columns.
3. For every cell (i, j) in the original, set result[j][i] = matrix[i][j].

Both approaches for this problem have identical time and space complexity. The next approach offers a different mental model: instead of iterating over the original and placing elements, we iterate over the result and pull elements from the original.
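A minimal Python sketch of this approach (the function name `transpose` is my choice; the problem may use a different signature):

```python
def transpose(matrix):
    """Return the transpose of an m x n matrix as a new n x m matrix."""
    m, n = len(matrix), len(matrix[0])
    # Allocate the result with swapped dimensions: n rows, m columns.
    result = [[0] * m for _ in range(n)]
    for i in range(m):
        for j in range(n):
            # Element (i, j) of the original lands at (j, i) in the result.
            result[j][i] = matrix[i][j]
    return result

transpose([[1, 2, 3], [4, 5, 6]])  # → [[1, 4], [2, 5], [3, 6]]
```

Every element is visited exactly once, matching the O(m * n) time and space bounds discussed above.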
Instead of iterating over the original matrix and placing elements into the result, we can think about it from the result's perspective. Each row of the transposed matrix corresponds to a column of the original. So we iterate column by column through the original and build each row of the result.
This produces the exact same output with the same time and space complexity, but the mental model is different. Some people find it more intuitive to think "the j-th row of the result is the j-th column of the original."
By definition, the j-th row of the transposed matrix contains all elements from the j-th column of the original. Collecting elements column-by-column from the original and stacking them as rows is exactly what the transpose does. The iteration order is different from Approach 1, but the final mapping is identical: result[j][i] = matrix[i][j].
In Python, this approach can be written as a one-liner using zip: return [list(row) for row in zip(*matrix)]. The zip(*matrix) unpacks the rows and zips them together, effectively collecting columns.
The algorithm:

1. Read the number of rows m and columns n from the input matrix.
2. For each column index j (0 to n-1), create a new row by collecting matrix[0][j], matrix[1][j], ..., matrix[m-1][j], and append it to the result.
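The column-first view can be written as a short loop (again, the function name `transpose` is my own choice):

```python
def transpose(matrix):
    """Build each row of the result from a column of the original."""
    m, n = len(matrix), len(matrix[0])
    result = []
    for j in range(n):
        # The j-th row of the result is the j-th column of the original.
        result.append([matrix[i][j] for i in range(m)])
    return result

transpose([[1, 2, 3], [4, 5, 6]])  # → [[1, 4], [2, 5], [3, 6]]
```

This is exactly what the `zip(*matrix)` one-liner does under the hood: both iterate over columns of the original and emit them as rows.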