Last Updated: March 29, 2026
We need to figure out whether a given positive integer is the square of some other integer. For example, 1, 4, 9, 16, 25 are all perfect squares because they equal 1^2, 2^2, 3^2, 4^2, 5^2, respectively. Numbers like 2, 3, 5, 14 are not.
The catch is that we can't use any built-in square root function. So we need to find another way to determine if there exists an integer x such that x * x == num. The input can be as large as 2^31 - 1 (about 2.1 billion), so the square root could be up to around 46,340. That's a manageable search space, which opens the door to both linear and logarithmic approaches.
The key observation is that perfect squares are spaced further and further apart as numbers grow. Between 1 and 100 there are 10 perfect squares, but between 10,000 and 10,100 there is just one (10,000 itself, since the next square is 101^2 = 10,201). This spreading-out property means we're searching for a needle in an increasingly sparse haystack, and binary search is a natural fit since the function f(x) = x * x is monotonically increasing.
1 <= num <= 2^31 - 1 → The input is always positive, so we don't need to handle zero or negative numbers. The maximum value is about 2.1 billion, and its square root is about 46,340. A linear scan up to sqrt(num) involves at most ~46,340 iterations, which is fast. Binary search gets there in ~31 iterations.

No built-in sqrt allowed → We must implement our own search or mathematical approach.

The most natural starting point: just try every integer starting from 1 and check whether its square equals num. We multiply i * i and compare. If i * i equals num, we found our answer. If i * i exceeds num, we've gone too far and can stop, since squares only get bigger from here.
Think of it like checking a multiplication table one row at a time. You go through 1*1, 2*2, 3*3, and so on until you either hit num exactly or overshoot it.
1. Start with i = 1.
2. While i * i is less than or equal to num:
   - If i * i == num, return true.
   - Otherwise, increment i.
3. Return false.

The linear search checks every candidate sequentially through a sorted space. What if we jumped to the middle and eliminated half the candidates in a single comparison?
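The steps above can be sketched in Python (the function name is my own, not from the original):

```python
def is_perfect_square_linear(num: int) -> bool:
    """Try i = 1, 2, 3, ... until i * i reaches or passes num."""
    i = 1
    while i * i <= num:
        if i * i == num:
            return True  # i is an exact integer square root of num
        i += 1
    return False  # overshot num without hitting it exactly
```

For num = 2^31 - 1 this loops about 46,340 times, matching the bound above.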
Since perfect squares grow monotonically (1, 4, 9, 16, 25, ...), the function f(x) = x * x is strictly increasing for positive x. This is exactly the property binary search needs. Instead of checking every integer from 1 to sqrt(num), we can search the range [1, num] and halve it at each step.
Pick a midpoint, compute its square, and compare to num. If mid * mid is too small, the answer must be in the right half. If mid * mid is too big, the answer must be in the left half. If mid * mid equals num, we've found it.
One important detail: when computing mid * mid, the result can overflow a 32-bit integer. So we need to use a 64-bit integer (long) for the multiplication.
Binary search works here because the function f(x) = x^2 is monotonically increasing for positive x. If mid^2 < num, then every value less than mid also squares to less than num, so we can safely discard the left half. Similarly, if mid^2 > num, every value greater than mid is also too large. This guarantees we eliminate half the search space at each step, converging to the answer (or confirming it doesn't exist) in O(log n) steps.
1. Initialize left = 1 and right = num.
2. While left <= right:
   - Compute mid = left + (right - left) / 2.
   - Compute square = mid * mid using 64-bit arithmetic.
   - If square == num, return true.
   - If square < num, set left = mid + 1.
   - If square > num, set right = mid - 1.
3. Return false.

Binary search is already very efficient, but there's a mathematical approach that converges even faster: Newton's method, which roughly doubles the number of correct digits each iteration.
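Here is a sketch of the binary search in Python (function name is mine). Note that Python integers are arbitrary precision, so the 64-bit cast isn't needed here; in a language like Java or C++ you would compute mid * mid in a long to avoid overflow, as discussed above:

```python
def is_perfect_square_binary(num: int) -> bool:
    """Binary search for an x in [1, num] with x * x == num."""
    left, right = 1, num
    while left <= right:
        mid = left + (right - left) // 2
        square = mid * mid  # would need a 64-bit long in Java/C++
        if square == num:
            return True
        if square < num:
            left = mid + 1   # mid too small: discard the left half
        else:
            right = mid - 1  # mid too big: discard the right half
    return False
```

Each iteration halves the range [1, num], so the loop runs at most ~31 times for num up to 2^31 - 1.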
Newton's method is a classic numerical technique for finding roots of equations. We want to find x where x^2 = num, which is equivalent to finding the root of f(x) = x^2 - num. Newton's update rule is x_next = x - f(x) / f'(x); substituting f(x) = x^2 - num and f'(x) = 2x gives x_next = x - (x^2 - num) / (2x), which simplifies to:
x_next = (x + num / x) / 2
This is the same formula known as the "Babylonian method" for computing square roots. We start with an initial guess (num itself works fine) and keep refining. The sequence decreases monotonically toward floor(sqrt(num)) when using integer division. Once it stabilizes, we just check whether the converged value squared equals num.
The beauty of Newton's method is quadratic convergence: once the estimate is near the root, the number of correct digits roughly doubles with each iteration. Starting from x = num, the early iterations roughly halve the guess until it approaches sqrt(num), so even for the largest inputs the whole process takes only a few dozen iterations.
Newton's method converges because each step brings the estimate closer to the true square root. The update formula x = (x + num/x) / 2 computes the arithmetic mean of x and num/x. If x is too large (x > sqrt(num)), then num/x is too small, and their average lands closer to the true root. With integer division, the sequence is strictly decreasing until it reaches floor(sqrt(num)), at which point it stabilizes. We then just check whether that floored value squared equals num exactly.
1. Initialize x = num.
2. While x * x > num:
   - Update x = (x + num / x) / 2 using integer division.
3. Return true if x * x == num, otherwise false.
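These steps translate almost directly to Python, using floor division (//) for the integer arithmetic (function name is mine):

```python
def is_perfect_square_newton(num: int) -> bool:
    """Integer Newton's method (Babylonian method) for floor(sqrt(num))."""
    x = num
    while x * x > num:
        x = (x + num // x) // 2  # average of x and num/x, rounded down
    # x has converged to floor(sqrt(num)); check for an exact square
    return x * x == num
```

With integer division the sequence decreases monotonically to floor(sqrt(num)), so the loop always terminates, and the final comparison decides whether num is an exact square.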