Polymorphism allows the same method name or interface to exhibit different behaviors depending on the object that is invoking it.
The term "polymorphism" comes from Greek and means "many forms." In programming, it allows us to write code that is generic, extensible, and reusable, while the specific behavior is determined at runtime or compile-time based on the object’s actual type.
Polymorphism lets you call the same method on different objects, and have each object respond in its own way.
You write code that targets a common type, but the actual behavior is determined by the concrete implementation.
Think of a remote control. Whether it operates a TV, an air conditioner, or a projector, the button press action remains the same for the user. Internally, though, each device responds differently.
That’s polymorphism at work—the same interface (remote control) triggers different behaviors based on the receiver (device type).
Also known as method overloading, this occurs when multiple methods in the same class share a name but differ in the number or types of their parameters.
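As a minimal sketch (the Calculator class below is a hypothetical example, not from the original article), several add() methods can coexist as long as their parameter lists differ:

```java
class Calculator {
    // Two int parameters
    int add(int a, int b) {
        return a + b;
    }

    // Two double parameters
    double add(double a, double b) {
        return a + b;
    }

    // Three int parameters
    int add(int a, int b, int c) {
        return a + b + c;
    }
}

public class OverloadingDemo {
    public static void main(String[] args) {
        Calculator calc = new Calculator();
        System.out.println(calc.add(2, 3));       // resolves to add(int, int)
        System.out.println(calc.add(2.5, 3.5));   // resolves to add(double, double)
        System.out.println(calc.add(1, 2, 3));    // resolves to add(int, int, int)
    }
}
```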
When you call add(), the compiler selects the appropriate method based on the arguments passed.
Also known as method overriding, this happens when a subclass provides its own implementation of a method already defined in its superclass (or declared in an interface), and the version that runs is chosen at runtime based on the object's actual type.
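A minimal sketch of overriding (the Animal and Dog classes are illustrative, not from the original article):

```java
class Animal {
    // Default behavior defined in the superclass
    void speak() {
        System.out.println("Some generic animal sound");
    }
}

class Dog extends Animal {
    // Subclass supplies its own version of speak()
    @Override
    void speak() {
        System.out.println("Woof");
    }
}

public class OverridingDemo {
    public static void main(String[] args) {
        Animal animal = new Dog();  // declared type: Animal, actual type: Dog
        animal.speak();             // prints "Woof": resolved at runtime by the actual type
    }
}
```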
Suppose you’re designing a system that sends notifications. You want to support email, SMS, push notifications, etc.
You start by defining a common interface.
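A sketch of what that interface could look like (the send() signature here is an assumption, not taken from the original article):

```java
interface NotificationSender {
    void send(String recipient, String message);
}
```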
Now, you implement it in multiple ways:
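Hypothetical implementations, one per channel (the class names and message formats are illustrative):

```java
class EmailSender implements NotificationSender {
    @Override
    public void send(String recipient, String message) {
        System.out.println("Emailing " + recipient + ": " + message);
    }
}

class SmsSender implements NotificationSender {
    @Override
    public void send(String recipient, String message) {
        System.out.println("Texting " + recipient + ": " + message);
    }
}

class PushSender implements NotificationSender {
    @Override
    public void send(String recipient, String message) {
        System.out.println("Pushing to " + recipient + ": " + message);
    }
}
```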
And use it like this:
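One possible way to wire it together (the NotificationService class is an assumed helper for illustration):

```java
class NotificationService {
    // Depends only on the abstraction, never on a concrete sender
    void notifyUser(NotificationSender sender, String recipient, String message) {
        sender.send(recipient, message);
    }
}

public class NotificationDemo {
    public static void main(String[] args) {
        NotificationService service = new NotificationService();
        service.notifyUser(new EmailSender(), "alice@example.com", "Your order has shipped");
        service.notifyUser(new SmsSender(), "+15550100", "Your order has shipped");
        service.notifyUser(new PushSender(), "device-token-123", "Your order has shipped");
    }
}
```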
You can pass any implementation of NotificationSender, and the correct behavior will be triggered based on the object passed.
This is runtime polymorphism, where the decision of which method to execute is made during execution, not at compile time.
Polymorphism is especially useful in Low-Level Design when you want client code to depend on an abstraction while the concrete implementations can vary or be added later.
For example, if you're designing a PaymentProcessor interface, you can have multiple implementations like CreditCardProcessor, PayPalProcessor, and UPIProcessor. The payment system doesn't need to care which one it's using; it just calls processPayment().
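A hedged sketch of that design (the processPayment(double amount) signature and the CheckoutService class are assumptions made for illustration):

```java
interface PaymentProcessor {
    void processPayment(double amount);
}

class CreditCardProcessor implements PaymentProcessor {
    @Override
    public void processPayment(double amount) {
        System.out.println("Charging " + amount + " to a credit card");
    }
}

class PayPalProcessor implements PaymentProcessor {
    @Override
    public void processPayment(double amount) {
        System.out.println("Collecting " + amount + " via PayPal");
    }
}

class UPIProcessor implements PaymentProcessor {
    @Override
    public void processPayment(double amount) {
        System.out.println("Collecting " + amount + " via UPI");
    }
}

class CheckoutService {
    // Works with any PaymentProcessor; new processors can be added without touching this code
    void checkout(PaymentProcessor processor, double amount) {
        processor.processPayment(amount);
    }
}
```

Adding a new payment method then means adding a new class, not editing the checkout logic.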