C# in Depth

Overloading

Just as a reminder, overloading is what happens when you have two methods with the same name but different signatures. At compile time, the compiler works out which one it's going to call, based on the compile-time types of the arguments and the target of the method call. (I'm assuming you're not using dynamic here, which complicates things somewhat.)

Now, things can get a little confusing when it comes to resolving overloads... especially as the behaviour can change between language versions. This article will point out some of the gotchas you might run into, but I'm not going to claim it's an authoritative guide to how overloading is performed. For that, read the specification - but be aware that you may get lost in a fairly complex topic. Overloading interacts with features such as type inference and implicit conversions (including those from lambda expressions, anonymous methods and method groups, all of which can become tricky). All specification references are to the C# 4 spec.

This article is also not going to go into the design question of when overloading is appropriate and when it isn't. I'll give a little advice about situations where overloading can be particularly confusing, but anything beyond that will have to wait for another time. I will say that in general I believe overloading should be used for convenience, usually with all the overloads ending up calling one "master" method. That's not always the case, but I believe it's the most common scenario in which overloading is appropriate.
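As a sketch of that "master method" pattern - the Logger type and its members here are mine, purely illustrative, not from any real library:

```csharp
using System;

class Logger
{
    // Convenience overload: just delegates to the "master" method
    // with a default colour.
    public void Log(string message)
    {
        Log(message, ConsoleColor.Gray);
    }

    // The "master" method: all the real logic lives in one place,
    // so there's only one implementation to test and maintain.
    public void Log(string message, ConsoleColor color)
    {
        Console.ForegroundColor = color;
        Console.WriteLine(message);
        Console.ResetColor();
    }
}
```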

In each example, I'll give a short program which will declare some methods and call one - and then I'll explain what gets called in which version of C#, and why. As I'm not trying to focus on the design decisions but merely the mechanical choices the C# compiler makes, I haven't tried to make the examples do anything realistic, or even given them realistic names: the overloaded method is always Foo, and it will always just print its own signature. Of course the action taken is irrelevant, but it makes it easier to grab the code and experiment with it if you want to.

Simple cases

Let's start off with a couple of really simple cases, just to get into the swing of things. First, the trivial case where only one overload is possible at all.

using System;

class Test
{
    static void Foo(int x)
    {
        Console.WriteLine("Foo(int x)");
    }
    
    static void Foo(string y)
    {
        Console.WriteLine("Foo(string y)");
    }
    
    static void Main()
    {
        Foo("text");
    }
}

This will print Foo(string y) - there's no implicit conversion from string (the type of the argument here, "text") to int, so the first method isn't an applicable function member, in spec terminology (section 7.5.3.1). Overload resolution ignores any methods which can't possibly be right when it's deciding which one to call.

Let's actually give the compiler something to think about this time...

using System;

class Test
{
    static void Foo(int x)
    {
        Console.WriteLine("Foo(int x)");
    }

    static void Foo(double y)
    {
        Console.WriteLine("Foo(double y)");
    }
    
    static void Main()
    {
        Foo(10);
    }
}

This time, Foo(int x) will be printed. Both methods are applicable - if we removed the method taking an int, the method taking a double would be called instead. The compiler decides which one to pick based on the better function member rules (section 7.5.3.2), which look at (amongst other things) the conversions involved in going from each argument to the corresponding parameter type (int for the first method, double for the second). Further rules (section 7.5.3.3) say when one conversion is better than another - in this case, the conversion from an expression of type int to int is better than the conversion from int to double, so the first method "wins".

Multiple parameters

When there are multiple parameters involved, for one method to "beat" another it has to be at least as good for every parameter, and better for at least one. These comparisons are made pairwise, method against method: the winning method doesn't have to beat every other method on the same parameter - it can beat different methods on different parameters. For example:

using System;

class Test
{
    static void Foo(int x, int y)
    {
        Console.WriteLine("Foo(int x, int y)");
    }

    static void Foo(int x, double y)
    {
        Console.WriteLine("Foo(int x, double y)");
    }

    static void Foo(double x, int y)
    {
        Console.WriteLine("Foo(double x, int y)");
    }
    
    static void Main()
    {
        Foo(5, 10);
    }
}

Here the first method (Foo(int x, int y)) wins because it beats the second method on the second parameter, and the third method on the first parameter.

If no method wins outright, the compiler will report an error:

using System;

class Test
{
    static void Foo(int x, double y)
    {
        Console.WriteLine("Foo(int x, double y)");
    }

    static void Foo(double x, int y)
    {
        Console.WriteLine("Foo(double x, int y)");
    }
    
    static void Main()
    {
        Foo(5, 10);
    }
}

Result:

error CS0121: The call is ambiguous between the following methods or
properties: 'Test.Foo(int, double)' and 'Test.Foo(double, int)'

Inheritance

Inheritance can cause a confusing effect. When the compiler goes looking for instance method overloads, it considers the compile-time class of the "target" of the call, and looks at methods declared there. If it can't find anything suitable, it then looks at the parent class... then the grandparent class, etc. This means that if there are two methods at different levels of the hierarchy, the "deeper" one will be chosen first, even if it isn't a "better function member" for the call. Here's a fairly simple example:

using System;

class Parent
{
    public void Foo(int x)
    {
        Console.WriteLine("Parent.Foo(int x)");
    }   
}
    
class Child : Parent
{
    public void Foo(double y)
    {
        Console.WriteLine("Child.Foo(double y)");
    }
}
    
    
class Test
{
    static void Main()
    {
        Child c = new Child();
        c.Foo(10);
    }
}

The target of the method call is an expression of type Child, so the compiler first looks at the Child class. There's only one method there, and it's applicable (there's an implicit conversion from int to double) so that's the one that gets picked. The compiler doesn't consider the Parent method at all.

The reason for this is to reduce the risk of the brittle base class problem, where the introduction of a new method to a base class could cause problems for consumers of classes derived from it. Eric Lippert has various posts about the brittle base class problem which I can highly recommend.

There's one aspect of this behaviour which is particularly surprising though. What counts as a method being "declared" in a class? It turns out that if you override a base class method in a child class, that doesn't count as declaring it. Let's tweak our example very slightly:

using System;

class Parent
{
    public virtual void Foo(int x)
    {
        Console.WriteLine("Parent.Foo(int x)");
    }   
}
    
class Child : Parent
{
    public override void Foo(int x)
    {
        Console.WriteLine("Child.Foo(int x)");
    }   

    public void Foo(double y)
    {
        Console.WriteLine("Child.Foo(double y)");
    }
}
    
    
class Test
{
    static void Main()
    {
        Child c = new Child();
        c.Foo(10);
    }
}

To my mind, it looks like you're trying to call Child.Foo(int x) - but the above code will actually print Child.Foo(double y). The compiler ignores the overriding method in the child class.

Given this oddness, my advice would be to avoid overloading across inheritance boundaries... at least with methods where more than one method could be applicable for a given call if you flattened the hierarchy. You'll be glad to hear that the rest of the examples on this page don't use inheritance.

Return types

The return type of a method is not part of its signature (section 3.6), and the overload is chosen before the compiler checks whether the return type will cause an error in the wider context of the method call. In other words, the return type plays no part in the test for an applicable function member. So for example:

using System;

class Test
{
    static string Foo(int x)
    {
        Console.WriteLine("Foo(int x)");
        return "";
    }
    
    static Guid Foo(double y)
    {
        Console.WriteLine("Foo(double y)");
        return Guid.Empty;
    }

    static void Main()
    {
        Guid guid = Foo(10);
    }
}

Here the string Foo(int x) overload is chosen, and then the compiler works out that it can't assign a string to a variable of type Guid. On its own, Guid Foo(double y) would have been fine, but because the other method was better in terms of argument conversions, it never gets a chance.

Optional parameters

Optional parameters, introduced in C# 4, allow a method to declare a default value for some or all of its parameters; the caller can then omit the corresponding arguments if they're happy with the defaults. This affects overload resolution, as there may be multiple applicable methods with different numbers of parameters. When the choice is between a method which requires the compiler to fill in optional parameter values and one which doesn't, and the methods are otherwise "tied" (i.e. normal argument conversion hasn't decided a winner), overload resolution picks the one where the caller has specified all the arguments explicitly:

using System;

class Test
{
    static void Foo(int x, int y = 5)
    {
        Console.WriteLine("Foo(int x, int y = 5)");
    }
    
    static void Foo(int x)
    {
        Console.WriteLine("Foo(int x)");
    }

    static void Main()
    {
        Foo(10);
    }
}

When considering the first method, the compiler would need to fill in the argument for the y parameter using the default value - whereas the second method doesn't require this. The output is therefore Foo(int x). Note that this is purely a yes/no decision: if two methods both require default values to be filled in, and they're otherwise tied, the compiler will raise an error:

using System;

class Test
{
    static void Foo(int x, int y = 5, int z = 10)
    {
        Console.WriteLine("Foo(int x, int y = 5, int z = 10)");
    }
    
    static void Foo(int x, int y = 5)
    {
        Console.WriteLine("Foo(int x, int y = 5)");
    }

    static void Main()
    {
        Foo(10);
    }
}

This call is ambiguous, because the one argument which has been given is fine for both methods, and both require extra arguments which would be filled in from default values. The fact that the first method would need two arguments to be defaulted and the second would only need one is irrelevant.

Just to be clear, this tie-breaking only comes in after the methods have been compared to each other using the pre-C# 4 rules. So, let's change our earlier example a little:

using System;

class Test
{
    static void Foo(int x, int y = 5)
    {
        Console.WriteLine("Foo(int x, int y = 5)");
    }
    
    static void Foo(double x)
    {
        Console.WriteLine("Foo(double x)");
    }

    static void Main()
    {
        Foo(10);
    }
}

This time the method with the optional parameter is used because the int to int conversion is preferred over the int to double one.

Named arguments

Named arguments - another feature introduced in C# 4 - can be used to effectively reduce the set of applicable function members by ruling out ones which have the "wrong" parameter names. Here's a change to an earlier simple example - all I've done is change the calling code to specify an argument name.

using System;

class Test
{
    static void Foo(int x)
    {
        Console.WriteLine("Foo(int x)");
    }

    static void Foo(double y)
    {
        Console.WriteLine("Foo(double y)");
    }
    
    static void Main()
    {
        Foo(y: 10);
    }
}

This time the first method isn't applicable, because there's no y parameter... so the second method is called and the output is Foo(double y). Obviously this technique only works if the parameters in the methods have different names though.

Introducing a new conversion is a breaking change

Sometimes the language changes so that a new conversion becomes available. This is a breaking change, as overloads which were previously not applicable function members can become applicable. The new conversion doesn't even have to be better than the ones already in use - if a method in a child class was previously inapplicable and becomes applicable, it will take precedence over any methods in the base class, as we've already seen.

In C# 2 this occurred with delegates - you could suddenly build a MouseEventHandler instance from a method with a signature of void Foo(object sender, EventArgs e), whereas in C# 1 this wasn't allowed.
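The same conversion can be shown without a Windows Forms dependency by declaring a stand-in delegate type - the MouseEventArgs and MouseEventHandler declarations below are simplified stand-ins for the real Windows Forms types, just to illustrate the shape of the conversion:

```csharp
using System;

// Stand-in for the Windows Forms types: a delegate whose second
// parameter type is more derived than EventArgs.
class MouseEventArgs : EventArgs { }
delegate void MouseEventHandler(object sender, MouseEventArgs e);

class Test
{
    // Note the less derived parameter type: EventArgs, not MouseEventArgs.
    static void Foo(object sender, EventArgs e)
    {
        Console.WriteLine("Foo(object sender, EventArgs e)");
    }

    static void Main()
    {
        // Legal from C# 2 onwards (method group conversions allow
        // contravariant parameter types); a compile-time error in C# 1.
        MouseEventHandler handler = Foo;
        handler(null, new MouseEventArgs());
    }
}
```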

In C# 4 there's a more widely-applicable kind of conversion which is now available: generic covariance and contravariance. Here's an example:

using System;
using System.Collections.Generic;

class Test
{
    static void Foo(object x)
    {
        Console.WriteLine("Foo(object x)");
    }
    
    static void Foo(IEnumerable<object> y)
    {
        Console.WriteLine("Foo(IEnumerable<object> y)");
    }

    static void Main()
    {
        List<string> strings = new List<string>();
        Foo(strings);
    }
}

The C# 3 compiler will pick the Foo(object x) overload. The C# 4 compiler, when targeting .NET 3.5, will pick the same overload - because IEnumerable<T> isn't covariant in .NET 3.5. The C# 4 compiler, when targeting .NET 4, will pick the Foo(IEnumerable<object> y) overload, because the conversion from List<string> to IEnumerable<object> is now available, and it's better than the conversion to object. This change occurs with no warnings of any kind.

Conclusion

This article may well expand over time to cover other oddities (params parameters, for example), but I hope I've given you enough to think about for the moment. Basically, overloading is a minefield, with lots of rules which can interact in evil ways. While overloading can certainly be useful, I've often found it's better to create alternative methods with clear names instead. This is particularly useful for constructors, and can be a helpful technique when you would otherwise want to declare two constructors with identical signatures: create two static methods to create instances instead, both of which call a more definitive constructor (which would potentially have two parameters). The debate about static factory methods vs publicly accessible constructors is one for a different article, however.
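Here's a sketch of that static factory approach - the Temperature type and its method names are mine, purely illustrative. Two public constructors both taking a double would have identical signatures and couldn't coexist, but named factory methods sidestep the problem entirely:

```csharp
using System;

class Temperature
{
    private readonly double celsius;

    // The single "definitive" constructor - private, so all creation
    // goes through the named factory methods.
    private Temperature(double celsius)
    {
        this.celsius = celsius;
    }

    // These would clash as constructors (both would be Temperature(double)),
    // but as named methods the call sites stay unambiguous and readable.
    public static Temperature FromCelsius(double value)
    {
        return new Temperature(value);
    }

    public static Temperature FromFahrenheit(double value)
    {
        return new Temperature((value - 32) * 5 / 9);
    }

    public override string ToString()
    {
        return celsius + "C";
    }
}
```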

Overloading causes a lot less of a problem when only one method will ever be applicable - for example when the parameter types are mutually incompatible (one method taking an int and one taking a string for example) or there are more parameters in one method than another. Even so, use with care: in particular, bear in mind that one type may implement multiple interfaces, and even potentially implement the same generic interface multiple times with different type arguments.
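To see why implementing the same generic interface twice matters, consider this sketch (the Mixed type is mine, for illustration only): the argument converts to both parameter types, and neither conversion is better, so the call is ambiguous.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// A type implementing the same generic interface twice,
// with different type arguments.
class Mixed : IEnumerable<int>, IEnumerable<string>
{
    IEnumerator<int> IEnumerable<int>.GetEnumerator() { yield return 1; }
    IEnumerator<string> IEnumerable<string>.GetEnumerator() { yield return "one"; }
    IEnumerator IEnumerable.GetEnumerator() { yield return 1; }
}

class Test
{
    static void Foo(IEnumerable<int> x)
    {
        Console.WriteLine("Foo(IEnumerable<int> x)");
    }

    static void Foo(IEnumerable<string> y)
    {
        Console.WriteLine("Foo(IEnumerable<string> y)");
    }

    static void Main()
    {
        // error CS0121: the call is ambiguous - Mixed has an implicit
        // reference conversion to both parameter types, and neither
        // conversion is better than the other.
        // Foo(new Mixed());
    }
}
```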

If you find yourself going to the spec to see which of your methods will be called and the methods are under your control then I would strongly advise you to consider renaming some of the methods to reduce the degree of overloading. This advice goes double when it's across an inheritance hierarchy, for reasons outlined earlier.