C# in Depth


(You are currently looking at the first edition version of this page. This page is also available for the second and third editions.)

Notes for Chapter 5: Fast-tracked delegates

5.0 (Introduction): Foresight or luck?

Eric admits in comments that the language designers perhaps weren't quite as foresighted as I gave them credit for. However, as he puts it:

It was foresighted in the sense that the designers knew that if they added generics, iterators and anonymous functions, then that would open up vast new areas for extension of the language. What exactly those areas were going to look like, no one knew.

Either way, the limited improvements to delegates in C# 2 certainly act as a welcome stepping stone before the full-on functional emphasis of C# 3.

5.4.2: Predicate<T> in LINQ? Not so much...

In section 5.4.2, I wrote:

The Predicate<T> delegate type we've used so far isn't used very widely in .NET 2.0, but it becomes very important in .NET 3.5 where it's a key part of LINQ.

It's possible that this was true when I originally wrote chapter 5 (significantly before .NET 3.5 was released), but it certainly isn't true now. LINQ tends to use Func<TSource,bool> for its predicates (where TSource is the type of the sequence involved). The two delegate types are equivalent, of course - they have the same signature - but it's still not, strictly speaking, a use of Predicate<T>. Ah well.
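To make the difference concrete, here's a minimal sketch. List&lt;T&gt;.FindAll and Enumerable.Where are the real framework members; the variable names and data are just illustrative, and I'm using lambda syntax for brevity even though the chapter is about C# 2:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class PredicateDemo
{
    static void Main()
    {
        List<int> numbers = new List<int> { 1, 2, 3, 4, 5, 6 };

        // .NET 2.0 style: List<T>.FindAll declares a Predicate<T> parameter
        Predicate<int> isEven = x => x % 2 == 0;
        List<int> evens = numbers.FindAll(isEven);

        // LINQ style: Enumerable.Where declares a Func<TSource, bool> parameter
        Func<int, bool> isEvenFunc = x => x % 2 == 0;
        IEnumerable<int> evensToo = numbers.Where(isEvenFunc);

        // Same signature, but distinct delegate types - so this line
        // wouldn't compile: numbers.Where(isEven);
        Console.WriteLine(string.Join(",", evens));    // 2,4,6
        Console.WriteLine(string.Join(",", evensToo)); // 2,4,6
    }
}
```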

5.5.1: Was Scheme the first language to support closures?

Scheme was the first language to require closure semantics, but it's possible that earlier Lisp dialects implemented them to a greater or lesser extent - so perhaps my statement at the start of this section is overly bold. If you know of a language with closures that predates Scheme, please let me know so I can change this note accordingly...

5.5.4: Stacks, heaps, and caring too much

Remember the section in chapter 2 where I mention that in some ways managed developers shouldn't care about whether things are placed on the stack or the heap? Well, you've got Eric to thank for that. I've just always cared by default.

This section about closures shows a good reason why it's sometimes not worth caring - you end up being less surprised when things move around unexpectedly. Unsurprisingly, Eric puts it best:

The whole point of managed memory is that every object lives at least as long as it needs to. The idea that "local stuff vanishes" has nothing to do with whether the implementation is "on the stack" or not – stacks are a means to an end, not an end in themselves.

It had never occurred to me to think in such a liberating, non-implementation-specific way before writing this book. Since reading Eric's comments, I've been coming up with all kinds of bizarre ideas, many of which are completely unworkable - but it's a valuable experience nonetheless.
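As a small, concrete illustration of "every object lives at least as long as it needs to" (the names here are mine, not from the chapter): a local variable captured by a returned delegate survives the method call that created it, because the compiler hoists it onto the heap rather than leaving it on the stack.

```csharp
using System;

class ClosureLifetimeDemo
{
    static Func<int> CreateCounter()
    {
        // "count" looks like an ordinary stack-based local, but because
        // the returned delegate captures it, the compiler hoists it into
        // a heap-allocated class - so it outlives this method call.
        int count = 0;
        return () => ++count;
    }

    static void Main()
    {
        Func<int> counter = CreateCounter();
        Console.WriteLine(counter()); // 1
        Console.WriteLine(counter()); // 2 - the state persists between calls
    }
}
```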

5.5.7: Accidental capture of expensive resources

There's a note of caution I originally wanted to include in the chapter, but it ended up making the whole thing too long - and frankly too negative. In the current implementation, the compiler generates a single class to hold all the variables captured from a given scope - so any variable captured by one anonymous method is kept alive by every anonymous method which captures any variable from that scope. In very rare scenarios, this can mean that something isn't eligible for garbage collection for far longer than anticipated.
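To sketch the problem (the names and sizes here are invented for illustration): two delegates capture variables from the same scope, and with the current compiler both captured variables typically end up in one shared closure instance - so the long-lived delegate keeps the large array reachable even though it only ever uses the small variable.

```csharp
using System;

class AccidentalCaptureDemo
{
    static Func<int> CreateAdder()
    {
        int small = 5;
        byte[] huge = new byte[100000000]; // an "expensive" resource: ~100MB

        // Used once, briefly:
        Action logSize = () => Console.WriteLine(huge.Length);
        logSize();

        // Captures only "small" - but because both delegates capture
        // variables from the same scope, the compiler may store "small"
        // and "huge" in the same generated closure class, keeping the
        // array alive for as long as this delegate is reachable.
        return () => small + 1;
    }

    static void Main()
    {
        Func<int> adder = CreateAdder();
        Console.WriteLine(adder()); // 6 - but ~100MB may still be reachable
    }
}
```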

Rather than go over the details, I'll redirect you to Eric's blog post on the topic. Note that this behaviour certainly isn't mandated, so it's possible that it may be fixed in a future version of the compiler - but it's unlikely to affect very many people.