Why doesn't virtual/overridable default to on?

  • Thread starter: Samuel Neff

Samuel Neff

Moving to .NET from former VB6 and Java background. One thing I don't
understand is why class members are not overridable by default. The
class author has to specifically say that a member is
virtual/overridable in order to allow another author to extend that
class and modify functionality. Why is this?

I could understand some situations where you would want members to be
final/sealed/whatever but I don't understand why this is the default
behavior.

Is there a performance reason or is it just different style? Is it
meant to make class developers think about how their class will be
used in the future more? Is it to favor composition over inheritance?
What's the history and rationale?

What are the norms regarding declaring members as virtual/overridable?
Is it common to declare all/most members this way or leave them as
final?

I understand the option to declare a member as new/shadows in a
derived class but that is even uglier and seems to just cause huge
problems--how can the behavior of the instance be governed by the type
of reference associated with it?? Rhetorical question--I'm more
concerned with the earlier ones.

Thanks,

Sam
 
Samuel Neff said:
Moving to .NET from former VB6 and Java background. One thing I don't
understand is why class members are not overridable by default.
AHA! Someone else from the VB and Java world making the jump to C# finding
this a bit odd too! :> I was thinking I was too dumb in asking this
question, so thank you! And I got Grish's link, I'll read it over lunch
with time to ponder the reasons! :> Thank you for posting this (and to
Grish for the response! :>).
 
Samuel Neff said:
Moving to .NET from former VB6 and Java background. One thing I don't
understand is why class members are not overridable by default. The
class author has to specifically say that a member is
virtual/overridable in order to allow another author to extend that
class and modify functionality. Why is this?

If I'd designed it that way, it would be to prevent accidental
overriding. I find that inheritance should be used very sparingly when
designing your own classes - and the amount of work involved in writing
a class which works appropriately and predictably when various members
are overridden is quite substantial.

From this point of view, it makes a lot of sense for non-virtual to be
the default. What slightly disappoints me is that classes aren't sealed
by default...
I could understand some situations where you would want members to be
final/sealed/whatever but I don't understand why this is the default
behavior.

Because IMO it's only in *some* situations where you would want members
to be virtual :)
Is there a performance reason or is it just different style? Is it
meant to make class developers think about how their class will be
used in the future more? Is it to favor composition over inheritance?
What's the history and rationale?

What are the norms regarding declaring members as virtual/overridable?
Is it common to declare all/most members this way or leave them as
final?

I leave almost everything non-virtual.
I understand the option to declare a member as new/shadows in a
derived class but that is even uglier and seems to just cause huge
problems--how can the behavior of the instance be governed by the type
of reference associated with it?? Rhetorical question--I'm more
concerned with the earlier ones.

Just one point to consider as a semi-answer to your question: you
expect the compiler to make choices based on the type of an expression
when it comes to overloading, so is it that odd to have it make other
choices based on that?
 
Just one point to consider as a semi-answer to your question: you
expect the compiler to make choices based on the type of an expression
when it comes to overloading, so is it that odd to have it make other
choices based on that?

I see those as totally different. With an overload the call is chosen
based on the arguments supplied to the method. With new/shadows the
call is chosen based on the type declared of the reference to that
object, not anything related to the object itself.

Sam
 
Samuel Neff said:
I see those as totally different. With an overload the call is chosen
based on the arguments supplied to the method. With new/shadows the
call is chosen based on the type declared of the reference to that
object, not anything related to the object itself.

They're both examples of the compiler choosing the method based on the
compile-time type of an expression rather than the run-time type of the
value of that expression. Sure, in one case it's the type of the
"target" of the method that's in question, and in the other it's the
type of the parameters to the method, but the compile-time/run-time
difference is the same.

Member hiding certainly isn't terribly pleasant, and should be avoided
wherever possible, but the *ability* to do it (as opposed to having to
override) is important, IMO.
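
To make the difference concrete, here's a minimal sketch (the class and
method names are mine, purely illustrative): the overridden member is
dispatched on the run-time type, the hidden one on the compile-time type.

using System;

public class BaseClass
{
    public virtual void Greet() { Console.WriteLine("Base.Greet"); }
    public void Describe() { Console.WriteLine("Base.Describe"); }
}

public class DerivedClass : BaseClass
{
    // override: dispatched through the v-table at run time.
    public override void Greet() { Console.WriteLine("Derived.Greet"); }

    // new: hides the base member; the compile-time type of the
    // expression decides which one is called.
    public new void Describe() { Console.WriteLine("Derived.Describe"); }
}

public class HidingDemo
{
    public static void Main()
    {
        BaseClass b = new DerivedClass();
        b.Greet();    // prints "Derived.Greet"  - run-time type wins
        b.Describe(); // prints "Base.Describe"  - compile-time type wins
    }
}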
 
They're both examples of the compiler choosing the method based on the
compile-time type of an expression rather than the run-time type of the
value of that expression. Sure, in one case it's the type of the
"target" of the method that's in question, and in the other it's the
type of the parameters to the method, but the compile-time/run-time
difference is the same.

Member hiding certainly isn't terribly pleasant, and should be avoided
wherever possible, but the *ability* to do it (as opposed to having to
override) is important, IMO.

The difference I see between hiding and overloading is that with
hiding the choice is based on the type of the reference whereas with
overloading the choice is based on the type of the actual object. Take
this example:

using System;

public class Test
{
    public static void Main()
    {
        Test t = new Test();
        A a = new B();
        t.m(a);
    }

    public void m(A a)
    {
        Console.WriteLine(a.id);
    }

    public void m(B b)
    {
        Console.WriteLine(b.id);
    }
}

public class A
{
    public virtual string id
    {
        get { return "A"; }
    }
}

public class B : A
{
    public override string id
    {
        get { return "B"; }
    }
}

The variable "a" is declared as type A but has an instance of B. When
it's passed to "m", it calls m(B) because the object being passed is
of type B ignore the fact that it's declared as A. Thefore for
overloading the important type is the type of the object. The type of
the reference is irrelevant.

btw, do other non-.NET languages have this hiding ability? Just
curious.

Thanks,

Sam
 
Thanks, that was very informative. The outgoing contract aspect is a
good point.

Best regards,

Sam
 
On the surface, you have the performance issue, as virtual classes take a
slight hit over final classes. I am not overly concerned about this, although
perf is often the first thing most devs look at.

Deeper, however, is the issue of how classes are used. While it is certainly
more flexible to have all classes overridable by default, in most instances,
you are creating classes that are not really something that should be
overridden. If you are architecting your software, rather than running a
code-and-fix shop, you are generally pretty aware of which items should be set up
to be overridden and which should not. You explicitly mark those items.

Now, it could be argued that flexibility should win, but I have seen some
pretty bad reasons for overriding methods on a class or deriving from
classes. If you set items up as final, by default, this does not happen
without someone explicitly going out and setting the original members up as
virtual.

If you really poke down into the internals of .NET, you can see even greater
issues with the "everything is virtual" methodology.

---

Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 
We need to be very careful about terminology when discussing things like this. There is no such thing as a virtual class in .NET - abstract, yes; virtual, no. I assume you were referring to classes with virtual methods. Final classes ... do you mean sealed classes, or non-virtual methods (final methods in Java)? I guess you said class when you meant method.

Now in terms of performance there are two issues with virtual and non-virtual methods

1) There is an extra level of indirection to call a virtual method, as it requires a runtime lookup in the v-table, whereas a non-virtual call does not and can be bound at compile time. Anyone seriously using this as an argument for non-virtual over virtual in .NET is using the wrong platform. If this type of performance difference is important to you, you really shouldn't be using .NET.

2) Virtual methods, because the actual implementation to be called isn't known until runtime, cannot be inlined by the JIT compiler. This can have a noticeable performance impact in certain scenarios.
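
A rough sketch of the dispatch difference (the names are mine, not from
the thread): the virtual call goes through the v-table, while the
non-virtual call is bound directly and is a candidate for inlining.

using System;

public class Worker
{
    // Virtual: the call site goes through the v-table; the target
    // isn't known until run time, so the JIT typically can't inline it.
    public virtual int Twice(int x) { return x * 2; }

    // Non-virtual: the target is fixed, so the JIT is free to inline it.
    public int TwiceDirect(int x) { return x * 2; }
}

public class DispatchDemo
{
    public static void Main()
    {
        Worker w = new Worker();
        Console.WriteLine(w.Twice(21));       // virtual dispatch
        Console.WriteLine(w.TwiceDirect(21)); // direct, inlinable call
    }
}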

But the fundamental thing about performance tuning is: don't attempt to tune your application prematurely. Once you have a cut of your application, profile and time it to find out where *your* bottlenecks are. They are very often in completely different places to where you thought they might be. Making your code *really* efficient at the machine-instruction level by making it more convoluted as you write it is almost always the wrong approach, because most business applications are IO/network bound, not processor bound.

Apart from that I agree with others who have stated that C# should have sealed classes by default. Unless you *specifically* code with derivation in mind you will very easily create a class that can be broken by someone deriving from it. Also, once you have released a version of your code you can always un-seal a class without breaking a client; sealing a class after the fact may well break clients.
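
A hypothetical before/after to illustrate the versioning point (Parser
is an invented name, not from the thread):

// Library, version 1: suppose Parser ships unsealed.
public class Parser
{
    public string Name { get { return "Parser"; } }
}

// A client derives from it - perfectly legal against version 1.
public class MyParser : Parser { }

public class SealDemo
{
    public static void Main()
    {
        System.Console.WriteLine(new MyParser().Name);
    }
}

// If version 2 declared "public sealed class Parser", MyParser would
// no longer compile - a breaking change. The reverse (sealed in v1,
// unsealed in v2) breaks nobody, since no client could have derived
// from the sealed version in the first place.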

Regards

Richard Blewett - DevelopMentor
http://staff.develop.com/richardb/weblog

 
Is the same true of methods? Does adding "virtual" to a base-class
method break clients or is it seamless?

Sam


On Wed, 29 Sep 2004 13:06:14 -0700, "Richard Blewett [DevelopMentor]"
Apart from that I agree with others who have stated that C# should
have sealed classes by default. Unless you *specifically* code with
derivation in mind you will very easily create a class that can be
broken by someone deriving from it. Also, once you have released a
version of your code you can always un-seal a class without breaking a
client; sealing a class after the fact may well break clients.
 
If a method wasn't already virtual, then the client would never have been able to override it; the only option they had, if they wanted a member with the same name, was to hide the base class version. If the base class then evolves to make the member virtual, this still doesn't affect the derived class, because the derived class needs to mark its method as override to actually override the base class version - if it doesn't, it still hides the base class version. This is definitely the correct choice by the language designers as it obeys the law of least surprise. In other words, the derived class method is not going to suddenly be invoked by the base class simply because they have a name collision.
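
A sketch of that scenario (Connection and PooledConnection are invented
names): the base member has just been made virtual, but a derived class
written earlier still hides rather than overrides.

using System;

public class Connection
{
    // Suppose this was non-virtual in version 1 and became virtual in
    // version 2 of the library.
    public virtual void Close() { Console.WriteLine("Connection.Close"); }
}

public class PooledConnection : Connection
{
    // Written against version 1, this could only hide the member.
    // Recompiled against version 2 it *still* hides it (omitting 'new'
    // would produce a warning and hide anyway); nothing changes until
    // the author explicitly writes 'override'.
    public new void Close() { Console.WriteLine("PooledConnection.Close"); }
}

public class VersioningDemo
{
    public static void Main()
    {
        Connection c = new PooledConnection();
        c.Close(); // prints "Connection.Close" - no silent override
    }
}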

Regards

Richard Blewett - DevelopMentor
http://staff.develop.com/richardb/weblog

 
Samuel Neff said:
The difference I see between hiding and overloading is that with
hiding the choice is based on the type of the reference whereas with
overloading the choice is based on the type of the actual object. Take
this example:

<snip>

You've shown an example using *overriding*, not *overloading*. Here's
an example using overloading:

The variable "a" is declared as type A but has an instance of B. When
it's passed to "m", it calls m(B) because the object being passed is
of type B ignore the fact that it's declared as A.

It doesn't, actually. It calls m(A), but that has the same effect as
m(B) because id is overridden. Let's change your program so that it
doesn't show the effects of overriding, just overloading:

using System;

public class Test
{
    public static void Main()
    {
        Test t = new Test();
        A a = new B();
        t.m(a);
    }

    public void m(A a)
    {
        Console.WriteLine("m(A)");
    }

    public void m(B b)
    {
        Console.WriteLine("m(B)");
    }
}

public class A {}

public class B : A {}

By your reckoning, that should print m(B), but it actually prints m(A).
Therefore for overloading the important type is the type of the object.
The type of the reference is irrelevant.

See above.
 
2) Virtual methods, because the actual implementation to be
called isn't known until runtime, cannot be inlined by the JIT
compiler. This can have a noticeable performance impact in certain
scenarios.

<snip>

Note that although this is true in .NET, it's not true for all JITs.
Sun's JVM, HotSpot, is a multi-pass JIT. It can inline virtual methods
and then "uninline" them later on if they ever get overridden. For this
reason, the claim Anders made about Java developers getting worse
performance due to forgetting to make things final doesn't hold water.

See

http://groups.google.com/groups?selm=MPG.185fb5071f750ed6989694%40news.microsoft.com

aka http://tinyurl.com/5mjqk for an example of this.
 
It doesn't, actually. It calls m(A), but that has the same effect as
m(B) because id is overridden. Let's change your program so that it
doesn't show the effects of overriding, just overloading:

Ah, so my poor example was hiding the true behavior. Declared type does
matter for overloading... I didn't know that.

Thanks for clarifying.

Sam
 