Blah.ToIList(list) should work the same as list.ToIList(). Agree or
disagree?
What is "list.ToIList()"?
I can neither agree nor disagree if I don't know what you're asking.
[...]
I remain unconvinced that it's a noticeable performance enhancement at
all. I'll bet that even in a trivial benchmark that only enumerates,
there'd be no practical difference between the two, and certainly in
code that does something interesting with the enumeration, any
difference that may exist (if any) would be inconsequential.
Try it. I bet it doesn't do much for performance either. However, why
bring a variable into existence if you're not going to use it?
Because the code is simpler doing so. All else being equal,
less/simpler code is always better.
I'd like to experiment with this one also. But why risk it?
Why risk what?
[...]
It should be rare. As far as foregoing "using" goes, that's a
non-issue. The "using" statement is for people writing C#. The C#
compiler isn't writing C#; it's just emitting MSIL. So it can just put
the same try/finally pattern in place that it uses for "using", with the
small modification of checking for the IDisposable implementation before
trying to dispose.
Yes, of course. I am just unsure that the compiler would even bother
checking for IDisposable.
It had better. It's one thing for your own code to create an
IDisposable and then fail to dispose it. But for the compiler to do so?
That'd clearly be a bug.
Here is the relevant portion of the C# specification:
A foreach statement of the form
    foreach (V v in x) embedded-statement

is then expanded to:

    {
        E e = ((C)(x)).GetEnumerator();
        try {
            V v;
            while (e.MoveNext()) {
                v = (V)(T)e.Current;
                embedded-statement
            }
        }
        finally {
            … // Dispose e
        }
    }
and:
The body of the finally block is constructed according
to the following steps:
• If there is an implicit conversion from E to the
  System.IDisposable interface, then the finally clause
  is expanded to the semantic equivalent of:

      finally {
          ((System.IDisposable)e).Dispose();
      }

  except that if e is a value type, or a type parameter
  instantiated to a value type, then the cast of e to
  System.IDisposable will not cause boxing to occur.

• Otherwise, if E is a sealed type, the finally clause
  is expanded to an empty block:

      finally {
      }

• Otherwise, the finally clause is expanded to:

      finally {
          System.IDisposable d = e as System.IDisposable;
          if (d != null) d.Dispose();
      }

  The local variable d is not visible to or accessible
  to any user code. In particular, it does not conflict
  with any other variable whose scope includes the finally
  block.
Which is all a long way of saying, the compiler _always_ disposes
disposable enumerators. This is true even for the IEnumerator interface.
I don't know the runtime impact of dynamic
type checking, but I'm betting MS wouldn't do it for every foreach
loop.
You'd lose that bet. It happens all the time. The "foreach" statement
can do all sorts of implicit extra work you might not realize, including
casting, type-checking, and even disposing.
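If you want to see it for yourself, here is a trivial, made-up example
you can run; the enumerator is typed only as IEnumerator, yet foreach
still finds and calls its Dispose:

using System;
using System.Collections;

// Hypothetical sequence whose enumerator logs when it is disposed.
class NoisySequence : IEnumerable
{
    public IEnumerator GetEnumerator() { return new NoisyEnumerator(); }

    class NoisyEnumerator : IEnumerator, IDisposable
    {
        int _i = -1;
        public object Current { get { return _i; } }
        public bool MoveNext() { return ++_i < 3; }
        public void Reset() { _i = -1; }
        public void Dispose() { Console.WriteLine("Dispose was called"); }
    }
}

class Program
{
    static void Main()
    {
        // GetEnumerator() is typed as the plain IEnumerator interface,
        // but the compiler's expansion still tests for IDisposable at
        // run time and calls Dispose in the finally block.
        foreach (object item in new NoisySequence())
        {
            Console.WriteLine(item);
        }
    }
}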
It would just cost too much. Again, why risk it? Just write your
code under that assumption.
Where's the risk?
On a similar note, ADO.NET IDbCommand objects are supposedly
IDisposable as well. However, aside from myself, I don't know any
programmers who bother to surround a command with a using block. They
do it for the connection and IDataReaders, but not IDbCommands. It's
reasonable to ask, "What could there possibly be that needs disposing?"
I can't think of anything! Nonetheless, I use a using anyway. Just
about the only thing I don't use a using for is Forms.
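(For reference, including the command in its own "using" looks
something like this; the connection string and query below are just
placeholders:)

using System;
using System.Data.SqlClient;

class Example
{
    static void Dump(string connectionString)
    {
        using (SqlConnection connection = new SqlConnection(connectionString))
        using (SqlCommand command = new SqlCommand("SELECT Name FROM Widgets", connection))
        {
            connection.Open();

            // The reader gets a "using" block too, just as most people
            // already do for the connection.
            using (SqlDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine(reader.GetString(0));
                }
            }
        }
    }
}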
If you are showing a Form instance modally, you definitely should be
using a "using" statement or otherwise disposing the instance when
you're done with it.
If you are not showing a Form instance modally, the documentation is
very clear that when the Form instance is closed, it is disposed.
Either way, guidance is clear, and there's no question about what the
right thing to do is.
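For example (using a bare Form just to keep the sketch short):

using System;
using System.Windows.Forms;

class Example
{
    static void Demo()
    {
        // Modal: ShowDialog() does NOT dispose the form when it
        // closes, so wrap it in "using" (or dispose it explicitly).
        using (Form dialog = new Form())
        {
            dialog.ShowDialog();
        }

        // Non-modal: a form shown with Show() is disposed when it is
        // closed, so no "using" is needed here.
        Form window = new Form();
        window.Show();
    }
}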
In any case, this is a complete non-issue with respect to the question
of whether to separate argument checking into a different method from
the implementation, as well as to the question (the ACTUAL one at hand)
of whether it makes sense to have a policy of a class's code never
calling that class's own public methods.
I agree. So why pay for a method call when you can avoid it?
Exactly.
So why pay for an argument check when you can avoid it?
Because a) you can't really avoid it (the code is broken if you do), and
b) because the code is _simpler_ and no worse when you don't avoid it.
In other words, there's a benefit to not avoiding it, and none to
avoiding it.
I suppose it matters
which one you are going to do more often. If you are mostly calling the
method from outside the class, it's probably faster not to have a
private method. If you are going to be calling it a lot from within the
class, it's probably faster to call an unchecked implementation.
I doubt you will find a significant performance difference either way.
The question doesn't depend on the issue of performance. It's a code
correctness and maintenance issue.
See, I think there is a big difference between the code that does
exception handling, resource management, and argument checking, and the
actual business logic. The core code should be oblivious to such
considerations and operate in a "care-free" manner. All that other
stuff is just necessary fluff as far as I am concerned. It helps other
programmers filter out what is and is not important. So what if it
requires an extra method call?
Frankly, I disagree with your desire to separate input checking from
implementation. However, even if you assume that's a reasonable goal,
the fact remains that in no way justifies a _policy_ of never calling
public methods of a class from within the class.
My experience says otherwise.
Your experience says that for X where X doesn't happen very often, that
automatically means that you should try to avoid letting X happen?
That's pretty bizarre logic if you ask me. I can think of lots of very
nice things that don't happen very often, but which I definitely would
not like to avoid letting happen.
[...]
Irrelevant to the question of refactoring to separate argument checking
from implementation.
Sometimes the only way you find a better solution is to create
opportunities for one during development. In that sense, it is very
relevant. I have wound up with entire libraries that initially started
out as private static methods. I wouldn't even have noticed them had I
always written my logic in-line.
Huh? There is NOTHING about the question of separating argument
checking from the rest of the implementation that has ANYTHING to do
with whether a method can be a static method or not.
It's completely irrelevant to this discussion. That's why I wrote this
next line:
I'd say it is less in favor of my approach.
If you like. I don't see how it's different one way or the
other...expensive methods tend to hide implementation details intended
to reduce overhead, regardless of what overhead we're talking about.
But if you want to use that to argue against your own policy, that's
fine with me.
Personally, I like to think of the internals of a class as a free-for-
all.
Well, I'll just have to disagree. I find there's a real benefit in
having internals have many of the same encapsulation and data
verification features found elsewhere.
Now, that's not to say every single method goes around verifying its
input. But, a useful goal is to try to be able to prove at some point
in a call chain, based on methods higher in the call chain, that you
know for sure the data is valid. Calling through a public method that
does argument checking is a perfectly reasonable thing to do, where
there's a public method that represents the thing that code within the
class needs to do.
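For example, something along these lines (the class here is invented
purely for illustration):

using System;
using System.Collections.Generic;

class NameList
{
    private readonly List<string> _names = new List<string>();

    public void Add(string name)
    {
        if (name == null)
        {
            throw new ArgumentNullException("name");
        }

        _names.Add(name);
    }

    public void AddRange(IEnumerable<string> names)
    {
        if (names == null)
        {
            throw new ArgumentNullException("names");
        }

        foreach (string name in names)
        {
            // Calling our own public method: the extra null check it
            // performs is cheap, and there's only one implementation
            // to maintain.
            Add(name);
        }
    }
}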
Once inside, there are no restrictions. I trust that my class is
intelligent enough to know how to work on itself. Personally, I like
to think of ArgumentException and its derivatives as uncatchable
exceptions. It's not the user's fault or bad data; it's bad programming.
Indeed. In fact, even for public members, it is debatable whether that
sort of argument checking is really required. It depends a lot on
context, and in some contexts, it's sufficient if a bad argument is
passed, even to a public member, to just let that public member fail
when it reaches the bad argument.
It depends on what the actual consumer of the public method is, and the
nature of the argument, among other things. Policies that don't take
those issues into account are flawed policies.
Somehow I let something through that I shouldn't have. Perhaps I need
better UI validation or perhaps I just screwed up. In opposition to my
policy, immediately detecting bad arguments would make it easier to
resolve such bugs. However, all that extra code just clouds up the
significant logic.
Well, what's more important? Ease of maintenance, and correct code? Or
avoiding having to scroll past a few lines of argument checking to see
the main implementation of a method?
In fact, there are patterns people have come up with that help
encapsulate argument checking somewhat, to minimize the prologue code in
a method. And in .NET 4.0, code contracts should help a lot to reduce
the explicit argument checking code that can distract from
implementation details.
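For example, the prologue might shrink to something like this (a rough
sketch; the Account class is made up, but Contract.Requires lives in
System.Diagnostics.Contracts):

using System;
using System.Diagnostics.Contracts;

class Account
{
    private decimal _balance;

    public void Withdraw(decimal amount)
    {
        // One-line preconditions in place of several lines of if/throw
        // prologue code.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= _balance);

        _balance -= amount;
    }
}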
So it's not that other people don't notice the issue you raise, or that
it doesn't affect the way people code. It does. But it's just not
significant enough to justify a policy of never calling a class's
public methods from within the class.
[...]
In my experience, bypassing argument checking in arbitrarily chosen
situations has caused problems.
It's not arbitrary at all. If you want to reduce the number of these
private, care-free methods floating around, make your classes smaller.
Small classes are a good thing, but making classes smaller just so you
can avoid having a single class call its own public methods doesn't make
any sense. The policy is bad enough when applied to an implementation
detail such as where the implementation of a method actually exists, but
when it starts guiding the actual _design_ of your class hierarchies,
that's really going too far.
[...]
That all makes sense for private implementation that isn't just a repeat
of the public API. But obviously what you wrote does not apply to
private implementation methods that are simply called directly by a
method that has as its only purpose argument checking.
Here is an example. Suppose you originally write your class to support
a range between two numbers. Then six months down the road you decide
to also support ranges specified by a starting number and a count. You
can still use the same private implementation, just by translating the
input arguments.
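Roughly something like this (the names are invented just for the sake
of the example):

using System;

class RangePrinter
{
    // Original public method: takes a start and an end.
    public void PrintRange(int start, int end)
    {
        if (end < start)
        {
            throw new ArgumentException("end must not be less than start");
        }

        PrintRangeImpl(start, end);
    }

    // Added six months later: takes a start and a count, and translates
    // its arguments into the form the private implementation expects.
    public void PrintCount(int start, int count)
    {
        if (count < 0)
        {
            throw new ArgumentOutOfRangeException("count");
        }

        PrintRangeImpl(start, start + count - 1);
    }

    private void PrintRangeImpl(int start, int end)
    {
        for (int i = start; i <= end; i++)
        {
            Console.WriteLine(i);
        }
    }
}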
Uh, okay...let's see if I have this straight:
-- Your policy is based on the desire to avoid the overhead of an
argument check when calling a method from within the class
-- This is in spite of the overhead created by calling an extra method
-- On top of that, you conclude that there's a benefit to making a
private implementation called by the public method, even though that
means that your public method will have to include even _more_ overhead
in order to translate the given inputs to those used by the private
implementation (e.g. adding a count to the first argument so that you
can pass a start and end number to the private implementation).
As you know, I really think it doesn't make sense to waste time
optimizing code until you know for sure it needs optimizing. It's easy
enough to avoid doing _stupid_ things that will completely break
performance, leaving any remaining potential optimizations having only
relatively small degrees of effect on the overall execution of your code.
So there's certainly a philosophical disconnect between what you say is
the root of your policy, and my own preferences. But, if you're going
to argue in favor of the policy on the basis of minimizing overhead, I
don't see how it helps support your policy when you point out the ways
that your policy leads to a requirement to add overhead.
I mean, there are some things about your statements relative to your
policy that make me wonder which of us is arguing against the policy,
you or me!
If you had your logic inline
with your argument checks, you could have achieved the same thing.
However, where before you had a start and a stop, now you must
calculate the stop. So you have the "count" variable sitting around for
the rest of the method, doing nothing else.
How is that worse than having a whole extra method in your call stack,
with its own "start" and "count" arguments, and of course the overhead
of the call itself?
In any case, I would say that in most cases, your implementation would
reflect the inputs. Rather than writing:
void Method(int start, int count)
{
    int last = start + count;
    for (int i = start; i < last; i++)
    {
    }
}
I think it more likely you'd have this:
void Method(int start, int count)
{
    for (int i = start; i < start + count; i++)
    {
    }
}
Or even this (slightly less efficient, but even fewer variables and less
code):
void Method(int start, int count)
{
    while (count-- > 0)
    {
        start++;
    }
}
There's no extra variable there (even if that were a serious
consideration, which it's not).
Different experiences I suppose. For me it has always seemed to reduce
complexity and promoted small, cohesive classes. [...]
I find the idea that you can reduce complexity and promote a smaller
class by _adding_ methods to the class, um...well, counter-intuitive to
say the least, if not outright self-contradictory.
Pete