Why has Hungarian notation been abandoned?

Bob Grommes

1) Just as someone gave an example where the type of a variable which
was a Hashtable could change to SortedList, so the discount might need
to be changed to a double, or a decimal. Are you definitely, definitely
going to change all your code? If you don't, it's worse than not having
the type there at all.

If you have a system, and are committed to it, yes. Generally you are not
talking about a lot of instances of the variable to deal with, and this kind
of refactoring is easily automatable if you are a slow typist or whatever.

That said, I made the original point about Hashtable ==> SortedList and that
in my mind is a somewhat different beast. Both of those classes implement
the same interfaces and are descendants of the same ancestor; making a
distinction between the two is probably less useful than the distinction
between an int and a string or even an int and a float. I mainly advocate
using prefixes on value types.
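To make that point concrete, here is a minimal sketch (the variable name and values are invented for illustration): because Hashtable and SortedList both implement IDictionary, code written against the interface survives the swap with no renaming at all -- which is why a type prefix on such a variable buys little.

```csharp
using System;
using System.Collections;

class PrefixDemo
{
    static void Main()
    {
        // Hashtable and SortedList both implement IDictionary, so the
        // declaration is the only line that changes in a swap:
        IDictionary discounts = new Hashtable();
        // IDictionary discounts = new SortedList();  // drop-in replacement

        discounts["preferred"] = 0.15m;

        // Every other line is written against IDictionary and is
        // untouched by the change -- no "ht" prefix to chase down.
        Console.WriteLine(discounts["preferred"]);  // prints 0.15
    }
}
```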

2) It makes it harder to read, IMO. You either have to teach your brain
to skip over the letters, or you have to mentally read them every time,
which breaks up the flow (for me, anyway).

One gets used to it -- but you're right, we're all wired differently and
this may be a perpetual distraction for some.

Yes, if that means greater readability when I'm looking at the code
when I *am* familiar with what things are. You don't really need to use
the debugger - just hover over a variable to see its type.

Ah, but to me, this is where the evaluation of the usefulness of prefixing
variable names usually runs afoul. It is much less useful when you are
writing the code, or have familiarity with it. But where it really shines
is months or years later when you're no longer familiar with it, or tomorrow
when you hire someone new to work on it. That is where the value is -- in
making code you're either unfamiliar or rusty with, more self-evident and
therefore, easier to read accurately in the long run.

--Bob
 
Jim Cooper

There is nothing the least bit difficult about Hungarian notation

I strongly disagree. It rapidly descends into nonsense.

The second version makes it clear what the types are.

I don't agree with that statement at all. It makes the hidden assumption
that you know what "int" and "str" mean. In all but the most trivial
type systems that gets completely out of hand. Only using it for
primitive types, as someone suggested earlier, seems completely silly.
What's the point of only using it sometimes? That suggests to me that
there is no point using it at all, since you can get by without it most
of the time.

Also, if you know so little about a variable that you don't even know
its type, I think you have no business using it, frankly (anyway, all
modern IDEs will tell you the type by hanging the mouse pointer over the
symbol).

perhaps dismissing it entirely as useless complexity is a little too out of hand.

Having worked extensively both with and without it, my experience is
that it is completely worthless. It adds nothing to the readability of
the code, it just adds another maintenance headache.

Cheers,
Jim Cooper

_______________________________________________

Jim Cooper (e-mail address removed)
Falafel Software http://www.falafelsoft.co.uk
_______________________________________________
 
Jon Skeet [C# MVP]

Bob Grommes said:
If you have a system, and are committed to it, yes. Generally you are not
talking about a lot of instances of the variable to deal with, and this kind
of refactoring is easily automatable if you are a slow typist or whatever.

I thought it was a lot of work, and error-prone? That's what you
claimed before.

That said, I made the original point about Hashtable ==> SortedList and that
in my mind is a somewhat different beast. Both of those classes implement
the same interfaces and are descendants of the same ancestor; making a
distinction between the two is probably less useful than the distinction
between an int and a string or even an int and a float.

What about between a float and a double, or an int and a long though?

I mainly advocate using prefixes on value types.

Either it's a pain to change the names of variables or it's not,
frankly. I don't think your point can stand up for reference types but
not for value types.

One gets used to it -- but you're right, we're all wired differently and
this may be a perpetual distraction for some.

And why should you have to get used to something in the first place? I
dare say I could get used to K&R bracing if I absolutely had to, but
I'd rather use something which was natural to start with...

Ah, but to me, this is where the evaluation of the usefulness of prefixing
variable names usually runs afoul. It is much less useful when you are
writing the code, or have familiarity with it. But where it really shines
is months or years later when you're no longer familiar with it, or tomorrow
when you hire someone new to work on it. That is where the value is -- in
making code you're either unfamiliar or rusty with, more self-evident and
therefore, easier to read accurately in the long run.

If they're sufficiently unfamiliar with the code that they don't even
know the type of it, how are they meant to know its meaning? The two go
together - and a well chosen name can imply both with no prefixing, in
my experience.

That's also where hovering over a variable name can help - if it
displays the XML documentation (which I can't remember off-hand whether
or not VS.NET does, but I would hope it does) then you get more
information than just the type anyway.
 
Jon Skeet [C# MVP]

Bob Grommes said:
There is nothing the least bit difficult about Hungarian notation unless
someone has produced a byzantine naming standard that is undocumented, tries
to do too much, or both.

The visual clues from moderate usage of Hungarian makes code more
immediately self-evident and I think helps the developer to keep things
straight in his or her own mind while working on the code. That is the
sense in which it is "easier".

To use a concrete example:

for (i = 0;i < lastName.Length;i++) {
}

for (intCount = 0;intCount < strLastName.Length;intCount++) {
}

for (lCntrIntCount = 0;lCntrIntCount < lDatStrLastName.Length;lCntrIntCount++) {
}

Here's a version which makes it even clearer though:

for (int i=0; i < lastName.Length; i++)
{
    // ...
}

Look ma, the type is visible in the code itself!

Keep declaration as close to first use as possible and there's usually
no problem in the first place for local variables - and if you've got a
non-local variable called either "intCount" or "i" you've got more to
worry about than just whether or not to use Hungarian.

Rather than deal with the complexity, avoid it cropping up in the first
place: keep methods short, keep declaration close to first use, and
keep names meaningful. The more semantic meaning is in the name, the
more obvious the type becomes anyway.
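A hypothetical side-by-side of the two styles (the names and value are invented for illustration) shows why close-to-use declaration removes the temptation to encode the type in the name:

```csharp
using System;

class DeclarationDemo
{
    static void Main()
    {
        // Top-of-method declaration: by the time the loop appears, the
        // reader may have scrolled past the types -- the situation in
        // which a prefix starts to look attractive:
        //
        //   int count;
        //   string lastName;
        //   ... many lines later ...
        //   for (count = 0; count < lastName.Length; count++) { }

        // Declaration at first use: the type is on screen right where
        // it matters, so the name is free to carry meaning instead.
        string lastName = "Skeet";
        for (int i = 0; i < lastName.Length; i++)
        {
            Console.Write(lastName[i]);
        }
        Console.WriteLine();  // prints Skeet
    }
}
```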
 
Bob Grommes

Jon,

I almost always do declare a counter in a for loop. It was a contrived
example.

Perhaps at the end of the day my choice has to do with who has to work
harder, the writer of the code or the one who reads it at some later time.
Personally I find code easier to read with prefixes (up to a point of
diminishing returns that varies according to the language; I certainly use
prefixing much less in C# than in any language I've used to date). And
since I end up maintaining (or supervising the maintenance) of hundreds of
thousands of lines of code I've written or become responsible for over the
years, I prefer to put in a little more effort up front to make my life
easier later. I recognize and respect that some would see this as making
both the writing *and* the reading of code harder.

Actually your point about keeping declarations close to first use addresses
the same issue -- can I read one or two lines of code, or at most a single
code block, and not have to hunt around in a half-dozen different places for
missing bits of info that will make that code self-evident? I find
declaring variables close to their use, good naming conventions, consistent
formatting choices, a reasonable amount of commenting, and avoiding the
temptation to say too much in one line without some overriding good reason,
all contribute to this end.

Different people have different perceptions and sensibilities about what's
smooth or jarring, helpful or annoying, elegant or kludgy. I also know that
most of the people I've worked with are fairly comfortable with prefixing
and don't see it as an evil to be stamped out. These are intelligent people
of good conscience, just as you are.

Ultimately what it boils down to is, settle on a naming convention; then
document and use it consistently. What has made my professional life more
unpleasantly interesting than the exact variable naming convention used
(with or without prefixes), has been the lack of a convention, or a mixture
of several.

--Bob
 
Bob Grommes

Either it's a pain to change the names of variables or it's not,
frankly. I don't think your point can stand up for reference types but
not for value types.

It's a question of cost vs benefits. There are too many reference types to
come up with memorable prefixes for. But there are a very finite number of
value types, and the list is not likely to grow much, if at all. One can
find prefixing useful, even if it is not practical for 100% of the places it
could be used.

And why should you have to get used to something in the first place? I
dare say I could get used to K&R bracing if I absolutely had to, but
I'd rather use something which was natural to start with...

Well, perhaps I should not admit this -- but I happen to find K&R bracing
more natural, as well as more compact and readable overall. I find opening
braces on their own line considerably harder to follow. It probably has to
do with how my brain has wired itself to follow the text.

There is no right or wrong about it. It's an individual matter.

Let me express my appreciation for your points -- yours and everyone else's
here -- against prefixing (and K&R bracing!) If I take nothing else away
from this discussion, it'll be to consider the possibility that I might
inconvenience myself a bit more for the sake of others, if I can be
convinced that more people find my conventions appalling than don't. I have
had the luxury of using only occasional outside help with most of my
projects; as time wears on that is less and less true. It's not like I
would be horribly crippled without prefixing or bracketing as I do. It's
just how my own comfort zone has evolved, and none of the two dozen or so
fine people I've worked with off and on over the years has ever expressed
that it is odd or impeding or complicated.

--Bob
 
David

The visual clues from moderate usage of Hungarian makes code more
immediately self-evident and I think helps the developer to keep things
straight in his or her own mind while working on the code. That is the
sense in which it is "easier".

To use a concrete example:

for (i = 0;i < lastName.Length;i++) {
}

for (intCount = 0;intCount < strLastName.Length;intCount++) {
}

I would maintain that the first version (no Hungarian) forces you to either
make (potentially dangerous) assumptions about what i and lastName are, or
look it up and remember it as you read it.

How does Hungarian help in this regard? The only likely error here is
that somebody changes lastName to point to something other than a string
(say, to a custom class that maintained multiple last names, where
.Length referred to words). But how does Hungarian help you there? Do
you simply remove the prefix, then assume that the *lack* of a prefix
means that lastName isn't one of the special preferred types. Or do you
come up with some other prefix to refer to your new class? The first
doesn't sound very useful, while the second doesn't sound manageable.

I'm trying to understand your point here, so could you give specifics
about your example? What kind of (dangerous) assumptions could a
programmer make about lastName that would be cured by the presence of a
Hungarian prefix?

The intCount issue isn't a good example probably, since intCount is such
a bad variable name in so many ways (I realize these are just examples
you're tossing out off the top of your head). I do feel that Hungarian
tends to lead to bad variable names, largely because the mixing of
abstraction levels confuses the issue, but it's probably unfair to blame
that on the notation itself.
 
Bob Grommes

David said:
On 2004-08-16, Bob Grommes <[email protected]> wrote:
But how does Hungarian help you there? Do
you simply remove the prefix, then assume that the *lack* of a prefix
means that lastName isn't one of the special preferred types. Or do you
come up with some other prefix to refer to your new class? The first
doesn't sound very useful, while the second doesn't sound manageable.

If I see "strLastName" I instantly know it's a string. I don't have to hunt
for where it was defined or passed in, or trust my memory of having seen it
earlier.

If I see lastName I know it's some other sort of object reference (as
personally I only prefix strings and StringBuilders, all other reference
types are unprefixed). I don't attempt to come up with a prefix for every
reference type because, as you point out, it's not practical.

Basically I'm trying to avoid having to stop and ask myself "what the heck
is that referring to?" even if the answer would come pretty quickly from the
context. I'm trying to get lines of code (even printed lines of code, where
you don't have access to IntelliSense) to read quickly and smoothly, yet
accurately. For me at least, prefixing is helpful in that regard; however,
it has the limitation that you have to work with a finite set of prefixes --
so I've chosen value types and a couple of common reference types and left
it at that. Half a loaf being better than none.
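For illustration only, a sketch of the "half a loaf" convention described above (every identifier here is invented): prefixes on value types plus strings and StringBuilders, and nothing on other reference types.

```csharp
using System;
using System.Collections;
using System.Text;

class HalfALoaf
{
    static void Main()
    {
        // Prefixed: value types plus the two common reference types.
        decimal decDiscount = 0.15m;
        string strLastName = "Grommes";
        StringBuilder sbGreeting = new StringBuilder("Hello, ");

        // Unprefixed: every other reference type, where no memorable
        // prefix scheme scales.
        Hashtable lookup = new Hashtable();
        lookup[strLastName] = decDiscount;

        sbGreeting.Append(strLastName);
        Console.WriteLine(sbGreeting.ToString());  // prints Hello, Grommes
    }
}
```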

Again though, it's better that one uses prefixes (or not) with conscious
intent and consistency than it is that you agree with my preferences (or
anyone else's). We are really debating fine points here. I wouldn't think
the less of an applicant coming to me with sample code that doesn't use
prefixes; I wouldn't chafe significantly doing work for a customer who wants
me to adhere to a coding standard that prohibits them. What I *would*
disrespect is the randomness I see far too often in code where there seems
to be no plan and no consistency at all. The same remarks apply to
bracketing, indenting, commenting, etc.

--Bob
 
Steve Barkto

for (i = 0;i < lastName.Length;i++) {
}

for (intCount = 0;intCount < strLastName.Length;intCount++) {
}

I would maintain that the first version (no Hungarian) forces you to either
make (potentially dangerous) assumptions about what i and lastName are, or
look it up and remember it as you read it.

The second version makes it clear what the types are.

I have seen a _lot_ of code where the prefix is incorrect. If your
variables are named well, there is no need for HN.
 
