Why has Hungarian notation been abandoned?


Phill

Does anyone know the reasoning for Microsoft abandoning Hungarian
Notation in C#?

I have found it very useful in C++. I like this style:

constant: MY_CONSTANT
method: myMethod()
class: MyClass
variable: iMyInteger

Anyway, I'm not crazy about the new style, but I wonder if there is a
good reason for capitalizing all functions and classes and not using
Hungarian notation?
 

David Browne

Phill said:
Does anyone know the reasoning for Microsoft abandoning Hungarian
Notation in C#?

IMO Hungarian notation (or otherwise decorating variable names with
indications of their type and scope) is helpful in procedural languages,
which tend to use a lot of global variables, and in languages which are
less than type-safe. It is necessary to remember, when programming in C,
whether a variable is a pointer or an integer, and if a pointer, a
pointer to what.

Consider an array of doubles.

A common C idiom like

*(pDblLocation++)

is very hard to read without Hungarian notation. In contrast, in C#

locations

is quite readable as is.
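To make the contrast concrete, here is a minimal sketch (the variable names are illustrative, echoing David's example):

```csharp
using System;

class ReadabilityDemo
{
    static void Main()
    {
        // The C idiom above walks a raw buffer:
        //     double sum = 0;
        //     double *pDblLocation = locations;
        //     while (n-- > 0) sum += *(pDblLocation++);
        // The pDbl prefix is the only hint that a pointer to double is involved.

        // In C#, the collection carries its own type, so a plain name reads fine:
        double[] locations = { 1.5, 2.5, 3.0 };

        double sum = 0;
        foreach (double location in locations)
        {
            sum += location;
        }

        Console.WriteLine(sum);  // 7
    }
}
```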

David
 

Derek Harmon

Phill said:
Anyway, I'm not crazy about the new style, but I wonder if there is a
good reason for capitalizing all functions and classes and not using
Hungarian notation?

Developers can now take any identifier into the Immediate Window and
write,

? identifier.GetType()

and voilà, you get the type of the variable. Unnatural, unpronounceable
prefixes add no value. I don't know what the Hungarian prefix for an
AutomotivePartSKU-typed object should be (it's probably something
ridiculous like 'obj', which doesn't help me tell it apart from an
AircraftPartSKU one bit), and I don't care. All objects in the CLR know
what Type they are, unlike in C.

My advice is not to dogmatically cling to Hungarian notation; recognize
why it was necessary in the past, and why it's no longer necessary in the
present.


Derek Harmon
 

Joerg Jooss

Phill said:
Does anyone know the reasoning for Microsoft abandoning Hungarian
Notation in C#?

I have found it very useful in C++. I like this style:

constant: MY_CONSTANT
method: myMethod()
class: MyClass
variable: iMyInteger

Anyway, I'm not crazy about the new style, but I wonder if there is a
good reason for capitalizing all functions and classes and not using
Hungarian notation?

Funny -- this was asked in the ASP.NET group just a few days ago, so I'll
repost what I wrote back then:


Well, HN uses prefixes in an identifier's name depending on the identifier's
type. In a true OO environment featuring polymorphism, the exact type is
quite often unknown (just think of object factories that simply return
interface types). Actually, most of the time you don't even care about the
real type, as long as the object complies with some sort of interface
contract (by implementing an interface or inheriting from some other class).

Next, there are potentially thousands of types -- which could mean using *a
lot* of prefixes. Oh, and once I need to change a type, I have to rename all
identifiers using that type. Hey, even worse, if it's a "visible" change
like

public int Foo(int someInt)
to
public long Foo(long someLong),

you would want to change your code as well, if you're using my Foo()
method -- any variable in your code holding Foo's return value will use the
wrong prefix otherwise.

Not to mention all those classic HN prefixes that make no sense in .NET at
all -- l, p, sz, ...
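A small sketch of the prefix rot described above, using a hypothetical API (all names here are invented for illustration):

```csharp
using System;

class PrefixRotDemo
{
    // Originally the API returned int, and callers wrote "int iTotal = ...":
    //     static int GetTotal() => 42;

    // After a "visible" change, the signature widens to long:
    static long GetTotal() => 42L;

    static void Main()
    {
        // This caller updated the declared type but not the prefix, so the
        // name now quietly lies about the type -- and nothing in the compiler
        // or tooling will ever flag it.
        long iTotal = GetTotal();

        Console.WriteLine(iTotal);  // 42
    }
}
```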

Cheers,
 

Bob Grommes

Hungarian is arguably much less useful in a strongly-typed environment where
the IDE typically tells you instantly what the type of a variable is.

On the other hand, if you have the goal to make code readable and
self-explanatory, I think that even in .NET the judicious use of Hungarian
prefixes is a good thing. If you just want to read code -- particularly
someone else's, or your code months after you wrote it -- a little Hungarian
frees you up to read it quicker and comprehend it better. This is true
whether you are scrolling a display or reading a printout.

I still use prefixes for all the value types and for string and
StringBuilder, which are probably the most commonly used reference types.
Beyond that, the law of diminishing returns kicks in with a vengeance; there
is an infinitude of possible reference types. And consider -- if you have,
say, a Hashtable, it might be a mistake to use a variable name like
htPeople. Suppose that you decide in some later refactoring that you need
to use a SortedList instead? The interface is virtually the same; it seems
that having to change htPeople to slPeople is a lot of work (and chance for
error) for very little benefit.

Hungarian can be used not only to show type, but scope. In an environment
like .NET, though, there is really no reason to use prefixes for the scoping
of variables: all variables are local to the method in which they are
declared; there is no other scope.

Some people use an underscore or "m_" to prefix names of fields; personally
I use the same name for fields and their corresponding properties except
that the first letter is capitalized for the property and lower-case for the
field. Also, I don't prefix for variable type in fields or properties.

Another use of Hungarian can be to show the intended use of a variable.
This is probably the least useful and most problematic application of
Hungarian in any environment, but in an OOP environment I see no real place
for it at all. OOP inherently associates values with objects, and
well-designed methods are small enough that they don't have many local
variables, so the usage of a given variable is always quite self-evident.

All of these are somewhat personal decisions and really it's more important
that you settle on a standard and use it consistently, than exactly what
that standard is.

--Bob
 

clintonG

You know I just tried using the GetType( ) method in the
Command Window on several different objects and all
that is returned is 'not valid.'

--
<%= Clinton Gallagher, "Twice the Results -- Half the Cost"
Architectural & e-Business Consulting -- Software Development
NET (e-mail address removed)
URL http://www.metromilwaukee.com/clintongallagher/
 

Julie

Phill said:
Does anyone know the reasoning for Microsoft abandoning Hungarian
Notation in C#?

I have found it very useful in C++. I like this style:

constant: MY_CONSTANT
method: myMethod()
class: MyClass
variable: iMyInteger

Anyway, I'm not crazy about the new style, but I wonder if there is a
good reason for capitalizing all functions and classes and not using
Hungarian notation?

One of the main reasons is due to the capabilities of modern development
environments, and the ability for type information to be instantly available
(in theory for some IDEs!) for any type, definition, function/method, etc.

Hungarian notation is from the early (PC) days when type information wasn't
readily available, and such prepended type 'decorations' helped the
programmer 'know' type information simply by looking at the name.

Another reason is that w/ each successive language (or library), the number of
types has (probably) exponentially increased. Hungarian notation just couldn't
keep up w/ the massive number of types being introduced.

In short, Hungarian was very useful in its time, but its time has passed,
and its utility has been steadily diminishing for the past few years...

By the way, what you have as an example above is not Hungarian notation, with
the exception of 'variable: iMyInteger'.
 

Jon Skeet [C# MVP]

Derek Harmon said:
Developers can now take any identifier into the Immediate Window and
write,

? identifier.GetType()

Or, rather more easily, just hover over the identifier.

My advice is not to dogmatically cling to Hungarian notation; recognize
why it was necessary in the past, and why it's no longer necessary in the
present.

Agreed.
 

Mark Rae

You know I just tried using the GetType( ) method in the
Command Window on several different objects and all
that is returned is 'not valid.'

Loads of stuff often doesn't work in the command window - ToString(), for
example, only works part of the time...
 

Phill

I agree class instances don't need Hungarian notation, and it's not
practical to keep coming up with prefixes for each class.

But for primitives I think it makes sense.

I think between:

Savings = (OrigPrice * Discount);

And

fSavings = (fOrigPrice * fDiscount);

The 2nd is better because I know fDiscount is a float and it's not like
the integer value 25 for 25%. Maybe that's not the greatest example,
but I think things are ambiguous otherwise. If you have a decent-sized
method that you haven't seen in a while, are you saying you'd rather
determine a variable's type using the debugger than by just looking
at it?

And what about telling the difference between constants, classes, and
methods? The .NET naming scheme treats them all the same: each word is
capitalized. Why not CAPS ALL CONSTANTS, or just capitalize the first
word in classes?

Note I'm not really arguing with you guys; I'm kinda hoping someone
can show me this newer scheme is better, because I can't seem to see
it as an improvement myself.
 

Jon Skeet [C# MVP]

Phill said:
I agree class instances don't need Hungarian notation, and it's not
practical to keep coming up with prefixes for each class.

But for primitives I think it makes sense.

I think between:

Savings = (OrigPrice * Discount);

And

fSavings = (fOrigPrice * fDiscount);

The 2nd is better because I know fDiscount is a float and it's not like
the integer value 25 for 25%.

I disagree:

1) Just as someone gave an example where the type of a variable which
was a Hashtable could change to SortedList, so the discount might need
to be changed to a double, or a decimal. Are you definitely, definitely
going to change all your code? If you don't, it's worse than not having
the type there at all.

2) It makes it harder to read, IMO. You either have to teach your brain
to skip over the letters, or you have to mentally read them every time,
which breaks up the flow (for me, anyway).

Maybe that's not the greatest example, but I think things are ambiguous
otherwise. If you have a decent-sized method that you haven't seen in a
while, are you saying you'd rather determine a variable's type using the
debugger than by just looking at it?

Yes, if that means greater readability when I'm looking at the code
when I *am* familiar with what things are. You don't really need to use
the debugger - just hover over a variable to see its type.

And what about telling the difference between constants, classes, and
methods? The .NET naming scheme treats them all the same: each word is
capitalized. Why not CAPS ALL CONSTANTS, or just capitalize the first
word in classes?

Readability. ALL_CAPS is harder to read than CapitalizingWords (it
feels like it's shouting) and ReadingCapitalizedWords is easier than
reading Lotsofwordswithnothingtosaywheretheystart.

It's very easy to tell the difference by context though - unless you're
using a delegate, a method name will always have brackets at the end of
it anyway. When you're using a delegate, it should be obvious that
you're doing so.

Note I'm not really arguing with you guys; I'm kinda hoping someone can
show me this newer scheme is better, because I can't seem to see it as an
improvement myself.

Maybe you will when you've used it for a while and then look back at
some old code. Anything new takes a little while to get used to.
 

clintonG

VS.NET > View > Other Windows > Command Window
-- or --
Debug > Windows > Immediate

Both load the same Command Window but open it in a different mode, and
neither functions the way claimed earlier in this discussion, as two of
us have learned and noted herein.

Some get lucky. Some do not.




Phill said:
I'm not familiar w/ the immediate window. Where is it, and how do you use
it?
 

Stu Smith


Well, I'm not a fan of Hungarian myself, but I have to say that, as
originally conceived, it wasn't simply a method of repeating the static
type of a variable. You can see whether something is a float or an integer
by looking at the variable declaration.

Rather, it's there to make the semantic type of the variable plain. For
example, a float could represent a price (as in your example above) or a
ratio.

So these are fine:

amtSavings = amtOriginalPrice - amtDiscount
amtSavings = amtOriginalPrice * ratioDiscount

but these are not fine:

amtSavings = amtOriginalPrice - ratioDiscount
amtSavings = amtOriginalPrice * amtDiscount

The canonical example is indexes and counts:

for (int iWhatever = 0; iWhatever < nWhatevers; iWhatever++)
{
    // This is fine: iXXX = index
    arrWhatevers[iWhatever] =

    // This is bad: nXXX isn't an array index
    arrWhatevers[nWhatevers] =
}

The compiler does the static type-checking (array indexer must be an
integer). The Hungarian is there to do semantic type-checking (array indexer
should be an index and not a count... or an age... or a character... etc,
all of which are integers).

So people who religiously prefix all integers with 'i' or 'n' or whatever
are really missing the point (and the power) of Hungarian.

Having said all that I still wouldn't advocate it.

Stu
 

Jon Skeet [C# MVP]

Stu Smith said:
Well, I'm not a fan of Hungarian myself, but I have to say that, as
originally conceived, it wasn't simply a method of repeating the static
type of a variable. You can see whether something is a float or an integer
by looking at the variable declaration.

<snip>

Agreed, that's the original meaning of Hungarian notation. MS has used
it over the years to pretty much be the declared type, unfortunately,
and that's what a lot of people think of it as. I believe the OP meant
it in this way, so that's what I was addressing :)
 

Jim Cooper

On the other hand, if you have the goal to make code readable

That's one of the main arguments for **not** using Hungarian notation.

It's not called Hungarian notation because it's easier to read! :)

Users of strongly typed OO languages generally think HN is a big step
backwards.

That said, MS saw fit to keep it for interface declarations... :)

Cheers,
Jim Cooper

_______________________________________________

Jim Cooper (e-mail address removed)
Falafel Software http://www.falafelsoft.co.uk
_______________________________________________
 

Stu Smith

Jon Skeet said:
<snip>

Agreed, that's the original meaning of Hungarian notation. MS has used
it over the years to pretty much be the declared type, unfortunately,
and that's what a lot of people think of it as. I believe the OP meant
it in this way, so that's what I was addressing :)

Oh, absolutely.

I can't decide whether to feel sorry for Charles Simonyi for being so
comprehensively misunderstood, or to be cross because he tried to
shoehorn what should have been an automatic system into a language like
C, and spawned what can best be described as a ruddy great mess.

I think what he was trying to say was something like this:

Given:

abstract class Integer { ... }

class Index : Integer { ... }
class Count : Integer { ... }

class Array
{
    X this[Index index] { ... }  // NB: the indexer takes an Index, not an Integer.
}

Then:

Count nWhatevers = arrWhatevers.Length;

for (Index iWhatever = 0; iWhatever < nWhatevers; iWhatever++)
{
    // Fine
    arrWhatevers[iWhatever] =

    // Compiler error!
    arrWhatevers[nWhatevers] =
}

...
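For what it's worth, that sketch maps fairly directly onto modern C# with lightweight wrapper structs. This is only an illustration of the idea; all the types and names below are invented, not anything from the BCL:

```csharp
using System;

// Semantic integer types: an Index is not a Count, even though both wrap int.
readonly struct Index
{
    public int Value { get; }
    public Index(int value) => Value = value;
}

readonly struct Count
{
    public int Value { get; }
    public Count(int value) => Value = value;
}

class Whatevers
{
    private readonly string[] items;
    public Whatevers(int n) => items = new string[n];

    public Count Length => new Count(items.Length);

    // The indexer accepts only an Index -- passing a Count (or a raw int)
    // is a compile-time error, which is the checking Hungarian prefixes
    // could only ever hint at.
    public string this[Index i]
    {
        get => items[i.Value];
        set => items[i.Value] = value;
    }
}

class Demo
{
    static void Main()
    {
        var arrWhatevers = new Whatevers(3);
        arrWhatevers[new Index(0)] = "first";        // fine
        // arrWhatevers[arrWhatevers.Length] = "x";  // compiler error: a Count is not an Index
        Console.WriteLine(arrWhatevers[new Index(0)]);  // first
    }
}
```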

The one that annoys me the most was the old MFC CWhatever. Surely the point
of C++ is that I shouldn't care if it's a class or a built-in?

Anyway...
Stu
 

Bob Grommes

Jim,

There is nothing the least bit difficult about Hungarian notation unless
someone has produced a byzantine naming standard that is undocumented, tries
to do too much, or both.

The visual clues from moderate usage of Hungarian make code more
immediately self-evident and, I think, help the developer keep things
straight in his or her own mind while working on the code. That is the
sense in which it is "easier".

To use a concrete example:

for (i = 0; i < lastName.Length; i++) {
}

for (intCount = 0; intCount < strLastName.Length; intCount++) {
}

for (lCntrIntCount = 0; lCntrIntCount < lDatStrLastName.Length; lCntrIntCount++) {
}

I would maintain that the first version (no Hungarian) forces you either
to make (potentially dangerous) assumptions about what i and lastName are,
or to look them up and remember them as you read.

The second version makes it clear what the types are.

The third version takes Hungarian over the top, making it harder rather than
easier, by adding scoping and intended usage info that has little or no
value in a managed app.

Of course, everyone has different ways of coping with complexity; your
mileage may vary. It's your prerogative to hate Hungarian and not use it,
but I think that in moderation it has some real upside, and dismissing it
entirely as useless complexity is perhaps a little too out of hand.

Best,

--Bob
 
