What has managed code achieved?

  • Thread starter: John

John

Hi

What are the advantages actually achieved of managed code? I am not talking
about theory, but about what has been achieved in reality.

Thanks

Regards
 

Check these out:
http://en.wikipedia.org/wiki/Managed_code
http://www.datadirect.com/developer/net/dot-net-managed-code/managed-code-advantage/index.ssp
http://en.wikipedia.org/wiki/Microsoft_.NET

Because your managed code runs under the supervision of the CLR, it benefits
from the services the .NET Framework provides, such as performance,
compatibility and security features.

HTH,

Onur Güzel
 
If you think about VB 6.0, which was not managed, developers had to write
lots of extra code that didn't necessarily have anything to do with the
programming problem. Developers had to write extra code to ensure memory
was managed correctly (Set x = Nothing) and, if they didn't, the program
would develop memory leaks. Developers also had to write additional code to
handle certain security and performance issues as well.

In the managed environment of the .NET CLR, much of this work is handled
automatically by the CLR through its code access security, garbage
collection, and MSIL.

The real, tangible advantages are that less code is written and that the
resulting programs are more robust and cross-language compatible.
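
To make that concrete, here is a minimal VB.NET sketch (purely illustrative;
the old VB6 idiom is shown in comments). The explicit clean-up code simply
disappears under the CLR:

    ' In VB 6 you had to write, for every COM object you created:
    '     Dim doc As SomeComObject
    '     Set doc = New SomeComObject
    '     ...
    '     Set doc = Nothing    ' forget this at module scope and you leak
    '
    ' In .NET the garbage collector reclaims the object once it is unreachable.
    Imports System
    Imports System.Text

    Module ManagedDemo
        Sub Main()
            Dim sb As New StringBuilder()   ' allocated on the managed heap
            sb.Append("no explicit clean-up needed")
            Console.WriteLine(sb.ToString())
        End Sub   ' sb goes out of scope here; the GC collects it later, no "= Nothing" required
    End Module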

-Scott
 
re:
!> What are the advantages actually achieved of managed code?

Imho, the greatest achievement for managed code is: it gets rid of "dll hell".

There's also automatic memory management, platform-neutrality, and cross-language integration.

Performance benefits are gained from executing all code in the CLR.
Calling unmanaged code decreases performance because additional security checks are required.

Other performance advantages are available through the use of the Just-In-Time compiler,
and there are gains in built-in security from code access security and the avoidance of buffer overruns.
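
For illustration, a minimal VB.NET sketch of a managed-to-unmanaged call
(GetTickCount is a real Win32 API in kernel32; the rest is just a throwaway
example). Every call to the declared function crosses the managed/unmanaged
boundary, which is where the extra checks and marshaling come into play:

    Imports System

    Module NativeCallDemo
        ' Classic P/Invoke: this declaration maps to a function in unmanaged kernel32.dll.
        Private Declare Function GetTickCount Lib "kernel32" () As UInteger

        Sub Main()
            ' Each call below leaves the CLR and enters native code, paying the
            ' managed/unmanaged transition cost discussed above.
            Console.WriteLine("Milliseconds since boot: " & GetTickCount().ToString())
        End Sub
    End Module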




Juan T. Llibre, asp.net MVP
asp.net faq : http://asp.net.do/faq/
asp.net forums, in Spanish : http://asp.net.do/foros/
======================================
 
But have we traded one kind of DLL hell for another? How many versions of
the Framework are loaded on your system? How is COM-based DLL management any
different than GAC-cached modules that can be replaced without retesting the
consumer applications? Since we now must wait while the code is compiled
before it can be executed, the performance argument might not hold water for
some applications. Notice how long it takes to launch the Report Manager...
I expect that managed code has managed to disenfranchise a lot of perfectly
good COM developers...

--
__________________________________________________________________________
William R. Vaughn
President and Founder Beta V Corporation
Author, Mentor, Dad, Grandpa
Microsoft MVP
(425) 556-9205 (Pacific time)
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
____________________________________________________________________________________________
 
Juan T. Llibre said:
re:
!> What are the advantages actually achieved of managed code?
!>
!> Imho, the greatest achievement for managed code is: it gets rid of "dll hell".


I think this depends on whether or not you regard the problems with the GAC
as another type of DLL hell, as it sounds like both Bill and I do. ;-)

For most, however, this is a definite advantage, as most do not use the GAC
much and those who do know how to version (maybe?).

--
Gregory A. Beamer
MVP, MCP: +I, SE, SD, DBA

Subscribe to my blog
http://feeds.feedburner.com/GregoryBeamer#

or just read it:
http://feeds.feedburner.com/GregoryBeamer

********************************************
| Think outside the box! |
********************************************
 
Scott said:
If you think about VB 6.0, which was not managed, developers had to
write lots of extra code that didn't necessarily have anything to
do with the programming problem. Developers had to write extra code
to ensure memory was managed correctly (Set x = Nothing) and, if they
didn't, the program would develop memory leaks. Developers also had
to write additional code to handle certain security and performance
issues as well.


The case for erasing references hasn't changed much between VB6 and the
latest version of .NET: keeping an object reachable longer than needed
will still leak memory. OK, the GC handles the case of circular references,
sort of -- you get a random cleanup order, so they're still a bad idea.
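
A minimal VB.NET sketch of that kind of leak (the cache is hypothetical,
purely for illustration): the garbage collector cannot reclaim anything a
live root still points to.

    Imports System
    Imports System.Collections.Generic

    Module LeakDemo
        ' A long-lived, module-level collection is a GC root: everything added to it
        ' stays reachable, so the garbage collector can never reclaim it.
        Private ReadOnly Cache As New List(Of Byte())

        Sub Main()
            For i As Integer = 1 To 100
                ' Buffers are "cached" but never removed, so memory usage only grows.
                ' Managed code does not protect you from this kind of leak.
                Cache.Add(New Byte(1024 * 1024 - 1) {})
            Next
            Console.WriteLine("Buffers still held: " & Cache.Count.ToString())
        End Sub
    End Module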
 
Fewer blue screens caused by junior developers. ;-)

Realistically, from my COM days, the biggest real-world advantage is not
having to register everything, and avoiding the pain associated with
registering development builds in order to test properly.

Of course, if you used MTS, you had the option of dropping the running
process and dropping a new COM DLL over the old DLL. But this was thinking
WAY outside the box. You also ended up having to add the weight of MTS to
your app. Not too bad with web apps, which already had most of the weight.
Not as exciting for other apps.

With .NET, however, you can do this without a kludge. So that is a real
world advantage.

--
Gregory A. Beamer
MVP, MCP: +I, SE, SD, DBA

Subscribe to my blog
http://feeds.feedburner.com/GregoryBeamer#

or just read it:
http://feeds.feedburner.com/GregoryBeamer

********************************************
| Think outside the box! |
********************************************
 
Juan said:
re:
!> What are the advantages actually achieved of managed code?

!> Imho, the greatest achievement for managed code is: it gets rid of "dll hell".

Replaces it with assembly-hell, and since most assemblies still end in .dll
....

!> There's also automatic memory management, platform-neutrality, and
!> cross-language integration.

Ok.

!> Performance benefits are gained from executing all code in the CLR.
!> Calling unmanaged code decreases performance because additional
!> security checks are required.

That's "enabling partial trust scenarios", not improving performance.
Native performance is still better.

!> Other performance advantages are available through the use of the
!> Just-In-Time compiler,

Like? Sure, the JIT *could* use information about the hardware environment,
such as cache size or availability of extended instruction sets. But that's
more theory than reality.

!> with gains in built-in security by using code access security and the
!> avoidance of buffer overruns.

JIT doesn't avoid buffer overruns, bounds checking does. And precompilation
does a lot better job of reasoning about bounds checks at compile time and
optimizing them away than the JIT.
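
To make the bounds-checking point concrete, a small VB.NET sketch (purely
illustrative): the runtime check turns what would be a silent buffer overrun
in native code into an exception.

    Imports System

    Module BoundsDemo
        Sub Main()
            Dim buffer(9) As Integer    ' ten elements, indices 0 through 9
            Try
                buffer(10) = 42         ' out of range: the CLR's bounds check throws
            Catch ex As IndexOutOfRangeException
                Console.WriteLine("Caught: " & ex.Message)
            End Try
        End Sub
    End Module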

Almost all runtime performance gains in .NET come from the fact that memory
allocation from a stack is a LOT cheaper than from a heap. In most cases
that actually pays for all the inefficiencies of .NET, so managed code comes
out about even with native on average; on many apps managed is a little
faster, and for some native is a lot faster.
 
!> The case for erasing references hasn't changed much between VB6 and the
!> latest version of .NET: keeping an object reachable longer than needed
!> will still leak memory. OK, the GC handles the case of circular references,
!> sort of -- you get a random cleanup order, so they're still a bad idea.

I'm sorry, but that is absolutely wrong.

There is a difference between an object remaining in memory longer than is
absolutely necessary and an object remaining in memory for the duration of
the application's lifetime because it was never explicitly de-referenced.
In the former case, memory may be taken up longer than is needed, but the
object can sit there causing no harm, provided it has been disposed (when
needed); .NET offers the Using statement so that this can be done
automatically. In the latter case, not only is memory taken up, but external
resources are tied up as well, because the class's terminate method is never
called.

In .NET, you are likely to adversely affect the performance of your
application by explicitly dereferencing your objects (x = Nothing)! The GC
is optimized to look at running methods when collecting and to determine
whether objects that still have application roots are actually going to be
used in the remainder of the running method. If you clean up your objects by
setting them to Nothing, but haven't reached that point in the code yet, the
GC will actually NOT mark for collection an object that isn't going to be
used for any meaningful purpose, because the variable is still used (by the
clean-up code) later in the method.

In short, there is really no compelling reason to set object variables to
Nothing in .NET. The only thing you gain (and it's not even certain that
you will gain it) is that the object *may* become eligible for collection
sooner than when its variable falls out of scope, but as I've pointed out,
you could actually cause that to take even longer by setting the variable to
Nothing!
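
A minimal VB.NET sketch of the Using pattern mentioned above (the file name
is just an example): Dispose is called deterministically when the block
exits, and no setting to Nothing is involved.

    Imports System
    Imports System.IO

    Module DisposeDemo
        Sub Main()
            ' Using guarantees writer.Dispose() runs when the block exits, releasing
            ' the file handle deterministically. The memory for the object itself is
            ' reclaimed later by the GC; setting the variable to Nothing adds nothing.
            Using writer As New StreamWriter("example.log")
                writer.WriteLine("disposed automatically")
            End Using
        End Sub
    End Module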

-Scott
 
William,

You woke me up.

I agree with you that it is strange that we load all kinds of .NET Framework
versions, or rather that Microsoft does that on our computers.

As we are not using .NET 1.x, the latest version should in fact be enough.

Cor
 
Scott,

With full respect

What is the benefit of this (in bits and computer time)? I don't know how
it is in the USA, but 1 GB of memory for a modern computer costs about 20
euros here.

I fully agree with your sentence about cleaner code; that is in fact what
all my discussions about the unneeded writing of Dispose are about.

However, all discussions about using a few more or fewer bytes of memory,
or about program speed in nanoseconds, are in my view really from the time
VB6 was created.

Cor
 
re:
!> But have we traded one kind of DLL hell for another?

Not exactly.

Did you ever develop in Classic ASP? "DLL hell" was quite evident there.

1. Different DLL versions *prevented* your application from running in IIS.
2. Updating a DLL meant manually stopping IIS, and all applications, so that a single app could be updated.

re:
!> How many versions of the Framework are loaded on your system?

That's irrelevant.
The fact that several versions of the .Net Framework can coexist hardly qualifies as "dll hell".




Juan T. Llibre, asp.net MVP
asp.net faq : http://asp.net.do/faq/
asp.net forums, in Spanish : http://asp.net.do/foros/
======================================
 
You're making my point, Cor.

In VB 6, if you didn't explicitly destroy your object references, you not
only wasted memory but also potentially tied up external resources. So, to
solve those two problems, the developer HAD to dereference objects.

Ben's last message seems to indicate that we should still do this in .NET
for optimal object collection. I was correcting him, as doing what he
suggests can actually delay an object's collection.

In .NET, as you know, as long as you are disposing of your objects, which is
built in via the Using statement, you are all set.

The point is that in .NET the developer doesn't write memory management
code, as was required in VB 6, and that is one advantage of working in a
managed environment.

-Scott
 
Yes, but this attitude is myopic. In a client system in a business you might
be able to restrict the application configuration and remove unnecessary
Framework installations, but as I understand it, some versions of the
Framework depend on earlier versions. In addition, in a typical system I
expect that even the OS draws on more than one version of the Framework for
its own utilities, as do the utilities and applications supplied by the
hardware vendor. I expect that we're stuck with any number of Frameworks for
the next decade.

As to memory, I think it's arrogant to assume that memory is cheap so it's
OK to just load up the system and take all you need for as long as you like.
What kills systems is applications that consume every byte of memory in
sight, forcing other applications to be swapped out. Consider that 32-bit
systems (Vista or XP) can only use 3.5 GB of RAM. Take away the OS footprint,
a couple of .NET Frameworks and the memory consumed by ancillary utilities
like anti-virus, anti-spyware, anti-spam, SQL Server Express instances,
Adobe, Office and other "helper" DLLs, and you don't have much memory left to
load up that set of pictures, documents and "real" applications. Consider
that most (by far) of the systems out there are owned and run by consumers,
or by offices that permit their employees to treat their systems as their
own -- thus they get loaded up with a lot of memory-hungry applications and
wallpapers. I see this attitude toward memory (and disk space) as being like
the US's approach to cheap oil: we built an entire infrastructure around it
with no eye to the future when oil is $150/barrel.

No, IMHO developers still need to be cognizant of how much memory they're
consuming and holding. The fact that the .NET Framework GC only runs when the
system is memory-stressed only exacerbates the problem.

--
__________________________________________________________________________
William R. Vaughn
President and Founder Beta V Corporation
Author, Mentor, Dad, Grandpa
Microsoft MVP
(425) 556-9205 (Pacific time)
Hitchhiker's Guide to Visual Studio and SQL Server (7th Edition)
____________________________________________________________________________________________
 
!> Scott - most of the time explicit management of references was not
!> necessary in VB6 either. About the only time that setting an object
!> reference to Nothing amounted to anything is if the object was a class or
!> module level value - which is about the same as in VB.NET. In VB6 local
!> variables were automatically reclaimed when the method exited - well,
!> assuming there wasn't a bug in the underlying COM object's implementation :)

I don't know how you can make that statement when different VB 6
applications had object references scoped differently. I could just as
easily say that most applications did have module-scoped object variables.
In those situations, you (the developer) were required to manage the
object's lifetime and, by association, any external resources used by that
object.

In .NET, using Using, the developer need not do anything.

This is one aspect of what working with managed code buys us.

-Scott
 
Tom Shelton said:
!> You make it sound as if it was an onerous task for VB6 developers to manage
!> object lifetimes. It was not.
!> All you are doing is persisting the old myth that you must do object cleanup
!> on all objects in VB6, which is utterly false.

Well, there we have our disagreement. In fact, it was an onerous task in VB
6 to take care of object lifetime. That is a fact, not a myth. You may
not have had particular issues dealing with it, but the fact that VB 6 was
notorious for memory leaks, and that simply not setting an object
reference to Nothing was most likely the culprit, tells us this. This is not
my opinion. VB 6 was well known for these issues.

Contrary to your assertion, simply letting a variable fall out of scope was
not the same thing as setting that variable reference to Nothing before it
did. That made all object variables vulnerable to memory leaks.

I don't expect that we'll wind up agreeing on this, but I'm pretty sure I
can find a couple of million VB 6 developers who will tell you that the need
to do object cleanup was not a "myth" in VB 6.

-Scott
 
Respectfully, that sounds like your personal experience and an exception
(pardon the pun) and not the rule. Just because you never had issues with
this doesn't make it a non-issue or a myth.

If you've ever done much COM automation in VB 6, you know exactly how real
the consequences of not setting object variables to Nothing are.

Setting object variables to Nothing in VB 6 was absolutely essential in
order to ensure proper object clean-up as well as to decrement the reference
count.

The fact remains, sloppy coding or not, that managing memory in VB 6 was the
responsibility of the developer and in .NET the CLR manages this for us to a
much larger degree.

-Scott
 