.NET SUCKS --- READ FOLLOWING. MICROSOFT IS A SUCKY CO

  • Thread starter: consumer62000
Further, I encourage you to adopt a less religious, more analytical
attitude
toward making technology value judgements. There is an appropriate
technology
for a given problem, and one size definitely does not fit all. Work to
develop the skills necessary to know what fits what; your customers and
ultimately your paycheck will thank you.
very appropriate remark ^_^
 
Well, this question can be raised in a more realistic way, by people who
actually write code:
I am developing an application which is not real-time (my real-time
applications run on PIC, 8052, ARM etc.), but it has a QoS requirement: max
response time should be 1 sec!
I use mixed mode: .NET and C++, managed and unmanaged code (I don't like C#,
but that's my age: I'm an old dog).
The questions:
- How long does garbage collection take? Millions of objects per second, or
hundreds? Does anybody have an estimate? (Not your beliefs, your
measurements!)
- If I don't release objects (in critical times), does the garbage collector
run at all?
- Does the code get compiled at application start, or on first execution of
a function?

The reason I use .NET is not really the GC: mainly I like the extra runtime
checks. To me, at least, it's easier to find the nasty pointer-gone-wild bugs.
The second reason is the presentation layer: it is easier to use the
libraries. If I really blame Microsoft for something (does anybody there
hear?), it is that not all the libraries have been imported into .NET.
 
So I hope another example from this area will also be interesting.

In 2003, Quake II was ported to managed C++, and it runs "without noticeable
performance delay" (Ralf Arvesen, Vertigo Software, Inc.).

"One second", hah
 
Q: How long does the garbage collection take?

A: That's going to depend on a number of factors, including what code
was placed in the Dispose method of objects that the GC has to dispose
of on its own. You can't nail it down to a finite number, because you
never know what someone ELSE is doing. However, the collector is highly
optimized, and should be able to efficiently release upwards of 40
million objects per second. Again, that number can be reduced if a
Dispose method does something processor intensive (e.g., commit a
transaction).

------------------------------------------------------------

Q: If I don't release objects (in critical times) does the garbage
collector run at all?

A: The garbage collector runs regardless of what your code does. The GC
runs on a low-priority thread, and checks for unreferenced dynamically
allocated memory space. If any piece of dynamically allocated memory
can't be referenced from a variable currently on the stack, that object
is collected. If the object has a Dispose method, it's invoked in the
first pass. It's actually deleted on the 2nd pass.

Note that you can force the garbage collector to run by invoking
System.GC.Collect(). It's not recommended that you do that too often,
however, and never without good reason. It's a low-priority thread for
a reason.
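To make that forced-collection pattern concrete, here is a minimal C# sketch (class and variable names are illustrative); the second Collect call exists to reclaim objects whose finalizers ran after the first pass, which ties in with the finalization-queue discussion later in this thread:

```csharp
using System;

class ForcedCollectionSketch
{
    static void Main()
    {
        // Create some short-lived garbage for the collector to find.
        for (int i = 0; i < 100000; i++)
        {
            byte[] scratch = new byte[64];
        }

        GC.Collect();                   // force a full collection now
        GC.WaitForPendingFinalizers();  // let any queued finalizers finish
        GC.Collect();                   // reclaim the finalized objects

        Console.WriteLine("Forced collection complete");
    }
}
```

As the post says, this should be the exception, not the rule: calling GC.Collect() in a loop defeats the collector's own scheduling.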

------------------------------------------------------------

Q: Does the code get compiled at application start, or on first
execution of a function?

A: I've seen no documentation that indicates that code is JIT-compiled
on a per-function basis. The documentation that I have seen states that
when you compile a program in Visual Studio, the .EXE file actually
contains MSIL, which is (supposedly) platform independent. That MSIL
gets JIT-compiled to native code when you start the application. Each
assembly referenced by the application is JIT-compiled as it is needed
(and only if it hasn't been NGENed); however, once an assembly has been
JIT-compiled, the native version is used to execute it. If you restart
the application, it's JITted again.

It is important to note that you can bypass the JIT step and improve
performance by using NGEN, which ships with .NET. A quick article about
how to use NGEN can be found here:
http://visualbasic.about.com/od/usingvbnet/a/FWTools2.htm
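For reference, a typical NGEN invocation looks roughly like the following (run from a Visual Studio / .NET Framework command prompt; the assembly name is an illustrative assumption):

```shell
REM Pre-compile the assembly's MSIL into the native image cache
REM (.NET 1.x syntax):
ngen MyApp.exe

REM Later .NET versions use verb forms instead:
REM   ngen install MyApp.exe
REM   ngen uninstall MyApp.exe
```
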

------------------------------------------------------------

I'm right there with you about why I prefer to use .NET: I am a huge
proponent of typesafe languages, and the CLR is *extremely* type-safe.
That makes my code more efficient and far more robust, and makes me do
more upfront thinking than I would with typeless languages (like
JavaScript) or relatively dynamic languages (like Visual Basic 1-6).

Then there's the fact that it's truly object-oriented, which is far
better (IMHO) than object-BASED VB6.

Cheers!
 
A: I've seen no documentation that indicates that code is JIT-compiled
on a per-function basis. The documentation that I have seen states that
when you compile a program in Visual Studio, the .EXE file actually
contains MSIL, which is (supposedly) platform independent. That MSIL
gets JIT-compiled to native code when you start the application. Each
assembly referenced by the application is JIT-compiled as it is needed
(and only if it hasn't been NGENed); however, once an assembly has been
JIT-compiled, the native version is used to execute it. If you restart
the application, it's JITted again.
Actually, it's written somewhere....
The JIT does compile a function (and only that function) the first time it's
accessed.
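A rough way to observe per-method JIT is to time the first call to a method against a later one. This is only a sketch: DateTime.Now is coarse (the thread itself uses it for timing), and the measured difference is illustrative rather than guaranteed:

```csharp
using System;

class JitTimingSketch
{
    // Not called until Main runs, so its first invocation
    // includes the JIT compilation cost for this one method.
    static int Square(int x)
    {
        return x * x;
    }

    static void Main()
    {
        DateTime start = DateTime.Now;
        Square(3);                  // first call: JIT compiles Square here
        TimeSpan firstCall = DateTime.Now - start;

        start = DateTime.Now;
        Square(4);                  // later call: reuses the native code
        TimeSpan laterCall = DateTime.Now - start;

        Console.WriteLine("first=" + firstCall + " later=" + laterCall);
    }
}
```

On a real measurement you would want a higher-resolution timer and many iterations, but the shape of the experiment is the same.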
 
Let me tell you a scenario and you will see what I mean.
There is a large application that communicates with a real-time
system. The app has to respond to requests in no more than 1 s.
The app is a C# .NET app and everything is fine, and everyone at
Microsoft is happy that they forced their "new" platform down someone's
throat.

Now imagine a scenario where the GC has to collect the memory. Well,
when the GC runs, all the threads are suspended, there is no response to
the incoming requests, and the application fails a critical requirement.

Well, any MS people here who can defend their sucky product?
I know they will say "don't use .NET for this or that... use C or C++
etc."
My question to them is: why did you create .NET then?

This is one time I have to rescind my earlier post asking that the troll not
be fed.
The information provided by responders has been very interesting and
enlightening to me.
Thanks very much to you all!
It's too bad that the troll has neither the intellect nor the interest to
benefit from this :)
 
Tyrant Mikey said:
Q: How long does the garbage collection take?

A: That's going to depend on a number of factors, including what code
was placed in the Dispose method of objects that the GC has to dispose
of on its own. You can't nail it down to a finite number, because you
never know what someone ELSE is doing. However, the collector is highly
optimized, and should be able to efficiently release upwards of 40
million objects per second. Again, that number can be reduced if a
Dispose method does something processor intensive (e.g., commit a
transaction).

That is a little off center. The GC does not invoke Dispose, it invokes
finalizers. Objects that are finalizable are promoted and placed in a
finalization queue, the GC doesn't wait around while every object finalizes.
------------------------------------------------------------

Q: If I don't release objects (in critical times) does the garbage
collector run at all?

A: The garbage collector runs regardless of what your code does. The GC
runs on a low-priority thread, and checks for unreferenced dynamically
allocated memory space. If any piece of dynamically allocated memory
can't be referenced from a variable currently on the stack, that object
is collected. If the object has a Dispose method, it's invoked in the
first pass. It's actually deleted on the 2nd pass.

That is incorrect as well, in addition to the difference between Dispose and
Finalize, the GC is tied to the allocation scheme, thus if you never
allocate, you never trigger the GC. Running a low priority thread constantly
looking for memory that is no longer referenced would be terribly CPU and
memory bandwidth intensive, since the GC has to crawl through virtually
every byte of memory allocated by your program.

Also, you didn't mention that you cannot release objects, just cease to
reference them. The GC still has to come around and clean them up.
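The Dispose/Finalize split described in this correction can be sketched as follows (a minimal illustration under the usual conventions, not the full dispose pattern):

```csharp
using System;

class Resource : IDisposable
{
    // Deterministic cleanup: called explicitly by user code.
    public void Dispose()
    {
        // ... release unmanaged resources here ...
        GC.SuppressFinalize(this);  // already cleaned up; skip finalization
    }

    // Non-deterministic cleanup: the GC promotes finalizable objects to
    // the finalization queue, and a separate finalizer thread runs this
    // some time later. The GC itself never calls Dispose.
    ~Resource()
    {
        // ... last-chance release of unmanaged resources ...
    }
}

class Program
{
    static void Main()
    {
        using (Resource r = new Resource())
        {
            // use r; Dispose runs deterministically when the block exits
        }
        Console.WriteLine("Disposed deterministically");
    }
}
```
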
 
Vadim Indrikov said:
So I hope another sample from this area will be interesting also.

In 2003 Quake II was ported to managed C++. And runs "without noticeable
performance delay" (Ralf Arvesen, Vertigo Software, Inc.)

"One second", hah

Are you suggesting it's impossible to come up with a situation which
takes over a second to garbage collect? I suspect I could do that,
given a bit of time.

Anyone else up for a "be cruel to the garbage collector" contest?
 
Are you suggesting it's impossible to come up with a situation which
takes over a second to garbage collect? I suspect I could do that,
given a bit of time.

Anyone else up for a "be cruel to the garbage collector" contest?

That sounds...interesting to say the least. I'm in.
 
Q: How long does the garbage collection take?
That is a little off center. The GC does not invoke Dispose, it invokes
finalizers. Objects that are finalizable are promoted and placed in a
finalization queue, the GC doesn't wait around while every object finalizes.

True. I stand corrected. From the MS documentation:

"Another method supported by some classes, Finalize, runs automatically
when an object is released and can be used to perform other cleanup
tasks. The Finalize method is similar to the Class_Terminate() method
used in previous versions of Microsoft Visual Basic. *Unlike the
Dispose method,* the CLR automatically calls the Finalize method
sometime after an object is no longer needed."

However, now we've simply moved the potential bottleneck from the
Dispose method into the Finalize method. So my point remains
relatively true: the amount of time required to perform garbage
collection can't be pinned to a finite measure, because you don't know
what's going on in everyone else's cleanup code.
That is incorrect as well, in addition to the difference between Dispose and
Finalize, the GC is tied to the allocation scheme, thus if you never
allocate, you never trigger the GC. Running a low priority thread constantly
looking for memory that is no longer referenced would be terribly CPU and
memory bandwidth intensive, since the GC has to crawl through virtually
every byte of memory allocated by your program.

This sounds sensible, but I haven't read anything that delves into THAT
particular level of detail. Could you please tell me where you found
that? I'm one of those wackos who gets off reading about the nitty
gritty implementation details of stuff like garbage collectors, and I'd
love to hone my knowledge on how it works. (Example: I used to think
that reference counting was a pretty clever solution. What the hell was
*I* thinking?)

Also, it seems to me that the GC will always run, because the framework
itself allocates managed objects that must be collected. Unless the
system allocates a separate GC thread per application, it would have to
run all the time, albeit on a low-priority thread.

Then again, I could just be yanking that right outta my hoohaa. :)
Also, you didn't mention that you cannot release objects, just cease to
reference them. The GC still has to come around and clean them up.

It didn't seem relevant at the level of the discussion. I've just
assumed (<-- operative word) that the poster knew that managed code
doesn't delete any memory objects, but that it simply dereferences
them. (That is, you don't invoke dealloc or anything similar; the GC
does that when it determines the object isn't referenced.)
 
Daniel O'Connell said:
That sounds...interesting to say the least. I'm in.

Okay, fancy coming up with any "rules"? My suggestion is that you can
do what you like, then time a call to GC.Collect() (using DateTime.Now
is likely to be accurate enough to start with, IMO).

I suggest we *don't* specify anything about what's required to be
present in memory by the end of the test - after all, a server could
well have an expensive garbage collection cycle that doesn't actually
end up managing to collect much.

On the other hand, requiring more than 500MB of physical memory would
probably be a bad thing. (But hey - it's probably quite easy to end up
with very slow response times if the application ends up swapping...)

.NET 1.1 okay with everyone?
 
Jon Skeet said:
Okay, fancy coming up with any "rules"? My suggestion is that you can
do what you like, then time a call to GC.Collect() (using DateTime.Now
is likely to be accurate enough to start with, IMO).

I suggest we *don't* specify anything about what's required to be
present in memory by the end of the test - after all, a server could
well have an expensive garbage collection cycle that doesn't actually
end up managing to collect much.

On the other hand, requiring more than 500MB of physical memory would
probably be a bad thing. (But hey - it's probably quite easy to end up
with very slow response times if the application ends up swapping...)

.NET 1.1 okay with everyone?

Sounds good. Let's see what one can dream up. I'm curious what can be done
using relatively little memory but many references.
 
Daniel O'Connell said:
Sounds good. Let's see what one can dream up. I'm curious what can be done
using relatively little memory but many references.

Okay, here's my first entry. It takes 370MB (according to task manager)
and the garbage collection takes between 1.5 and 2 seconds on my
(fairly fast) box. Oh, and don't worry - I'd never normally write code
like this :)

using System;

class Node
{
    public object parent;
    public Node[] sub = new Node[9];
}

public class GcNemesis
{
    // Root reference that keeps the whole tree alive across collections.
    static object hook;

    static Random rng = new Random();

    static void Main()
    {
        Prepare();
        Console.WriteLine ("Collecting...");
        DateTime start = DateTime.Now;
        GC.Collect();
        DateTime end = DateTime.Now;
        Console.WriteLine (end-start);
        GC.KeepAlive(hook);
    }

    static void Prepare()
    {
        Node top = new Node();
        Console.WriteLine ("Populating...");
        Populate(top, 6);
        GC.Collect();
        Console.WriteLine ("Filtering...");
        Filter(top);
        hook = top;
    }

    // Build a 9-ary tree of the given depth, with back-references.
    static void Populate (Node node, int depth)
    {
        for (int i=0; i < node.sub.Length; i++)
        {
            node.sub[i] = new Node();
            node.sub[i].parent = node;
        }
        if (depth != 0)
        {
            foreach (Node sub in node.sub)
            {
                Populate (sub, depth-1);
            }
        }
    }

    // Randomly sever about 1 in 16 child links so the next
    // collection has unreachable subtrees to reclaim.
    static void Filter (Node node)
    {
        foreach (Node sub in node.sub)
        {
            if (sub != null)
            {
                Filter (sub);
            }
        }
        for (int i=0; i < node.sub.Length; i++)
        {
            if (rng.Next(16)==0)
            {
                node.sub[i] = null;
            }
        }
    }
}
 
Sure, feel free to borrow it; just bring it back the way you found it. :-)

--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************


Alvin Bruney - ASP.NET MVP said:
You may have also found a scenario where you need a
screwdriver instead of a hammer. It does not mean the hammer sucks.

That's a great line right there. I wonder if I could use it?

--
Regards,
Alvin Bruney [MVP ASP.NET]

[Shameless Author plug]
The Microsoft Office Web Components Black Book with .NET
Now Available @ www.lulu.com/owc
Forthcoming VSTO.NET - Wrox/Wiley 2006
-------------------------------------------------------



Tyrant Mikey said:
Cowboy pontificated:


That answer was FAR better than any I've read. Short, sweet, to the
point.
 
There ARE things .NET is not good for. Knowing how the garbage collector
works, it should not take 1 sec to clean up memory. But, even if it did, use
another tool. Don't tell me .NET sucks because it is not the best platform
for EVERYTHING.

Given enough time, you can build an entire house with a screwdriver.
Granted, your neighbors will think you are an idiot, but it can be done. When
you are finished, however, don't tell me the screwdriver sucks just because
you lost your toolbox.

--
Gregory A. Beamer
MVP; MCP: +I, SE, SD, DBA

***************************
Think Outside the Box!
***************************
 
It works!
================
Populating...
Filtering...
Collecting...
00:00:01.7031250
================
And it was a simple example, though...
Hmm... not really representative of variable usage in most cases, I think;
that's probably why...

Hey, fun anyway!!!

Jon Skeet said:
Daniel O'Connell said:
Sounds good. Let's see what one can dream up. I'm curious what can be done
using relatively little memory but many references.

Okay, here's my first entry. It takes 370MB (according to task manager)
and the garbage collection takes between 1.5 and 2 seconds on my
(fairly fast) box. Oh, and don't worry - I'd never normally write code
like this :)

<snip GcNemesis code, quoted above>
 
If I do a very simple and rough calculation:
you've got about 9^6 ~ 1M nodes,
so that's about 370 bytes per node;
each node knows about 11 references, so that's ~30 bytes per reference.
Hey, .NET is very greedy!

Jon Skeet said:
Daniel O'Connell said:
Sounds good. Let's see what one can dream up. I'm curious what can be done
using relatively little memory but many references.

Okay, here's my first entry. It takes 370MB (according to task manager)
and the garbage collection takes between 1.5 and 2 seconds on my
(fairly fast) box. Oh, and don't worry - I'd never normally write code
like this :)

<snip GcNemesis code, quoted above>
 
"harrykouk"
I am developing an application witch

Well thats just reinventing the wheel isn't? Why not just use the
Application Wizards? I rather a "staff" over a wand and a broomstick anyday.

:=)
 
Is it just me, or were Microsoft, DevelopMentor and others taken over by
marketing people? One argues that some programs don't need speed (since when,
I wonder). The other argues that you need "...a real-time operating
system..." for resolution of seconds. Windows is a pretty good system if you
do not use s..ty .NET, Java etc.; it is not unheard of for a program on
Windows to measure its operations in milliseconds and even microseconds (.NET
excluded). Maybe Microsoft engineers should pull their heads out of .NET's
ass and try to save the great system they have. Yes, it is true Microsoft is
dropping their support for ANSI C++ (see the "sprintf deprecated" warning in
VC8); too bad, I really liked Windows these last 10 years or so. Oh well,
there's always Linux.
 
Lloyd Dupont said:
If I do a very simple and rough calculation:
you've got about 9^6 ~ 1M nodes,
so that's about 370 bytes per node;
each node knows about 11 references, so that's ~30 bytes per reference.
Hey, .NET is very greedy!

No, actually over 5 million nodes are generated - the nodes are
generated before recursion.

So, that's about 5 million nodes, each with 2 references, and about 5
million arrays of nodes, each with 9 references. I would guess at each
of the arrays taking (16+4*9)=52 bytes, and each of the nodes taking
(8+4*2)=16 bytes.

Checking the maths with the actual numbers, that's about right - so the
object overhead is 8 bytes for a plain object, and 16 bytes for an
array. Hardly that greedy...
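A quick check of that arithmetic (assuming 4-byte references on a 32-bit .NET 1.1 runtime, and the per-object overheads estimated above): Populate(top, 6) creates nine nodes at each of seven levels of recursion, so

```latex
N = \sum_{k=1}^{7} 9^{k} = \frac{9^{8}-9}{8} = 5{,}380{,}839 \approx 5.4\times10^{6}
```

and at roughly 16 bytes per Node plus 52 bytes per Node[9] array, memory comes to about N x 68 bytes, or roughly 366 MB, which matches the observed ~370 MB.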
 