Optimizations

  • Thread starter: Lance
I have a couple of optimization questions:

1. Is there any way to force array bounds checks to be
disabled as you could do in VB6? I know that using
something like "For i as Integer = 0 to
myArray.GetUpperBound(0)" is supposed to automatically
disable array bounds checks for that loop, but this
doesn't do any good if you are looping through part of an
array.
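
The two loop shapes in question, as a minimal sketch (the array name and loop body are placeholders, not code from the original poster):

```vbnet
Dim total As Integer = 0

' The JIT recognizes this pattern and can drop the per-access
' bounds check, because the loop bound is the array's own length:
For i As Integer = 0 To myArray.GetUpperBound(0)
    total += myArray(i)
Next

' Looping over a sub-range defeats that analysis, so every
' myArray(i) access is bounds-checked:
For i As Integer = startIndex To endIndex
    total += myArray(i)
Next
```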

2. Is there any way to locally disable integer overflow
checks? For example, I have a method that converts an
array of doubles to an array of integers and I know with
absolute certainty that none of the values in the double
array exceed the min and max values of an integer. It
sure would be nice to disable integer overflow checks for
just that method.
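
A sketch of the kind of conversion described (the names are hypothetical; with overflow checking enabled, each CInt carries a range check):

```vbnet
' Hypothetical example of the double-to-integer conversion
' described above; not the original poster's actual code.
Function ToIntegerArray(ByVal source() As Double) As Integer()
    Dim result(source.Length - 1) As Integer
    For i As Integer = 0 To source.Length - 1
        ' With overflow checking on, this conversion is
        ' range-checked on every element.
        result(i) = CInt(source(i))
    Next
    Return result
End Function
```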

Thanks,
Lance
 
Hi Lance,

1. No, I don't think so.
2. Yes, but only for the entire assembly; see Project
Options > Configuration > Optimizations.

No, it would not be nice to remove overflow checks. VB.NET is fast enough
for most applications; the JIT compiler performs about the same as it does
for C# and Managed C++. Top-speed code can't be created with high-level
languages. If you go to unmanaged C or assembly, you have to pay the
trade-off: large code, hard debugging, and the advanced knowledge required
about what happens behind the scenes (registers, pointers, stack, heap,
etc.).

If you find your code slow, there may be some bottlenecks created by
inefficient coding; no build options can save poor code. It's easy to find
information about optimizations, from MSDN to general books: avoiding
late binding, unrolling loops, constant folding, etc.

Regards,
Mario
 
Hi Lance,

Is this a just-for-curiosity, principle-of-the-thing question, or is it
crucial? I ask because my own just-for-curiosity questions are: How large
are these arrays? And how much time are you looking to save?

Regards,
Fergus
 
Some parts of my app deal with very large arrays. In
other areas, it is sometimes necessary to loop through an
array once for each member of the array (i.e., if an array
has a length of 10,000, then I would need to loop through
the array 10,000 times). In many cases it is only necessary
to loop through part of an array, so it makes sense to
write the methods such that they accept start and end
indexes. But, according to the literature, doing so
prevents the compiler from disabling array bounds checks.
The actual array size is set by the user, but the time
saved by disabling array bounds checks generally ranges
from seconds to hours.

In another area of my code I need to convert a large
array of doubles to integer values, which are eventually
displayed as an image. Disabling integer overflow checks
improves the overall performance by a factor of ~15. With
integer overflow checks enabled the process takes ~1-2
seconds, which means the user is presented with an
unimpressive slide show. With integer overflow checks
disabled, the process only takes a reasonable ~0.05-0.1
seconds.

I'd say these cases are pretty crucial, wouldn't you?
 
Hi Lance,

Crucial seems a good word here. I don't know whether to ;-) at that or :-(
at it! But my curiosity has become interest.

About that first part you described: are you talking about something with
an O(N^2) cost, where N = c10000?

Is using C# an option?

You say that loops are optimised only if they start at 0 and go to the
end. Can you fool the compiler by having an If inside the loop which skips
the indexes outside the section?

Is there any chance that you could put together a sample solution that I
can play with? Something that demonstrates the two sorts of slowdown. A
zipped solution would be best. (I'm using VS2002.)

Regards,
Fergus
 
Lance,
In addition to Fergus's comments about moving to C#:

Why don't you move the time-critical functions into their own VB.NET
assembly that has integer overflow checking turned off?

This would allow you to get the speed boost of no integer overflow checks
while staying in VB.NET.

I don't think there is currently anything you can do about the index
checking, per se. There may, however, be ways to structure your code so it
performs better.
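
For what it's worth, the per-assembly setting mentioned above corresponds to the /removeintchecks switch on the command-line compiler; this is the option behind the project's Optimizations page. A sketch of building such an assembly (the file and assembly names are hypothetical):

```
vbc /removeintchecks+ /optimize+ /target:library FastMath.vb
```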

The following articles provide information on writing .NET code that
performs well.

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/fastmanagedcode.asp

http://msdn.microsoft.com/library/d...y/en-us/dndotnet/html/highperfmanagedapps.asp

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/vbnstrcatn.asp

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dv_vstechart/html/vbtchperfopt.asp

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndotnet/html/dotnetperftechs.asp

Hope this helps
Jay
 
Why don't you move the time critical functions to their
own VB.NET assembly
that has integer overflow turned off?

That occurred to me. Right now everything in my app is
highly organized, and I'm afraid that things will get
confusing if I start breaking the app apart
into "optimized" and "unoptimized" pieces. But if that is
my only option, then it might just have to be.

BTW: Thanks for the links!

Lance
 
That first part that you described. Are you talking
about something with
an O(N^2) cost where N=c10000?

Probably? What do the "O" and the "c" stand for?
Is using C# an option?

I was hoping you weren't going to ask that. I have 100%
VB.NET right now and no experience with C# (except for
converting simple examples to VB.NET). Are you referring
to using pointers, or something else?
You say that loops are optimised only if they start at 0 and go to the
end. Can you fool the compiler by having an If inside which skips those
outside the section?

That is an interesting idea, but it would probably result
in worse performance if you only needed a small section
of a big array.
Is there any chance that you could put together a sample solution that I
can play with. Something that demonstrates the two sorts of slowdownness. A
zipped solution would be best. (I'm using VS2002)

Thanks for your interest! I'll see what I can do.

Lance
 
Hi Lance,

Here's more than you really want to know about O(x)!! ;-)

O(N) means 'has the same order as N', where 'order' is in the sense of
'order of magnitude'.

For example: Bubble Sort (bless it) has an O(N^2) cost: quadratic, so
doubling N quadruples the cost, because (near enough) every element is
compared with every other. Quicksort, on the other hand, has an
O(N log2 (N)) cost, which rises faster than linearly but at a much lower
rate than N^2. The ideal sort routine would have an O(N) cost, which
increases linearly: double N and the cost only goes up by 2.
N       O(N)    O(N log2 N)   O(N^2)
1024    1024    1024 x 10     1M
2048    2048    2048 x 11     4M
4096    4096    4096 x 12     16M

In your situation, if you have an array of 1000 items and for each item
you have to do something against an array (another or same) of, say, 200
items - but with an array of 10,000, each must be processed against 2000,
then you have an O(N^2) process. Very costly. If your 10,000 still only
require 200, then it's O(N).
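
The two cases just described could be sketched like this (toy code; Process is a placeholder for the per-pair work, and n is the array size):

```vbnet
' O(N^2): the inner loop also grows with N,
' so 10x the data costs 100x the work.
For i As Integer = 0 To n - 1
    For j As Integer = 0 To n - 1
        Process(i, j)
    Next
Next

' O(N): the inner work is a fixed 200 items regardless of N,
' so 10x the data costs only 10x the work.
For i As Integer = 0 To n - 1
    For j As Integer = 0 To 199
        Process(i, j)
    Next
Next
```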

The 'c' in c10000 stands for 'circa', which means 'about'. It's used for
approximations in maths and history; America was founded c200 years ago,
for instance. 'c' is shorthand for 'let's not quibble over the exact
number; it's near enough'.

But that's just background, I think your issue is worth investigating
either way.

I mentioned C# in an offhand way, just to see whether you had explored
that side of things. I think the only real solution is to write this in native
assembler.**

The loop-fooling idea could be along the lines of

    If I >= StartIndex And I <= EndIndex Then DoIt

which was my original thought, but

    S = ""
    For I = 0 To 10000
        If I = 0 Then I = 4444
        S = S & I & ", "
        If I = 4448 Then Exit For
    Next

produces "4444, 4445, 4446, 4447, 4448, " for me. I haven't checked to see
what code this produces yet. I'll wait for your zip before I delve.

Regards,
Fergus

** Native assembler - just joking ;-)

===============================
Even more than you ever wanted to know about O(x):
A rough guide to big-oh notation
http://www.cs.strath.ac.uk/~mdd/teaching/alg&comp/big_oh.html
 