Computational Complexity

Joris van Lier

Given two implementations of an algorithm, how do I determine the relative
computational complexity of each?

Are there tools that can determine the relative performance of two
algorithms or implementations?

Joris van Lier
 
Joris van Lier said:
Given two implementations of an algorithm, how do I determine the relative
computational complexity of each?

Are there tools that can determine the relative performance of two
algorithms or implementations?

Joris van Lier

Let me explain this a bit more:

I'm working on a process in which multiple copies of a piece of data may
exist. New data is supplied to the process together with a location at which
to store it; the process stores the data at the designated location and
replaces all existing copies with links to the new location.

For the purpose of this example I'll assume that the storage is the
filesystem, the data is contained in files, all files are of equal length, a
location is a path, and all locations where data can be stored are known
beforehand.


With my limited math skills, and assuming the program runs as a single
thread, I tried to work this out as below:

D = data unit (file)
L = number of locations
F = average number of files per location
B = average number of bytes per file

O(f(D)) = L * F * B

Is this correct?
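
To make the counting concrete, the single-threaded version I have in mind
looks roughly like this (a simplified Python sketch rather than my actual
code; the function and variable names are only placeholders):

import os

def deduplicate(new_data: bytes, target_path: str, locations: list[str]) -> None:
    # Store the new data at the designated location (writes ~B bytes).
    with open(target_path, "wb") as f:
        f.write(new_data)

    # Visit every known location (L of them) ...
    for loc in locations:
        # ... and every file in it (~F files per location).
        for name in os.listdir(loc):
            path = os.path.join(loc, name)
            if path == target_path or not os.path.isfile(path):
                continue
            # A byte-for-byte comparison reads ~B bytes per file, which is
            # where the L * F * B term comes from.
            with open(path, "rb") as f:
                if f.read() == new_data:
                    os.remove(path)
                    os.symlink(target_path, path)  # replace the copy with a link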

How do I account for multiple concurrent threads of execution?
 

It looks like you were working through the calculations for determining
space complexity, but you are asking about runtime complexity, right? What is
it that you want to do to the data? The only way to work out how long your
process will take to run is to understand what it needs to do and how it will
do it. Can you give us a better description?
 
Joris van Lier said:
Given two implementations of an algorithm, how do I determine the relative
computational complexity of each?

You do a Big-Oh analysis of the algorithms.

Joris van Lier said:
Are there tools that can determine the relative performance of two
algorithms or implementations?

Well, you can use a profiler to assist in determining the runtime
differences of the algorithms. However, you must hold the problem
size constant to be able to make a reasonable comparison between the
algorithms. That comparison is only valid for the specific problem
size chosen. Given enough information you may be able to make some
general conclusions about all problem sizes, but they won't be as
reliable as doing the Big-Oh analysis.
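
For example, a crude timing harness along these lines keeps the problem size
fixed while comparing two implementations (a Python sketch; impl_a and impl_b
are just stand-ins for whatever two implementations you are comparing):

import random
import timeit

def impl_a(data):
    # Stand-in for the first implementation.
    return sorted(data)

def impl_b(data):
    # Stand-in for the second implementation.
    out = list(data)
    out.sort()
    return out

# Hold the problem size constant so the comparison is meaningful.
N = 100_000
data = [random.random() for _ in range(N)]

for name, impl in (("impl_a", impl_a), ("impl_b", impl_b)):
    # Repeat the measurement and keep the best time to dampen interference
    # from other processes competing for the machine.
    best = min(timeit.repeat(lambda: impl(data), number=5, repeat=5))
    print(f"{name}: {best:.3f} s for 5 calls at N={N}")

The numbers only tell you about that one problem size; measuring at a few
different sizes is what lets you start guessing at how each implementation
grows.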
 
Hi Brian, thanks for responding.

Brian said:
You do a Big-Oh analysis of the algorithms.

I know, but... how?
Brian said:
Well, you can use a profiler to assist in determining the runtime
differences of the algorithms. However, you must hold the problem
size constant to be able to make a reasonable comparison between the
algorithms. That comparison is only valid for the specific problem
size chosen. Given enough information you may be able to make some
general conclusions about all problem sizes, but they won't be as
reliable as doing the Big-Oh analysis.

My first hunch was to do some measuring, but when measuring I would have to
make really sure that the environment is stable in terms of performance and
load. Other processes running at the same time might influence the tests,
either negatively by lowering the available resources or positively by
causing data used by my process to be cached, and the measurement would then
not reflect the real world in which the process will be operating.

Can you recommend any sources for learning Big-Oh analysis for
non-mathematicians?

Joris van Lier
 
Joris van Lier said:
Can you recommend any sources for learning Big-Oh analysis for
non-mathematicians?

Google around a bit. Take your time. It can be quite confusing and
complex so start out simple. Avoid the math in the middle and just
concentrate on the Big-Oh notation of concrete examples. Use
complexities of known sub-algorithms to help derive the Big-Oh for the
main algorithm. Understand the differences between best, average, and
worst case scenarios. Understand what coding patterns yield these common
complexities: O(1), O(n), O(n^2), O(log(n)), O(n*log(n)). Once
you understand those it gets easier to combine them.
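
As a rough illustration (Python, hypothetical examples), these are the kinds
of loop shapes that typically produce those complexities:

def constant(items):
    # O(1): the work does not depend on the size of the input.
    return len(items)

def linear(items):
    # O(n): one pass over the input.
    total = 0
    for x in items:
        total += x
    return total

def quadratic(items):
    # O(n^2): a loop nested inside a loop over the same input.
    pairs = 0
    for a in items:
        for b in items:
            if a < b:
                pairs += 1
    return pairs

def logarithmic(n):
    # O(log(n)): the problem is halved at every step.
    steps = 0
    while n > 1:
        n //= 2
        steps += 1
    return steps

def n_log_n(items):
    # O(n*log(n)): e.g. a comparison-based sort.
    return sorted(items)

Your L * F * B estimate earlier in the thread is the same idea: three nested
passes over locations, files, and bytes.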
 