Jon said:
In which case Big-Oh notation isn't the right tool for the job.
Whatever base it is, it'll be constant - and O(n) = O(kn) for any
constant k, so it doesn't matter at all.
Big-Oh is certainly not the tool for determining the number of
operations an algorithm must perform. That's the job of the initial
equations that were used to derive the Big-Oh in the first place. If the
initial equation contains a lg(n) or log(n) term, then why would
you change that notation by saying the complexity is O(ln(n))?
I prefer using lg(n) in the initial equations because I believe it
leaves less ambiguity about the base (lg conventionally means log base
2). I naturally carry that notation over when writing the Big-Oh
complexity because I see no compelling reason to change it.
For example, consider the AVL tree. We want to know the runtime
complexity of a search, and we know that it is a function of the height
h. For an AVL tree, h <= 1.44*lg(n+1). The base is important here
because the tree is binary (each node has at most 2 children).
You wouldn't want to use ln(n) because the tree doesn't have ~2.718
branches. So why would you suddenly change lg(n) to ln(n) when writing
the Big-Oh?
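The h <= 1.44*lg(n+1) bound can be sanity-checked numerically against
the sparsest possible AVL trees. A minimum-node AVL tree of height h
satisfies the recurrence N(1) = 1, N(2) = 2, N(h) = N(h-1) + N(h-2) + 1
(height counted in nodes along the longest path). A rough sketch, not a
proof:

```python
import math

def min_nodes(h):
    # Fewest nodes an AVL tree of height h can have:
    #   N(1) = 1, N(2) = 2, N(h) = N(h-1) + N(h-2) + 1
    a, b = 1, 2
    for _ in range(h - 1):
        a, b = b, a + b + 1
    return a

# Even for these worst-case (sparsest) trees the bound holds.
for h in range(1, 30):
    n = min_nodes(h)
    assert h <= 1.44 * math.log2(n + 1), (h, n)
```

The 1.44 constant is itself 1/lg(phi), where phi is the golden ratio,
which is where the base-2 logarithm enters the bound.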
Brian