So the constant is the number of loops as before? |
The constant is just a constant. If you know that your loop will iterate twice per array value, for example, then you have O(2n), which simplifies to O(n) because constant factors are dropped.
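A minimal sketch of that idea (the function name and step counting are just for illustration): two full passes over the array do 2n units of work, and the 2 disappears in the O(n) bound.

```python
def touch_twice(arr):
    """Visits every element twice: 2n steps total, i.e. O(2n) = O(n)."""
    steps = 0
    for x in arr:      # first pass: n steps
        steps += 1
    for x in arr:      # second pass: another n steps
        steps += 1
    return steps       # always exactly 2 * len(arr)
```

Doubling the input doubles the steps; the growth rate, not the factor of 2, is what Big-O records.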
I need to be told this because it's not the only interpretation. |
As far as I know, this is the only interpretation.
On the same wiki:
An algorithm is said to be O(1) constant time if the value of T(n) is bounded by a value that does not depend on the size of the input |
If you have a program that runs the same no matter the input, it's O(1). Time complexity is about how the number of iterations grows with the input: one iteration per input element is O(n), n iterations per input element is O(n^2), etc.
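To make those three cases concrete, here is a hypothetical sketch (function names are mine, not from the thread): constant work regardless of size, and nested iteration that squares the work.

```python
def first_element(arr):
    """O(1): the same single step no matter how long arr is."""
    return arr[0]

def count_pairs(arr):
    """O(n^2): for each element, iterate over every element again."""
    pairs = 0
    for a in arr:
        for b in arr:   # nested loop over the same input
            pairs += 1
    return pairs        # len(arr) ** 2 iterations
```

`first_element` is bounded by a value independent of the input size, matching the wiki's definition of constant time; `count_pairs` does n iterations per input element.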
The constants have nothing to do with any particular elementary operations.
The wiki article is simply misleading there, since looping can itself be considered an elementary operation (a goto).
I think k is actually the length (i.e. the number of digits) of the largest number |
Yes, I said the largest "sized" number.
So if the largest number is 1000 |
I didn't mean the largest number is 1000, but that the largest number contains 1000 digits.
But using larger radix means it has to do more work for each "digit" so if you want you could probably think about it as O(k×(n+r)) where r is the radix. |
I'm not sure what you're talking about.
Radix sort, as I know it, sorts all the numbers on their first-place digit, then on the 2nd place, and so on. So each pass iterates over all n numbers to sort, and it does one pass per digit - hence k*n.
Again, the k value is usually very small, but since it's entangled with the input, it's part of the time complexity. Every bit of information is important when assessing algorithms.
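For reference, here is a sketch of the digit-by-digit process described above (an LSD counting variant, assuming non-negative integers; the `base` parameter is the radix r the other poster mentioned). The outer loop runs k times, one per digit of the largest number, and each pass touches all n values plus the r buckets, giving O(k*(n+r)), or O(k*n) when the radix is treated as a constant.

```python
def radix_sort(nums, base=10):
    """LSD radix sort: one stable bucketing pass per digit position.

    k passes (k = digits in the largest number), each doing
    O(n + base) work, for O(k * (n + base)) overall.
    """
    if not nums:
        return nums
    max_val = max(nums)
    exp = 1
    while max_val // exp > 0:              # k iterations, one per digit
        buckets = [[] for _ in range(base)]
        for x in nums:                     # n iterations per digit pass
            buckets[(x // exp) % base].append(x)
        nums = [x for b in buckets for x in b]  # stable gather
        exp *= base
    return nums
```

Raising `base` shrinks k (fewer passes) but grows the per-pass bucket work, which is exactly the trade-off behind the O(k*(n+r)) formulation.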