Add the condition directly and remove the if; it should be slightly faster here:
inv_count += arr[i] > arr[j];
The CPU is better at adding unnecessary zeros than at jumping over statements.
If you want to get fancy: if the next value is >= the previous value, the previous value's inversion results carry over. That is, if you had
56241 instead, you get 0111 for the 5. Moving to the 6, you get 111 and can skip the whole inner loop. I think this is always true; anyone see a flaw?
I don't see any other simple shortcuts, though.
You can extend the above to all previous values, but the logic gets complicated, and the extra checking may cost more than it gains in the end.
Since you are dealing with a vector, there is no need to pass n into the function to use as the for-loop condition. Instantiate n inside the function from the passed vector's size.
Passing the number of elements to the function is not wrong, though; using a C++ container such as std::vector just makes it redundant.
Using jonnin's suggestion of replacing the if condition, the function might look like this:
int getInvCount(const std::vector<int>& arr) // const reference: no copy of the container
{
   size_t n { arr.size() };
   int inv_count { };

   for (size_t i { }; i + 1 < n; ++i) // i + 1 < n avoids size_t underflow on an empty vector
   {
      for (size_t j { i + 1 }; j < n; ++j)
      {
         inv_count += arr[i] > arr[j];
      }
   }
   return inv_count;
}
It is evident the function was originally written to work with a C-style array and then rewritten to work with std::vector.
If you were working on a subset of your vector, then passing a size would make sense.
In general, little changes such as the ones jonnin suggested are not enough to solve such problems.
Instead, an entirely different and far more clever algorithm is in order.
Using mergesort will make it much, much faster: perhaps thousands of times faster for large n.
See https://www.geeksforgeeks.org/counting-inversions/
For 'reasonable' C++ code (i.e. no unnecessary container copies etc.), unless you are dealing with millions upon millions of iterations, micro-optimisation is unlikely to provide the looked-for magnitude of time efficiency. Using a faster/more efficient algorithm is far more likely to provide it.
Adding a condition is a tiny gain, agreed.
I would think that skipping the loops would be more than a minor tweak. But the mergesort looks like the way to go. I don't google the answers to these kinds of problems unless it's my own problem and I need to get on with it; I prefer to see what I can work out myself before looking at the answer.