An array A consisting of N different integers is given. The array contains integers in the range [1..(N + 1)], which means that exactly one element is missing.
Your goal is to find that missing element.
Write a function:
int solution(vector<int> &A);
that, given an array A, returns the value of the missing element.
For example, given array A such that:
A[0] = 2
A[1] = 3
A[2] = 1
A[3] = 5
the function should return 4, as it is the missing element.
Write an efficient algorithm for the following assumptions:
N is an integer within the range [0..100,000];
the elements of A are all distinct;
each element of array A is an integer within the range [1..(N + 1)].
I wrote this with time complexity O(N):
#include <algorithm>
#include <numeric>
#include <vector>

using std::vector;

int solution(vector<int> &A) {
    auto N = A.size() + 1;
    return N * (N + 1) / 2 -
           std::accumulate(A.begin(), A.end(), 0);
}
1) Any comments, please?
2) std::accumulate exists in <algorithm>. Does including <numeric> too have any effect on the code, please?
unsigned long may be only 32 bits, with a maximum value of 4,294,967,295, which is still less than 5,000,050,000. A 64-bit type is needed: uint64_t, which has a maximum value of 18,446,744,073,709,551,615.
long might be 32 or 64 bits depending on your compiler/platform.
Note that it's not just the type of N that you need to watch out for. Your call to std::accumulate will return an int because the third argument is an int.
Unfortunately there are no fixed-width integer literals. You would have to cast the value to the correct type or create your own user-defined literals if you don't want to mix types. Not that I think it really matters here since both types are large enough and conversion between them won't lead to any surprises.
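Putting those points together, one overflow-safe sketch (my own variable names; it computes in long long, which the standard guarantees is at least 64 bits) could be:

```cpp
#include <numeric>
#include <vector>

using std::vector;

int solution(vector<int> &A) {
    // Widen before multiplying so N * (N + 1) is done in 64-bit arithmetic,
    // and pass 0LL so std::accumulate sums into a long long, not an int.
    long long N = static_cast<long long>(A.size()) + 1;
    long long expected = N * (N + 1) / 2;
    long long actual   = std::accumulate(A.begin(), A.end(), 0LL);
    return static_cast<int>(expected - actual);   // the missing value fits in int
}
```

Since the missing element is at most N + 1 <= 100,001, narrowing the final difference back to int is safe even though the intermediate sums are not.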
unsigned long long and uint64_t are usually the same size (and even the same type).
According to the C++ standard, unsigned long long is only guaranteed to be at least 64 bits. That means it might be larger than 64 bits, but I'm not sure whether that's actually the case anywhere.
uint64_t is guaranteed to be exactly 64 bits. If the hardware makes it impossible to support this type (which would be extremely unusual) then it will simply not be available.
Thank you Peter for your info.
Do you have a "complete" table showing what you expressed in your post? That would be very helpful for me to learn more in this case.
>If the hardware makes it impossible to support this type
Could you please explain this section a little more?
You're assuming int is 32 bits! And even if it is 32 bits, it should be unsigned, since half of 5,000,050,000 is 2,500,025,000, which is larger than INT_MAX (if int is 32 bits), which is 2,147,483,647.
char is never bigger than short. short is never bigger than int. int is never bigger than long. long is never bigger than long long.
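Those ordering rules, together with the standard's minimum widths (char at least 8 bits, short and int at least 16, long at least 32, long long at least 64), can be checked at compile time; a minimal sketch:

```cpp
#include <climits>

// Size ordering guaranteed by the standard.
static_assert(sizeof(char)  <= sizeof(short),     "char <= short");
static_assert(sizeof(short) <= sizeof(int),       "short <= int");
static_assert(sizeof(int)   <= sizeof(long),      "int <= long");
static_assert(sizeof(long)  <= sizeof(long long), "long <= long long");

// Minimum widths: a type's object size in bits can't be below its
// required width, so these are necessary conditions on any conforming
// implementation.
static_assert(CHAR_BIT >= 8,                      "byte >= 8 bits");
static_assert(sizeof(short)     * CHAR_BIT >= 16, "short >= 16 bits");
static_assert(sizeof(int)       * CHAR_BIT >= 16, "int >= 16 bits");
static_assert(sizeof(long)      * CHAR_BIT >= 32, "long >= 32 bits");
static_assert(sizeof(long long) * CHAR_BIT >= 64, "long long >= 64 bits");
```

If any of these failed, the compiler would reject the translation unit, so merely compiling this file verifies the guarantees on your platform.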
>If the hardware makes it impossible to support this type
Could you please explain this section a little more?
The C++ standard gives quite a lot of room for weird hardware that are rare or might not even exist. The C++ standard doesn't even require a byte to be 8 bits, it could be more. If for example a byte was 9 bits there would be no way to have a type that is exactly 64 bits. Seven 9-bit bytes would give you 63 bits. Eight 9-bit bytes would give you 72 bits. That's why the fixed-width integer types are "optional" and not guaranteed to be available.
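Because the exact-width types are optional, <cstdint> only defines UINT64_MAX when uint64_t actually exists, so availability can be tested with the preprocessor. A small sketch (the variable names are mine):

```cpp
#include <climits>
#include <cstdint>

// Number of bits in a byte: 8 on all mainstream platforms, but the
// standard only requires it to be at least 8.
constexpr int bits_per_byte = CHAR_BIT;

// <cstdint> defines UINT64_MAX exactly when the optional exact-width
// type uint64_t is available on this implementation.
#ifdef UINT64_MAX
constexpr bool has_uint64 = true;
#else
constexpr bool has_uint64 = false;
#endif
```

On a hypothetical 9-bit-byte machine, bits_per_byte would be 9 and has_uint64 would be false, since no whole number of 9-bit bytes makes exactly 64 bits.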
#include <iostream>
#include <vector>
using namespace std;

int solution( vector<int> &A )
{
    int result = A.size() + 1;                             // start with N + 1
    for ( int i = 0; i < (int)A.size(); i++ ) result ^= A[i] ^ ( i + 1 );
    return result;
}

int main()
{
    vector<int> V = { 1, 2, 4, 5, 6 };
    cout << solution( V );
}