This is purely a conceptual question. I am aware I can use dynamic containers like std::vector, but I want to know if this is possible. I am compiling with C++14.
If I have some array size:
#include <array>

int array_size = 4;
std::array<int, array_size> arr; // error: array_size is not a constant expression
The major problem is that array sizes must be compile-time constants, not variables. If you must have a runtime variable set the size, you should consider std::vector instead of std::array.
Thank you for your response, @jlb. I understand that, but I was wondering if I can take a variable of type int and convert it to const int to initialize my arrays?
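To illustrate why that conversion alone does not help (a sketch; the variable names are mine): const only prevents modification. A const int is usable as an array bound only when its initializer is itself a compile-time constant; a const int initialized from a runtime value is not a constant expression.

#include <array>
#include <iostream>

int main()
{
    const int from_constant = 4;          // initializer is a constant expression
    std::array<int, from_constant> a{};   // OK: const + constant initializer works
    std::cout << a.size() << '\n';

    int n = 0;
    std::cin >> n;
    const int from_runtime = n;           // const, but the value is known only at runtime
    // std::array<int, from_runtime> b;   // error: still not a constant expression
}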
Many compilers do allow this if not put into stricter compile modes. It is frowned upon, though, as it is a language extension that is not to be trusted in serious code.
Variable-length arrays are legal in C, so "a" way to do it is to use a C file in your project for this one thing.
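For illustration, a sketch of what that extension looks like when compiled as C++ (GCC and Clang accept it by default; it is flagged by -Wvla and rejected under -pedantic-errors):

#include <iostream>

int main()
{
    int n = 0;
    std::cin >> n;

    int arr[n];   // variable-length array: standard C, but only a compiler
                  // extension in C++; do not rely on it in portable code
    for (int i = 0; i < n; ++i)
        arr[i] = i * i;
    for (int i = 0; i < n; ++i)
        std::cout << arr[i] << ' ';
    std::cout << '\n';
}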
C++17 added deduction guides to the C++ containers, including std::array. That way you don't need to specify the data type or the size of the container, only an initializer list when instantiating. https://en.cppreference.com/w/cpp/container/array/deduction_guides
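A quick sketch of that (note it requires compiling as C++17, one standard newer than the C++14 mentioned in the question):

#include <array>

int main()
{
    std::array arr{ 1, 2, 3, 4 };   // deduced as std::array<int, 4> (C++17)
}

If you must size the array at runtime and want to avoid std::vector, manual allocation with new[]/delete[] is another option: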
#include <iostream>

int main()
{
    int size {};
    std::cout << "What size do you want? ";
    std::cin >> size;

    int* arr { new int[size] };   // runtime-sized allocation

    for (int i {}; i < size; ++i)
    {
        arr[i] = i;
    }

    for (int i {}; i < size; ++i)
    {
        std::cout << arr[i] << ' ';
    }
    std::cout << '\n';

    delete[] arr;   // manual deallocation; forget this and you leak
}
A std::vector is still a better choice for a generic, array-like container whose size is set (or changed) at runtime. The container manages its memory without you having to worry about it.
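For comparison, a sketch of the same program rewritten with std::vector; note there is no delete anywhere:

#include <iostream>
#include <vector>

int main()
{
    int size {};
    std::cout << "What size do you want? ";
    std::cin >> size;

    std::vector<int> arr(size);   // runtime-sized; memory is managed for you

    for (int i {}; i < size; ++i)
    {
        arr[i] = i;
    }

    for (int value : arr)
    {
        std::cout << value << ' ';
    }
    std::cout << '\n';
}   // vector's destructor releases the memory automatically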
Thank you for your response @GeorgeP! The reason I want to stick to statically sized arrays is because they seem to be more efficient when it comes to memory allocation. Vectors seem to push efficiency down (increasing runtime) slightly... however slight it might be, the little inefficiencies add up if the software is running a large number of iterations.
An aggravation of OOP, and of the STL containers as well, is that it is not always cheap to allocate and deallocate small temporary objects. You can mitigate this with a little memory management of your own: for example, make the temporary a private class member that any of the class's functions can reuse, so it always exists rather than being created and destroyed nonstop, trading some wasted memory for speed. Be sure to profile first, though, to confirm that the creation/destruction cycle is actually a problem. Similarly, if a tight loop creates the object each iteration, factor it out of the loop; if a loop spams a function, pass the object in rather than creating it inside. Just moving the create/destroy pair around in a small number of key places can have big rewards.
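A minimal sketch of that reuse trick (the class and function names here are hypothetical):

#include <string>
#include <vector>

// Hypothetical worker used to illustrate reusing one buffer instead of
// allocating a fresh vector on every call.
class Worker
{
public:
    void process_chunk(const std::string& chunk)
    {
        buffer_.clear();                  // keeps the allocated capacity
        for (char c : chunk)
            buffer_.push_back(c - '0');   // pretend this is real work
        // ... use buffer_ ...
    }

private:
    std::vector<int> buffer_;   // lives as long as the Worker; no per-call allocation
};

int main()
{
    Worker w;
    for (int i = 0; i < 1000; ++i)
        w.process_chunk("12345");   // buffer_ is reused, not reallocated, each time
}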
“Premature optimization is the root of all evil.” - Donald Knuth
One of the biggest problems new or overzealous programmers run into is trying to write code that is as fast as possible at the expense of things like code readability. This is almost always a bad idea. It IS a good idea to pick an algorithm that’s right for the problem you’re trying to solve -- for example, if you’re doing lots of element insertions and deletions, a linked list is probably going to be a better choice than an array. But that doesn’t mean you have to design an algorithm that squeezes every last bit of performance out of the linked list. Efficiency generally comes at the expense of legibility, and honestly, with a few exceptions, legibility is more important, because at some point, you’re going to have to fix a bug, or expand your code, and code that’s tricked-out to be as efficient as possible isn’t going to be conducive to either of those things.
Once your code is written, you can always profile it to find out where the ACTUAL bottlenecks are, rather than prematurely act on where you perceive the bottlenecks may be (emphasis added). With properly implemented code that utilizes concepts such as encapsulation, swapping out one algorithm for a better one when needed is often no problem.
AND
Don’t be too clever by half.
This is sort of along the same lines as [previous]. It’s almost always a better idea to write code that is clean, straightforward, and legible than code that is as efficient as possible.
The efficiency you think you are creating by doing things manually is more than likely a mirage. Current C++ compilers are damned good at creating code that works well for all but the most peculiar edge cases. Lots of really smart programmers have spent years creating and updating the C++ standard specifications so that the language is no sluggard.
“The reason I want to stick to statically sized arrays is because they seem to be more efficient when it comes to memory allocation. Vectors seem to push efficiency down (increasing runtime) slightly... however slight it might be, the little inefficiencies add up if the software is running a large number of iterations.”
It is possible that std::vector is the root cause of a performance problem, but experience shows this to be quite unlikely. Instead, it is more likely that inefficiencies surrounding std::vector are caused by less-than-ideal usage patterns.
Some less-than-ideal usage patterns include the following (a sketch addressing several of them comes after the list):
- allocating many vectors in a loop when one or several could be reused instead;
- incurring unneeded copies of elements;
- forgetting to reserve memory space for elements ahead of time;
- forgetting to mark the element type's move constructor noexcept;
- not making use of allocators.
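Here is the sketch referenced above, touching the first three points plus the noexcept move (Item and the sizes are hypothetical):

#include <string>
#include <utility>
#include <vector>

// Hypothetical element type; the noexcept move constructor lets vector
// move (rather than copy) elements when it reallocates.
struct Item
{
    std::string name;
    Item(std::string n) : name(std::move(n)) {}
    Item(Item&&) noexcept = default;
    Item(const Item&) = default;
};

int main()
{
    std::vector<Item> items;              // one vector, reused across iterations
    for (int pass = 0; pass < 100; ++pass)
    {
        items.clear();                    // keeps capacity from earlier passes
        items.reserve(1000);              // no reallocation during the push_backs below
        for (int i = 0; i < 1000; ++i)
            items.emplace_back("item");   // constructs in place; no extra copy
    }
}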
Don't follow anyone's advice blindly. Just do what's measurably best for your product.
Oh right, and you have to turn on the compiler optimizer (e.g. -O2 for GCC/Clang, /O2 for MSVC). Otherwise the compiler may insert extra code to help catch bugs, which can slow things down by an order of magnitude.