How do I know what dimensions, and the resulting memory load, won't cause overflow?
Trial and error is probably the best way. Keep increasing the memory used from a "this works" amount to one the OS can't allocate.
Personally I'd use a C++ container so all the messy memory management is done for me. All I have to deal with then is whether my app successfully gets the memory allocation I want. Properly allocating a 3D array on the heap is very easy to screw up IMO.
The syntax for creating, for instance, a 3D vector can be a bit more to type than a regular 3D array, but the ease of it being run-time dynamic without manual memory management overcomes that issue IMO:
One nitpick: a multi-dimensional vector is probably a whole lot slower and more cumbersome because it's not a simple, contiguous array; for an n x n x n 3D vector of vectors of vectors, you have roughly n^2 separate allocations instead of just one. I've done some stuff with vectors of vectors in the past for image processing, and it can be an order of magnitude slower depending on how it's used (probably because of caching). It would be nice if C++ automatically did the math for the 1D -> nD offsets, but using a single flat vector is still probably your best bet.
If you can put a multidimensional array into a single contiguous 1D array, I'd like to know how to do that. It makes more sense and reduces the lookup overhead. But doesn't that mean the data never belonged in a multi-dimensional array in the first place, if it fits in a single one?
Hi George P. I'm going to try the 3-dimensional vector. I'll compare it with the one lastchance posted, just for giggles. I'll try to find a decent algo to see which one runs quickest.
With multi-dimensional C-arrays, this math is done for you, but not for containers like std::array/vector.
Actually -- there might be something you could do with a clever pointer cast to emulate the same behavior, but I'm hesitant to go down that path.
Personally I'd never create a 3D (or more) vector*, the most I'd do is 2D. Or do simulated xD in 1D as you point out.
I normally don't work with gobs and gobs of data, on the order of multiple GBs, that have to be in memory at any one time. So the performance hits wouldn't be noticeable for the most part.
YMMV.
*Displaying a 1D/2D vector/array as if it were a native C++ type is IMO easy to achieve:
#include <iostream>
#include <vector>
#include <numeric>

template <typename T>
std::ostream& operator<<(std::ostream& os, const std::vector<T>& v);

template <typename T>
std::ostream& operator<<(std::ostream& os, const std::vector<std::vector<T>>& v);

int main()
{
   // creating a sized vector
   std::vector<int> a1DVec(5);

   std::cout << "Displaying a sized 1D vector:\n";
   std::cout << a1DVec.size() << '\n' << a1DVec << "\n\n";

   std::cout << "Creating a 2-dimensional vector, enter row size: ";
   int row_size;
   std::cin >> row_size;

   std::cout << "Enter column size: ";
   int col_size;
   std::cin >> col_size;
   std::cout << "\n";

   std::vector<std::vector<int>> a2DVec(row_size, std::vector<int>(col_size));

   // initialize the vector with some values other than zero
   int start  = 101;
   int offset = 100;

   // step through each row and fill the row vector with some values
   for (auto& itr : a2DVec)
   {
      std::iota(itr.begin(), itr.end(), start);
      start += offset;
   }

   std::cout << "Displaying the filled 2D vector:\n";
   std::cout << a2DVec;
}

template <typename T>
std::ostream& operator<<(std::ostream& os, const std::vector<T>& v)
{
   for (const auto& x : v)
   {
      os << x << ' ';
   }
   return os;
}

template <typename T>
std::ostream& operator<<(std::ostream& os, const std::vector<std::vector<T>>& v)
{
   for (const auto& x : v)
   {
      os << x << '\n';
   }
   return os;
}
Displaying a sized 1D vector:
5
0 0 0 0 0
Creating a 2-dimensional vector, enter row size: 4
Enter column size: 5
Displaying the filled 2D vector:
101 102 103 104 105
201 202 203 204 205
301 302 303 304 305
401 402 403 404 405
Ramping up to 3D requires manual looping -- I haven't found a way to automate it -- and the looping has to be done "out of order" to get what is IMO the proper 3D display.