SubZeroWins wrote:
> So I guess there is no need for sizeof to report the full memory allocation of the dynamically allocated pointer, since you would have known the number to begin with & you simply do the math * sizeof(type).
Yes, but note that there might be some allocation overhead, so the actual memory usage might be slightly more than number * sizeof(type). This is probably nothing you need to worry about unless you do a lot of very small allocations.
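To make it concrete, here is a minimal sketch (the count of 100 is arbitrary). sizeof applied to the pointer only reports the size of the pointer itself, so the size of the allocation has to be computed from the element count:

#include <cstddef>
#include <iostream>

int main()
{
    std::size_t count = 100;
    int* p = new int[count];

    // sizeof reports the size of the pointer itself (typically 8 on a
    // 64-bit system), not the size of the array it points to.
    std::cout << "sizeof(p): " << sizeof(p) << '\n';

    // The size of the allocation has to be computed manually.
    std::cout << "requested: " << count * sizeof(int) << " bytes\n";

    delete[] p;
}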
SubZeroWins wrote:
> I did not know it can falsify assignment. It actually did not return a NULL, so it must be the case.
Not necessarily. Maybe you have 2 GB of free memory. Note that it's not just about how much RAM is available. It's common to have a small area on disk that can be used if you run out of RAM. This is sometimes called a "swap partition" or "swap file". It's slower, but it means the program doesn't need to fail just because you happen to use slightly more memory than the amount of RAM that you have.
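If you use std::nothrow you should still check the result, something like this (a minimal sketch; note that on a system that overcommits, getting a non-null pointer back doesn't guarantee the memory is actually backed by RAM yet):

#include <iostream>
#include <new> // std::nothrow

int main()
{
    // With std::nothrow, new returns a null pointer on failure
    // instead of throwing std::bad_alloc.
    int* p = new (std::nothrow) int[0x1fffffff];
    if (p == nullptr)
    {
        std::cout << "allocation failed\n";
        return 1;
    }
    std::cout << "allocation succeeded\n";
    delete[] p;
}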
Compare the following two programs. Look at the memory usage on your computer.
#include <iostream>

int main()
{
    int* pointer = new int[0x1fffffff]; // uninitialized (unused memory)
    std::cin.get();
}
#include <iostream>

int main()
{
    int* pointer = new int[0x1fffffff](); // initialized (used memory)
    std::cin.get();
}
For me (on Linux) the memory usage for the first program is negligible while the second program makes the memory usage go up by 2 GB.
SubZeroWins wrote:
> But it does not make sense why, since nothrow is designed to prevent any crashes & this can then lead to a crash.
That's not really how the standard describes that things should work, but it's a trade-off. If you don't allow "overcommitting" then you might instead waste memory.
I wouldn't say the purpose of nothrow is to prevent crashes. I think the purpose is to avoid having an exception thrown. A program can catch exceptions, so if an exception gets thrown it doesn't necessarily mean the program has to "crash". Even if you don't catch the exception it's more controlled than a segfault: std::terminate gets called, and depending on the implementation the stack might be unwound first so that destructors of local variables still run.
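For example, with plain new you can catch std::bad_alloc and handle the failure yourself. A minimal sketch (the request is made deliberately impossible so that the allocation fails):

#include <cstddef>
#include <iostream>
#include <limits>
#include <new> // std::bad_alloc

int main()
{
    try
    {
        // Request far more memory than could ever be available.
        std::size_t n = std::numeric_limits<std::size_t>::max() / sizeof(int);
        int* p = new int[n]; // plain new throws std::bad_alloc on failure
        delete[] p;
    }
    catch (const std::bad_alloc& e)
    {
        // The program can catch the exception and keep running
        // instead of crashing.
        std::cout << "caught: " << e.what() << '\n';
    }
}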
SubZeroWins wrote:
> Do you guys really assign a pointer to NULL & back to NULL after delete for safety measures, religiously?
If I create a pointer that is not pointing to anything I always initialize it to null.
I don't use new/delete very often. Normally I prefer to just create objects as regular variables, and when I need to dynamically allocate the objects I would normally use std::unique_ptr which would automatically ensure the pointer is set to null appropriately. To store many objects of the same type I would normally use a container such as std::vector.
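As a minimal sketch (Widget is just a made-up type here):

#include <memory>
#include <vector>

struct Widget { int value = 0; };

int main()
{
    // One dynamically allocated object: unique_ptr deletes it
    // automatically, and reset() both deletes the object and sets
    // the internal pointer to null.
    std::unique_ptr<Widget> w = std::make_unique<Widget>();
    w.reset(); // w is now null

    // Many objects of the same type: let a container own them.
    std::vector<Widget> widgets(10);
}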
If I used new and delete manually (which was common before C++11) then I would not set the pointer to null if it was destroyed right after. For example, if I had a class with a pointer as a member variable and used delete on the pointer in the destructor, I would not set it to null afterwards, because the object (and the pointer) would get destroyed and there was no chance of anyone using the pointer after that. However, if I had to use delete in some other member function I would set it to null, because member functions (including the destructor) that might be called later need to know that the pointer no longer points to anything.
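Something like this (a simplified sketch; Buffer is a made-up class, and copy operations are disabled to keep the example short):

#include <cstddef>

class Buffer
{
    int* data = nullptr;
public:
    explicit Buffer(std::size_t n) : data(new int[n]) {}
    Buffer(const Buffer&) = delete;
    Buffer& operator=(const Buffer&) = delete;

    void clear()
    {
        delete[] data;
        data = nullptr; // other member functions (and the destructor)
                        // can still be called afterwards, so they need
                        // to know the pointer is no longer valid
    }

    ~Buffer()
    {
        delete[] data; // no point in setting data to null here because
                       // the object is going away anyway
    }
};

int main()
{
    Buffer b(100);
    b.clear();
} // b's destructor runs; delete[] on a null pointer is a no-op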
SubZeroWins wrote:
> Do you always check for "if (pointer)" before dereferencing it?
No. Not if I "know" that it should not be null. It might be a class invariant, or it might be a precondition of the function. In that case I might sometimes use assert instead, but not all over the place; it depends...
https://en.wikipedia.org/wiki/Precondition
https://en.wikipedia.org/wiki/Invariant_(mathematics)#Invariants_in_computer_science
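For example (a minimal sketch; first_element is a made-up function whose precondition is a non-null pointer):

#include <cassert>
#include <iostream>

// Precondition: arr must not be null. Documented with an assert
// instead of an if-check; asserts are disabled in builds that
// define NDEBUG.
int first_element(const int* arr)
{
    assert(arr != nullptr);
    return arr[0];
}

int main()
{
    int values[] = {1, 2, 3};
    std::cout << first_element(values) << '\n';
}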