#include <iostream>

int main()
{
    signed int s { -1 };
    unsigned int u { 1 };

    if (s < u)
        std::cout << "-1 is less than 1\n";
    else
        std::cout << "1 is less than -1\n";
}
At least my compiler is kind enough to give a warning, but the usual expectation is that you would convert to the more general type before doing an operation.
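For what it's worth, C++20 added the std::cmp_* comparison functions in <utility>, which compare the mathematical values regardless of signedness. A minimal sketch of the same comparison done safely:

#include <iostream>
#include <utility>   // std::cmp_less (C++20)

int main()
{
    int s { -1 };
    unsigned int u { 1 };

    // The built-in < converts s to unsigned and gets the answer wrong;
    // std::cmp_less compares the actual values despite the type mismatch.
    if (std::cmp_less(s, u))
        std::cout << "-1 is less than 1\n";
    else
        std::cout << "1 is less than -1\n";
}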
The other cogent point made in the article is that you shouldn't subtract unsigned ints (well, not 50% of the time, anyway).
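A quick sketch of why that bites: unsigned arithmetic wraps around instead of going negative:

#include <iostream>

int main()
{
    unsigned int a { 1 };
    unsigned int b { 2 };

    // a - b cannot be -1: unsigned arithmetic wraps modulo 2^N,
    // so this prints a huge positive value (4294967295 for a typical 32-bit unsigned int).
    std::cout << a - b << '\n';
}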
Languages like Fortran and Python don't have unsigned integers at all. Unsigned integers aren't as "safe" as you might think.
So, to stick with the thread, I suggest that you use int and MPI_INT throughout. If your number of nodes exceeds what can safely be stored in an int then your computer setup probably isn't going to be solving that CFD problem anyway.
I've certainly seen this 'brilliant' piece of code:
#include <iostream>

int main() {
    constexpr size_t NoElem {10};
    const int arr[NoElem] {1,2,3,4,5,6,7,8,9,10};

    for (size_t i = NoElem - 1; i >= 0; --i)
        std::cout << arr[i] << ' ';
    std::cout << '\n';
}
which gives a good impression of an infinite loop... since i, being unsigned, is always >= 0!
Instead of:
#include <iostream>

int main() {
    constexpr size_t NoElem {10};
    const int arr[NoElem] {1,2,3,4,5,6,7,8,9,10};

    for (size_t i = NoElem; i > 0; --i)
        std::cout << arr[i - 1] << ' ';
    std::cout << '\n';
}
The first example works, of course, if i is of type int instead of an unsigned type...
The .size() member functions (and std::size()) return size_t, which is unsigned. That is why, since C++20, we have std::ssize(), which returns a signed type (std::ptrdiff_t).
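A minimal sketch of the same reverse loop using std::ssize() so the index is signed and i >= 0 terminates as expected (assumes a C++20 compiler):

#include <iostream>
#include <iterator>   // std::ssize (C++20)

int main()
{
    const int arr[] {1,2,3,4,5,6,7,8,9,10};

    // std::ssize returns a signed value (std::ptrdiff_t),
    // so counting down past zero ends the loop instead of wrapping around.
    for (auto i = std::ssize(arr) - 1; i >= 0; --i)
        std::cout << arr[i] << ' ';
    std::cout << '\n';
}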
Ranges adds a bit too much complexity for my tastes for such a simple task. A real-world app dealing with gobs and gobs of data might make using ranges worthwhile.
> Ranges adds a bit too much complexity for my tastes for such a simple task. A real-world app dealing with gobs and gobs of data might make using ranges worthwhile.
I don't see the following as being complex, mainly because there are no iterators or dereferencing at all. This code is taken from my previous post, as it is the simplest, IMO, of the 3 methods shown there.
#include <ranges>
#include <iostream>

int main()
{
    static constexpr auto il = {3, 1, 4, 1, 5, 9};

    for (int i : il | std::views::reverse)
        std::cout << i << ' ';
    std::cout << '\n';
}
The other really handy thing is the variety of range adapters, such as filters and various ways of obtaining range views.
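For example, a small sketch combining a filter with reverse (the even-number predicate is just for illustration):

#include <iostream>
#include <ranges>

int main()
{
    static constexpr auto il = {3, 1, 4, 1, 5, 9, 2, 6};

    // Keep only the even values, then walk the result backwards.
    // Adapters compose left to right with operator|.
    auto evens_reversed = il
        | std::views::filter([](int i) { return i % 2 == 0; })
        | std::views::reverse;

    for (int i : evens_reversed)
        std::cout << i << ' ';
    std::cout << '\n';
}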
> But why so wordy? Why introduce yet another namespace? ... Why not just: ...
The alternatives, and the rationale for a separate namespace (an extract from proposal N4128):
3.3.6 Algorithm Return Types are Changed to Accommodate Sentinels
... most algorithms get new return types when they are generalized to support sentinels. This is a source-breaking change in many cases. ... Merely accepting the breakage is clearly not acceptable. We can imagine three ways to mitigate the problem:
1. Only change the return type when the types of the iterator and the sentinel differ. This leads to a slightly more complicated interface that may confuse users. It also greatly complicates generic code, which would need metaprogramming logic just to use the result of calling some algorithms. For this reason, this possibility is not explored here.
2. Make the new return type of the algorithms implicitly convertible to the old return type. Consider copy, which currently returns the ending position of the output iterator. When changed to accommodate sentinels, the return type would be changed to something like pair<I, O>; that is, a pair of the input and output iterators. Instead of returning a pair, we could return a kind of pair that is implicitly convertible to its second argument. This avoids breakage in some, but not all, scenarios. This subterfuge is unlikely to go completely unnoticed.
3. Deliver the new standard library in a separate namespace that users must opt into. In that case, no code is broken until the user explicitly ports their code. The user would have to accommodate the changed return types then. An automated upgrade tool similar to clang modernize can greatly help here.
We, the authors, prefer (3). Our expectation is that the addition of concepts will occasion a rewrite of the STL to properly concept-ify it. The experience with C++0x Concepts taught us that baking concepts into the STL in a purely backward-compatible way is hard and leads to an unsatisfactory design with a proliferation of meaningless, purely syntactic concepts. ...
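The changed return types mentioned in point 2 above can be seen with std::ranges::copy, which hands back both the input and output positions rather than just the output iterator. A minimal sketch:

#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    const std::vector<int> src {1, 2, 3, 4, 5};
    std::vector<int> dst(src.size());

    // std::copy returns only the output iterator;
    // std::ranges::copy returns an in/out result holding both iterators.
    auto [in, out] = std::ranges::copy(src, dst.begin());

    // in  == src.end() : where reading stopped
    // out == dst.end() : where writing stopped
    std::cout << (in == src.end()) << ' ' << (out == dst.end()) << '\n';
}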
OK - In other words a camel... I'm not liking ranges - probably because I have trouble getting the **** things to compile properly and do what I expect. Does anyone know a link to a good 'idiots guide to ranges' site - or preferably a book that covers the topic in detail as a beginner (not Josuttis or Grimm)?
It'd be nice if they backported compiler updates to previous IDE versions. I'd like to be able to use newer features, but I also have other stuff that I need my IDE to interface with that doesn't work (yet) with 2022.
I haven't found anything C++20 specific that won't work in VS2019 that works in VS2022.
A while back, getting modules to work in VS2019 required enabling the experimental C++ standard library modules setting; that isn't required with VS2022. Maybe that has changed, I haven't tried it recently.
M'ok, just tested VS2019 with some modules (.cpp & .cppm files). As with VS2022, any module .cpp file needs to be set as a module internal partition, and .cppm files set as module interfaces.
No need to muck around with the experimental C++ standard libraries. Huzzah! :D
VS2019's Intellisense is a bit more "twitchy" when it comes to modules compared to VS2022. 2019 shows more "errors" than 2022, yet both compile the code.
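For anyone wanting to reproduce that setup, here is about the smallest module example I can think of. The module name math_mod is made up, and the .cppm/.cpp files still need the project settings described above:

// math_mod.cppm - module interface unit (hypothetical module name)
export module math_mod;

export int add(int a, int b)
{
    return a + b;
}

// main.cpp - ordinary translation unit consuming the module
#include <iostream>
import math_mod;

int main()
{
    std::cout << add(2, 3) << '\n';
}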