To be more precise, a char stores an integer. That integer corresponds to an ASCII character, and various aspects of C++ will treat it as a character where appropriate.
The code you've posted will (see the sketch after this list):
- declare a variable c, of type char
- read a single character typed by the user from standard input and store it in c
- display the value as a character
- display the value as an integer
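Since the original code isn't reproduced here, this is just a minimal sketch of a program that behaves as described:

```cpp
#include <iostream>

int main() {
    char c;                // declare a variable c, of type char
    std::cin >> c;         // read a single character typed by the user

    std::cout << c << '\n';                    // display the value as a character
    std::cout << static_cast<int>(c) << '\n';  // display the value as an integer
}
```

The cast is what tells the stream to format the value as a number rather than as a character.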
char is one byte. It is a signed integer from -128 to 127 in value.
unsigned char is 0 to 255 in value.
It is used both as a "raw byte" data type and for storing ASCII characters (strings, text), though not Unicode text. In some older code it has also been used as a boolean, before the type bool was added to the language.
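If you want to see the ranges on your own platform, a small check with std::numeric_limits will print them:

```cpp
#include <iostream>
#include <limits>

int main() {
    // The range of plain char depends on whether it is signed on this
    // platform; the other two are fixed by their stated signedness.
    std::cout << "char:          "
              << int(std::numeric_limits<char>::min()) << " to "
              << int(std::numeric_limits<char>::max()) << '\n'
              << "signed char:   "
              << int(std::numeric_limits<signed char>::min()) << " to "
              << int(std::numeric_limits<signed char>::max()) << '\n'
              << "unsigned char: "
              << int(std::numeric_limits<unsigned char>::min()) << " to "
              << int(std::numeric_limits<unsigned char>::max()) << '\n';
}
```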
The operating system knows how to print specific pixels on the screen for specific integer values. It knows that when I type A, it should print those pixels, even though the 'value' is a number (65, in ASCII).
The letters are arranged in a sensible order in the ASCII table, so that 'B' > 'A', '2' > '1', etc., to allow alphabetical sorting of strings.
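That ordering is easy to observe, since comparing chars just compares their underlying codes:

```cpp
#include <iostream>

int main() {
    std::cout << std::boolalpha;
    std::cout << ('B' > 'A') << '\n';                  // true: 66 > 65
    std::cout << ('2' > '1') << '\n';                  // true: 50 > 49
    std::cout << int('A') << ' ' << int('a') << '\n';  // 65 97 in ASCII
}
```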
> [char] is a signed integer from -128 to 127 in value.
Not necessarily. char is a distinct type from both unsigned char and signed char, and the signedness of plain (sign-unqualified) char is implementation-defined.
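That distinctness can be checked at compile time; a short sketch:

```cpp
#include <type_traits>

// char is a distinct type, even though its representation matches
// either signed char or unsigned char on any given platform.
static_assert(!std::is_same<char, signed char>::value,   "char != signed char");
static_assert(!std::is_same<char, unsigned char>::value, "char != unsigned char");

int main() {}
```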
You should prefer unsigned char where a raw byte array is required: bit math or arithmetic on signed values is more difficult (the representation of signed values is implementation-defined), and signed types are more prone to implementation-defined or undefined behavior, which introduces portability problems (e.g., signed overflow is undefined behavior, but unsigned overflow isn't).
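As a sketch of the difference: arithmetic stored into an unsigned char wraps modulo 256, which the standard guarantees, so byte-level math is portable:

```cpp
#include <iostream>

int main() {
    // Wraparound into an unsigned type is well-defined: 255 + 1 == 0.
    unsigned char u = 255;
    ++u;
    std::cout << int(u) << '\n';  // 0

    // Bit math on a raw byte: extract the high and low nibbles.
    unsigned char byte = 0xA1;
    unsigned char hi = (byte >> 4) & 0x0F;  // 0xA == 10
    unsigned char lo = byte & 0x0F;         // 0x1 == 1
    std::cout << int(hi) << ' ' << int(lo) << '\n';  // 10 1
}
```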