> I was told (or rather I read from tons of tutorials and other forum board topics) to always use TCHARs because that allows one to compile in multiple different IDEs and compilers.
The compiler/IDE has absolutely nothing to do with it. It's a question of WinAPI, and WinAPI has 3 forms (TCHAR, char, wchar_t). Always. You can look at the MSDN page for any applicable function and it will confirm that:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms645505%28v=vs.85%29.aspx
that MSDN page wrote:
> Unicode and ANSI names
> MessageBoxW (Unicode) and MessageBoxA (ANSI)
TCHARs are dumb. They're basically a macro that becomes either a normal char or a wchar_t, depending on whether or not the UNICODE macro is defined.
Really... all Windows.h does is [something like] this:
#ifdef UNICODE
#define TCHAR wchar_t
#define MessageBox MessageBoxW
#else
#define TCHAR char
#define MessageBox MessageBoxA
#endif
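In practice, writing to that macro layer means every string literal also has to go through the TEXT() / _T() macro so the same source compiles in both modes. A minimal sketch (the message and caption text are just placeholders):

#include <windows.h>
#include <tchar.h>   // _T(), _tcslen(), and friends

int main()
{
    // TEXT("...") expands to L"..." when UNICODE is defined,
    // and to plain "..." otherwise -- same source, two builds.
    const TCHAR* caption = TEXT("Demo");
    MessageBox(NULL, TEXT("Hello from a TCHAR build"), caption, MB_OK);
    return 0;
}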
There's no real magic. 'MessageBox' and 'CreateFile' are not actually functions... they just get #define'd to either the W or A version depending on whether or not TCHARs are wide.
All you're doing by not using TCHARs is picking which one you want and using it directly, rather than going through an obfuscated macro layer.
The thing that makes TCHARs particularly stupid is that they're variable size: sizeof(TCHAR) depends on the build. Trying to use TCHARs with data that is a fixed size will result in you having to write two different blocks of code: one for when TCHAR is char and one for when TCHAR is wchar_t (see the sketch below).
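For example, a sketch assuming you've read fixed-size UTF-8 bytes out of a file into a std::string (fromUtf8 is a hypothetical helper name, not anything from the API):

#include <windows.h>
#include <string>

// Turn fixed-size (UTF-8) file data into a TCHAR string. Note that the
// wide and narrow builds need two completely different bodies.
std::basic_string<TCHAR> fromUtf8(const std::string& in)
{
#ifdef UNICODE
    // Wide build: must convert the bytes with MultiByteToWideChar.
    int len = MultiByteToWideChar(CP_UTF8, 0, in.c_str(), -1, NULL, 0);
    if (len == 0) return std::wstring();  // conversion failed
    std::wstring out(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, in.c_str(), -1, &out[0], len);
    out.resize(len - 1);  // drop the terminator the API wrote
    return out;
#else
    // Narrow build: the bytes pass through untouched.
    return in;
#endif
}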
The only advantage TCHARs give you is they allow you to flip a switch to enable/disable Unicode support in your program (provided you write your code to be properly TCHAR aware -- most people don't). But that's dumb because if you're going through all the trouble to make your program Unicode friendly there's no real reason to ever disable it.
> Although if I had to pick one, I suppose it'd always be to support Unicode. I'm not sure how relevant ASCII is anymore, but if Microsoft decided to enable UNICODE as a default, it must be the way of the future.
That's a good stance to take.
Windows has operated with Unicode (UTF-16) "under the hood" since at least Win2k. So not only is it the future, but it's pretty much the standard in the present... and has been for the past 13+ years as well.
There are some reasons why you might not want to do it sometimes... but if you want to do it, that's great.
> So yeah, is ASCII relevant?
If you have a narrow ASCII string (like read from a file or something) that you want to pass to WinAPI... you don't have to be Unicode friendly for that specific function call.
What I mean is... there's no point in manually widening an input string just to call the W version of a WinAPI function. It's easier to just call the A version with the ASCII string and let Windows do the widening for you.
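Something like this (a sketch; `msg` stands in for whatever narrow string you already have):

#include <windows.h>

void show(const char* msg)  // hypothetical: msg was read from a file
{
    // The easy way: hand the narrow string to the A version and let
    // Windows widen it internally.
    MessageBoxA(NULL, msg, "Note", MB_OK);

    // The pointless way: widen it yourself just to call the W version.
    wchar_t wide[256];
    MultiByteToWideChar(CP_ACP, 0, msg, -1, wide, 256);
    MessageBoxW(NULL, wide, L"Note", MB_OK);
}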
> The reason I hesitate to follow your advice is that I'm 90% sure ASCII is used for intercommunication between devices because it allows for faster processing.
Anything you pass to WinAPI gets widened if it isn't widened already. In fact it's probably slower to use the A functions than the W functions, because Windows has to look up the user's locale settings in order to widen the string (this is speculation, as I haven't tested it, but I would be very surprised if I was wrong).
Windows uses UTF-16 for everything. Even in programs that don't. So anything you pass to WinAPI is going to get widened.
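Conceptually (this is not the actual Windows source, just a sketch of the idea), every A function behaves like a thin wrapper around its W twin:

#include <windows.h>

// Roughly what an A entry point has to do: widen its string arguments
// using the user's ANSI codepage (CP_ACP), then call the W version.
int ConceptualMessageBoxA(HWND hwnd, const char* text,
                          const char* caption, UINT type)
{
    wchar_t wtext[1024], wcaption[256];
    MultiByteToWideChar(CP_ACP, 0, text, -1, wtext, 1024);
    MultiByteToWideChar(CP_ACP, 0, caption, -1, wcaption, 256);
    return MessageBoxW(hwnd, wtext, wcaption, type);
}

That codepage lookup and conversion is the "extra work" the A path does on every call.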
That said... if you're doing a lot of text processing in your program without passing to/from WinAPI... then yeah, wide strings might be slower. But you would have to be doing a whole lot of text processing for it to make any significant speed difference.
EDIT: ninja'd by modoran!