Boost-Build :

From: Andrey Melnikov (melnikov_at_[hidden])
Date: 2005-09-15 10:41:02

Reece Dunn wrote:
> Andrey Melnikov wrote:
>>Reece Dunn wrote:
>>>Hi All,
>>>[1] warnings support (designed for genericity, only supported in msvc
>>>for now).
>>># builtin.jam
>>>feature warnings : default on off all strict : propagated ;
>>I wonder if there's a reason to have unified warning names?
>>E.g. <warning-disable>dtor-not-virtual or
> I am not sure, as there could potentially be hundreds of named warnings,
> some of which would be added/removed from version to version of a
> compiler. This would complicate the option mapping and maintenance of
> the feature. I agree, it would be nice in some circumstances.

Just major widely supported warnings like mentioned by me would be enough.

> I designed the warnings to be as generic as possible, without
> compromising too much on control. I haven't used warning levels either
> as I was unsure how generic this would be.

I agree. Warning levels and warnings themselves are
implementation-specific, aren't they? What does the standard say?

> feature warnings # the options in () are for msvc
> :
> default # use the compilers default warning settings (-W1)

Actually no switch will be added. -W1 is just for illustration here
because 1 is the default warning level in VC.

> on # turn warnings on (-W3)
> off # turn warnings off (-W0)

I agree.

> all # enable all warnings (-W4 -Wp64)
> strict # same as all, but be stricter about warnings
> # (-W4 -Wp64 -WX)
> : propagated ;

In most cases warnings provide useful hints and point to actual minor
bugs in the code that can be easily fixed.

Sometimes warnings should be disabled locally using compiler-specific
pragmas.

But the 64-bit warnings and the deprecated/security warnings from
Microsoft are special cases. A lot of code was written prior to the
appearance of these warnings, so it's often desirable to disable them
globally and unconditionally (as in the case of Boost).

So I'd like to have:

all    # enable all warnings but disable "deprecated" and 64-bit ones
       # (-W4 -wd0000)
strict # same as all, but be stricter about warnings
       # (-W4 -Wp64)

-WX can be a separate feature (<warnings-as-errors>on)

>>>[2] support for specifying the character set.
>>># builtin.jam


> I like this, but have a few minor points:
> * ansi should come before unicode in win32-charset, to use it as the
> default option;

I agree.

> * stdlib-tchar should really be stdlib-charset to be consistent;
> * I would prefer to use the names "narrow wide mbcs" in the
> stdlib-tchar feature.

This will increase the existing confusion. We need to provide clean,
unambiguous names, which are much better than Microsoft's UNICODE, _MBCS
and _UNICODE macros.

Here is the rationale I have:

#define UNICODE

It configures the MS Platform SDK. It is used neither by the Standard
Library, nor by the Microsoft extensions to the Standard Library.

When defined/not defined:

- Charset-independent function aliases are #define'd to the ANSI or the
Unicode (UTF-16) versions of the API functions: the MessageBox macro
maps to MessageBoxA or to MessageBoxW
- LPTSTR (PTSTR, PCTSTR etc) are typedef'ed to PSTR (char*) or to PWSTR
(wchar_t*)

You are still free to use the explicit full names. It's not a problem to
call MessageBoxA and MessageBoxW in the same program, so actually there
are no link compatibility problems. Only inline functions that use
aliases like MessageBox may cause problems if used in non-portable code
(I mean portability between the ANSI and Unicode APIs).

so the feature should be named something like

#define _UNICODE, #define _MBCS or neither

These switch the TCHAR type between char and wchar_t, and switch the
Microsoft-specific _t* CRT function aliases between the narrow, wide and
narrow-multibyte versions.

E.g. the MS-specific _tcslen macro maps to the standard strlen or wcslen
functions.

The MS-specific _tcscpy macro maps to the standard strcpy or wcscpy, or
to the MS-specific _mbscpy.

The Platform SDK and standard functions aren't affected.

The standard functions don't use the ANSI/UNICODE terms. They use
"wide/narrow character" terms instead (wcslen/strlen).

So the feature should be named something like

> Thus, we now have:
> feature win32-charset : ansi unicode : composite link-incompatible ;
> feature stdlib-charset : narrow wide mbcs : composite
> link-incompatible ;
> feature.compose <win32-charset>unicode : <define>UNICODE ;
> feature.compose <stdlib-charset>mbcs : <define>_MBCS ;
> feature.compose <stdlib-charset>wide : <define>_UNICODE ;
>>This defines 6 possible configurations:
>>I think they are all valid.
> Agreed. However, you would usually define compatible character types.
> What happens if you have:
> <win32-charset>unicode <stdlib-charset>narrow
> and:
> TCHAR * ch = _T("To _T, or not to _T...");
> ?

It looks like there's a problem in the design. The result depends on the
#include order. The following example doesn't work. But if you swap the
includes, it does.

#define UNICODE
#include <wtypes.h>
#include <tchar.h>

void my()
{
    TCHAR * ch = _T("To _T, or not to _T...");
}

The following code is order-insensitive:

#define UNICODE
#include <wtypes.h>
#include <tchar.h>

void my()
{
    PTSTR ch = TEXT("To _T, or not to _T...");
}

Also the original code works if you don't include <wtypes.h> at all,
because UNICODE is used only by wtypes :)

The problem is that TCHAR is defined in both headers :( The first
definition depends on UNICODE and the other on _UNICODE.

>>>[3] support for targetting versions of windows:
>>># builtin.jam
>>>feature windows : default 95 98 me nt 2k xp 2k3 vista : composite ;
>>This is purely an MS PSDK feature. So it's better to put it into the
>>separate Platform SDK module if we decide to implement PSDK support
>>using an external module.
> I totally agree. It's just that a PSDK feature doesn't exist yet, so I
> put it in builtin.jam. I have a few questions:
> [1] What if you don't specify a PSDK in user-config.jam, but want to
> specify <windows>xp for a project? Or even:
> tlb myidl : myidl.idl : <windows>2k ;

In user-config you only configure the paths to all installed versions of
the PSDK and the default version to use with each toolset. Which SDK to
use is specified in your Jamfile or on the command line.

> [2] What if you are building with a non-msvc compiler (e.g. Borland or
> Metrowerks CodeWarrior), or even the default PSDK shipped with the
> specified compiler?

The MSPSDK module will support all PSDK flavours, even the ones adapted
for Borland compilers.

>>>[4] support for managed (CLR) and Java bytecode
>>>feature clr :
>>> cppcli # Use the new C++/CLI (VC8-style) style CLR
>>> managed # Use old-style (VC7.x-style) managed code syntax
>>> pure # Use C++/CLI (VC8-style) pure CLR
>>> safe # Use C++/CLI (VC8-style) safe (validated) CLR
>>> : propagated ;
>>Is this an MC++-specific or a CLR-specific feature? It looks like it's an
>>MC++-specific feature. Does it affect code generation, compile-time
>>checks, C++ language extensions or both?
> MC++ is the C++ way to target to .NET (i.e. the CLR).

Not exactly. IMO "MC++" means the extension to the C++ language that allows:
1) to mix native and managed code
2) to use all the features of CIL and CLI in C++

I think it's possible to write a compiler that will produce CIL from

class CMyManaged
{
    int x()
    {
        return 2;
    }
};

without a need for language extensions.

If it's possible (and especially if GCC-CIL goes this way) the feature
won't be Microsoft-specific but will be CLR-specific.

Also IMO "pure" means "no native code" and "safe" means "portable, no
indirect PInvoke calls". I haven't written a line for the CLR in my
life, so my comments can be lame here :)

> I think (but would
> have to check) that VC8 has pure/safe CLR options for its C#/VB/*.NET
> languages.
> VC7.x defines a style of managed C++ where you have things like:
> __gc class managed ...;
> VC8 allows this (as -clr:oldSyntax), but uses the new syntax by default.
> The new (C++/CLI) syntax allows you to write:
> enum class values ...;
> I chose the name "managed" for the "old" syntax because <clr>old and
> <clr>new are confusing and meaningless w.r.t. VC7.x.



Boost-Build list run by bdawes at, david.abrahams at, gregod at, cpdaniel at, john at