How to disable _SECURE_ATL macro

Dear All,
I have upgraded my source code from VS 2003 to VS 2005, and my code is
breaking due to the _SECURE_ATL macro in some methods of ATL. The secure
code path is always executed, even though a non-secure block of code is
defined alongside it. Is there any way of disabling this macro? A mere
#define _SECURE_ATL 0 in stdafx.h or somewhere else in the code is not
working.

Regards,
 
from atldef.h:
#ifndef _SECURE_ATL
#define _SECURE_ATL 1
#endif // _SECURE_ATL

atldef.h is included in (among others) atlbase.h
You can add "_SECURE_ATL=0" to the preprocessor definitions of your project
to define that macro before atldef.h is included. That, however, leads to a
large number of warnings, because you are then using deprecated functions.

You can get rid of those by adding the following to your preprocessor definitions:
_CRT_SECURE_NO_DEPRECATE;_SECURE_SCL=0;_SECURE_ATL=0

To get rid of the warnings for other deprecated functions, add the
following at the top of your StdAfx.h file:

#pragma warning(disable : 4996)
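A sketch of how the defines above can be collected at the top of StdAfx.h, before any CRT/ATL headers are pulled in (assuming the usual wizard-generated StdAfx.h layout; the order matters, because atldef.h only sets _SECURE_ATL when it is not already defined):

```cpp
// StdAfx.h (sketch) -- these lines must come before any CRT/ATL #include,
// otherwise atldef.h has already set _SECURE_ATL to 1.
#pragma once

#define _CRT_SECURE_NO_DEPRECATE     // silence deprecation of the classic CRT functions
#define _SECURE_SCL 0                // disable checked iterators in the Standard C++ Library
#define _SECURE_ATL 0                // disable the secure ATL code paths
#pragma warning(disable : 4996)      // suppress any remaining C4996 warnings

#include <atlbase.h>                 // atldef.h, included from here, now sees _SECURE_ATL=0
```

Setting the macros in the file rather than in the project settings keeps the configuration visible in one place, but either works as long as the defines reach the compiler before atldef.h does.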

Of course, it would be MUCH better to port your code to use the new, secure
functions, because doing the things mentioned above will leave you open to
lots of potential problems.

kind regards,
Bruno.
 
Bruno van Dooren said:
you can get rid of those by adding the following to your preprocessor
defs.
_CRT_SECURE_NO_DEPRECATE;_SECURE_SCL=0;_SECURE_ATL=0

to get rid of the warnings for other deprecated functions, add at the top
of your StdAfx.h file

#pragma warning(disable : 4996)

Of course, it would be MUCH better to port your code to use the new,
secure functions, because doing the things mentioned above will leave you
open to lots of potential problems.
FWIW, I completely, entirely and utterly disagree with that statement.
Almost all of these secure functions come at a price. There are some things
that are indeed reasonable in the secure C library:

- functions that return internal buffers (e.g. tmpnam) or otherwise
require internal buffers (notably qsort)
- functions that write unpredictable amounts of data to a buffer
(scanf with certain format strings)

However, there are places where you're typically perfectly aware
of the buffer size. For instance, deprecating memcpy in favor of
memcpy_s is a quite stupid motion, IMHO.

Things are worse for the Standard C++ Library (VC++ uses the
acronym SCL for it; how do you know they don't mean the Standard
C Library? I don't).
The secure checks can have a significant impact on performance and working set.
Worse, some of the "secure functionality" violates the requirements
of the C++ standard, which could in fact introduce security issues
not present in your original code (granted, this is quite unlikely).
And then there's _SECURE_SCL_THROWS, which is probably
a cure worse than the disease.
Of course, the "Secure SCL" does not catch all security relevant
bugs - in fact I don't think I've ever seen an issue in an app we
shipped where _SECURE_SCL had helped.

Bottom line: the "Secure SCL" comes at a price, and I believe
the tradeoffs don't fit nicely with the philosophy of the C++
language. And never, ever define _SECURE_SCL_THROWS=1
unless you absolutely know what you're doing.

-hg
 
However, there are places where you're typically perfectly aware
of the buffer size. For instance, deprecating memcpy in favor of
memcpy_s is a quite stupid motion, IMHO.

The problem with memcpy is that if you are interacting with code that you
didn't write yourself, you cannot be sure that using it is safe.
In those cases it is better to always use memcpy_s, because you have to
verify all external buffers anyway.

Unless we are talking about some piece of code where performance is
critical, I would advise using the new libraries because they are much
safer, and in 99.9% of cases the added CPU cycles are unimportant.

If performance really is critical, that could be a reason for using the
unsafe functions, but that shouldn't be the default choice. Security and
stability issues are much more commonplace than performance issues for most
(but not all) code.

kind regards,
Bruno.
 
The problem with memcpy is that if you are interacting with code that you
didn't write yourself, you cannot be sure that using it is safe.
In those cases it is better to always use memcpy_s, because you have to
verify all external buffers anyway.

Unless we are talking about some piece of code where performance is
critical, I would advise using the new libraries because they are much
safer, and in 99.9% of cases the added CPU cycles are unimportant.

If performance really is critical, that could be a reason for using the
unsafe functions, but that shouldn't be the default choice. Security and
stability issues are much more commonplace than performance issues for most
(but not all) code.

I'm pretty much with Holger on this one. The "safe" functions don't provide
any additional safety for people who use the "unsafe" functions correctly.
As I see it, the primary value in deprecating the "unsafe" functions in
favor of "safe" ones that have size parameters is to force people to think
a little harder about what they're saying. The thing about the "safe"
functions is that without deprecating the traditional, "unsafe" functions,
no one would use the "safe" functions, and you couldn't force people to
think a little harder. Unfortunately, the effect it has on those who've
thought long and hard for years is to irritate them. :) And make no mistake,
with the "safe" functions, mistakes will still be made; I reported at
least two bugs in which someone "fixed" MSDN examples by supplying size
arguments in the form of sizeof(ptr) when he really needed to provide an
array size.

Note also that the compiler doesn't require you to check the return value
of the "safe" functions. However, MS did get the parameter-validation
default behavior right. Unlike what Windows did with lstrcpy, having it
swallow access violations and return an error code which no one checks
because everyone expects lstrcpy to succeed, the CRT invokes an invalid
parameter handler which raises an access violation by default. The question
is, do these functions treat all errors they detect in this way? Offhand, I
don't know, but I hope they do. I also hope those who "want to keep my
program from crashing" never discover _set_invalid_parameter_handler.
 
I'm pretty much with Holger on this one. The "safe" functions don't provide
any additional safety for people who use the "unsafe" functions correctly.
As I see it, the primary value in deprecating the "unsafe" functions in
favor of "safe" ones that have size parameters is to force people to think
a little harder about what they're saying. The thing about the "safe"
functions is that without deprecating the traditional, "unsafe" functions,
no one would use the "safe" functions, and you couldn't force people to
think a little harder. Unfortunately, the effect it has on those who've
thought long and hard for years is to irritate them. :) And make no mistake,
with the "safe" functions, mistakes will still be made; I reported at
least two bugs in which someone "fixed" MSDN examples by supplying size
arguments in the form of sizeof(ptr) when he really needed to provide an
array size.

Note also that the compiler doesn't require you to check the return value
of the "safe" functions. However, MS did get the parameter-validation
default behavior right. Unlike what Windows did with lstrcpy, having it
swallow access violations and return an error code which no one checks
because everyone expects lstrcpy to succeed, the CRT invokes an invalid
parameter handler which raises an access violation by default. The question
is, do these functions treat all errors they detect in this way? Offhand, I
don't know, but I hope they do. I also hope those who "want to keep my
program from crashing" never discover _set_invalid_parameter_handler.
 
I am sorry if I voiced my opinion too strongly. I understand your position.
I also would not update a known-good code base just for fun, but in a recent
large project where I was lead programmer, I was far too often a victim of
people not doing correct buffer checks.

The team I was working with was not experienced and 'forgot' range checking
a lot of the time. (That, and the fact that pointers have to point somewhere.)
If I could have forced everyone then to use the safe functions, it would
have saved the whole team lots of debugging time during the integration
phase.

If nothing else, it would have forced them to wonder why they had to supply
a size parameter, just like you said.

kind regards,
Bruno.
 
Bruno van Dooren said:
The problem with memcpy is that if you are interacting with code that you
didn't write yourself, you cannot be sure that using it is safe.
In those cases it is better to always use memcpy_s, because you have to
verify all external buffers anyway.
I don't quite understand what you're trying to say. Memcpy's contract
clearly defines the size of the buffer accessed.
Unless we are talking about some piece of code where performance is
critical, I would advise using the new libraries because they are much
safer, and in 99.9% of cases the added CPU cycles are unimportant.
I guess we are not talking about the same thing then. My main complaint
is about Secure SCL which most definitely is not "much safer".

Re memcpy_s et al.: what makes you believe that these functions are
much safer? Carrying the buffer size around is tedious, and I wouldn't be
surprised if folks just passed the copy size to memcpy_s as the
destination-size parameter as well. E.g.
memcpy( foo, bar, size );
becomes
memcpy_s( foo, size, bar, size );

There is obviously no value in this transformation.
Additionally, there are already superior solutions to this problem
by using C++ wrappers. E.g. std::string or std::valarray or any
of the standard containers.
If performance really is critical, that could be a reason for using
the unsafe functions, but that shouldn't be the default choice. Security
and stability issues are much more commonplace than performance issues for
most (but not all) code.
I'm afraid I disagree. You don't just throw a switch at the compiler and
your program is magically secure. There is a lot of work involved that
will render your code slower, more complex and less portable. Deprecating
standard functions by default is an extremely poor choice IMHO.

If you consider buffer management too hard, don't want to use a
managing wrapper and don't care about performance, working set etc.,
you'd probably be better off with another programming language.
Java & C# come to mind.

Obviously, this is a matter of philosophy and maybe personal taste.
However, I definitely do not concur with your initial statement
Of course, it would be MUCH better to port your code to use the new, secure
functions, because doing the things mentioned above will leave you open to
lots of potential problems.

OTOH, that doesn't necessarily mean you should never use the secure
libraries. But avoiding them is what I've done so far and probably will
do in the future.

Just my two cents
-hg
 
I am sorry if I voiced my opinion too strongly. I understand your position.
I also would not update a known-good code base just for fun, but in a recent
large project where I was lead programmer, I was far too often a victim of
people not doing correct buffer checks.

The team I was working with was not experienced and 'forgot' range checking
a lot of the time. (That, and the fact that pointers have to point somewhere.)
If I could have forced everyone then to use the safe functions, it would
have saved the whole team lots of debugging time during the integration
phase.

If nothing else, it would have forced them to wonder why they had to supply
a size parameter, just like you said.

I'm glad you posted this, because I think it's exactly the scenario the
"safe" functions can help. I just wish there had been a good alternative to
deprecating a bunch of standard functions by default.
 
Hi Guys,
Things are working in the release builds but the problem is still persisting
with the debug ones, any ideas about that.

Regards
 
Make sure that _ATL_DEBUG_INTERFACES is not defined.

If you open the ATL headers (atlbase.h, for example) with VS2005, the IDE
will gray out the parts of the headers that are disabled by conditional
compilation statements.

This way you can easily check which macros are defined and which are not.

Note that if you change the preprocessor defs, IntelliSense needs to update
before you see the results; this may take a few seconds.

kind regards,
Bruno.
 
Bruno van Dooren said:
Of course, it would be MUCH better to port your code to use the new,
secure functions, because doing the things mentioned above will leave you
open to lots of potential problems.

kind regards,
Bruno.

... and, in addition to the other points made, it can't really be done if
you want portable (cross-platform) code.
 
I had a similar problem while building debug and release versions of my
application that uses ATL/WTL.
I fixed the issue by changing the release build configuration:
Project Property Page | Configuration Properties | General | Minimize CRT
Use in ATL : No

The following code fragment explains the problem. Excerpt from <atlapp.h>:

#if _SECURE_ATL && !defined(_ATL_MIN_CRT) && !defined(_WIN32_WCE)
// MSVC++ v8.0: use the secure string handlers
#else
// Use the deprecated, unsecured versions of the string handlers.
// Compiler Warning (level 3) C4996 will be issued.
// See also: MSDN, Safe Libraries: Standard C++ Library,
//   http://msdn.microsoft.com/en-us/library/aa985872.aspx
// and MSDN, Security Enhancements in the CRT,
//   http://msdn.microsoft.com/en-us/library/8ef0s5kh.aspx
// Workaround: #pragma warning(disable : 4996)
#endif
 