Shouldn't implicit conversion to a built-in type be checked at compile time?

Nov 23, 2008 at 2:34 PM
When you do an implicit conversion from long to short, the compiler will typically warn you that the conversion might be unsafe. For example (tested on MSVC 9 SP1):

  long myBuiltinLong = SHRT_MAX + 1L;
  short myBuiltinShort1 = myBuiltinLong; // warning: possible loss of data!

On the other hand, when SafeInt<long> is implicitly converted to short, the compiler doesn't give any warning:

    SafeInt<long> mySafeLong(SHRT_MAX + 1L);
    short myBuiltinShort2 = mySafeLong;  // No warning!

The safety net comes only at runtime. Wouldn't it be preferable to avoid implicit conversion whenever loss of data is possible? In those cases, people could still use the SafeCast function, right?

Clearly, some conversions are guaranteed at compile time to be lossless. For example, SafeInt<unsigned char> to short, SafeInt<short> to long, and SafeInt<short> to double. In those cases, I still find implicit conversion very convenient, of course!

Nov 24, 2008 at 7:52 AM

The typical programmer will just cast away the warning, so the runtime protection is going to be a stronger approach. I think you also have the warning level turned up. I suppose it would be nice to have the best of both worlds, perhaps something you could put inline that would trigger the warning, but it's not something I'd want to see by default. The most serious problem is that GetCastMethod happens on a total of 92 lines, each of which represents a possible warning. Then there's the problem of this:

long a, b;
long result;
long long tmp = (long long)a * (long long)b;
if(SafeCast(tmp, result))

Which is essentially what happens inside any of the 32-bit multiplication checks, and at compile time, it would appear to be a possible truncation that we'd like to warn about, but we're using it to widen our range. Another thing I prefer about the runtime protection is that if you do something like the above, you'll quickly get exceptions and those will point you to the line that's causing the trouble and you can go fix it. Better yet, the stack trace even tells you the values that made you get off the tracks.
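To make the widen-then-check pattern concrete, here's a minimal self-contained sketch (not the actual SafeInt source; the function name is made up) of a 32-bit multiplication check: the 64-bit intermediate can't overflow, and the narrowing back to 32 bits only happens after we've verified the product fits.

```cpp
#include <cstdint>

// Sketch of a checked 32-bit multiply: widen both operands to 64 bits,
// multiply, then verify the product fits back into 32 bits before narrowing.
bool checked_mul32(std::int32_t a, std::int32_t b, std::int32_t& result)
{
    std::int64_t tmp = static_cast<std::int64_t>(a) * b; // widening multiply: cannot overflow
    if (tmp > INT32_MAX || tmp < INT32_MIN)
        return false;                                    // genuine overflow
    result = static_cast<std::int32_t>(tmp);             // this is the narrowing a compiler would flag
    return true;
}
```

The `static_cast` on the last line is exactly the spot a compile-time truncation warning would fire on, even though the surrounding check makes it safe.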

It would be nice if there was some way to do some diagnostics - for example, someone did this:

SafeInt<unsigned long> cch;
// lots of code
// check for cch too big (or that was the plan anyway)
if((int)cch < 0)
  return E_FAIL; // doh! now not reached

The graceful error handling just turned into:
if((int)cch < 0) RaiseException(...);


I take your point, but I don't know how to get both.

Getting rid of the warnings can also have huge benefits in a lot of cases. For example:

char* szIn = something;
unsigned long cchIn = strnlen(szIn, cchMax); // cchMax easily fits in a ulong, so the size_t cast is safe
char* szTmp = strchr(szIn, '@');

if(szTmp != NULL)
{
   unsigned long offset = szTmp - szIn; // warning! truncation from ptrdiff_t to unsigned long
   unsigned long offset2 = SafeInt<ptrdiff_t>(szTmp - szIn); // No warning, and if something goes bad and this is negative, we throw
}


Of course, if you really want to check at runtime, that's what the non-throwing functions are for.
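For illustration, a self-contained stand-in for that non-throwing path (the real library spells this SafeCast(from, to); the helper name here is made up so the snippet compiles on its own):

```cpp
#include <cstddef>
#include <climits>

// Hypothetical stand-in for SafeInt's non-throwing SafeCast(from, to):
// report failure through the return value instead of throwing.
bool safe_cast_to_ulong(std::ptrdiff_t from, unsigned long& to)
{
    if (from < 0) // a negative pointer difference can't become an offset
        return false;
    if (static_cast<unsigned long long>(from) > ULONG_MAX)
        return false; // difference wider than ulong (possible on some ABIs)
    to = static_cast<unsigned long>(from);
    return true;
}
```

The caller then decides how to fail, rather than catching an exception.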

Nov 24, 2008 at 7:54 AM
Oh, and we promote to int and check the cast all over the place, too. GetCastMethod isn't the only time we do this. We'd have a lot of false positives.
Nov 26, 2008 at 9:26 PM

Thank you for the SafeInt update, David!

Thanks also for your examples. I can imagine that someone who's doing if((int)cch < 0) might be surprised to get an exception.  What's even worse, IMHO, is that such a conversion might currently also take place implicitly, for example by passing a SafeInt<long> to a function that accepts a short argument.  So a change of a function signature might unexpectedly yield exceptions! I think that people would be less surprised if the conversion required an explicit function call, for example SafeCast<int>(cch).

So I think it might be useful to try to have exception-throwing implicit conversions trigger a warning (preferably while keeping the warning disabled for the rest of SafeInt.hpp). But I'd find it even more interesting to just disable those implicit conversions that might cause an exception. This could be implemented by replacing your SafeInt conversion operators with a single member function template:

    template < typename U >
    operator U() const
    {
        C_ASSERT_ConversionDoesNotNeedRuntimeCheck<T, U>();
        return m_int;
    }

operator U() converts to any type U, as long as the conversion doesn't need runtime checking; for any other U, the assertion makes the conversion fail to compile.

C_ASSERT_ConversionDoesNotNeedRuntimeCheck is implemented using the type traits from TR1, which is included with the latest versions of both MSVC and GCC.  But I guess you can do the same by using SafeInt's own type traits.  I haven't yet extensively tested the class, but basically this is the idea:


#include <type_traits>  // TR1

  template <typename T, typename U>
  class C_ASSERT_ConversionDoesNotNeedRuntimeCheck
  {
  public:
    enum
    {
      fromSigned = std::tr1::is_signed<T>::value,
      fromUnsigned = std::tr1::is_integral<T>::value && !fromSigned,
      toSigned = std::tr1::is_signed<U>::value,
      fromFloat = std::tr1::is_floating_point<T>::value,
      toFloat = std::tr1::is_floating_point<U>::value,

      value =
        ( sizeof(T) <= sizeof(U) &&
          fromSigned == toSigned &&
          fromFloat == toFloat ) ||
        ( sizeof(T) < sizeof(U) &&
          (toFloat || (fromUnsigned && toSigned)) )
    };
  };

What do you think?  Would it be feasible to only allow implicit conversion when there's no chance of exceptions?
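For what it's worth, the predicate can be tried out on the conversions mentioned in the first post. Here's a self-contained sketch using the modern std:: spelling of the traits (the TR1 version would use std::tr1:: instead; the struct name here is shortened):

```cpp
#include <type_traits>

// Sketch of the proposed compile-time predicate: true only when
// converting T to U can never lose data, so no runtime check is needed.
template <typename T, typename U>
struct ConversionDoesNotNeedRuntimeCheck
{
    static const bool fromSigned   = std::is_signed<T>::value;
    static const bool fromUnsigned = std::is_integral<T>::value && !fromSigned;
    static const bool toSigned     = std::is_signed<U>::value;
    static const bool fromFloat    = std::is_floating_point<T>::value;
    static const bool toFloat      = std::is_floating_point<U>::value;

    static const bool value =
        ( sizeof(T) <= sizeof(U) &&
          fromSigned == toSigned &&
          fromFloat == toFloat ) ||
        ( sizeof(T) < sizeof(U) &&
          (toFloat || (fromUnsigned && toSigned)) );
};
```

On this predicate, unsigned char to short, short to long, and short to double are accepted, while long to short and int to unsigned int are rejected.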

Nov 29, 2008 at 8:07 PM
Checking implicit casts on entry to a function is a major feature. The problem with asking for a cast is that people will mess it up. That's the nice thing about SafeInt - the programmer has to make fewer decisions. For example, here's a bit of Office code:

TK tk = SelfObfuscatingFunctionName(args);

This produces a warning that we're truncating an IRUL to a TK. So just what's an IRUL? And a TK is a lot of fun to try and search, since it's just 2 letters. Here's the SafeInt fix:

TK tk = SafeInt<IRUL>(SelfObfuscatingFunctionName(args));

The IntSafe library attempts to solve integer overflows using C functions. I've seen a huge number of instances where someone passed a pair of ints to a function designed to do a checked multiplication of 2 unsigned ints.
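The failure mode looks roughly like this (checked_umul is a made-up, self-contained stand-in for an intsafe.h-style checked unsigned multiply such as ULongMult, which returns an HRESULT rather than a bool):

```cpp
#include <climits>

// Made-up stand-in for an intsafe.h-style checked unsigned multiply.
bool checked_umul(unsigned int a, unsigned int b, unsigned int& out)
{
    if (b != 0 && a > UINT_MAX / b)
        return false; // unsigned overflow
    out = a * b;
    return true;
}
```

Call it with signed arguments and the implicit int-to-unsigned conversion rewrites the inputs before the check ever runs: checked_umul(-1, 1, product) "succeeds" with product == UINT_MAX, which is garbage if the caller meant -1. The check was correct; it was just applied to the wrong values.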

A big problem with your proposal is that internally, we just let the throw happen all over the place. The casting operators aren't low enough level, and there are too many places we check casts internally. If the cast fails, then some other operator is actually the problem.