Portable C++: Unpacking integers from binary buffers

As C++ code is so close to the metal, we often make dodgy assumptions that hurt portability. One of the ‘simplest’ problems that I’ve seen repeatedly is packing and unpacking binary data.

The C++ standard works hard to avoid definitions that would tie us to a particular hardware architecture, yet this is exactly the area where it’s tempting to throw caution to the wind and assume we know exactly what’s going on underneath.

The new college grad (and the old hand who views all of this as theoretical anyway) might write:
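
(What follows is a reconstruction of the usual offender rather than the post’s original listing; the name unpack_int is illustrative.)

    // The naive version: reinterpret the char buffer as an int and dereference it.
    int unpack_int(const char* buffer)
    {
        return *reinterpret_cast<const int*>(buffer);
    }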

There’s a handful of problems here:

  1. We’re assuming the bit width of ‘int’ – it may be anywhere from 16 to 64 bits on common platforms.
  2. We’re assuming it’s safe to read an integer from a buffer that is only char-aligned.
  3. We’re assuming the buffer is packed in the appropriate byte order for our processor.
  4. We’re breaking the strict aliasing rule.

Can we write a new version of the function to take care of these challenges? Well, with a little care:
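
(Again a reconstructed sketch: it assumes the buffer holds a little-endian 32-bit value and uses the illustrative name unpack_int32.)

    #include <cstdint>

    // Build the value byte by byte with shifts and bitwise OR.
    // Assumes the buffer holds a 32-bit value in little-endian order.
    std::int32_t unpack_int32(const unsigned char* buffer)
    {
        return static_cast<std::int32_t>(
              static_cast<std::uint32_t>(static_cast<std::uint8_t>(buffer[0]))
            | static_cast<std::uint32_t>(static_cast<std::uint8_t>(buffer[1])) << 8
            | static_cast<std::uint32_t>(static_cast<std::uint8_t>(buffer[2])) << 16
            | static_cast<std::uint32_t>(static_cast<std::uint8_t>(buffer[3])) << 24);
    }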

This version was tuned to work with GCC 5 and higher. The function is highly portable – it should work on any architecture that provides 8-bit chars and a 32-bit std::int32_t. Indeed, the C++ standard’s definitions for conversion to and from std::uint32_t even handle two’s complement arithmetic versus other signed representations. Using bit-shifts and bitwise OR pins down the exact behavior of constructing the 32-bit integer.

And there was much rejoicing… sort of… There’s many a blog post out there that supports this approach.

Now, let’s say that this particular call is fairly performance critical (perhaps we’re doing some pixel or image manipulation – use your imagination). In my application, I was processing large data files. Moving from the first style to the second fixed the ARM portability issues, but hurt performance.

Most compilers see the above pattern and recognize – “hey, I can just load a 32-bit word and return, no harm / no foul.” Sadly, Visual C++ does not. No combination of optimization flags and type manipulation gets its optimizer to recognize the pattern. Even GCC is fairly sensitive about the situations where it can (hence the std::uint8_t casts throughout). To facilitate portability and performance on all my desired targets, I ended up using std::memcpy into a temporary integer. The ARM compiler happily recognizes that we may be accessing unaligned memory, and all the other toolchains optimize the memcpy away into a simple load. Of course, now we’re back to handling byte order again. Ugh!
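
The shape of that memcpy version, as a sketch (little-endian wire format assumed; the swap for big-endian hosts is omitted):

    #include <cstdint>
    #include <cstring>

    // std::memcpy sidesteps the alignment and aliasing problems; mainstream
    // compilers optimize it down to a single load where that's safe.
    std::int32_t unpack_int32(const unsigned char* buffer)
    {
        std::uint32_t value = 0;
        std::memcpy(&value, buffer, sizeof value);
        // Byte order is back in our hands: swap here if the host's endianness
        // differs from the wire format (omitted in this sketch).
        return static_cast<std::int32_t>(value);
    }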

At the end of the day, maybe the grouch has it right – just worry about the processor you’re running on (hopefully just one). It’s all fun and games until you find yourself porting to that random platform you never thought you’d have to worry about.
