How to write endian agnostic C/C++ code?



The only time you have to care about endianness is when you're transferring endian-sensitive binary data (that is, not text) between systems that might not have the same endianness. The normal solution is to use "network byte order" (AKA big-endian) to transfer data, and then swizzle the bytes if necessary on the other end.

To convert from host to network byte order, use htons(3) and htonl(3). To convert back, use ntohl(3) and ntohs(3). Check out the man page for everything you need to know. For 64-bit data, this question and answer will be helpful.
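
As a minimal sketch (the actual transmit/receive plumbing is omitted), converting a 32-bit value to network byte order on the sender and back again on the receiver might look like this:

#include <arpa/inet.h>   /* htonl(), ntohl() -- POSIX */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint32_t host_value = 0x12345678;

    /* Convert to network byte order (big endian) before sending. */
    uint32_t wire_value = htonl(host_value);

    /* ... transmit wire_value, receive it on the other end ... */

    /* Convert back to host byte order after receiving. */
    uint32_t received = ntohl(wire_value);

    printf("0x%08" PRIx32 "\n", received);   /* prints 0x12345678 on any host */
    return 0;
}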


What should I watch out for when implementing an app that I want to be endian-agnostic?

You first have to recognize when endianness becomes an issue. It mostly becomes an issue when you have to read or write data from somewhere external, be it reading data from a file or doing network communication between computers.

In such cases, endianness matters for integers bigger than a byte, as different platforms represent integers differently in memory. This means that every time you need to read or write external data, you need to do more than just dump your program's memory or read the data directly into your own variables.

e.g. if you have this snippet of code:

unsigned int var = ...;
write(fd, &var, sizeof var);

You're directly writing out the memory content of var, which means the data is presented to wherever it goes exactly as it is represented in your own computer's memory.

If you write this data to a file, the file content will differ depending on whether you run the program on a big endian or a little endian machine. So that code is not endian agnostic, and you'd want to avoid doing things like this.

Instead, focus on the data format. When reading or writing data, always decide the data format first, and then write the code to handle it. This might already have been decided for you if you need to read some existing, well-defined file format or implement an existing network protocol.

Once you know the data format, instead of e.g. dumping out an int variable directly, your code does this:

uint32_t i = ...;
uint8_t buf[4];
buf[0] = (i & 0xff000000) >> 24;
buf[1] = (i & 0x00ff0000) >> 16;
buf[2] = (i & 0x0000ff00) >> 8;
buf[3] = (i & 0x000000ff);
write(fd, buf, sizeof buf);

We've now taken the most significant byte and placed it first in the buffer, with the least significant byte at the end. That integer is represented in big endian format in buf, regardless of the endianness of the host, so this code is endian agnostic.

The consumer of this data must know that the data is represented in a big endian format. And regardless of the host the program runs on, this code would read that data just fine:

uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i  = (uint32_t)buf[0] << 24;
i |= (uint32_t)buf[1] << 16;
i |= (uint32_t)buf[2] << 8;
i |= (uint32_t)buf[3];

Conversely, if the data you need to read is known to be in little endian format, the endianness-agnostic code would just do:

uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i  = (uint32_t)buf[3] << 24;
i |= (uint32_t)buf[2] << 16;
i |= (uint32_t)buf[1] << 8;
i |= (uint32_t)buf[0];

You can make some nice inline functions or macros to wrap and unwrap all the 2-, 4-, and 8-byte integer types you need, and if you use those and care about the data format rather than the endianness of the processor, your code will not depend on the endianness it's running on.
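
As a rough sketch of what such helpers might look like (the names pack_be32/unpack_be32 are just illustrative, not a standard API):

#include <stdint.h>

/* Write a 32-bit value into buf as big endian bytes. */
static inline void pack_be32(uint8_t *buf, uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)v;
}

/* Read a 32-bit big endian value from buf, regardless of host endianness. */
static inline uint32_t unpack_be32(const uint8_t *buf)
{
    return ((uint32_t)buf[0] << 24) |
           ((uint32_t)buf[1] << 16) |
           ((uint32_t)buf[2] << 8)  |
           (uint32_t)buf[3];
}

/* The 16- and 64-bit (and little endian) versions follow the same pattern. */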

This is more code than many other solutions, but I've yet to write a program where this extra work has had any meaningful impact on performance, even when shuffling 1 Gbps+ of data around.

It also avoids the misaligned memory access you can easily get with an approach like:

uint32_t i;
uint8_t buf[4];
read(fd, buf, sizeof buf);
i = ntohl(*(uint32_t *)buf);

which at best can incur a performance hit (insignificant on some platforms, severe on others), and at worst a crash on platforms that can't do unaligned access to integers.
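
If you do want to use ntohl() on big endian data, one common way to avoid the unaligned access is to copy the bytes into a properly aligned variable first. A sketch (read_be32 is just an illustrative name):

#include <arpa/inet.h>  /* ntohl() */
#include <stdint.h>
#include <string.h>     /* memcpy() */

/* Read a big endian 32-bit value from an arbitrarily aligned buffer. */
uint32_t read_be32(const uint8_t *buf)
{
    uint32_t tmp;
    memcpy(&tmp, buf, sizeof tmp);  /* no alignment requirement on buf */
    return ntohl(tmp);              /* convert from big endian to host order */
}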


This might be a good article for you to read: The byte order fallacy

The byte order of the computer doesn't matter much at all except to compiler writers and the like, who fuss over allocation of bytes of memory mapped to register pieces. Chances are you're not a compiler writer, so the computer's byte order shouldn't matter to you one bit.

Notice the phrase "computer's byte order". What does matter is the byte order of a peripheral or encoded data stream, but--and this is the key point--the byte order of the computer doing the processing is irrelevant to the processing of the data itself. If the data stream encodes values with byte order B, then the algorithm to decode the value on computer with byte order C should be about B, not about the relationship between B and C.