I am trying to translate the following Python code into C++:
import struct
import binascii

inputstring = ("0000003F" "0000803F" "AD10753F" "00000080")
num_vals = 4

for i in range(num_vals):
    rawhex = inputstring[i*8:(i*8)+8]
    # <f for little-endian float
    val = struct.unpack("<f", binascii.unhexlify(rawhex))[0]
    print val

# Output:
# 0.5
# 1.0
# 0.957285702229
# -0.0
This reads the hex-encoded string 32 bits at a time, converts each 8-digit chunk to a byte array with unhexlify, and interprets those bytes as a little-endian float value. The following C++ almost works, but the code is rather gross (and the last value, 00000080, is parsed incorrectly):
#include <sstream>
#include <iostream>
#include <string>
#include <stdint.h>  // for int32_t

int main()
{
    // The hex-encoded string, and number of values are loaded from a file.
    // The num_vals might be wrong, so some basic error checking is needed.
    std::string inputstring = "0000003F" "0000803F" "AD10753F" "00000080";
    int num_vals = 4;

    std::istringstream ss(inputstring);

    for(int i = 0; i < num_vals; ++i)
    {
        char rawhex[8];

        // The ifdef is wrong. It is not the way to detect endianness (it's
        // always defined)
#ifdef BIG_ENDIAN
        rawhex[6] = ss.get();
        rawhex[7] = ss.get();
        rawhex[4] = ss.get();
        rawhex[5] = ss.get();
        rawhex[2] = ss.get();
        rawhex[3] = ss.get();
        rawhex[0] = ss.get();
        rawhex[1] = ss.get();
#else
        rawhex[0] = ss.get();
        rawhex[1] = ss.get();
        rawhex[2] = ss.get();
        rawhex[3] = ss.get();
        rawhex[4] = ss.get();
        rawhex[5] = ss.get();
        rawhex[6] = ss.get();
        rawhex[7] = ss.get();
#endif
        if(ss.good())
        {
            std::stringstream convert;
            // write exactly 8 chars; rawhex is not NUL-terminated,
            // so inserting it with << would read past the end
            convert.write(rawhex, 8);
            int32_t val;
            convert >> std::hex >> val;
            std::cerr << (*(float*)(&val)) << "\n";
        }
        else
        {
            std::ostringstream os;
            os << "Not enough values in LUT data. Found " << i;
            os << ". Expected " << num_vals;
            std::cerr << os.str() << std::endl;
            throw std::exception();
        }
    }
}
(Compiled on OS X 10.7 / gcc-4.2.1, with a plain g++ blah.cpp command.) In particular, I would like to get rid of the BIG_ENDIAN macro check, as I'm sure there is a better way to handle this, as discussed in this post. A few other random details: I can't use Boost (too heavyweight for this project). The strings usually contain between 1536 (8³×3) and 98304 (32³×3) float values, up to at most 786432 (64³×3).
(Update 2: added another value, 00000080 == -0.0)
…that's the reason for the tolower. As for endianness, yes, although it gets slightly trickier (for a single byte, the digits are not swapped). - George Skoptsov