Why is int typically 32 bit on 64 bit compilers?
When I was starting programming, I was taught that int is typically the same width as the underlying architecture. And I agree that this makes sense: I find it logical for an unspecified-width integer to be as wide as the underlying platform (unless we are talking about 8 or 16 bit machines, where such a small range for int would barely be applicable).
Later on I learned that int is typically 32 bit on most 64 bit platforms, so I wonder what the reason for this is. For storing data I would prefer an explicitly specified width for the data type, which leaves generic usage for int; that doesn't offer any performance advantage, at least on my system, where I measure the same performance for 32 and 64 bit integers. So that leaves the binary memory footprint, which would be slightly reduced, although not by a lot...
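As an aside, here is a minimal sketch of what "explicitly specified width" looks like in practice, using the fixed-width typedefs from <cstdint>; the Record struct and its fields are purely illustrative, not part of the question:

    #include <cstdint>
    #include <cstdio>

    // Fixed-width types make the in-memory layout explicit,
    // independent of what the platform chooses for plain int.
    struct Record {
        std::int32_t id;        // always exactly 32 bits
        std::int64_t timestamp; // always exactly 64 bits
    };

    int main() {
        std::printf("sizeof(int)    = %zu\n", sizeof(int));    // typically 4 on LP64/LLP64 platforms
        std::printf("sizeof(Record) = %zu\n", sizeof(Record)); // typically 16, including alignment padding
        return 0;
    }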
Accepted answer
A bad choice by the implementors?
Seriously, according to the standard, "Plain ints have the natural size suggested by the architecture of the execution environment", which does mean a 64 bit int on a 64 bit machine. One could easily argue that anything else is non-conformant. But in practice, the issues are more complex: switching from a 32 bit int to a 64 bit int would not allow most programs to handle large data sets or whatever (unlike the switch from 16 bits to 32); most programs are probably constrained by other considerations. And it would increase the size of the data sets, and thus reduce locality and slow the program down.
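A rough, purely illustrative sketch of the locality point, assuming a 64-byte cache line (a common but not universal value):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
        constexpr std::size_t cache_line = 64; // assumed cache-line size
        // Twice as many 32-bit values fit in one cache line as 64-bit values,
        // so an array of 64-bit ints touches twice as many lines for the same element count.
        std::printf("32-bit ints per cache line: %zu\n", cache_line / sizeof(std::int32_t)); // 16
        std::printf("64-bit ints per cache line: %zu\n", cache_line / sizeof(std::int64_t)); // 8
        return 0;
    }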
Finally (and probably most importantly), if int were 64 bits, short would have to be either 16 bits or 32 bits, and you'd have no way of specifying the other (except with the typedefs in <stdint.h>, and the intent is that these should only be used in very exceptional circumstances). I suspect that this was the major motivation.
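To make the size ladder concrete, here is a small sketch that prints the widths of the built-in integer types; the commented values are what typical LP64 compilers (e.g. GCC or Clang on x86-64 Linux) report, not something the standard mandates:

    #include <cstdio>

    int main() {
        // Typical LP64 results: short = 2, int = 4, long = 8, long long = 8.
        // If int were 8 bytes instead, short could cover only one of the
        // 16-bit and 32-bit widths, leaving a gap among the built-in types.
        std::printf("sizeof(short)     = %zu\n", sizeof(short));
        std::printf("sizeof(int)       = %zu\n", sizeof(int));
        std::printf("sizeof(long)      = %zu\n", sizeof(long));
        std::printf("sizeof(long long) = %zu\n", sizeof(long long));
        return 0;
    }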