Why is the microsecond timestamp repetitive when using a (private) gettimeofday(), i.e. epoch time?

2022-01-13 00:00:00 time performance timestamp epoch c++

I am printing microseconds continuously using gettimeofday(). As the program output below shows, the time is not updated at microsecond intervals; instead it repeats for a number of samples and then increments, not by microseconds but by milliseconds.

while(1)
{
  gettimeofday(&capture_time, NULL);
  printf(".%ld
", capture_time.tv_usec);
}

Program output:

.414719
.414719
.414719
.414719
.430344
.430344
.430344
.430344

 etc.

I want the output to increment sequentially like,

.414719
.414720
.414721
.414722
.414723

.414723, .414723+x, .414723+2x, .414723+3x, ..., .414723+nx

It seems that the microsecond value is not refreshed when I read it from capture_time.tv_usec.

================================= //Full Program

#include <iostream>
#include <windows.h>
#include <conio.h>
#include <time.h>
#include <stdio.h>

#if defined(_MSC_VER) || defined(_MSC_EXTENSIONS)
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000Ui64
#else
  #define DELTA_EPOCH_IN_MICROSECS  11644473600000000ULL
#endif

struct timezone 
{
  int  tz_minuteswest; /* minutes W of Greenwich */
  int  tz_dsttime;     /* type of dst correction */
};

timeval capture_time;  // global timestamp structure filled by gettimeofday()

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
  FILETIME ft;
  unsigned __int64 tmpres = 0;
  static int tzflag;

  if (NULL != tv)
  {
    GetSystemTimeAsFileTime(&ft);

    tmpres |= ft.dwHighDateTime;
    tmpres <<= 32;
    tmpres |= ft.dwLowDateTime;

    /*converting file time to unix epoch*/
    tmpres -= DELTA_EPOCH_IN_MICROSECS; 
    tmpres /= 10;  /*convert into microseconds*/
    tv->tv_sec = (long)(tmpres / 1000000UL);
    tv->tv_usec = (long)(tmpres % 1000000UL);
  }

  if (NULL != tz)
  {
    if (!tzflag)
    {
      _tzset();
      tzflag++;
    }

    tz->tz_minuteswest = _timezone / 60;
    tz->tz_dsttime = _daylight;
  }

  return 0;
}

int main()
{
   while(1)
  {     
    gettimeofday(&capture_time, NULL);     
    printf(".%ld
", capture_time.tv_usec);// JUST PRINTING MICROSECONDS    
   }    
}

Recommended Answer

The change in time you observe is from 0.414719 s to 0.430344 s. The difference is 15.625 ms. The fact that the number is displayed with microsecond resolution does not mean that it advances in steps of 1 microsecond. In fact, 15.625 ms (1/64 s) is exactly the system time increment on standard hardware. I've taken a closer look here and here. This is called the granularity of the system time.
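
To make this granularity visible, the question's loop can be changed so that it only prints when the returned value actually changes; the spacing between successive lines is then the system time increment. A small sketch (not part of the original answer), reusing capture_time and the gettimeofday() shim from the question:

// print only when gettimeofday() returns a new value; the step between
// successive lines is the granularity of the system time (~15.625 ms here)
struct timeval previous = { 0, 0 };
while (1)
{
  gettimeofday(&capture_time, NULL);
  if (capture_time.tv_sec  != previous.tv_sec ||
      capture_time.tv_usec != previous.tv_usec)
  {
    printf(".%06ld\n", capture_time.tv_usec);
    previous = capture_time;
  }
}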

Windows:

However, there is a way to improve this, a way to reduce the granularity: the multimedia timers. In particular, Obtaining and Setting Timer Resolution describes a way to increase the system's interrupt frequency.

Code:

#define TARGET_PERIOD 1         // 1-millisecond target interrupt period


TIMECAPS tc;
UINT     wTimerRes;

// query the timer hardware capabilities; the supported range is returned
// in the wPeriodMin and wPeriodMax members of the TIMECAPS structure
if (timeGetDevCaps(&tc, sizeof(TIMECAPS)) != TIMERR_NOERROR)
{
  // Error; application can't continue.
}

// find the smallest possible interrupt period:
wTimerRes = min(max(tc.wPeriodMin, TARGET_PERIOD), tc.wPeriodMax);

// and request that period:
timeBeginPeriod(wTimerRes);
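
One detail the snippet above leaves out: the documentation asks that every successful timeBeginPeriod call eventually be matched by a timeEndPeriod call with the same value, and the multimedia timer functions live in the winmm library. A minimal sketch of the complete pattern, assuming wTimerRes was computed as shown above:

#include <windows.h>
#include <mmsystem.h>              // timeGetDevCaps, timeBeginPeriod, timeEndPeriod
#pragma comment(lib, "winmm.lib")  // MSVC: link against the multimedia timer library

timeBeginPeriod(wTimerRes);        // request the finer interrupt period

// ... timing-sensitive code runs here; the system time now advances
//     in steps close to wTimerRes milliseconds ...

timeEndPeriod(wTimerRes);          // restore the previous timer resolution when done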

This will force the system to run at its maximum interrupt frequency. As a consequence, the system time will also be updated more often, and the granularity of the system time increment will be close to 1 millisecond on most systems.

When you need resolution/granularity beyond this, you have to look into QueryPerformanceCounter. But it is to be used with care over longer periods of time. The frequency of this counter can be obtained by a call to QueryPerformanceFrequency. The OS treats this frequency as a constant and will always report the same value. However, the frequency is generated by hardware, and the true frequency differs from the reported value: it has an offset and shows thermal drift. Thus the error should be assumed to be in the range of several to many microseconds per second. More details about this can be found in the second "here" link above.
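
As an illustration (not part of the original answer), a minimal sketch of deriving a microsecond reading from these two calls; the division is split into whole and fractional parts to avoid 64-bit overflow:

#include <windows.h>
#include <stdio.h>

int main()
{
  LARGE_INTEGER frequency, start, stop;

  QueryPerformanceFrequency(&frequency);  // counts per second; reported as a constant by the OS
  QueryPerformanceCounter(&start);

  Sleep(1);                               // the interval to be measured

  QueryPerformanceCounter(&stop);

  // convert the elapsed count into microseconds
  long long elapsed = stop.QuadPart - start.QuadPart;
  long long us = (elapsed / frequency.QuadPart) * 1000000LL
               + (elapsed % frequency.QuadPart) * 1000000LL / frequency.QuadPart;

  printf("elapsed: %lld us\n", us);
  return 0;
}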

Linux:

The situation looks somewhat different on Linux. See this to get an idea. Linux mixes information from the CMOS clock (for the seconds since the epoch) with information from a high-frequency counter (for the microseconds, obtained via the function timekeeping_get_ns) in the function getnstimeofday. This is not trivial and is questionable in terms of accuracy, since the two sources are backed by different hardware. The two sources are not phase-locked, so it is possible to get more or less than one million microseconds per second.
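
On the user-space side this combined time is exposed through gettimeofday() and, with nanosecond fields, through clock_gettime(). A minimal sketch (not from the original answer; older glibc versions may require linking with -lrt):

#include <stdio.h>
#include <time.h>

int main()
{
  struct timespec ts;

  // wall-clock time since the epoch; tv_nsec replaces the tv_usec field of timeval
  clock_gettime(CLOCK_REALTIME, &ts);

  printf("%ld.%06ld\n", (long)ts.tv_sec, ts.tv_nsec / 1000);
  return 0;
}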
