Delay execution 1 second


To reiterate what has already been stated by others, with a concrete example:

Assuming you're using std::cout for output, you should call std::cout.flush(); right before the sleep call. See this MS knowledge base article.
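For example (a minimal sketch, assuming the POSIX sleep() from <unistd.h>):

#include <iostream>
#include <unistd.h>  // sleep()

int main() {
  std::cout << "tick";
  std::cout.flush();  // force the buffered output to appear now
  sleep(1);           // then wait one second
}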

sleep(n) waits for n seconds, not n microseconds. Also, as mentioned by Bart, if you're writing to stdout, you should flush the stream after each write - otherwise, you won't see anything until the buffer is flushed.
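To make the units concrete (a sketch, assuming the POSIX functions from <unistd.h>):

#include <unistd.h>  // sleep(), usleep()

int main() {
  sleep(1);        // waits 1 second
  usleep(500000);  // waits 500,000 microseconds, i.e., half a second
}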

So I am trying to program a simple tick-based game. I'm writing in C++ on a Linux machine.

If functioncall() can take a considerable amount of time, then your ticks won't be evenly spaced if you sleep for the same fixed duration each time.

You might be trying to do this:

while True:  # mainloop
   functioncall()
   tick()  # wait for the next tick

Here tick() sleeps for approximately delay - time_it_takes_for(functioncall), i.e., the longer functioncall() takes, the less time tick() sleeps.
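A rough sketch of that relative-sleep idea, assuming POSIX clock_gettime() and nanosleep() (tick_relative is a hypothetical helper; the absolute-time Clock class below is more robust against drift):

#include <time.h>

// Sleep for whatever remains of the tick, given when the frame started.
void tick_relative(long delay_ns, const struct timespec& frame_start) {
  struct timespec now;
  clock_gettime(CLOCK_MONOTONIC, &now);
  long elapsed_ns = (now.tv_sec - frame_start.tv_sec) * 1000000000L
                  + (now.tv_nsec - frame_start.tv_nsec);
  long remaining_ns = delay_ns - elapsed_ns;
  if (remaining_ns > 0) {
    struct timespec ts = { remaining_ns / 1000000000L,
                           remaining_ns % 1000000000L };
    nanosleep(&ts, 0);  // ignore EINTR for brevity
  }
}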

sleep() sleeps an integer number of seconds. You might need a finer time resolution. You could use clock_nanosleep() for that.

Example Clock::tick() implementation

// $ g++ *.cpp -lrt && time ./a.out
#include <iostream>
#include <stdio.h>   // perror()
#include <stdlib.h>  // ldiv(), exit()
#include <time.h>    // clock_nanosleep(), clock_gettime()

namespace {
  class Clock {
    const long delay_nanoseconds;
    bool running;
    struct timespec time;
    const clockid_t clock_id;

  public:
    explicit Clock(unsigned fps) :  // specify frames per second
      delay_nanoseconds(1e9/fps), running(false), time(),
      clock_id(CLOCK_MONOTONIC) {}

    void tick() {
      if (clock_nanosleep(clock_id, TIMER_ABSTIME, nexttick(), 0)) {
        // interrupted by a signal handler or an error
        perror("clock_nanosleep");
        exit(EXIT_FAILURE);
      }
    }
  private:
    struct timespec* nexttick() {
      if (not running) { // initialize `time`
        running = true;
        if (clock_gettime(clock_id, &time)) {
          // process errors
          perror("clock_gettime");
          exit(EXIT_FAILURE);
        }
      }
      // increment `time`
      // time += delay_nanoseconds
      ldiv_t q = ldiv(time.tv_nsec + delay_nanoseconds, 1000000000);
      time.tv_sec  += q.quot;
      time.tv_nsec = q.rem;
      return &time;
    }
  };
}

int main() {
  Clock clock(20);
  char arrows[] = "\\|/-";
  for (int nframe = 0; nframe < 100; ++nframe) { // mainloop
    // process a single frame
    std::cout << arrows[nframe % (sizeof(arrows)-1)] << '\r' << std::flush;
    clock.tick(); // wait for the next tick
  }
}

Note: I've used the std::flush manipulator to update the output immediately.

If you run the program, it should take about 5 seconds (100 frames at 20 frames per second).

On Linux you can also use usleep(); it is declared in <unistd.h>, not <ctime>.

On Windows you can use Sleep() from <windows.h>, which takes the delay in milliseconds.
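A hedged cross-platform sketch (delay_ms is a hypothetical wrapper, not a standard function):

#ifdef _WIN32
#include <windows.h>
static void delay_ms(unsigned ms) { Sleep(ms); }          // Win32: milliseconds
#else
#include <unistd.h>
static void delay_ms(unsigned ms) { usleep(ms * 1000); }  // POSIX: microseconds
#endif

int main() { delay_ms(500); }  // pause for about half a second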
