Note: For an in-depth discussion on using C++ for high-performance AVR programming, see this post.
Good control of time is a valuable tool to have under our belt. Focusing on robotics, it can help us with a variety of things, including but not limited to:
- Control of servos and sensors
- Position estimation based on speed encoders
- LED flashing
- Task scheduling
- Code profiling
So proper timing can make our projects more powerful and reliable. Moreover, having an abstraction layer on top of timer hardware will help us prototype faster and will make our code more portable.
We're going to use an interrupt-based system, trying to reduce both the number of cycles per interrupt and the interrupt frequency in order to improve performance. We must also take into account that with a low interrupt frequency, the duration of a time request becomes an important factor, because those requests may well be done many times per interrupt (in fact, this is the most likely scenario for us).
Our approach will be:
Use a timer that interrupts once a millisecond. This will simplify the maths inside the interrupt and gives one interrupt every 16,000 clock cycles, which seems acceptable while providing great precision (microsecond precision).
On the ATmega2560 that we're using, running at 16MHz, an 8-bit timer isn't really suited for the job, because it would either overflow before reaching the millisecond or lower the precision to 4 microseconds. That is actually affordable for most projects, so it's an option to take into account if you're running low on 16-bit timers. Anyway, we're going with the 16-bit Timer1.
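If you do take the 8-bit route, Timer0 could be set up along these lines. This is an untested sketch of the trade-off just described (double-check the register values against the datasheet): a 1/64 prescaler at 16MHz gives one tick every 4 microseconds, and 250 such ticks make a millisecond.

```cpp
// Sketch: Timer0 in CTC mode, 4-microsecond resolution
TCCR0A = 0x02; // WGM01: Clear Timer on Compare Match (CTC)
TCCR0B = 0x03; // Set prescaler to 1/64
OCR0A = 249;   // 250 ticks of 4 microseconds = 1 millisecond
TIMSK0 = 0x02; // Enable the compare match A interrupt (TIMER0_COMPA_vect)
```

Microseconds would then read as 1000*millis + 4*TCNT0. For this post, though, we stick with Timer1.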
ISR(TIMER1_COMPA_vect) // Compare match handler, from <avr/interrupt.h>
{
	unsigned char statusReg = SREG; // Save the status register
	++gMillis; // Add one millisecond, because we interrupt once per millisecond
	gSeconds += (0==(gMillis%1000))?1:0; // Increase seconds every thousand milliseconds
	SREG = statusReg; // Restore the status register
}
Note: gMillis and gSeconds are global variables in the example for the sake of clarity, but I suggest making them static variables of the Time class, or including them in a namespace in order to keep the code safe and clean.
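For reference, these are the kinds of declarations assumed throughout (names taken from the snippets above; the exact types are my choice):

```cpp
// Time-keeping globals updated by the timer interrupt. They must be
// volatile: otherwise the compiler may cache their values in registers
// inside busy-wait loops and never observe the interrupt's updates.
volatile unsigned long gMillis = 0;  // Milliseconds since startup
volatile unsigned long gSeconds = 0; // Seconds since startup
```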
As you can see, the maths inside the interrupt handler is pretty simple and runs fast on an AVR.
It also makes reading time values immediate: milliseconds and seconds are plain values in global variables (remember these variables should be volatile), and microseconds you can read as:
micros = 1000*millis + (Timer1 >> 1).
That "Timer1 >> 1" part makes sense when you configure the timer like this:
// Set up Timer1 to interrupt once a millisecond
// WGM = 4 (CTC mode), OCR1A = 1999
TCCR1A = 0x00; // WGM11:WGM10 = 0
TCCR1B = 0x08 | 0x02; // WGM=4, Clear Timer on Compare Match | Set prescaler to 1/8
OCR1A = 1999; // 2000 ticks of half a microsecond = 1000 microseconds
// Enable the compare match interrupt
TIMSK1 = 0x02;
This makes the Timer1 counter increase twice per microsecond and wrap every 2000 counts, that is, once per millisecond.
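To make the formula concrete, here is a small conversion helper (a hypothetical name, not part of any library). On the AVR you would call it as microsFrom(gMillis, TCNT1), reading both values with interrupts disabled, since neither a long nor a 16-bit register can be read atomically on an 8-bit machine:

```cpp
// Convert the millisecond counter plus a raw Timer1 reading into
// microseconds. The counter ticks twice per microsecond, hence the shift.
unsigned long microsFrom(unsigned long millis, unsigned timer1Count)
{
    return 1000UL*millis + (timer1Count >> 1);
}
```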
Don't forget to enable interrupts once you're done with system configuration.
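Putting the pieces together, initialisation could look like this (a sketch with a hypothetical helper name; cli() and sei() come from <avr/interrupt.h>):

```cpp
// Hypothetical init helper: configure Timer1 with interrupts disabled,
// then enable them globally once everything is set up.
void initSystemTime()
{
    cli();                // Don't interrupt the configuration
    TCCR1A = 0x00;        // WGM11:WGM10 = 0
    TCCR1B = 0x08 | 0x02; // CTC mode (WGM=4), prescaler 1/8
    OCR1A = 1999;         // 2000 half-microsecond ticks = 1 millisecond
    TIMSK1 = 0x02;        // Enable the compare match A interrupt
    sei();                // Globally enable interrupts
}
```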
You can play around with a lower interrupt frequency at the cost of adding a little more maths to time requests, or sacrifice some timing accuracy in order to use an 8-bit timer if you need to.
Also, if you really need to squeeze out every microsecond, you could try writing the interrupt code and request code in assembly, but I don't think much can be gained there.
Creating controlled delays during execution can prove to be a valuable tool in the development process (and even in production), so let me share with you a simple millisecond delay routine using this system:
void Time::waitMs(unsigned _ms)
{
	// Remember where within the current millisecond we started
	unsigned startMark = TCNT1;
	// Compute the target millisecond
	unsigned targetMS = unsigned(gMillis) + _ms;
	// Brute millisecond count: spin until gMillis reaches the target. At that
	// point we have waited _ms milliseconds minus the fraction of a millisecond
	// that had already elapsed when startMark was taken.
	while(unsigned(gMillis) != targetMS)
	{
		// Keep waiting
	}
	// Fine tune: wait until the counter passes the start mark again,
	// completing that first partial millisecond.
	while(TCNT1 < startMark)
	{
		// Keep waiting
	}
}
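As a usage example (hypothetical, assuming the timer has been configured and interrupts enabled as described above), blinking the built-in LED of an Arduino Mega at 1Hz becomes trivial:

```cpp
DDRB |= (1 << 7);      // PB7 (pin 13 on the Arduino Mega) as output
for(;;)
{
    PORTB ^= (1 << 7); // Toggle the LED
    Time::waitMs(500); // Half a second on, half a second off
}
```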