
accurate software time delay - microsecond range

Posted: Thu Dec 03, 2009 3:13 am
by mmk-tsm
Hello,
Has anyone got a good, simple way to do an accurate time delay in software? I need it for a 1-wire interface. I had been using the following in Dev-C++:

void DelayOneuS( void )
{
    asm(" move.l #39,%d0");    // loop count: 17 under Dev-C++, 39 under Eclipse
    asm("DECLOOP1:");
    asm(" subq.l #1,%d0");     // decrement and loop until zero
    asm(" bne DECLOOP1");
}

but when I ported over to Eclipse, the loop constant had to increase from 17 to 39; presumably the compiler generated different code. Is there any way to see a listing of the compiler output?

Or has someone a better, more robust way, using a timer perhaps? I can poll. The 1-wire interface is only used at start-up, and I can put the code in an OS critical section.
TiA,
Mike.

Re: accurate software time delay - microsecond range

Posted: Thu Dec 03, 2009 3:20 am
by mmk-tsm
I probably should have added that I am using a MOD5270.

Re: accurate software time delay - microsecond range

Posted: Thu Dec 03, 2009 4:14 am
by yevgenit
A software delay cannot be accurate, because the RTOS and the application tasks take a non-deterministic amount of CPU time.
Use any 5270 hardware timer. See the related NetBurner application notes.

Re: accurate software time delay - microsecond range

Posted: Thu Dec 03, 2009 4:38 am
by Ridgeglider
See the hardware timer Larry posted:
http://forum.embeddedethernet.com/viewt ... ?f=7&t=397

Re: accurate software time delay - microsecond range

Posted: Thu Dec 03, 2009 7:11 am
by Chris Ruff
I have been one-wiring for years now on NB at the C level:

// **||***********************************************************************
void TDelay(int val)
// **||***********************************************************************
{
    // Crude busy-wait; 'volatile' keeps the compiler from optimizing
    // the empty loop away.
    volatile int tester;
    tester = val;
    while (tester--)
        ;
}

// **||***********************************************************************
void TestTiming(int iTim)
// **||***********************************************************************
{
    USER_ENTER_CRITICAL();    // lock out interrupts so the delays are stable
    WriteButtonBit(0);        // drive the pin low
    TDelay(200);
    WriteButtonBit(1);        // release the pin
    TDelay(20);
    USER_EXIT_CRITICAL();
}


The whole trick is to turn off the interrupts so that you 'own' the processor. If you don't have the interrupts turned off, the durations will change as you get preempted by the scheduler in the OS.


Chris

Re: accurate software time delay - microsecond range

Posted: Fri Dec 04, 2009 3:12 am
by mmk-tsm
Chris,
we also have been one-wiring, as you put it, for years, and we use the very same approach you outlined: a software time delay.

But on moving from Dev-C++ to Eclipse, the timing value had to change by more than a factor of two (17 -> 38).

I thought there might be an easy way to do the timing with a hardware timer, maybe polling a fast free-running timer.
Mike.

Re: accurate software time delay - microsecond range

Posted: Fri Dec 04, 2009 6:40 am
by Ridgeglider
It seems like the DMA timer interrupt would let you specify a duration and forget it, with the assurance that the delay is what it says it is. You would still need to hog the processor with USER_ENTER_CRITICAL() and USER_EXIT_CRITICAL().

However, if you really need only a microsecond and care about the ISR service-time overhead (it sounds like you do), it should be easy to execute a specified number of assembler nops:

asm(" nop");
I think these take three processor cycles, and the clock is 73,728,000 Hz on the 5270. This is a bit simpler, and therefore easier to calculate, than the assembler loop you posted.
Be sure to include the USER_ENTER_CRITICAL() and USER_EXIT_CRITICAL().
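
Something like this sketch, for instance. The 25-nop count is just my three-cycles-per-nop guess applied to the 73.728 MHz clock (roughly 25 nops per microsecond), so calibrate it on a scope before trusting it:

// Roughly 1 us of straight-line nops. The count of 25 is an assumption
// (~3 cycles/nop at 73.728 MHz) - measure and adjust for your build.
#define NOP4() asm volatile("nop\n\tnop\n\tnop\n\tnop")

static inline void DelayOneUs(void)
{
    NOP4(); NOP4(); NOP4();    // 12 nops
    NOP4(); NOP4(); NOP4();    // 24 nops
    asm volatile("nop");       // 25 nops total
}

Straight-line nops avoid the loop-counter overhead that made the original delay loop sensitive to how the compiler built it.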

It makes no sense that Eclipse changed your timing relative to Dev-C++ unless the optimizer was on.
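
By the way, you asked about seeing a listing of the output. Assuming the m68k-elf-gcc cross-compiler the NetBurner tools use, GCC can emit one directly (these are standard GCC/gas flags, but verify against your build setup):

m68k-elf-gcc -O2 -S delay.c                           # writes delay.s (assembly only)
m68k-elf-gcc -O2 -g -c -Wa,-adhln=delay.lst delay.c   # mixed C/assembly listing

Comparing the listings from the two IDE builds should show exactly what changed in your delay loop.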

Re: accurate software time delay - microsecond range

Posted: Fri Dec 04, 2009 6:51 am
by Ridgeglider
Mike: I forgot to add, since you said you were interested in reading a free-running timer: you could read the DMA timer count register easily. These are the DTCNn registers.

See chapter 22 in C:\Nburn\docs\FreescaleManuals\MCF5270_5271RM.pdf
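
For what it's worth, here is an untested sketch of setting one up as a free-running ~1 MHz counter and polling it. The register layout is from chapter 22; the sim.timer[n] field names follow the pattern in NetBurner's sim5270.h, so check them against your header, and pick a DMA timer your application isn't already using (3 here, arbitrarily). Note that 73.728 MHz / 74 is only approximately 1 MHz, about 1.0037 us per tick:

#include <basictypes.h>
#include <sim5270.h>    // 'sim' register struct; header and field names assumed

// DTMRn bit fields, per MCF5270/5271 RM ch. 22:
//   bits 15-8: PS,  prescaler minus one
//   bits 2-1:  CLK, 01 = internal bus clock
//   bit 0:     RST, 1 = timer enabled
#define DTMR_CLK_SYS  0x0002
#define DTMR_ENABLE   0x0001

void StartMicroTimer(void)
{
    sim.timer[3].tmr  = 0;                  // stop and reset the timer
    sim.timer[3].txmr = 0;                  // no extended features
    sim.timer[3].tcn  = 0;                  // clear the 32-bit counter
    sim.timer[3].tmr  = (73 << 8)           // prescale 73.728 MHz by 74
                      | DTMR_CLK_SYS
                      | DTMR_ENABLE;        // FRR left 0 -> counter free-runs
}

DWORD ReadMicroTimer(void)                  // current count, ~1 tick per us
{
    return sim.timer[3].tcn;
}

void DelayUs(DWORD us)                      // busy-wait; unsigned math handles wrap
{
    DWORD start = ReadMicroTimer();
    while ((ReadMicroTimer() - start) < us)
        ;
}

Since you only poll DTCN, there is no ISR latency to worry about; wrap the 1-wire transaction itself in USER_ENTER_CRITICAL()/USER_EXIT_CRITICAL() if the timing between edges matters.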

Re: accurate software time delay - microsecond range

Posted: Sat Dec 05, 2009 5:20 pm
by rnixon
One possible reason your asm delay changed is the fast-buffer / fast-SRAM task switching. Since you are running in a multi-tasking environment, if the tasks switch faster, your code may run differently. And if someone later modifies your code and moves task priorities around, the software delay may change as well, even without switching to a new compiler.

A h/w timer is the only sure way to get an absolute delay. The problem with adding critical sections is that you increase overall system latency, because nothing else can run. If that is OK, then maybe it's a solution, but remember there are network tasks, system timer tasks, etc.: you are disabling all of those for the duration of your critical section.

On the positive side, if the SRAM buffering is the reason, it's nice to know the system runs that much faster!

Re: accurate software time delay - microsecond range

Posted: Tue Dec 15, 2009 2:36 am
by mmk-tsm
Hello,
thanks for all the suggestions. The compiler obviously did compile my software loop differently.

Anyhow, polling a timer is the correct approach, and using a DMA timer sounds good. I only use 1-wire to read an ID chip, and only at power-on, so I can use an OS critical section.

Has anyone got a little code snippet (the Microchip forums are great for that) for setting up one of the timers and then reading it?
I am a lazy bugger :D . Not really; it just saves a lot of time to start from something known to be working.
TiA,
Mike.