Re: accurate software time delay - microsecond range
Posted: Tue Dec 15, 2009 12:14 pm
by lgitlitz
I posted a timer driver in the application notes section:
http://forum.embeddedethernet.com/viewt ... ?f=7&t=397
There is a delay function in this driver, but it uses the OS; it works like a more precise OSTimeDly. This is probably not the delay you will want in your application, since there is a lot of overhead when switching tasks and you will need many delays while communicating. Just strip out all the interrupt code and make a polling delay function, along the lines of the sketch below.
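For example, a busy-wait version could look like this. It is not the posted driver, just a minimal illustration: g_pTimerCount and TICKS_PER_USEC are placeholders you must map onto your own timer setup (e.g. a MOD5270 DMA timer count register and its clock rate).
Code:
/* Minimal busy-wait delay sketch. Assumes a free-running hardware
 * timer you have already configured; g_pTimerCount and TICKS_PER_USEC
 * are placeholders for your own count register and clock rate. */
#include <stdint.h>

extern volatile uint32_t *g_pTimerCount; /* your timer's count register */
#define TICKS_PER_USEC 73u               /* ticks per microsecond: adjust */

void PollingDelayUsec(uint32_t usec)
{
    uint32_t start = *g_pTimerCount;
    uint32_t ticks = usec * TICKS_PER_USEC;

    /* Unsigned subtraction stays correct across counter wraparound,
     * provided the delay is shorter than one full counter period. */
    while ((uint32_t)(*g_pTimerCount - start) < ticks)
        ; /* spin: no OS call, no task switch */
}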
There is also an application note on the NetBurner web page:
http://www.netburner.com/downloads/mod5 ... -timer.pdf
Re: accurate software time delay - microsecond range
Posted: Tue Dec 29, 2009 8:58 pm
by yevgenit
1. Think about using the TOUT timer pin for an exact delay in the microsecond range. In other words, try to arrange your application around a pure hardware delay.
2. Using a timer interrupt is a fairly accurate solution, because interrupt latency in uC/OS is tiny.
To check the stability of the interrupt latency, toggle any microcontroller port pin at the start of the interrupt service routine, then observe the pulse with a scope. Direct control of the related pin registers is recommended; see the sketch at the end of this post.
yevgenit wrote: A software delay cannot be accurate, because the RTOS and the application take a non-deterministic amount of CPU time.
Use any 5270 hardware timer. See the related NetBurner application notes.
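A rough sketch of the pin-toggle check from point 2, assuming NetBurner's INTERRUPT macro; the port data register pointer and the status-register value are placeholders for your own setup:
Code:
/* Latency-check sketch: raise a spare pin first thing in the ISR and
 * drop it on exit, then watch the edges on a scope. g_pPortData is a
 * placeholder for your port's data register; the 0x2500 status-register
 * value (level-5 interrupt mask) is also an assumption. */
#include <stdint.h>
#include <cfinter.h>    /* NetBurner INTERRUPT() macro */

extern volatile uint8_t *g_pPortData;   /* spare pin's data register */
#define PIN_MASK 0x01u

INTERRUPT(TimerIsr, 0x2500)
{
    *g_pPortData |= PIN_MASK;             /* pin high as early as possible */
    /* ... acknowledge the timer and do the real work ... */
    *g_pPortData &= (uint8_t)~PIN_MASK;   /* pin low on the way out */
}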
Re: accurate software time delay - microsecond range
Posted: Tue Apr 10, 2012 9:32 am
by ScottM
I'm resurrecting this thread because I'm a newbie to NetBurner and am considering it for a project that requires both TCP and one-wire communication. I'd rather not involve additional hardware if I don't have to, but I'm not sure that what I see here is going to work for me.
The overall idea is to talk to a bunch of one-wire devices, gather up their data, and ship it over TCP. However, I may also want to read arriving TCP messages during all this. I'm worried that having to lock out interrupts long enough to send a message to, or get a message from, a one-wire device is going to mess up the OS's handling of the TCP socket; reading one-wire data requires locking things down for maybe 75 microseconds at a time, many times in rapid succession. Is that going to mess up the OS? Can it still handle buffering arriving TCP data in the background?
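For concreteness, I imagine each bit read looking roughly like this; the OneWire* helpers and DelayUsec are placeholders for bit-banging code I would write myself, and the slot timings are the standard 1-Wire numbers:
Code:
/* Hypothetical 1-Wire read slot with interrupts masked for the timed
 * window (~70 us). USER_ENTER/EXIT_CRITICAL are NetBurner's
 * interrupt-masking macros; the helpers below are placeholders. */
#include <cfinter.h>   /* USER_ENTER_CRITICAL / USER_EXIT_CRITICAL */

extern void OneWireDriveLow(void);   /* pull the bus low */
extern void OneWireRelease(void);    /* let the bus float high */
extern int  OneWireSample(void);     /* read the bus level */
extern void DelayUsec(unsigned int usec);

int OneWireReadBit(void)
{
    int bit;
    USER_ENTER_CRITICAL();   /* interrupts off for the timed window */
    OneWireDriveLow();
    DelayUsec(6);            /* short low pulse starts the read slot */
    OneWireRelease();
    DelayUsec(9);
    bit = OneWireSample();   /* sample ~15 us after the falling edge */
    DelayUsec(55);           /* let the ~70 us slot finish */
    USER_EXIT_CRITICAL();    /* Ethernet interrupts resume here */
    return bit;
}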
Any experience with this, anyone?
Re: accurate software time delay - microsecond range
Posted: Tue Apr 10, 2012 10:21 am
by Chris Ruff
DS2483
an I2C-to-1-Wire transceiver
Are you designing the hardware? For a task such as you are specifying, I would want to use a transceiver chip.
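Driving one is just a few I2C transactions per 1-Wire operation. A minimal sketch, with hypothetical I2cWrite/I2cRead helpers standing in for your platform's I2C master calls (command code and status bits are from the DS2483 datasheet):
Code:
/* Sketch: issue a 1-Wire reset through a DS2483-style bridge and
 * check for a presence pulse. I2cWrite/I2cRead are hypothetical
 * helpers for your platform's I2C master driver. */
#include <stdint.h>

#define DS2483_ADDR     0x18u   /* 7-bit I2C slave address per datasheet */
#define CMD_1WIRE_RESET 0xB4u   /* datasheet command code */

extern int I2cWrite(uint8_t addr, const uint8_t *buf, int len);  /* hypothetical */
extern int I2cRead(uint8_t addr, uint8_t *buf, int len);         /* hypothetical */

/* Returns nonzero if a 1-Wire device answered the reset pulse. */
int OneWireResetViaBridge(void)
{
    uint8_t cmd = CMD_1WIRE_RESET;
    uint8_t status = 0;

    I2cWrite(DS2483_ADDR, &cmd, 1);   /* bridge generates the reset timing */
    do {
        I2cRead(DS2483_ADDR, &status, 1);
    } while (status & 0x01u);         /* bit 0 = 1-Wire busy */
    return (status & 0x02u) != 0;     /* bit 1 = presence-pulse detect */
}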
Chris
Re: accurate software time delay - microsecond range
Posted: Tue Apr 10, 2012 2:39 pm
by ecasey
I completely agree with Chris. I use the DS2480B UART-to-One-Wire transceiver. Dallas has software that was easy to port to the NetBurner platform. The transceiver takes care of all the One-Wire timing and has a line driver to maximize the usable length of the One-Wire network.
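Bringing it up is only a few lines. Roughly like this on a NetBurner UART (port number and baud rate are just my assumptions; per the datasheet, the first 0xC1 reset byte also calibrates the chip's internal timing):
Code:
/* DS2480B bring-up sketch on a NetBurner serial port. The first 0xC1
 * byte calibrates the chip's timing and may get no reply, so the reset
 * is sent twice. Port number and delay length are assumptions. */
#include <basictypes.h>
#include <constants.h>   /* TICKS_PER_SECOND */
#include <serial.h>      /* SimpleOpenSerial() */
#include <iosys.h>       /* read()/write() on NetBurner fds */
#include <ucos.h>        /* OSTimeDly() */

int InitDs2480b(void)
{
    int fd = SimpleOpenSerial(1, 9600);  /* UART1 at 9600 baud, 8N1 */
    if (fd < 0) return -1;

    char reset = (char)0xC1;             /* command-mode 1-Wire reset */
    write(fd, &reset, 1);                /* calibration byte */
    OSTimeDly(TICKS_PER_SECOND / 10);    /* let the chip settle */

    write(fd, &reset, 1);                /* real reset; expect a reply */
    char resp = 0;
    read(fd, &resp, 1);                  /* blocks until a byte arrives */
    return ((resp & 0x03) == 0x01) ? fd : -1;  /* bits 1:0 == 01: presence */
}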
Ed