TL;DR

Yes, the actual scan periods will run long: mean cycle times will be about half a millisecond (or more*) longer than the nominal timer setting for the method chosen (single timer, ping-pong (cascading) timers, STI). Using the internal clocks directly can provide better accuracy.
That said, a few ms out of a second-long interval is unlikely to affect the PID algorithm significantly; even if it did, the mean fractional delay could be determined by calibration and simply converted into offsets of the time-based PID parameters, as in the sketch below.
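For instance, a minimal sketch of that correction (Python); the textbook ISA positional form and all names here are illustrative assumptions, not the A-B PID instruction's actual parameters:

```python
def discrete_pid_terms(kp, ti_s, td_s, dt_s):
    """Per-execution integral and derivative terms for a textbook
    positional PID: ki = kp*dt/Ti, kd = kp*Td/dt."""
    return kp * dt_s / ti_s, kp * td_s / dt_s

# Nominal 1.000 s loop vs. a calibrated mean of 1.0005 s:
# the 0.5 ms mean delay amounts to a fixed 0.05 % correction.
nominal = discrete_pid_terms(kp=2.0, ti_s=30.0, td_s=5.0, dt_s=1.0)
actual  = discrete_pid_terms(kp=2.0, ti_s=30.0, td_s=5.0, dt_s=1.0005)
print(nominal)  # (0.0667, 10.0)
print(actual)   # (0.0667, 9.995)
```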
* The tests that gave those results had minimal processing, e.g. < 1 ms/scan; I expect the error of the timer- and STI-based means with respect to the nominal target to be of the order of the per-scan time, so if your scans take 10 ms, expect errors of the order of 5-10 ms. This is not the case for the FRC and RTC methods; their mean errors will always be of the order of the resolution of the measuring technique.
e.g.:
- If you use a single self-resetting timer in the continuously running scan, the interval between the scans that detect the timer as expired (.DN (done) bit is 1, in A-B jargon), each of which would trigger an execution of the PID instruction, will average about 1000.5 ms with a standard deviation of about half a millisecond (see the simulation sketch after this list).
- If you use an STI routine, the interval between starts of the interrupt routine will average about 1000.2 ms with a standard deviation of about a third of a millisecond.
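Here is a minimal Monte Carlo sketch (Python) of why the single-timer mean lands about half a scan above the preset; the ~1 ms scan time with uniform jitter is an illustrative assumption, not measured data:

```python
import random

def simulate_timer_cycles(preset_ms=1000.0, mean_scan_ms=1.0, n_cycles=10_000):
    """Self-resetting timer polled once per scan: the timer expires
    preset_ms after its last reset, but the .DN bit is only seen at a
    scan boundary, so each trigger lands a fraction of a scan late."""
    cycles = []
    t = 0.0            # simulated time, ms
    timer_start = 0.0  # when the timer was last reset
    last_trigger = 0.0
    while len(cycles) < n_cycles:
        t += random.uniform(0.5, 1.5) * mean_scan_ms  # one program scan
        if t - timer_start >= preset_ms:              # .DN seen this scan
            cycles.append(t - last_trigger)           # PID triggers here
            last_trigger = t
            timer_start = t                           # timer self-resets
    return cycles

cycles = simulate_timer_cycles()
mean = sum(cycles) / len(cycles)
sd = (sum((c - mean) ** 2 for c in cycles) / len(cycles)) ** 0.5
print(f"mean {mean:.2f} ms, sd {sd:.2f} ms")  # mean ≈ preset + half a scan
```

Because the timer only restarts when .DN is detected, each cycle is the preset plus a detection latency of up to one scan, which is where the roughly half-millisecond mean offset comes from.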
Better methods to get closer to the target time involve either the free-running clock (FRC; MicroLogix 1xxx; resolution 0.1 ms) or the real-time clock (RTC; resolution 1 s); the mean time between timed scans using these methods will be 1000.02 ms or better, although the standard deviation will be similar to that of the other methods.
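A sketch of the free-running-clock idea (Python; the 16-bit width, the 0.1 ms tick, and all names here are assumptions for illustration): read the FRC each scan, accumulate rollover-safe deltas, and carry any overshoot forward so errors do not accumulate:

```python
TICK_MS = 0.1        # assumed FRC resolution: 0.1 ms per count
COUNTER_BITS = 16    # assumed register width; adjust to the real FRC

def frc_elapsed_ms(now_count, prev_count):
    """Rollover-safe elapsed time between two FRC readings:
    unsigned modular subtraction tolerates one wrap of the counter."""
    return ((now_count - prev_count) % (1 << COUNTER_BITS)) * TICK_MS

class PidScheduler:
    """Run the PID when a full period has accumulated, keeping any
    overshoot as credit so timing error does not build up cycle to cycle."""
    def __init__(self, period_ms=1000.0):
        self.period_ms = period_ms
        self.credit_ms = 0.0
        self.prev = None

    def should_run(self, frc_count):
        if self.prev is None:          # first call: just latch a reading
            self.prev = frc_count
            return False
        self.credit_ms += frc_elapsed_ms(frc_count, self.prev)
        self.prev = frc_count
        if self.credit_ms >= self.period_ms:
            self.credit_ms -= self.period_ms   # carry the overshoot
            return True
        return False
```

Carrying the overshoot as credit, rather than discarding it, is what pins the long-run mean to the target to within the clock's resolution, even though individual intervals still jitter by about a scan time.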
Cf. here; plotted data are in PNGs here. The single-timer case, with its double-humped frequency distribution, does not apply exactly to the PID case, but the cascading-timer case does:
The image below is for a single repeating timer running.