Scan time is the PLC programmer's equivalent of the boy racer's 1/4 mile time...it means something, but it is only ONE of the engineering criteria that measure the performance of a system, and not always an important one.
In our dairy industry, where we run 5,000-10,000 I/O plus 20-50 PID loops in a single CLX L63 processor with the memory maxed out, we routinely run all the device logic in a periodic task at 100 msec, and the sequencing logic at 200 msec. Works just fine for these applications.
On the other hand, I have an SLC5/05-based emergency generator and load control system that trundles along at a 30-50 msec scan time when nothing is happening, but when an MCB switching event occurs I can get a throughput time of about 8 msec, with just a few tricks in the program organisation.
When I started programming I often placed too much emphasis on minimising total scan time, to no real benefit. With time I have learnt to write processor-efficient code, mostly by use of state engine sequencers and event-based tasks...primarily with the goal of keeping the code well organised. As a result I finish up with deterministic code where I know everything will be processed in the time window it NEEDS to be actioned in; not longer, nor shorter.
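To illustrate what I mean by a state engine sequencer: only the logic for the currently active step is evaluated each scan, so the work done per scan stays small and predictable regardless of how long the overall sequence is. On a CLX or SLC this would be ladder or structured text; the sketch below is Python purely as pseudocode, and the step names and input tags (`start`, `level_high`, etc.) are made up for the example, not from any real program.

```python
from enum import Enum, auto

class Step(Enum):
    """Steps of a hypothetical fill/mix/drain sequence."""
    IDLE = auto()
    FILL = auto()
    MIX = auto()
    DRAIN = auto()

def sequencer(step: Step, inputs: dict) -> Step:
    """One 'scan' of a state engine sequencer.

    Only the transition conditions for the active step are
    checked, so each scan does a small, bounded amount of work;
    everything else in the sequence costs nothing this scan.
    """
    if step is Step.IDLE and inputs.get("start"):
        return Step.FILL
    if step is Step.FILL and inputs.get("level_high"):
        return Step.MIX
    if step is Step.MIX and inputs.get("mix_done"):
        return Step.DRAIN
    if step is Step.DRAIN and inputs.get("level_low"):
        return Step.IDLE
    return step  # no transition condition met: hold the step

# One scan: a start command moves the sequence from IDLE to FILL
step = sequencer(Step.IDLE, {"start": True})
```

The point of structuring it this way is determinism: each step owns its own transition conditions, so you can reason about exactly what gets evaluated in any given scan window.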
In fact I would argue that many PLC programmers tend to OVERPROCESS their logic. I would hazard a guess that 95% of the I/O out there in the real world doesn't need scanning more often than ten times a second (100 msec). The processor resource this frees up can sometimes be utilised in other ways, i.e. better comms, more features, or more deterministic performance.