Story time!
Don't get mad about it or conclude that it's evidence of incompetence. Every single automated curriculum generation system I have ever seen has these anomalies, including the ones Rockwell uses.
You do have my permission to get mad about it, or to conclude that it is evidence of incompetence, if it's not removed from your grading, or not taken seriously by your instructor and used as an opportunity to briefly discuss "rate" versus "percentage" or "limit".
15 or so years ago when RA was experimenting with computerized training courseware they asked me to do the industrial networking module and give them feedback.
I failed the module on industrial networking. You guys have met me, right?
Because it only generated grades, not individual results, I did it a few more times and found where the answer keys were simply wrong, or the questions it composed included errors or non-sequiturs. At least 30% of the correct answers were marked wrong, evidence of a badly corrupted database or just a garbage-in/garbage-out subject matter set.
I detailed those and fed them back to the training manager, along with my opinion that this product wasn't ready for release and that I would be happy to work with the subject matter editors if needed.
A week later was a big annual meeting, and the training manager proudly announced that the online computerized training system was going live right away.
I objected. "But it was still in beta testing last week. I did the industrial networking module and it generated a failing score."
Again, he's known me for a decade. Instead of "we're definitely going to continue to do development and internal testing", he said:
"Well, Ken, maybe you just need more training."