Haven't tried that exactly. Might do so later on - but so far, all the tests I've done seem to indicate that latency isn't changing with (even vastly) different input signals. There have been a few pitched sounds whose initial attacks were hard to decipher because of the less-than-shiny quality of the HX Poly Pitch block, but we're perhaps talking 1-5 samples of a possible "grey area" here, which works out to roughly 0.1 ms.
Fwiw, even though I'm no programmer at all, I still don't completely buy the "each pitch in a chord has to be analyzed" claim.
When you think about traditional pitch shifting - the kind that happens when you speed up or slow down a vinyl record, or change the playback rate on a sampler, so that length and pitch are altered simultaneously - the quality of the pitch shifting itself is quite excellent; it's just that the tweaked audio is also stretched or squeezed. In my layman book, all it'd take would be a time-stretching algorithm running alongside to compensate for that.
But as said, I have absolutely no idea about programming, I just don't see why pitch analysis would be required for something that is basically just multiplying/dividing all frequencies by a fixed amount while also multiplying/dividing the length by the same amount.
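To illustrate what I mean by the "vinyl" kind of shifting (this is just a rough sketch I put together, not how the HX block actually works - the function name and the linear interpolation are my own simplifications):

```python
import math

def resample_shift(signal, ratio):
    """Naive pitch shift by resampling - the 'vinyl speed-up' effect.

    Reading the buffer at `ratio` times the original rate multiplies
    every frequency by `ratio` and shrinks the duration by the same
    factor, just like speeding a record up. No pitch detection needed.
    """
    out = []
    pos = 0.0
    while pos < len(signal) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighbouring samples
        out.append(signal[i] * (1 - frac) + signal[i + 1] * frac)
        pos += ratio
    return out

# One second of a 440 Hz sine at 48 kHz, shifted up an octave (ratio = 2.0)
sr = 48000
tone = [math.sin(2 * math.pi * 440 * n / sr) for n in range(sr)]
shifted = resample_shift(tone, 2.0)
# The result is half a second long and sounds at 880 Hz.
```

Every note in a chord gets multiplied by the same ratio at once, which is why this part doesn't need to know what the notes are - the separate time-stretch step that would restore the original length is the genuinely hard part.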
Also, if per-note analysis really were happening with chords, why can't we then pitch individual voices within that chord? That should be possible as well.