As they say on TV, in no particular order.....
But by now you probably have piles of code that rely on the buffer being inverted.
Not particularly, there's just one EOR #$FF per byte when each is removed from the transient buffer.
I did read this somewhere: [snip] "...the few instructions are multiplied by over 200,000 and microseconds easily become seconds..."
To be fair, you're quoting that somewhat out of context. In that paragraph I specifically refer to my minor frame (the core RS232 byte receiver) and my major frame (the rest of the utility's function). In the case of the RS232 byte inversion falling into the major frame, that's 2 cycles (or 1us @ 2MHz) multiplied by 200k which equates to a mere 0.2s for one side of a disc. Lost in the noise....
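To put numbers on that (a quick sketch, not part of the attached code; the 200,000-byte figure for one side of a disc comes from the discussion above):

```python
# Rough cost of one EOR #$FF (2 cycles on a 2 MHz 6502) applied once per
# byte as it is removed from the transient buffer.
CPU_HZ = 2_000_000            # Beeb 6502 clock
EOR_IMM_CYCLES = 2            # EOR #immediate
BYTES_PER_SIDE = 200_000      # roughly one side of a disc

overhead_s = BYTES_PER_SIDE * EOR_IMM_CYCLES / CPU_HZ
print(overhead_s)             # 0.2 seconds - lost in the noise
```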
And then the tech stuff.....
I've attached the code for the final RS232 Rx, which has maybe changed a little, but I think there are some aspects you are neglecting to take into account. (I have another version for compatibility with legacy ports which supports a >256 byte buffer and doesn't employ a Start Bit detection delay list, but the key timing principle is the same.)
To remind us before I waffle on, 115200 Baud = 8.68us (~17 cycles) bit width, 10 bits per byte with 'No Parity' and whatever code we use, we cannot proceed any faster than the 86.8us that is required to Tx/Rx one byte at 115k Baud.
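For reference, the arithmetic behind those figures (a standalone sketch; nothing Beeb-specific beyond the 2MHz clock):

```python
# 115200 baud on a 2 MHz 6502: per-bit and per-byte timings.
BAUD = 115200
CPU_HZ = 2_000_000

bit_us = 1_000_000 / BAUD        # width of one bit in microseconds
bit_cycles = CPU_HZ / BAUD       # the same width in 2 MHz CPU cycles
byte_us = 10 * bit_us            # start + 8 data + stop, 'No Parity'

print(round(bit_us, 2))          # 8.68 us per bit
print(round(bit_cycles, 2))      # 17.36 cycles per bit (hence "~17")
print(round(byte_us, 1))         # 86.8 us minimum per byte
```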
At 2MHz, the Beeb's interrupt latency is greater than a bit width and therefore we have to poll each byte for the Start Bit (SB). Using the fastest possible method (including a delay list to detect a <Break>), this polling has a potential variable latency of between 7 and 11 cycles. The pre-receive code, which has to dynamically monitor buffer fill and respond with CTS, must take equal-length paths regardless of whether CTS is reset or not. (Remember that the receive routine and its CTS control has to cater for transmitter overruns, likely to be between 1 and 5 bytes.)
This code expends 15 cycles and therefore, with our SB latency, we will read b0 of the byte at somewhere between 22 and 26 cycles after receipt of the SB. In fact, due to variations in edge-detection response by the 6522, I sometimes see 27 or even 28 cycles.
The significance of the above is that we cannot guarantee the point at which we will sample b0 and, because 2MHz does not produce a cycle resolution that equates precisely to 115k Baud, we either gain or lose time every bit if we use a subsequent fixed sample interval. A fixed 17 cycles would retard our sample point by ~0.2us per bit, 16 cycles would retard it by ~0.7us per bit and 18 cycles would advance it by ~0.3us. There are potentially 7 further bits to sample per byte, giving rise to potential sample-point shifts of approximately -5us, -1.5us or +2us. Hopefully you can see that none of these is acceptable as a fixed interval when we have such a range of potential SB detection latency, because the cumulative error could easily (and does) cause a slip into a neighbouring bit. This is the reason I jitter the sample point for b1-b7, and if you calculate the summation of the sample-point offsets, you will see that the SB detect latency is adequately catered for by jittering between 16 & 18 cycles.
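To illustrate the drift figures (a sketch only; the actual per-bit offset list in the attached source isn't reproduced here, and the example jitter pattern below is mine, not necessarily the one the code uses):

```python
# Cumulative sample-point drift over b1..b7 at 115200 baud on a 2 MHz CPU.
CPU_HZ = 2_000_000
BAUD = 115200
BIT_CYCLES = CPU_HZ / BAUD                 # ~17.36 cycles per bit

def drift_us(intervals):
    """Total sample-point error (us) for a list of per-bit intervals."""
    error_cycles = sum(intervals) - len(intervals) * BIT_CYCLES
    return error_cycles / 2                # 2 cycles per microsecond

# Fixed intervals for the 7 remaining bits, as discussed above:
for n in (16, 17, 18):
    print(n, round(drift_us([n] * 7), 2))  # -4.76, -1.26, +2.24 us

# A jittered 16/17/18 mix keeps the total close to the ideal
# 7 * 17.36 ~= 121.5 cycles (this particular pattern is illustrative):
print(round(drift_us([18, 17, 17, 18, 17, 17, 18]), 2))  # ~0.24 us
```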
and wrote: "Oh, I agree. If it ain't broke, why fix it?"
I hate that expression; it somehow implies that something works but isn't all it could be. Hopefully the above helps to show that there's nothing to fix.
I know, I know, you'll be back.......
Extra bit of info which may be causing you some confusion: a read or write to the User Port causes the clock to slow to 1MHz, and I originally dived in without thinking, assuming this meant that these instructions would therefore double up from 4 to 8 cycles. However, only two of the four cycles are extended (the RAM fetches stay at 2MHz), so the total time extends from 4 to 6 cycles. Obvious when you think about it.
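By the numbers (a sketch; assumes a 4-cycle instruction such as STA abs where, per the description above, two of the four cycles are stretched to the 1MHz peripheral clock):

```python
# Cycle stretching on a User Port access: the RAM fetches stay at 2 MHz
# (0.5 us each), while the two stretched cycles run at 1 MHz (1 us each).
FAST_CYCLES = 2            # opcode/operand fetches from RAM, 2 MHz
SLOW_CYCLES = 2            # stretched cycles, effectively 1 MHz

total_us = FAST_CYCLES * 0.5 + SLOW_CYCLES * 1.0
print(total_us * 2)        # 6.0 -> the 4-cycle instruction costs 6 cycles
```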