From: fachat (afachat_at_gmx.de)
Date: 2005-04-26 00:31:38
Hi,

I am curious about this problem because back then I wrote a 9-second format routine that I wanted to publish in the 64'er, but it wasn't reliable enough....

First: the double "BVC *" occurs at the end of formatting a track. The bytes written before it are #$55, so it just means that the gap is one byte longer than calculated.

Second: indeed, in the write block routine ($F575) the last "BVC *" at $F5CA waits for the last data byte (from ($30),Y) to be _loaded_ into the shift register. Immediately after that the PCR is set to input again, so there is no time to shift the last data byte out to the RW head completely.

Third: when you look at how the GCR bytes are computed (at $F78F), you see that the first set of 4 bytes to be converted to GCR is handled differently from the other bytes, in that the first byte is taken from $47 (the data block marker, value #$07) and not from the data buffer. As the data buffer is a multiple of 4 bytes long (256 bytes, of course), the last 4 bytes are also handled differently: because one byte is missing at the beginning, the last byte must be written in an extra set of 4 bytes to be converted to GCR. You can see this at the single "strange" "BEQ $F7D9" at $F7C6, which exits the loop after reading one byte when the end of the buffer is reached. It escapes to $F7D9, where the second byte is read from $3A, and, more important for our problem, the last two bytes are zero! So in total 256+4=260 bytes are converted into 325 GCR bytes.

This method is called from both track format and sector write. What is most important is that the last 2 bytes = 16 bits of data, i.e. the last 20 GCR bits, are always the same, no matter what data is in the buffer. Now if the same data is always written to disk at the end of the block, it does not matter when the write stops, because that data is already on the disk from the previous write.
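To make the byte counting concrete, here is a sketch in Python (the thread itself contains no code) of the conversion as described above. The nibble-to-GCR table is the standard Commodore one, quoted from memory and worth checking against the ROM; the function names are illustrative, not the ROM's:

```python
# Sketch: convert a 1541 data block to GCR, following the description
# above (marker + 256 data bytes + checksum + two zero filler bytes).
# GCR_TABLE is the standard Commodore nibble-to-GCR table (from memory).

GCR_TABLE = [0x0A, 0x0B, 0x12, 0x13, 0x0E, 0x0F, 0x16, 0x17,
             0x09, 0x19, 0x1A, 0x1B, 0x0D, 0x1D, 0x1E, 0x15]

def gcr_encode(data):
    """Each nibble becomes a 5-bit code, so 4 bytes turn into 5 GCR bytes."""
    assert len(data) % 4 == 0
    bits, nbits, out = 0, 0, bytearray()
    for b in data:
        for nib in (b >> 4, b & 0x0F):
            bits = (bits << 5) | GCR_TABLE[nib]
            nbits += 5
            while nbits >= 8:          # emit complete bytes, MSB first
                nbits -= 8
                out.append((bits >> nbits) & 0xFF)
    return bytes(out)

def encode_data_block(buf, checksum):
    # 1 marker + 256 data + 1 checksum + 2 zero fillers = 260 bytes.
    # (The DOS computes the checksum as the XOR of the 256 data bytes;
    # any value will do for this size demo.)
    raw = bytes([0x07]) + bytes(buf) + bytes([checksum, 0x00, 0x00])
    return gcr_encode(raw)

block = encode_data_block(range(256), 0x42)   # arbitrary demo contents
print(len(block))   # 260 / 4 * 5 = 325 GCR bytes
# The four trailing zero nibbles always encode to 01010 01010 01010 01010,
# so the last 20 GCR bits never depend on the buffer contents.
```

Running this with different buffer contents confirms the point above: the block is always 325 GCR bytes, and its last two-and-a-half bytes come out identical every time.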
The only problem, however, is the tolerance from the half-cycle of the GCR bit clock (time quantization error). [The byte timing is determined by the SYNC pulse. Shifting starts when the SYNC pulse stops - maybe. I just had a look at the schematics again, but didn't see how it should work; I don't have my TTL docs at hand....]

On Sat, Apr 23, 2005 at 10:41:18AM +0200, Nicolas Welte wrote:
> Hi Spiro,
>
> Spiro Trikaliotis wrote:
> >Now, why don't we see a problem with that write routine if I am right? I
> >state that the last (GCR) byte of the block is not necessary at all!
> >
> >The data block contains:
> >
> >- 1 byte $07 (data block marker)
> >- 256 byte DATA
> >- 1 byte CKS
> >
> >Thus, we have 258 bytes, which sums up to 258 / 4 * 5 = 322.5 GCR bytes.
> >
> >OTOH, we have
> >$01BB-$01FF (69 byte)
> >data block (256 byte)
> >
> >thus, 325 GCR bytes are written to the disk per block (which is 260 bytes
> >of data).

Exactly.

> >So, we see that the last two GCR bytes are not needed at all, they are
> >just some dummy. Because of this, we do not get any read error when the
> >bytes are wrong. We do not get a checksum error.

And if you look at the decode code at $F8E0 you see that there is
a) no decode error (I haven't seen one at least, so illegal GCR is ignored)
b) the last two decoded data bytes are completely ignored.

> GCR byte doesn't get written to the disk completely, I can imagine that
> this can lead to problems! E.g. Joe introduced GCR code checking into SC
> some while ago, and I keep getting extra errors on disks that are otherwise
> perfectly fine. While most disks read just fine, and others that usually
> report checksum errors (code 23) in reality have decoding errors (code 24),
> there are some that have decoding errors in each block, but the data is
> just fine. So I think that the decoding of the extra filler bytes fails,

This may be because the timing difference between the write clock and the read clock was too large.
> which isn't a problem at all for the normal DOS, because it simply ignores
> decoding errors (unlike the older dual drives, BTW!). The idea of the

Do you know how the older drives handle this situation? In the VC1541 the switch from write to read seems to take the power from the RW head immediately (does it really?), so the switch stops writing at once. Do the older dual drives latch the switch until the byte is completely written?

> decode checking was to have some extra error detection for weak disks where
> sometimes, after many retries, the checksum matches just by chance. But
> even if the checksum matches, you probably still have a decode error. If
> all this is true, then SC should probably be changed to only check the
> decoding of the actually used bytes, and forget checking the extra two data
> or two-and-a-half GCR bytes.
>
> Still I'd like to know what formatters create such "broken" disks. I can't
> really believe the ROM formatter does this, otherwise many more disks would
> give these errors. Something that needs to be checked, I believe :/

Indeed. This is an interesting problem. The analysis didn't explain why my formatter was so unreliable, but well,... :-)

So long
Andre

Message was sent through the cbm-hackers mailing list
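[A sketch, to make the SC suggestion above concrete: validate the GCR codes for the 258 real bytes (516 five-bit codes) and ignore the last 4 codes, which encode only the two zero filler bytes. Python, with the standard Commodore code set quoted from memory; this is an illustration, not SC's actual code.]

```python
# Sketch of the suggested SC fix: check only the GCR codes that carry
# marker/data/checksum, not the trailing filler codes.

VALID_GCR = {0x0A, 0x0B, 0x12, 0x13, 0x0E, 0x0F, 0x16, 0x17,
             0x09, 0x19, 0x1A, 0x1B, 0x0D, 0x1D, 0x1E, 0x15}

def check_block(gcr):
    """gcr: the 325 GCR bytes of one data block, as read from disk."""
    assert len(gcr) == 325
    bits = int.from_bytes(gcr, "big")                       # 2600 bits
    codes = [(bits >> (5 * i)) & 0x1F for i in reversed(range(520))]
    # Only the first 516 codes carry real content; the trailing 4 codes
    # (the last 2.5 GCR bytes) may be cut short when the write stops
    # early, and the ROM's own decoder ignores them anyway.
    return all(c in VALID_GCR for c in codes[:516])
```

With this, a block whose last two-and-a-half GCR bytes are garbage still passes, while a genuine decode error in the payload is still caught.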