Briefly, when a disc is initialised for file server use, you feed it a value of sectors per cylinder, which it then uses to write a sector of free-space bitmaps at the first sector of each cylinder: one bit per sector, clear meaning used, set meaning free. When the file server starts, it reads all these sectors and works out how much free space there is.
So, tell it there are 256 sectors per cylinder, and it will write its maps at sector 256, 512, 768, etc. all through the disc.
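As a sketch of that layout (the function names and the 256-byte map sector size are my assumptions from the description above, not the real file server code):

```python
SECTOR_SIZE = 256  # bytes per map sector, so one map covers 256 * 8 = 2048 sectors

def map_sector_addresses(sectors_per_cylinder, total_sectors):
    """Sector numbers of the free-space map sectors, one per cylinder
    (listed from the second cylinder, matching the 256, 512, 768... example)."""
    return list(range(sectors_per_cylinder, total_sectors, sectors_per_cylinder))

def count_free(bitmap):
    """Set bit = free sector, clear bit = used."""
    return sum(bin(b).count("1") for b in bitmap)

print(map_sector_addresses(256, 1280))        # [256, 512, 768, 1024]
print(count_free(bytes([0b11110000, 0x00])))  # 4 free sectors in this map fragment
```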
Except that some larger values, which I would expect to work, just don't: values up to 2040 work, and above that they fail. There are 2 bytes reserved for the number of sectors per cylinder, so 65535 should be the maximum, though realistically 2048 is the limit, as that is the total number of bits which fit in a 256-byte map sector.
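A quick sanity check of those two limits (pure arithmetic, nothing file-server-specific):

```python
SECTOR_SIZE = 256  # bytes in one free-space map sector

# A 2-byte field can hold values up to 65535...
field_max = 2**16 - 1

# ...but one map sector only has room for this many bits,
# i.e. this many sectors per cylinder:
bits_in_map = SECTOR_SIZE * 8

print(field_max, bits_in_map)  # 65535 2048
```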
With some Pi 1MHz SCSI debug tracing, I think I know what it's doing, and possibly why.
Take the case of 2040 sectors per cylinder. The file server should read sectors 2040, 4080, 6120, 8160 etc. in steps of 2040 all the way up to 2095080, if it's a 512MB ADFS disc image. It does this, so it works.
In the case of 2048 sectors per cylinder, something goes wrong early on. It should read 2048, 4096, 6144, 8192, etc., but it doesn't. It reads 2048, then 4096, then jumps to 528384, then 1052672, then 1576960, which is the last one before the end of the disc.
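Comparing what should be read with what the trace shows is revealing (the disc size in sectors is my assumption from 512MB at 256 bytes per sector):

```python
TOTAL_SECTORS = 512 * 1024 * 1024 // 256  # 2,097,152 sectors, assuming 256-byte sectors

def expected_reads(step, total=TOTAL_SECTORS):
    """The map sectors the file server ought to visit."""
    return list(range(step, total, step))

print(expected_reads(2048)[:4])  # [2048, 4096, 6144, 8192]

# The sequence actually seen in the SCSI trace:
observed = [2048, 4096, 528384, 1052672, 1576960]
steps = [b - a for a, b in zip(observed, observed[1:])]
print(steps)                 # [2048, 524288, 524288, 524288]
print(524288 == 2048 << 8)   # True: the step suddenly looks like 2048 * 256
```

So after the second read, the increment is exactly the real step shifted left by one whole byte, which fits the byte-2 observation below.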
I think this is because there is a calculation error when working out the next sector to read. More than likely Acorn never anticipated discs with 2048 sectors per cylinder (16 heads x 63 maximum sectors per track = 1008 probably being a realistic maximum contemplated in the 1980s), so this bit of the code may never have been thoroughly tested.
This, then, is what should happen:
Code:
Byte>>         2        1        0
Bit >>  76543210 76543210 76543210
   2048 00000000 00001000 00000000
   4096 00000000 00010000 00000000
   6144 00000000 00011000 00000000
   8192 00000000 00100000 00000000
etc.
And this is what actually happens:

Code:
Byte>>         2        1        0
Bit >>  76543210 76543210 76543210
   2048 00000000 00001000 00000000
   4096 00000000 00010000 00000000
 528384 00001000 00010000 00000000
1052672 00010000 00010000 00000000
1576960 00011000 00010000 00000000
From the third calculation onwards, it's operating solely on byte 2, adding 0x08 there each time, which is exactly what it did to byte 1 when going from 2048 to 4096.
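That observation can be reproduced with a toy model: if, from the third read onwards, the step's high byte (0x08) gets added into byte 2 instead of byte 1, you get exactly the traced sequence. This is speculation about the mechanism, not a disassembly of the real code:

```python
STEP = 2048          # 0x000800: high byte 0x08, low byte 0x00
DISC = 2_097_152     # total sectors, assuming a 512MB disc of 256-byte sectors

def add_correct(sector):
    return sector + STEP         # step's high byte added into byte 1

def add_slipped(sector):
    return sector + (STEP << 8)  # hypothetical bug: high byte lands in byte 2

seq = [2048, 4096]               # the first two reads come out right
while add_slipped(seq[-1]) < DISC:
    seq.append(add_slipped(seq[-1]))

print(seq)  # [2048, 4096, 528384, 1052672, 1576960] -- matches the trace
```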
Could some simple error in the arithmetic code be causing this, like forgetting to clear the carry flag or some such, which would muck things up for values over 2040 but work for 2040 or lower?