Extending the MMB format beyond 511 disks

User avatar
hoglet
Posts: 12895
Joined: Sat Oct 13, 2012 7:21 pm
Location: Bristol
Contact:

Extending the MMB format beyond 511 disks

Post by hoglet »

Hello all,

I'm looking for some feedback on an idea I've been experimenting with this week: extending the MMB format beyond 511 disks.

I'm genuinely unsure whether this is a good idea or not, given that MMFSv2 dispenses with the MMB container entirely and effectively allows an unlimited number of disk images to be supported, as separate SSD/DSD files on the SD card. And SmartSPI allows the use of multiple MMB files (with manual switching between them).

But for one reason or another I find myself still using MMFSv1, mostly because I have a much better memory for disk numbers than for 8-character filenames. But partly because I have found MMFSv2 a bit slow with large numbers of disks.

So for this reason I've been experimenting with larger MMB files.

As a quick refresher, the original 511-disk MMB format is:

Code: Select all

<16 byte header specifying the power on drive mapping>
<511 x 16 bytes disk table, caching the disk title and status>
<511 x 200KB disk images>
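To make the arithmetic concrete, here's a quick sketch (mine, not part of MMFS) of where everything lands in the classic format:

```python
# Offsets within a classic 511-disk MMB file (all sizes in bytes).
HEADER_SIZE = 16
TABLE_ENTRY = 16
NUM_DISKS = 511
DISK_SIZE = 200 * 1024            # each SSD image is 204,800 bytes

def table_offset(disk):
    """Offset of the 16-byte disk table entry for a given disk number."""
    return HEADER_SIZE + TABLE_ENTRY * disk

def image_offset(disk):
    """Offset of the 200KB disk image for a given disk number."""
    return HEADER_SIZE + TABLE_ENTRY * NUM_DISKS + DISK_SIZE * disk

print(image_offset(0))                 # 8192: images start 8KB into the file
print(image_offset(510) + DISK_SIZE)   # 104660992: total file size
```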
One of my main concerns in extending the MMB file is compatibility, both with existing MMB files, and with existing SD-Card file systems and MMB archive management software.

So let's talk about two types of compatibility:
- Backwards compatibility: the ability of new software / file systems to work with existing MMB files
- Forwards compatibility: the ability of existing software / file systems to work with new (larger) MMB files (or degrade gracefully)

Backwards compatibility is the easier of these to achieve; it just requires that new software can reliably distinguish between the old and new MMB file formats.

The existing 16-byte MMB file header is:

Code: Select all

00: Default disk to load into drive 0 (low byte)
01: Default disk to load into drive 1 (low byte)
02: Default disk to load into drive 2 (low byte)
03: Default disk to load into drive 3 (low byte)
04: Default disk to load into drive 0 (high byte)
05: Default disk to load into drive 1 (high byte)
06: Default disk to load into drive 2 (high byte)
07: Default disk to load into drive 3 (high byte)
08-0F: Unused, normally written as 00
I was thinking of using byte 08 in the header as a length indicator as follows:

Code: Select all

08: 00 indicates a 511 disk MMB file
    A1 indicates a 1023 disk MMB file
    A2 indicates a 2047 disk MMB file
    A3 indicates a 4095 disk MMB file
    A4 indicates an 8191 disk MMB file
    anything else is reserved, but would be treated as a 511 disk file
This allows new software to easily determine the MMB file size.
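As a sketch of how new software might decode this (the function name is mine; the value mapping is as proposed above):

```python
def mmb_disk_count(header: bytes) -> int:
    """Number of disks implied by byte 08 of the 16-byte MMB header.

    Unrecognised values fall back to the original 511-disk format.
    """
    sizes = {0xA1: 1023, 0xA2: 2047, 0xA3: 4095, 0xA4: 8191}
    return sizes.get(header[8], 511)

print(mmb_disk_count(bytes(16)))                      # 511 (all-zero header)
print(mmb_disk_count(bytes(8) + b"\xa4" + bytes(7)))  # 8191
```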

Forwards compatibility - the ability for existing software to work with new (larger) MMB files (or degrade gracefully) - is more of a challenge. By degrading gracefully, I mean still having access to the first 511 disks, and potentially being able to do a full range of file operations on those disks.

One way to implement a larger MMB file would be to simply make the drive table and disk image sections larger:

Code: Select all

<16 byte header specifying the power on drive mapping>
<4095 x 16 bytes disk table, caching the disk title and status>
<4095 x 200KB disk images>
(I'm using 4095 disks here as an example; in practice I would like to be able to support multiple sizes.)

Unfortunately this breaks forward compatibility. Existing software will be unable to make any sense of the MMB file, because the position of the disk images has shifted. Worse still, it's shifted by a fraction of a disk image. This is bad, because the disk catalogs are no longer aligned between the old and new formats. So anything trying to list the contents of the disks is going to return garbage.
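To put a number on that misalignment (my own back-of-envelope check): with a 4095-entry table, the images shift by a fraction of a disk image, so old image boundaries land mid-image:

```python
DISK_SIZE = 200 * 1024

old_images_start = 16 + 511 * 16      # 8,192 bytes in the original format
new_images_start = 16 + 4095 * 16     # 65,536 bytes with an enlarged table
shift = new_images_start - old_images_start

print(shift)               # 57344 bytes
print(shift / DISK_SIZE)   # 0.28: not a whole number of 200KB images
```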

Instead, I think it's better to try to extend the existing MMB format by adding data onto the end:

Code: Select all

<16 byte header specifying the power on drive mapping>
<511 x 16 bytes disk table, caching the disk title and status>
<4095 x 200KB disk images>
<200KB reserved>
<3584 x 16 bytes disk table, caching the disk title and status>
This gives existing software a better chance of degrading gracefully, as Disks 0..510 are in exactly the same position in the old and new formats. All of the disk images are contiguous, which is also attractive.

There's a good chance that existing SD Card filesystems will access the first 511 disks, and will simply ignore the rest.

The only downside is that the disk table is now split into two sections, so any code that manages the disk table (e.g. *DCAT, *DRECAT, *DFREE, *TITLE, etc.) gets a bit more complicated.

The reserved section has two purposes:
- it provides a place for future expansion
- it simplifies the calculation of the offset within the MMB file of a given disk table sector

I'm a bit less confident of how existing MMB archive management software will cope with the new format. I've tried a few:
- Stephen's Perl MMBUtils (which I use) works fine, and it's possible to read/write disks 0..510
- Robcfg's MMBExplorer rejects the file as invalid
- Gerald's DiskImageManager rejects the file as invalid

This is not entirely unexpected, as I think MMBExplorer and DiskImageManager load the entire MMB file into memory (an 8191-disk file is now 1.7GB!).

So where I have ended up with the new MMB format is:

Code: Select all

<16 byte header specifying the power on drive mapping, and a new MMB size byte>
<511 x 16 bytes disk table, caching the disk title and status>
<(2^N-1) x 200KB disk images>
<200KB reserved>
<(2^N-2^9) x 16 bytes disk table, caching the disk title and status>
where N can be in the range 9 to 13, allowing for up to 8191 disks in a single MMB file.
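A sketch (mine) of the offset calculations this layout gives. Note that the images are addressed exactly as in the classic format, and the reserved 200KB pads the image region to a round 2^N x 200KB, so the second table's position is easy to compute:

```python
DISK_SIZE = 200 * 1024

def image_offset(disk):
    # Unchanged from the classic format: images are contiguous from byte 8192.
    return 8192 + DISK_SIZE * disk

def table_offset(disk, n):
    """Disk table entry offset in an extended file holding 2^n - 1 disks."""
    if disk < 511:
        return 16 + 16 * disk                       # first (classic) table
    # images + reserved = 2^n x 200KB, so the second table starts right after:
    return 8192 + DISK_SIZE * (1 << n) + 16 * (disk - 511)

print(table_offset(510, 13))   # 8176: last entry of the first table
print(table_offset(511, 13))   # 1677729792: first entry of the second table
```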

I've had a go this week at implementing this in MMFS.

My main concern with MMFS (for a while now) has been space, in that some of the builds are very tight indeed, with only a few spare bytes. Acorn did a pretty efficient job writing the core DFS code (on which MMFS is based), so there's not a lot of scope there for optimization. But during the week I've looked at the rest of MMFS, specifically the DUTILS commands (DCAT, DRECAT, DFREE, etc.). With some effort, I've managed to claim back almost 200 bytes, without sacrificing any features.

I've been working in the [url=https://github.com/hoglet67/MMFS/commit ... mb_support]large_mmb_support[/url] branch, and have added two new features (both optional at present):
- Large MMB support (exactly as outlined in this post), which cost 89 bytes of code
- *DONBOOT support, which costs 45 bytes of code

(The *DONBOOT command allows you to specify the default drive mapping. These are written back to the MMB file, and reloaded on a power up, and on a control-break. After using this for a while, I wish I'd added it sooner!)

Anyway, here are a few screen shots of it running (in b-em)...
Screenshot from 2021-09-23 17-37-45.png
As an experiment, I've manually built an 8191-entry MMB file, and added all ~3,500 disk images from bbcmicro.co.uk onto it:
Screenshot from 2021-09-23 18-14-08.png
Screenshot from 2021-09-23 17-44-16.png
Screenshot from 2021-09-23 17-45-36.png
Screenshot from 2021-09-23 17-57-23.png
I'm looking for feedback on a few things:
- would anyone find larger MMB files useful?
- or would MMFS supporting multiple MMB files instead be more attractive (like SmartSPI does)?
- does the proposal in the post seem like a reasonable way to proceed?
- would any of the MMB Manager folk be interested in supporting larger MMB files (especially Stephen as I use MMBUtils daily)?

I can make a dev build of MMFS 1.4X available if anyone wants to have a play. But be aware that currently the only way to access disk images beyond 511 is from the Beeb.

Anyway, that's more than enough for now. I'm happy to talk about this more at the dev night tonight.

Dave
User avatar
tricky
Posts: 8160
Joined: Tue Jun 21, 2011 9:25 am
Contact:

Re: Extending the MMB format beyond 511 disks

Post by tricky »

My first thought was just to add it on the end.
You could support the extra .mmb files by selecting them if they are there and if not setting an offset to be used by the *DIN etc.
I only use MMFS, I have never used MMFS2.
I think in some builds of my menu, it could use more than 511 .SSDs
User avatar
sweh
Posts: 3503
Joined: Sat Mar 10, 2012 12:05 pm
Location: 07410 New Jersey
Contact:

Re: Extending the MMB format beyond 511 disks

Post by sweh »

I would also consider "appending" ('cos that's kinda what Solidisk did with their chained catalogues). An unused byte could be used as a flag to say "another catalogue exists after the 511 disks". That second catalogue could also have the unused byte indicate yet another copy 511 disks further on... and so on.

Code: Select all

<8 byte header specifying the power on drive mapping>
<1 byte indicating additional catalogue>
<7 bytes unused>
<511 x 16 bytes disk table, caching the disk title and status>
<511 x 200KB disk images>

<8 byte unused>
<1 byte indicating additional catalogue>
<7 bytes unused>
<511 x 16 bytes disk table, caching the disk title and status>
<511 x 200KB disk images>

<8 byte unused>
<1 byte indicating additional catalogue>
<7 bytes unused>
<511 x 16 bytes disk table, caching the disk title and status>
<511 x 200KB disk images>
An extended MMB of this type would still allow the first 511 disks to be read on older systems, so you have a level of backward and forward compatibility.

I can definitely modify my scripts to support whatever format is decided on.
Rgds
Stephen

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

sweh wrote: Thu Sep 23, 2021 9:40 pm I would also consider "appending" ('cos that's kinda what Solidisk did with their chained catalogues)
Let me have a think about that....

I can see three areas where this is more complicated:

1. On reset you need to determine the maximum disk number; to do that you need to walk the chain of disk tables, until you read one without the extension flag set.

2. The calculation for the start offset (in 256-byte sectors) of disk N gets more complex:

Currently it's:
disk_start(N) = 32 + 800 * N

I think it becomes:
disk_start(N) = 32 + 800 * (N MOD 511) + 408832 * (N DIV 511)

The DIV/MOD could I guess be done by repeated subtraction.
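As a sanity check (my own sketch, not from MMFS), the per-chunk stride can be derived from the chunk size: one chunk is 16 + 511x16 + 511x204800 = 104,660,992 bytes, i.e. 408,832 256-byte sectors:

```python
CHUNK_BYTES = 16 + 511 * 16 + 511 * 200 * 1024   # 104,660,992 bytes per chunk
CHUNK_SECTORS = CHUNK_BYTES // 256               # 408,832 sectors (0x63D00)

def disk_start(n):
    """Start sector of disk image n in a chained MMB file."""
    chunk, local = divmod(n, 511)
    return chunk * CHUNK_SECTORS + 32 + 800 * local

print(disk_start(0))     # 32, as in the classic format
print(disk_start(511))   # 408864: first disk of the second chunk
```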

3. The calculation for the disk table address of disc table sector N also gets similarly more complex.

(And it's too late now to even think about this)

Dave

Re: Extending the MMB format beyond 511 disks

Post by sweh »

hoglet wrote: Thu Sep 23, 2021 10:16 pm disk_start(N) = 32 + 800 * (N MOD 511) + 408832 * (N DIV 511)

The DIV/MOD could I guess be done by repeated subtraction.

3. The calculation for the disk table address of disc table sector N also gets similarly more complex.
I was thinking a counter of "what catalogue" was currently being processed, and that'd be used as a consistent offset (100MB*catnum); then existing routines could still work on the existing 511-disk chunks. But maybe that's not so easy in the space you have available.

The other advantage is that it makes it very easy to extend ("run out of disks? Add 511 more with this one simple trick" :-)). Obviously I can extend in the perl code your way (read the end-of-disk catalogue; add 512 extra blank images; write out the catalogue to the end; extend the catalogue with blank entries) but chained catalogues make it easier.
Rgds
Stephen
marcelaj1
Posts: 484
Joined: Wed Apr 29, 2020 5:07 pm
Location: Surrey
Contact:

Re: Extending the MMB format beyond 511 disks

Post by marcelaj1 »

You could go down the MS route from when they switched from floppy boot installations to CD boot installations: state that the future is CD-only installs, and point to a download that creates a boot floppy for CDs.
In this case, create your magical sky's-the-limit version and a tool to convert the new format back to the old one across a number of cards.
I know it's the lazy way to go, but it saves all manner of issues and time.
Ashley.
User avatar
BigEd
Posts: 6726
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Extending the MMB format beyond 511 disks

Post by BigEd »

I think I quite like the idea of chaining in units of 511. It's a pity it doesn't all work out as powers of two though.

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

BigEd wrote: Fri Sep 24, 2021 9:36 am I think I quite like the idea of chaining in units of 511. It's a pity it doesn't all work out as powers of two though.
Yes, you end up with 8176 disks, which is a weird number.

Anyway, I've had a go at implementing the second proposal, extending in chunks of 511 disks (up to a maximum of 16 chunks).

I made one small tweak: there is a single length byte in the 16-byte header of the first chunk that indicates the total number of chunks present. I thought this was more straightforward than following a chain of chunks until you hit one without an extended flag set. The header is still present in the additional chunks, it's just never read or written.

Both implementations are on github in separate branches:

Proposal 1: Split drive table

- Branch: large_mmb_support
- Comparison to base version
- Cost of supporting Large MMBs: 86 bytes of code

Proposal 2: Extending in chunks of 511 disks

- Branch: large_mmb_support2
- Comparison to base version
- Cost of supporting Large MMBs: 124 bytes of code

I also ran a small disk read benchmark just to see if there was any performance difference. The benchmark tests read performance on 10, 100, 1000 and 10,000 byte files, using both *LOAD and BGET#. This is the benchmark:

Code: Select all

   10 CLOSE#0
   20 PRINT "Simple Disk Read Benchmark"
   30 SIZE%=10
   40 FOR A%=1 TO 4
   50 IF A%<4 N%=100 ELSE N%=10
   60 PRINT "File Size=";SIZE%;" bytes"
   70 OSCLI("SAVE XXXX 4000 +"+STR$~(SIZE%))
   80 TIME=0
   90 FOR I%=1 TO N%
  100 *LOAD XXXX
  110 NEXT
  120 PRINT TIME*10/N%;"ms using *LOAD"
  130 TIME=0
  140 FOR I%=1 TO N%/10
  150 A=OPENIN("XXXX")
  160 FOR J%=1 TO SIZE%
  170 X%=BGET#A
  180 NEXT
  190 CLOSE#A
  200 NEXT
  210 PRINT TIME*10/(N%/10);"ms using BGET#"
  220 SIZE%=SIZE%*10
  230 NEXT
Here are the results on a real Master (with no second processor):

Base version (1.49)
capture4.png
Proposal 1: Split drive table (1.4X)
capture5.png
capture6.png
Proposal 2: Extending in chunks of 511 disks (1.4Y)
capture7.png
capture8.png
Overall, the performance difference is negligible. Both schemes have a very small (but just about measurable) performance impact.

Interestingly, the second proposal suffers a small additional overhead for accessing later chunks.

There are a couple of places where a calculation by chunk number happens by repeated addition/subtraction:

The first (calculate_div_mod_511_zp_x) calculates the diskno DIV 511 and diskno MOD 511:

Code: Select all

\\ Calculate:
\\    DD = D DIV 511
\\    DM = D MOD 511
\\
\\ By repeated subtraction of 0x1FF (511)
\\    DD = 0
\\    DM = D
\\    while (DM >= 0x1FF) {
\\       DM -= 0x1FF
\\       DD ++
\\    }
.calculate_div_mod_511_zp_x
{
	LDA 0, X
	STA dmret%
	LDA 1, X
	STA dmret%+1
	LDX #0
.rloop	LDA dmret%
	SEC
	SBC #&FF
	PHA
	LDA dmret%+1
	SBC #&01
	BCC rexit
	STA dmret%+1
	PLA
	STA dmret%
	INX
	BNE rloop	; always
.rexit
	PLA
	RTS
}
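A quick Python model (mine, not from the MMFS source) of the same repeated-subtraction loop, to check that it agrees with a true DIV/MOD:

```python
def div_mod_511(d):
    """Model of calculate_div_mod_511_zp_x: repeatedly subtract 0x1FF (511)."""
    dd, dm = 0, d
    while dm >= 0x1FF:
        dm -= 0x1FF
        dd += 1
    return dd, dm

# Agrees with Python's divmod for every valid disk number (0..8175).
assert all(div_mod_511(n) == divmod(n, 511) for n in range(16 * 511))
```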
The second (add_chunk_sector) calculates sector offset to the required chunk:

Code: Select all

.add_chunk_sector
{
	TYA
	BEQ done
	\\ sec% += chunk * 0x63D00 by repeated addition
.loop
	LDA #&3D
	CLC
	ADC sec%+1
	STA sec%+1
	LDA #&06
	ADC sec%+2
	STA sec%+2
	DEY
	BNE loop
.done
	\\ Fall through to...
}
Each is called once per file operation in the benchmark (I checked this).

In the worst case (16 chunks) the overhead is a total of 464us:
- 16x32 = 512 cycles = 256us (calculate_div_mod_511_zp_x)
- 16x26 = 416 cycles = 208us (add_chunk_sector)

The measured difference in the benchmark is 0.5ms, so about the same.

Conclusions

So what's my current thinking? Here's a summary....

The first scheme gives you 2^N - 1 disks, resulting in 5 possible MMB sizes (511, 1023, 2047, 4095, 8191)

The second scheme gives you 511 * N disks, resulting in 16 possible MMB sizes (511, 1022, 1533, 2044, ..., 8176)

Both are 100% backwards compatible with the original MMB format.

The second scheme was definitely more difficult to implement and the code is longer by 38 bytes, but both fit in the available space (just!). The tightest build (U/SWMMFS2) has just 1 byte free now.

There were definitely more corner cases with the second scheme because you can't rely on the disk images being contiguous (which DRECAT relied on). Skipping over the disk tables cost some extra code. So possibly there is more risk of bugs.

The performance difference overall is negligible.

On aesthetic grounds I think I prefer the second scheme.

I'm happy to go with either scheme, and I'm very interested in feedback.

Dave

P.S. And if anyone spots any further opportunities for code optimization let me know. I've focussed my attention on the DUTILS code and the LARGE_MMB extensions. I've made very few changes to the original DFS code.
SteveF
Posts: 1862
Joined: Fri Aug 28, 2015 9:34 pm
Contact:

Re: Extending the MMB format beyond 511 disks

Post by SteveF »

This is some very nice work and I'm impressed you managed to squash the MMFS code so much to make room for this.

I'm not that active a user of MMFS (yet; if/when I ever make space for real hardware I will probably be), but with that disclaimer out of the way:
hoglet wrote: Sat Sep 25, 2021 3:29 pm The second scheme was definitely more difficult to implement and the code is longer by 38 bytes, but both fit in the available space (just!). The tightest build (U/SWMMFS2) has just 1 byte free now.
I probably have a warped perspective, but space in the MMFS ROM seems such a valuable commodity that this makes me prefer the first scheme. I don't know specifically what it might be useful for, but there's bound to be something.
dp11
Posts: 1800
Joined: Sun Aug 12, 2012 9:47 pm
Contact:

Re: Extending the MMB format beyond 511 disks

Post by dp11 »

Minor improvement: the SEC and LDA dmret% can come outside of rloop, as it is always set when you loop.
Last edited by dp11 on Sat Sep 25, 2021 4:23 pm, edited 1 time in total.

Re: Extending the MMB format beyond 511 disks

Post by dp11 »

Do you need to preserve Y in rloop? If not, then using Y instead of the stack will be smaller and quicker.

Re: Extending the MMB format beyond 511 disks

Post by dp11 »

If I understand correctly, .loop can never overflow, so the CLC can come out of the loop.

Re: Extending the MMB format beyond 511 disks

Post by BigEd »

Looking at the idea of chaining big sets of images, and taking advantage of now having a chain length byte in the first segment, it looks like the 2nd and subsequent segments could have 512 images - the first 16 bytes is no longer needed for anything. So maybe N*512-1 will make the arithmetic a bit easier than N*511? And it gives us 8191 images, I think?

It's possible that the difference for the first segment makes this untidy in some way.

Re: Extending the MMB format beyond 511 disks

Post by sweh »

BigEd wrote: Sat Sep 25, 2021 4:43 pm Looking at the idea of chaining big sets of images, and taking advantage of now having a chain length byte in the first segment, it looks like the 2nd and subsequent segments could have 512 images - the first 16 bytes is no longer needed for anything. So maybe N*512-1 will make the arithmetic a bit easier than N*511? And it gives us 8191 images, I think?

It's possible that the difference for the first segment makes this untidy in some way.
Yeah, I'd thought of that, but the downside is the "jump" size will be different; although we now have space for 512 entries in the disk table, we normally only allocate 511*200Kb space for the image. If we make that 512 then we'd have 511*200Kb for the first entry in the chain but 512*200Kb for the second and subsequent entries in the chain. (It also possibly complicates code-re-use since the table parser would be different).

Also by keeping it at 511 entries we have the potential to "split" and "merge" extended MMBs pretty easily, which might be a useful feature (merge a games MMB file with a Z80 MMB file with a PanOS MMB file...).
Rgds
Stephen

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

dp11 wrote: Sat Sep 25, 2021 4:09 pm Minor improvement the SEC and lda dmret% can come outside of rloop. As it is always set when you loop
There is a minor code saving to be had here as well - the LDA dmret% can be dropped completely.

Code: Select all

.calculate_div_mod_511_zp_x
{
	LDA 1, X
	STA dmret%+1
	LDA 0, X
	STA dmret%
	LDX #0
	SEC
.rloop
	SBC #&FF
	PHA
	LDA dmret%+1
	SBC #&01
	BCC rexit
	STA dmret%+1
	PLA
	STA dmret%
	INX
	BNE rloop	; always
.rexit
	PLA
	RTS
}
dp11 wrote: Sat Sep 25, 2021 4:12 pm Do you need to preserve Y in rloop? If not then using Y instead of the stack will be small and quicker.
Unfortunately, Y does need to be preserved.

Dave

Re: Extending the MMB format beyond 511 disks

Post by sweh »

hoglet wrote: Sat Sep 25, 2021 3:29 pm Anyway, I've had a go at implementing the second proposal, extending in chunks of 511 disks (up to a maximum of 16 chunks).
That's cool work! Very nice, indeed.
Rgds
Stephen

Re: Extending the MMB format beyond 511 disks

Post by SteveF »

How about this? I hope I haven't got confused, I haven't tested this change...

Where "JSR RememberAXY" is used in a subroutine, we know the last thing it does when returning will be PLA to restore the original value of A and set the flags accordingly.

PrintChrA starts with "JSR RememberAXY".

Therefore, at "prtstr_loop", we can change this:

Code: Select all

        LDA (&AE),Y                                                              
        BMI prtstr_return1              ; If end                                 
        JSR PrintChrA                                                            
        JMP prtstr_loop                                                          
to this, saving a byte:

Code: Select all

        LDA (&AE),Y                                                              
        BMI prtstr_return1              ; If end                                 
        JSR PrintChrA                                                            
        BPL prtstr_loop                 ; always branch, previous BMI not taken

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

BigEd wrote: Sat Sep 25, 2021 4:43 pm Looking at the idea of chaining big sets of images, and taking advantage of now having a chain length byte in the first segment, it looks like the 2nd and subsequent segments could have 512 images - the first 16 bytes is no longer needed for anything. So maybe N*512-1 will make the arithmetic a bit easier than N*511? And it gives us 8191 images, I think?

It's possible that the difference for the first segment makes this untidy in some way.
sweh wrote: Sat Sep 25, 2021 4:53 pm Yeah, I'd thought of that, but the downside is the "jump" size will be different; although we now have space for 512 entries in the disk table, we normally only allocate 511*200Kb space for the image. If we make that 512 then we'd have 511*200Kb for the first entry in the chain but 512*200Kb for the second and subsequent entries in the chain. (It also possibly complicates code-re-use since the table parser would be different).

Also by keeping it at 511 entries we have the potential to "split" and "merge" extended MMBs pretty easily, which might be a useful feature (merge a games MMB file with a Z80 MMB file with a PanOS MMB file...).
That's my feeling as well.

The big attraction of the Nx511 disk chunk scheme is you can simply cat together separate archives, then just tweak the first header with dd:

Code: Select all

$ cat 1.MMB 2.MMB > BEEB.MMB
$ printf '\xA1' | dd of=BEEB.MMB bs=1 seek=8 count=1 conv=notrunc
1+0 records in
1+0 records out
1 byte copied, 7.9702e-05 s, 12.5 kB/s
$ od -Ax -tx1 BEEB.MMB | head -1
000000 00 01 02 03 00 00 00 00 a1 00 00 00 00 00 00 00
If you alter the format of the chunks at all, then you might as well go back to proposal one, which is simpler.

Dave

Re: Extending the MMB format beyond 511 disks

Post by BigEd »

Good point about easy concatenation.

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

SteveF wrote: Sat Sep 25, 2021 5:02 pm How about this? I hope I haven't got confused, I haven't tested this change...
That should work.

So, with the tweaks in the last hour, the U/SWMMFS2 build has improved from 1 to 8 bytes free.

Honestly, I'm not that concerned about space when picking between the two proposals. There are always more bytes to be had if you look hard enough. Also, we can selectively drop whole commands for particular builds. Dropping DFREE for example saves 88 bytes. I don't think I've ever used that command in anger.

Dave

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

I should also say, it's only the SWRAM build that is really tight.

There are probably ways we could save a whole page of memory, for example by reducing the number of open files allowed.

Dave

Re: Extending the MMB format beyond 511 disks

Post by sweh »

hoglet wrote: Sat Sep 25, 2021 5:24 pm There are probably ways we could save a whole page of memory, for example by reducing the number of open files allowed.
For the SWRAM build that's a not unreasonable compromise. (I think the Solidisk E00 DFS also did that, probably for the same reason!).
Rgds
Stephen

Re: Extending the MMB format beyond 511 disks

Post by dp11 »

Opp
Last edited by dp11 on Sat Sep 25, 2021 5:40 pm, edited 1 time in total.

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

sweh wrote: Sat Sep 25, 2021 5:31 pm For the SWRAM build that's a not unreasonable compromise. (I think the Solidisk E00 DFS also did that, probably for the same reason!).
Actually, both *FREE and *DFREE are rarely used.
Screenshot from 2021-09-25 17-35-14.png
And *DRECAT can always be done offline; I don't think I've ever needed to use it (as anything other than a test).

Dave

Re: Extending the MMB format beyond 511 disks

Post by sweh »

hoglet wrote: Sat Sep 25, 2021 5:37 pm Actually, both *FREE and *DFREE are rarely used.
And *DRECAT can always be done offline; I don't think I've ever needed to use (as anything other than a test).
I never use those commands, that's for sure :-) (*FREE didn't exist on original DFS).
Rgds
Stephen
rharper
Posts: 730
Joined: Sat Sep 01, 2012 6:19 pm
Location: Dunstable, LU6 1BH
Contact:

Re: Extending the MMB format beyond 511 disks

Post by rharper »

I do use *FREE now and again but not DFREE.
Ray
Raycomp

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

A bit more therapeutic code optimization this afternoon.

I'm particularly proud of spotting a BPUT/BGET optimization that saves 52 bytes:
https://github.com/hoglet67/MMFS/commit/78da4d18

I did check they still work using Tom Seddon's file system tester.

Anyone know of any more killer tests for BPUT/BGET?

We have gone from 1-byte free (in U/SWMMFS2) yesterday to 84 bytes free now.

That's almost enough for another new feature. :D

Dave
User avatar
robcfg
Posts: 161
Joined: Sun Dec 30, 2018 6:23 pm
Contact:

Re: Extending the MMB format beyond 511 disks

Post by robcfg »

I'll be more than happy to support any extension to the format in my MMBExplorer.

Could you send me the file you tried that was rejected as invalid? I'm just curious to see why it failed, as I only load the catalog and at most one full image in memory.

Re: Extending the MMB format beyond 511 disks

Post by hoglet »

robcfg wrote: Sun Sep 26, 2021 10:51 pm Could you send me the file you tried that was rejected as invalid? I'm just curious as to see why it failed, as I only load the catalog and at most one full image in memory.
It's hitting this error:
https://github.com/robcfg/retrotools/bl ... le.cpp#L41
Screenshot from 2021-09-27 15-02-59.png
My test file was just two standard 104,660,992-byte BEEB.MMB files concatenated together.

Because of the second 8KB disk table, this will give a non-zero remainder in your code.
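I haven't checked MMBExplorer's exact test, but assuming it verifies that the file size minus the 8KB header/table divides into whole 200KB images, the arithmetic (my sketch) works out as:

```python
SINGLE = 104_660_992           # one standard 511-disk BEEB.MMB
double = 2 * SINGLE            # the concatenated test file

# Hypothetical validity check of the form (size - 8192) % 204800 == 0:
print((SINGLE - 8192) % 204800)   # 0: accepted
print((double - 8192) % 204800)   # 8192: the second header + disk table
```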

Dave
Coeus
Posts: 3718
Joined: Mon Jul 25, 2016 12:05 pm
Contact:

Re: Extending the MMB format beyond 511 disks

Post by Coeus »

sweh wrote: Sat Sep 25, 2021 5:31 pm For the SWRAM build that's a not unreasonable compromise. (I think the Solidisk E00 DFS also did that, probably for the same reason!).
Solidisk ADFS definitely had a similar feature, though it was configurable, to limit the number of open files and thus reduce OSHWM. The default was just one open file, bringing OSHWM down to &1900 just as for DFS.