Matt Godbolt wrote: ThomasHarte wrote: Matt Godbolt wrote:
Wow great stuff!! Looking forward to checking this out
Alas it is presently OpenGL 3.2 powered, so I will not shortly be asking you for Emscripten tips.
Haha no worries. The only Emscripten I know is enough to get BeebAsm to work in BeebIDE
Matt Godbolt wrote:Built pretty easily - nice work. I sent some PRs to fix errors GCC 4.8 picked up. Will try building it against 7.2. There's some strict aliasing warnings which will probably bite on newer GCCs. If you're interested in help fixing those, let me know!
I've definitely made a few implementation missteps along the way: I started this emulator two or three years ago because I had never used modern C++ and had completely forgotten what little '90s-style C++ I once knew. So a lot of the older sections look much more like a C programmer trying to find his way than the newer sections do, especially the union that underlies the static analyser results. I think I also initially leaned far too heavily on shared_ptr, being an Objective-C/Swift transferee. I've generally been going back and fixing such things as and when my knowledge and available time allow, but it's a learning process. The survival of some phoney aliasing assumptions would be no surprise.
That all being admitted, apparently I'm using GCC 7.2 over in Ubuntu world, so probably all is already well. On the Mac I'm on "Apple LLVM" (surely they haven't forked their own project?) version 9.0.0 with clang-900.0.38.
PRs merged by the way, thanks!
Matt Godbolt wrote:I'd love to try and port the composite output to jsbeeb though...will be digging around for info. Do you have any links to sources you used for this?
I'll see whether I can dig anything up, but the genesis of all that is very long-winded. I wrote a ZX80/81 emulator back in 2011 (a machine that gives the programmer a great deal of direct control over the video output) and probably started reading about it then. But I'm not sure there's a lot to it: given that you're emulating a CRTC, you must already have syncs under control, so I think all you really need to compose the PAL output stream is:
- keep track of a colour subcarrier at 4.43361875MHz;
- translate to YUV;
- output Y*0.8 + (U*cos(t) + V*sin(t))*0.2, with t being the appropriate measure of the colour subcarrier.
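In code, that encode step might look like this minimal sketch. The names are my own illustrative choices, not taken from the emulator; the RGB-to-YUV coefficients are the standard PAL (BT.470) ones and the 0.8/0.2 gains are the ones from the list above:

```cpp
#include <cassert>
#include <cmath>

// Illustrative constant; the PAL colour subcarrier frequency.
constexpr double kSubcarrierHz = 4433618.75;

struct YUV { double y, u, v; };

// Standard PAL RGB -> YUV conversion (BT.470 coefficients).
YUV rgbToYuv(double r, double g, double b) {
    const double y = 0.299 * r + 0.587 * g + 0.114 * b;
    return { y, 0.492 * (b - y), 0.877 * (r - y) };
}

// One composite sample: luma at 80% amplitude, chroma at 20%, with t
// the current phase of the colour subcarrier in radians.
double compositeSample(const YUV &c, double t) {
    return c.y * 0.8 + (c.u * std::cos(t) + c.v * std::sin(t)) * 0.2;
}
```

Pure greys (r = g = b) have zero chroma, so they produce a flat 0.8 * Y regardless of subcarrier phase.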
Then to convert back:
Y = the input with a lowpass filter applied; a cutoff around subcarrier/4 is a good choice;
U = 2 * (input - Y) * cos(t), with an even more aggressive lowpass filter applied, probably around subcarrier/8; and
V = 2 * (input - Y) * sin(t), with the same lowpass filter as for U.
Then colour space convert and output that.
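A hedged sketch of that decode recipe, with a crude box average standing in for a proper lowpass filter design (all names are illustrative, not from any real implementation; note the 0.8/0.2 encode gains still need dividing out before the colour-space conversion):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Crude box-filter average as a stand-in for a real lowpass design;
// entries too close to the end are left at zero.
std::vector<double> lowpass(const std::vector<double> &in, std::size_t width) {
    std::vector<double> out(in.size(), 0.0);
    for (std::size_t i = 0; i + width <= in.size(); ++i) {
        double sum = 0.0;
        for (std::size_t j = 0; j < width; ++j) sum += in[i + j];
        out[i] = sum / width;
    }
    return out;
}

struct YUVStream { std::vector<double> y, u, v; };

// Recover Y, U and V per the recipe above; `phase` holds the subcarrier
// phase t, in radians, for each composite sample.
YUVStream decode(const std::vector<double> &composite,
                 const std::vector<double> &phase) {
    const std::size_t n = composite.size();
    YUVStream out;
    out.y = lowpass(composite, 4);           // ~subcarrier/4
    std::vector<double> uProd(n), vProd(n);
    for (std::size_t i = 0; i < n; ++i) {
        const double chroma = composite[i] - out.y[i];
        uProd[i] = 2.0 * chroma * std::cos(phase[i]);
        vProd[i] = 2.0 * chroma * std::sin(phase[i]);
    }
    out.u = lowpass(uProd, 8);               // more aggressive, ~subcarrier/8
    out.v = lowpass(vProd, 8);
    return out;
}
```

With four samples per subcarrier cycle, any window of four consecutive samples covers a whole cycle, so the box average recovers 0.8 * Y exactly for a constant colour.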
I'm unaware of any particular quirks in how Acorn's hardware produces a composite wave, so for the Electron I'm running exactly by the book from RGB. That's not true of the other machines; both the 2600 and the Vic-20 produce their outputs directly as amplitude + phase offset of a single subcarrier-frequency wave, and the Oric uses a lookup table extracted from the original hardware.
The (most) hand-waving justification for the mathematics being:
As U and V are added on as amplitude modulations of a 4.43361875MHz subcarrier, of course low-pass filtering the whole signal will recover Y.
Given the single amplitude modulation of k*sin(t), you can't get back to k by dividing by sin(t) because of sin(t)'s annoying habit of crossing zero. But if you multiply by sin again and throw in a trig identity:
k * sin(t) * sin(t) = k * sin^2(t) = k * (1/2)(1 - cos(2t)) = k/2 - (k/2) * cos(2t)
... so if you lowpass filter that to kill the cos(2t) term, you get k/2 back out. And then clearly the same logic applies to the cos term because that's just sin with a different phase. And, to be really precise about it:
k * sin(t) * sin(t) + v * cos(t) * sin(t) = ... k as above ... + (v/2)(sin(t + t) + sin(t - t)) = ... + v * sin(2t)/2 + v * sin(0)/2 = ... + v * sin(2t)/2
So composing two out-of-phase signals just adds some more noise in the 2t frequency range, which you also throw away via the lowpass filtering.
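That argument is easy to sanity-check numerically. Averaging the demodulated product over a whole cycle plays the role of the lowpass filter; this is an illustrative sketch, not from any real implementation:

```cpp
#include <cassert>
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Averages k*sin^2(t) + v*cos(t)*sin(t) over n equally spaced samples
// of one subcarrier cycle; the average stands in for the lowpass
// filter, so the result should be k/2 regardless of v.
double demodulatedAverage(double k, double v, int n) {
    double sum = 0.0;
    for (int i = 0; i < n; ++i) {
        const double t = 2.0 * kPi * i / n;
        sum += k * std::sin(t) * std::sin(t) + v * std::cos(t) * std::sin(t);
    }
    return sum / n;
}
```

Both the cos(2t) and sin(2t) terms average to zero over a whole cycle, leaving only the k/2 term.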
I was originally applying another instance of my Kaiser-Bessel filter to the problem, but getting a decent answer out of that created a bit of a bandwidth issue on the GPU: there's only one texture sample for each two source data samples but it still adds up. So I took a hacky shortcut: I sample my original composite wave at four times the colour subcarrier then just perform an exact average over each group of four to recover Y. Which is a bit brute-force, but is effectively a comb filter, I guess.
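That brute-force average can be sketched as follows. At four samples per subcarrier cycle the phases are 0, pi/2, pi and 3*pi/2, so cos(t) and sin(t) collapse to +1, 0, -1, 0 and the whole demodulation reduces to sums and differences. This is a hypothetical sketch assuming the 0.8/0.2 gains from earlier; `decodeQuad` is my own illustrative name, not the emulator's actual GPU code:

```cpp
#include <cassert>
#include <cmath>

struct YUV { double y, u, v; };

// Given four consecutive composite samples taken at subcarrier phases
// 0, pi/2, pi and 3*pi/2 (i.e. sampled at 4x the subcarrier rate), the
// average recovers the luma term, like a comb filter, and the pairwise
// differences recover the chroma terms.
YUV decodeQuad(double s0, double s1, double s2, double s3) {
    const double y = (s0 + s1 + s2 + s3) / 4.0;  // chroma cancels over a cycle
    return {
        y / 0.8,                  // undo the 80% luma gain
        (s0 - s2) / (2.0 * 0.2),  // cos(t) is +1, 0, -1, 0
        (s1 - s3) / (2.0 * 0.2),  // sin(t) is 0, +1, 0, -1
    };
}
```

Encoding Y = 0.5, U = 0.25, V = -0.125 with the 0.8/0.2 gains gives the samples 0.45, 0.375, 0.35, 0.425, and decodeQuad recovers the original triple exactly.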