
Re: b2 - new emulator

Posted: Fri May 19, 2017 10:02 am
by Coeus
ctr wrote:I think you're right. I was confused because SDL renders bitmaps correctly (e.g. the beeb output), and the font is just a bitmap, but the font was coming out wrong. But, as you say, bitmaps rendered through SDL_RenderGeometry aren't using the same code. So it makes sense.
So is this a deliberate feature of SDL? Is there a rationale for why SDL_RenderGeometry is different, or might this be a bug in SDL? If the latter, would it make more sense to temporarily work around it in the embedded version of SDL and submit upstream?

Re: b2 - new emulator

Posted: Fri May 19, 2017 10:05 am
by Rich Talbot-Watkins
Looks much better =D>

It looks as if you're doing the same as jsbeeb now: rendering a "pre-filtered" glyph, stretched from 12 to 16 pixels with 4 colour levels from background to foreground. Like I think we both said, that could look weird when the GPU is then filtering a stretched version of that.
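
For illustration, the sort of pre-filtering being described might look like this (a toy sketch, not jsbeeb's or b2's actual code; the function name and glyph data are invented): each pixel of the 16-wide output row takes its grey level from how much of the 12-wide 1bpp source row it covers.

```python
def prefilter_row(bits, src_w=12, dst_w=16, levels=4):
    """Stretch one row of 1bpp glyph data from src_w to dst_w pixels,
    quantising each output pixel's source coverage to `levels` intensity
    levels (0 = background, levels-1 = foreground)."""
    out = []
    for x in range(dst_w):
        lo = x * src_w / dst_w          # source interval covered by
        hi = (x + 1) * src_w / dst_w    # destination pixel x
        coverage = sum(max(0.0, min(s + 1, hi) - max(s, lo)) * bits[s]
                       for s in range(src_w))
        # Each destination pixel covers src_w/dst_w source units.
        frac = coverage / (src_w / dst_w)
        out.append(round(frac * (levels - 1)))
    return out

print(prefilter_row([1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1]))
```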

Shrinking on the GPU could give better results, but if it's shrunk to less than 50% (as is likely) you'd need to create mipmaps as well or it might not look right.
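
The mipmap point can be illustrated with a toy 2x2 box-filter chain (an illustration only; GPUs generate these levels themselves): once an image is shrunk below 50%, a single bilinear tap skips texels, whereas sampling a pre-averaged half-size level keeps every texel contributing.

```python
def next_mip(img):
    """Halve a greyscale image (list of rows, even dimensions) in each
    dimension with a 2x2 box filter."""
    return [[(img[2*y][2*x] + img[2*y][2*x+1] +
              img[2*y+1][2*x] + img[2*y+1][2*x+1]) / 4.0
             for x in range(len(img[0]) // 2)]
            for y in range(len(img) // 2)]

def mip_chain(img):
    """Full chain down to 1x1 (square, power-of-two dimensions assumed)."""
    chain = [img]
    while len(chain[-1]) > 1:
        chain.append(next_mip(chain[-1]))
    return chain

# A 4x4 checkerboard: a single bilinear tap at quarter size would hit only
# a couple of texels, but the box-filtered levels preserve every texel's
# contribution, giving uniform mid-grey.
board = [[(x + y) % 2 * 255 for x in range(4)] for y in range(4)]
print(mip_chain(board)[1:])  # [[[127.5, 127.5], [127.5, 127.5]], [[127.5]]]
```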

Re: b2 - new emulator

Posted: Fri May 19, 2017 1:24 pm
by tom_seddon
Coeus wrote:
ctr wrote:I think you're right. I was confused because SDL renders bitmaps correctly (e.g. the beeb output), and the font is just a bitmap, but the font was coming out wrong. But, as you say, bitmaps rendered through SDL_RenderGeometry aren't using the same code. So it makes sense.
So is this a deliberate feature of SDL? Is there a rationale for why SDL_RenderGeometry is different, or might this be a bug in SDL? If the latter, would it make more sense to temporarily work around it in the embedded version of SDL and submit upstream?
My repo is the upstream :-| - the RenderGeometry stuff was originally an OS X-and-Linux-only patch attached to a bug report, to which I added Windows D3D9/D3D11 support. Then I uploaded it to GitHub... I get the impression that official SDL isn't interested until it has a software renderer, so that is probably its home for now.

But you're quite right about RenderGeometry, of course. The rationale for having it pass the data straight through, unprocessed, was that this was sort of its spec (insofar as it has one) - but really it might as well just add the offset when appropriate, since it's probably the right thing to do for virtually all probable uses.

I can add a toggle to switch it off.

--Tom

Re: b2 - new emulator

Posted: Thu Oct 12, 2017 1:35 pm
by sbadger
Hi Tom.

Is there any chance of ever being able to use shaders?
There is a shader called crt-royale that can simulate a CRT, and it does it very well; it's the best there is at the moment. On displays over 1080p it's remarkable!

Here is a wiki page on it, with an example screenshot from a SNES game:
https://emulation.miraheze.org/wiki/CRT ... CRT-Royale

stew

Re: b2 - new emulator

Posted: Sat Oct 14, 2017 10:02 pm
by tom_seddon
Maybe at some point, but there's a bunch of internal stuff I need to fix first, and there's some stuff actively on the roadmap that I'll be doing before it: joysticks, UI revamp, debugger, 6502 second processor, save state.

It won't be on all platforms, at least not initially. SDL doesn't support shaders or render targets natively, and I'm using SDL to do pretty much everything. Any rendering stuff it doesn't support has to be written once for each combination of platform and graphics API. So I'll probably do this for one of the D3Ds first, probably D3D9 if the shaders will work (since SDL picks D3D9 by preference if it's available, and I expect support is a bit wider-spread, assuming anybody's still even using a non-D3D11 GPU...), or D3D11 if not. Then see how I feel about doing OpenGL after that :)

--Tom

Re: b2 - new emulator

Posted: Sat Oct 14, 2017 10:33 pm
by Lion
Those kinds of shaders usually emulate NTSC televisions/monitors, don't they? PAL screens look a little different.

Re: b2 - new emulator

Posted: Sun Oct 15, 2017 9:05 am
by sbadger
Lion wrote:Those kinds of shaders usually emulate NTSC televisions/monitors, don't they? PAL screens look a little different.
Some yes, but the more advanced shaders simulate most aspects of CRTs. CRT-Royale specifically is so configurable that people have come up with different configs to match specific makes and models of unit (Sony PVM etc.).

http://i.picpar.com/rjV.png - e.g. this isn't actually a Sony PVM screenshot, but shader settings.

Re: b2 - new emulator

Posted: Wed Apr 18, 2018 2:09 am
by tom_seddon
I've set up a continuous integration/rolling builds type of affair for b2. This just gets the latest code whenever there's a change, and tries to build it and make a release out of it.

Windows is up and running, so you can now always get a version with the latest code. Details here: https://github.com/tom-seddon/b2#rolling-windows-builds

There's also an OS X build, which appears to work, but the output file is (so far) just discarded: https://travis-ci.org/tom-seddon/b2

Linux users don't seem to really be into binary builds, so they'll continue to have to build it themselves...

Originally I'd planned for this to just be a way of producing a random zip that you could download at your own risk, but getting it part-working has got me thinking, and I'm now probably going to abandon manual releases. (They're a bit of a pain to do.) Instead, I'll set the CI servers up to just publish every successful build to the GitHub releases page - which appears to be a thing you can do - and then anybody that wants a binary build can grab the latest one. There won't be version numbers; instead each build will be named by its git commit hash. Might add a build date in there or something, so you can tell whether one build is older or newer than another. And I'll change my workflow a bit, to reduce the likelihood of the releases page getting swamped by piles of crappy broken versions that don't run...

--Tom

Re: b2 - new emulator

Posted: Wed Apr 18, 2018 5:52 am
by tricky
Thanks Tom. I'm happy to build from VS on Windows, but coming from a trusted source (like you), assuming others can't inject their stuff, this provides a much lower barrier to entry without removing any options.

Re: b2 - new emulator

Posted: Mon Apr 23, 2018 10:01 pm
by tom_seddon
The continuous integration stuff is now set up, and appears to work, and so there's a new build for OS X and Windows on the releases page: https://github.com/tom-seddon/b2/releases

As well as the GitHub releases, a release is still prepared for Windows for every commit, as above: https://github.com/tom-seddon/b2#rolling-windows-builds

Now that this process is automated, new versions should come more regularly...

Open GitHub issues remain open, along with some others that I haven't entered in just yet! The CI stuff is the only thing that's changed recently.

--Tom

Re: b2 - new emulator

Posted: Mon Apr 23, 2018 11:12 pm
by Elminster
You'll make it too easy, soon everyone will be using B2 :)

I am using Jenkins to do (nearly) CI for Linux builds of B2 inside a Docker container. (Nearly, because GitHub push hooks won't work when GitHub can't see your CI server, and my PC isn't always on.) So I poll every 15 mins.

Next step is a Jenkins agent on a Raspberry Pi to build ARM B2 automatically.

Not looked at any testing within the CI build, other than a simple 'error'/'no error'. A job for another day.

Re: b2 - new emulator

Posted: Tue Apr 24, 2018 8:00 am
by pau1ie
I built b2 on Arch Linux last night, and it worked fine. I already had the required libraries installed, so I only had to install ninja. I only had a quick look around, but everything seemed to work.

Re: b2 - new emulator

Posted: Tue May 08, 2018 5:03 pm
by Coeus
ctr wrote:Ada and Modula 2 are compiled languages that have coroutines and no garbage collection. (Then I thought, and surely Modula 3? But it doesn't.) If you're not fussed about garbage collection there are go and Haskell. Iterators in C# also work as a poor man's version.

I guess real threads wouldn't be any good because there's far too much communication needed between the components.
I don't know if language-based co-routines do this, but an important requirement is that the co-routines remain synchronised with each other. If they were each to proceed at their own pace, some things would not work. The obvious case is where a game uses a timer to re-program the CRTC part-way down the frame. If the co-routine running the CRTC emulation were to gain on the one emulating the timer, the point at which the display changed would move up the screen. If threads were synchronised once per frame, it would then just be a case of having swapped drift for jitter.
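
This lock-step requirement can be sketched with Python generators standing in for co-routines (a toy model with invented component names): each component yields after every cycle, and the scheduler ticks them strictly in turn, so the cycle at which the 'CRTC' observes the timer's mid-frame reprogram is always the same.

```python
def timer(state):
    """At cycle 5, 'reprograms the CRTC' mid-frame."""
    cycle = 0
    while True:
        if cycle == 5:
            state['split'] = True
        cycle += 1
        yield

def crtc(state, log):
    """Logs the cycle at which it first observes the reprogram."""
    cycle, seen = 0, False
    while True:
        if state.get('split') and not seen:
            log.append(cycle)
            seen = True
        cycle += 1
        yield

state, log = {}, []
components = [timer(state), crtc(state, log)]
for _ in range(10):            # one toy 'frame' of 10 cycles
    for c in components:       # strict lock-step: one cycle each, in turn
        next(c)
print(log)                     # [5] -- deterministic; if the CRTC could
                               # run ahead, this point would drift
```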

Re: b2 - new emulator

Posted: Tue May 08, 2018 5:12 pm
by Coeus
tom_seddon wrote:But you're quite right about RenderGeometry, of course. The rationale for having it pass the data straight through, unprocessed, was that this was sort of its spec (insofar as it has one) - but really it might as well just add the offset when appropriate, since it's probably the right thing to do for virtually all (or more) probable uses.
Back to the issue of this half-pixel offset: I found an explanation at https://magcius.github.io/xplain/article/rast1.html in the section "SAMPLE LOCATION". It transpires that if you are copying an existing bitmap to your final bitmap you don't want a half-pixel offset, whereas for drawing primitives you do.
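
The pixel-centre convention from that article can be shown in a couple of lines (a toy nearest-neighbour sampler, nothing to do with SDL's actual code): samples are taken at x + 0.5, so a 1:1 bitmap copy lands texel-for-texel only if no extra half-pixel offset is applied to the geometry.

```python
def sampled_texels(width, tex_width, offset=0.0):
    """Texel hit by a nearest-neighbour sample for each destination pixel,
    sampling at pixel centres (x + 0.5), plus any extra geometry offset."""
    return [int((x + 0.5 + offset) * tex_width / width) for x in range(width)]

# 1:1 copy: centre sampling already hits texel x for pixel x...
print(sampled_texels(8, 8))              # [0, 1, 2, 3, 4, 5, 6, 7]
# ...so adding a stray half-pixel offset shifts the whole copy by a texel
# (and reads off the end of the texture):
print(sampled_texels(8, 8, offset=0.5))  # [1, 2, 3, 4, 5, 6, 7, 8]
```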

Re: b2 - new emulator

Posted: Tue May 08, 2018 8:10 pm
by Rich Talbot-Watkins
Coeus wrote: I don't know if language-based co-routines do this but an important requirement is that the co-routines remain synchronised with each other. if they were to each proceed at their own pace some things would not work. The obvious case is where a game uses a timer to re-program the CRTC part-way down the frame. If the co-routine running the CRTC emulation were to gain on the one emulating the timer the point at which the display changed would move up the screen. If threads were synchronised once per frame it would then just be a case of having swapped drift for jitter.
I have the beginnings of an emulator framework (which could best be described as a 6502 + VIAs + CRTC simulator right now) which also goes for the 'tick each component in turn, cycle-by-cycle' approach. This is just using a fairly traditional state machine in C++ (the generated 6502 state machine ends up being a big switch with 560 cases!). But using a co-routines approach it'd be a bit neater; though, even with parallel stack frames as a language feature, I'm not sure if it'd actually be quicker.

Anyway, with co-routines you'd retain synchronisation by going for exactly the same kind of approach - run one cycle's worth of simulation, and then yield to the next co-routine. It's just cooperative threading, but with the readability advantage that you can write the logic linearly, yielding after each cycle's worth of simulation.

Re: b2 - new emulator

Posted: Tue May 08, 2018 8:15 pm
by ThomasHarte
Coeus wrote:
ctr wrote:Ada and Modula 2 are compiled languages that have coroutines and no garbage collection. (Then I thought, and surely Modula 3? But it doesn't.) If you're not fussed about garbage collection there are go and Haskell. Iterators in C# also work as a poor man's version.

I guess real threads wouldn't be any good because there's far too much communication needed between the components.
I don't know if language-based co-routines do this but an important requirement is that the co-routines remain synchronised with each other. if they were to each proceed at their own pace some things would not work. The obvious case is where a game uses a timer to re-program the CRTC part-way down the frame. If the co-routine running the CRTC emulation were to gain on the one emulating the timer the point at which the display changed would move up the screen. If threads were synchronised once per frame it would then just be a case of having swapped drift for jitter.
In ElectrEm I used separate threads to achieve coroutines with the caveat that from time n, whether any component will affect any other is completely knowable upfront except in the case of the CPU. So the process to run for q cycles was: calculate the largest number less than q before I know for certain that no component will contact another. Ask the CPU to run for that many cycles. If it exits early and says it ran for only p cycles before being about to make contact, update all other components to p, then resume. I was serialising them all though — the thread side of things was just to gain an additional call stack so that my 6502 code could be read from top to bottom as if a normal opcode-level implementation, but actually be cycle correct.
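
That scheduling loop might be sketched like this (a loose paraphrase with invented classes, not ElectrEm's code): the non-CPU component reports how long until it next 'contacts' anything, the CPU runs for at most that window and may stop early at a shared access, and everything else is then caught up to wherever the CPU stopped.

```python
class Timer:
    """A toy device whose only 'contact' is an interrupt every `period` cycles."""
    def __init__(self, period):
        self.period, self.cycles = period, 0
    def cycles_until_contact(self):
        return self.period - (self.cycles % self.period)
    def run(self, n):
        self.cycles += n

class CPU:
    """A toy CPU that stops early when a shared (MMIO) access is due."""
    def __init__(self, io_at):
        self.cycles, self.io_at = 0, sorted(io_at)
    def run(self, n):
        nxt = next((c for c in self.io_at
                    if self.cycles < c <= self.cycles + n), None)
        ran = n if nxt is None else nxt - self.cycles
        self.cycles += ran
        return ran

cpu, timer = CPU(io_at=[7, 30]), Timer(period=10)
done, total = 0, 40
while done < total:
    # Largest window in which no non-CPU component contacts anything.
    window = min(total - done, timer.cycles_until_contact())
    ran = cpu.run(window)      # may exit early at a shared access
    timer.run(ran)             # catch the other components up to p cycles
    done += ran

print(cpu.cycles, timer.cycles)   # in step at every contact point
```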

One of the Mega Drive emulators is even smarter. To run for q cycles:
  • have all components store their state;
  • ask all to run for q cycles, in parallel;
  • ask whether any tried to access a shared resource during that period;
  • if so, restore the stored states and try again with the now-known smaller window. Then continue from there.
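
The save/run/check/roll-back loop above could be sketched as follows (a toy model with invented classes; the real emulator is of course far more sophisticated):

```python
import copy

class Part:
    """A toy component that runs freely but flags when it touched the bus."""
    def __init__(self, touches_at=None):
        self.cycles, self.touches_at, self.touched = 0, touches_at, False
    def run(self, n):
        start, self.cycles = self.cycles, self.cycles + n
        if self.touches_at is not None and start < self.touches_at <= self.cycles:
            self.touched = True

def run_window(parts, q):
    """Optimistically run everyone for q cycles; on shared access, restore
    the saved states and re-run up to the first access, then continue."""
    while q > 0:
        saved = [copy.deepcopy(p) for p in parts]
        for p in parts:
            p.touched = False
            p.run(q)                         # 'in parallel'
        offenders = [p for p in parts if p.touched]
        if not offenders:
            return
        first = min(p.touches_at for p in offenders)
        for p, s in zip(parts, saved):       # roll back
            p.cycles, p.touched = s.cycles, s.touched
        safe = first - parts[0].cycles       # all parts are in step here
        for p in parts:
            p.run(safe)    # the access now happens with everyone in sync
        q -= safe

parts = [Part(), Part(touches_at=6)]
run_window(parts, 10)
print([p.cycles for p in parts])   # [10, 10]
```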
ElectrEm also did a thing whereby the 6502 knew how to obey and interleave a suitably specific list of memory fetches and chuck them into a buffer. So upon each state change, the video stuff just posted a new list to the CPU and then at end-of-frame it produced the final display, correlating to a timestamped list of palette and mode events. You'd obviously need to do something like that if you actually wanted to spread out across threads.

Clock Signal is a bit more ad hoc; all audio generation and video interpretation is trivially boxed off into separate threads but right now each machine is internally serialised. Or, at least, overwhelmingly so. I use just-in-time processing wherever possible, e.g. a count is kept of how long since the WD1770 was last asked to do anything and attempting to read any of its registers will suddenly make it run for that many cycles prior to being asked for its read.
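
The just-in-time idea reduces to a few lines (toy class, invented names): ticking is mere book-keeping, and the expensive simulation only runs when a register is actually observed.

```python
class LazyChip:
    """Runs only when observed: cycles are banked until a register access
    forces a catch-up (the WD1770 example from the post)."""
    def __init__(self):
        self.owed = 0          # cycles elapsed since we last really ran
        self.ran = 0           # cycles actually simulated
    def tick(self, n):
        self.owed += n         # cheap: just book-keeping
    def read_register(self):
        self.ran += self.owed  # expensive simulation happens here, once
        self.owed = 0
        return self.ran

chip = LazyChip()
chip.tick(1000)
chip.tick(500)
print(chip.read_register())    # 1500: all owed cycles run in one batch
```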

A slightly softened version applies to user-visible outputs like video collection; that doesn't happen unless or until either the processor is about to write to RAM or to a video register, or the processor reaches the end of the amount of time it's currently supposed to run for, in which case video collection catches up.

I've a mentally-scheduled task which is a pretty simple version of that: push the catch-ups off into asynchronous land. As long as I block until they're all completed before I start the next iteration of the processing loop, life is good. It's not as parallel as if all were running at the same time like real chips, but it should parallelise a bunch of subsystems.

I'm still weighing the Mega Drivey approach mentioned above where there are two or more unpredictable actors with a shared resource. I guess that'd be what a BBC emulator should do to handle the tube. Probably with a drop back to ordinary serialisation upon any communication, which reverts back to reduced-length parallelisation and then increasingly more confident steps only after the two actors seem not to have talked for a certain threshold?

The most similar situation I currently model is the Vic-20 plus C1540, which amounts to two 6502s with a shared serial bus, so my thinking may be unduly boxed in by the specific.
Rich Talbot-Watkins wrote:I have the beginnings of an emulator framework (which could best be described as a 6502 + VIAs + CRTC simulator right now) which also goes for the 'tick each component in turn, cycle-by-cycle' approach. This is just using a fairly traditional state machine in C++ (the generated 6502 state machine ends up being a big switch with 560 cases!). But using a co-routines approach it'd be a bit neater; though, even with parallel stack frames as a language feature, I'm not sure if it'd actually be quicker.

Anyway, with co-routines you'd retain synchronisation by going for exactly the same kind of approach - run one cycle's worth of simulation, and then yield to the next co-routine. It's just cooperative threading, but with the readability advantage that you can write the logic linearly, yielding after each cycle's worth of simulation.
ElectrEm uses the coroutine approach — as above, the processor exists on its own thread, blocking itself in order to yield. Clock Signal uses the state machine approach, though there's only 117 things in its switch statement*. I don't think there's a substantial difference in performance from that angle other than that ElectrEm skips 90% of the overhead via its how-long-until-you-interfere-with-somebody-else scheduling of non-CPU components.

I'm pretty sure the main performance impediment in a modern 8-bit machine emulator is thrashing the instruction cache, branch prediction tables, etc by constantly jumping all over the place. All those jarring transitions from the CPU code and data set to the CRTC code and data set, to the SN76489 code and data set, etc, etc. Especially if you do it strictly as perform a cycle here, perform a cycle there, etc.

* I think I generate mine in a very different sense: the things the switch can hit were selected and implemented manually; what's automatically generated is the table from opcodes to micro-ops, which is nothing beyond the abilities of the C preprocessor. I directly have a list of 256 entries that looks like ZeroXWrite(OperationSTY) or equivalent, but since the list is installed once at machine construction it'd be easy to automate that too. The z80 analogue does so to an extent.

Re: b2 - new emulator

Posted: Tue May 08, 2018 9:42 pm
by Rich Talbot-Watkins
I've contemplated exactly the approach you took for ElectrEm in the past - determining the longest amount of time you can run everything without some kind of interaction, and then letting the CPU abort earlier if it has to. At the time, my instinct was that the additional overhead of managing all this would negate the savings, because typically you can only run the CPU for maybe 10-20 cycles on average before a store requires aborting so that the video system can catch up. But you're probably right that I-cache thrashing is far more prejudicial to performance than the overhead of a rudimentary task manager. I guess it would have to be measured.

The Megadrive emulator approach sounds fairly wasteful at first thought, but I guess it depends again on the kind of contention that typically exists between each component.
ThomasHarte wrote:Clock Signal uses the state machine approach, though there's only 117 things in its switch statement*.
My approach with the 6502 emulation was just to have a C++ backend identify unique CPU states and generate a source file accordingly. If you consider the beginning of an instruction to be the cycle after the opcode fetch, and the end of an instruction to be the next opcode fetch, essentially you have 256 graphs with varying numbers of nodes, all sharing a final node. Then you can coalesce similar tails, and you get your final state machine. (It would have been smaller had I not decided to distinguish zp, stack and 'other' memory accesses, so that the host interface could take advantage of 'simpler' zp or stack accesses which don't have to handle memory mapped I/O).
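
The tail-coalescing can be modelled by treating each instruction as a sequence of state labels: two states merge exactly when their label and everything after them match, i.e. distinct states correspond to distinct non-empty suffixes. A toy sketch (invented state names, nothing like a real 6502 state set):

```python
def unique_states(instructions):
    """States remaining after coalescing identical tails: distinct states
    correspond to distinct non-empty suffixes of the state sequences."""
    suffixes = set()
    for states in instructions:
        for i in range(len(states)):
            suffixes.add(tuple(states[i:]))
    return len(suffixes)

# Toy 'instructions', all sharing the final opcode-fetch node:
lda_zp  = ['fetch_zp_addr', 'read_zp',   'fetch_opcode']
ldx_zp  = ['fetch_zp_addr', 'read_zp_x', 'fetch_opcode']
lda_abs = ['fetch_abs_lo', 'fetch_abs_hi', 'read_abs', 'fetch_opcode']

total  = sum(map(len, [lda_zp, ldx_zp, lda_abs]))
merged = unique_states([lda_zp, ldx_zp, lda_abs])
print(total, merged)   # 10 8: the shared tails collapse
```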

Re: b2 - new emulator

Posted: Tue May 08, 2018 10:22 pm
by Coeus
ThomasHarte wrote:In ElectrEm I used separate threads to achieve coroutines with the caveat that from time n, whether any component will affect any other is completely knowable upfront except in the case of the CPU. So the process to run for q cycles was: calculate the largest number less than q before I know for certain that no component will contact another. Ask the CPU to run for that many cycles. If it exits early and says it ran for only p cycles before being about to make contact, update all other components to p, then resume...
So in that model, before adding the memory list you describe next, the processor would stop when it was about to access any memory mapped I/O or was about to write to video memory.

For other devices, doesn't it pretty much come down to "would you raise an interrupt in the next n clock cycles?" Or maybe "how many cycles to the next interrupt?"
ThomasHarte wrote:ElectrEm also did a thing whereby the 6502 knew how to obey and interleave a suitably specific list of memory fetches and chuck them into a buffer. So upon each state change, the video stuff just posted a new list to the CPU and then at end-of-frame it produced the final display, correlating to a timestamped list of palette and mode events. You'd obviously need to do something like that if you actually wanted to spread out across threads.
This is a really interesting approach. From my experience of B-Em, the two things that are CPU intensive are the CPU emulation and the video processing, so if you have a processor with two slow cores rather than at least one fast core these would be the two you'd want to split into separate threads. I am also wondering if this is actually a one-way pipeline. You mentioned the video implementation sending a memory list back to the CPU, but is this just to re-use the buffers? If the CPU were to allocate whatever data structure is used to hold timestamped memory writes and put these on a queue to the video implementation, and the video implementation de-queued them, processed them, and just freed them when it finished, there would presumably be no need to explicitly synchronise those two threads, though there would presumably be some locking around the queue and the free memory pool.
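
That queue arrangement might look like this in outline (a toy sketch using Python's thread-safe queue; the addresses and values are invented): the CPU side enqueues timestamped writes and carries on, and the video side drains them with no other synchronisation.

```python
import queue, threading

writes = queue.Queue()     # timestamped (cycle, address, value) records
frame = {}

def video_thread():
    """Drains writes and 'processes' them; None marks end of frame."""
    while True:
        record = writes.get()
        if record is None:
            break
        cycle, addr, value = record
        frame[addr] = value

consumer = threading.Thread(target=video_thread)
consumer.start()

# CPU side: just enqueue and carry on. The queue is the only shared
# structure, so it's also the only thing needing locking (queue.Queue
# does that internally).
for record in [(10, 0x3000, 0xFF), (25, 0x3001, 0x0F)]:
    writes.put(record)
writes.put(None)           # end of frame
consumer.join()
print(sorted(frame.items()))
```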

Re: b2 - new emulator

Posted: Wed May 09, 2018 3:43 pm
by ThomasHarte
The advantage in ElectrEm of having the video post a list of required addresses to the CPU which dumbly followed them is that the list was usually the same frame-to-frame as mode splits in general are rare, and mode splits that change the addressing are even rarer. So the most common operating case was that having updated the fetch list once upon the most recent mode change, the video circuits then didn't need to do anything ever again other than interpret the collected list right at the end.

It used a one-way pipeline for palette changes, but not for memory writes, for no reason other than that as a C++ dunce with a pre-STL book I'd written my own lists and so on, and come up with a bad implementation. Filling my list structure with all the writes proved to be far too inefficient. But that really speaks only to my c.2000 abilities; I wouldn't generalise from it.

Possibly the only relevant thing Clock Signal does is that the video declares a watch zone — the range of addresses it is potentially interested in. The bus does its just-in-time update of video only if a write occurs in that region. Since it does not normally include the zero or stack pages, that saves a lot of work. It's just a broad-phase test, and one can imagine that another potential avenue of exploration might be looking at revisions of that. Rather than announcing a single range, what if it declared a range and a time limit after which you have to ask again? (Given that any communications posted to the video system already invalidate the range, that is.) Then it could drop to a zero range during vertical blank and sync, and even proceeding with something as relatively coarse as a quarter of the display at a time would significantly reduce the number of wake events or write logs.
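
The broad-phase test itself is tiny (a toy sketch; the address range is a guess at 'screen RAM', not Clock Signal's actual numbers):

```python
class WatchZone:
    """Broad-phase test: only writes inside the declared range force a
    just-in-time video catch-up."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
        self.catch_ups = 0
    def write(self, addr):
        if self.lo <= addr < self.hi:
            self.catch_ups += 1    # narrow phase: bring the video up to date

zone = WatchZone(0x3000, 0x8000)   # 'screen RAM'; excludes zero page/stack
for addr in (0x0042, 0x01FF, 0x3000, 0x5800, 0x00A0):
    zone.write(addr)
print(zone.catch_ups)              # 2: the zero-page/stack writes are skipped
```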

EDIT: oh, l'esprit d'escalier: in Clock Signal I'm applying the palette on the CPU but then Electron video data gets turfed over to the GPU at 4bpp — two pixels per byte. And always two independent pixels per byte; Mode 5 submits half as much data as Mode 4, Mode 0 supplies twice as much. Lines with multiple modes on them require multiple submissions. The GPU unpicks all that and paints the display. So actually I have got quite a lot of the classic screen painting going on in a separate thread, certainly including most of the physical byte stuffing for modern-colour-depth displays.
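
The two-pixels-per-byte submission could be sketched like this (the nibble order is my assumption, purely illustrative):

```python
def pack_4bpp(pixels):
    """Pack palette-resolved pixels (0-15) two to a byte, high nibble first."""
    assert len(pixels) % 2 == 0
    return bytes((a << 4) | b for a, b in zip(pixels[::2], pixels[1::2]))

def unpack_4bpp(data):
    """What the GPU side would do: split each byte back into two pixels."""
    out = []
    for byte in data:
        out += [byte >> 4, byte & 0x0F]
    return out

line = [0, 7, 7, 0, 15, 15, 1, 1]
packed = pack_4bpp(line)
print(packed.hex())                # '0770ff11'
assert unpack_4bpp(packed) == line
```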

Re: b2 - new emulator

Posted: Tue May 15, 2018 12:49 am
by tom_seddon
Rolling builds are now available for OS X. There's also a new home for the builds, so Windows and OS X versions now come from the same place: https://github.com/tom-seddon/b2#rolling-builds

--Tom

Re: b2 - new emulator

Posted: Tue May 15, 2018 1:44 am
by tom_seddon
Rich Talbot-Watkins wrote:My approach with the 6502 emulation was just to have a C++ backend identify unique CPU states and generate a source file accordingly. If you consider the beginning of an instruction to be the cycle after the opcode fetch, and the end of an instruction to be the next opcode fetch, essentially you have 256 graphs with varying numbers of nodes, all sharing a final node. Then you can coalesce similar tails, and you get your final state machine. (It would have been smaller had I not decided to distinguish zp, stack and 'other' memory accesses, so that the host interface could take advantage of 'simpler' zp or stack accesses which don't have to handle memory mapped I/O).
b2 divides the states up by instruction type + addressing mode, roughly speaking. So there's one set of states for immediate read instructions, one for zero page, and so on. One function per cycle, mostly autogenerated from a handwritten set of states (https://github.com/tom-seddon/b2/blob/0 ... n.cpp#L922), with a few stragglers done by hand (e.g., https://github.com/tom-seddon/b2/blob/3 ... 502.c#L980). There's 278 of these in total (covering both NMOS- and CMOS-type CPUs).

For each specific instruction - ADC, ASL, etc. - there's a function that's called from the relevant state function at the appropriate point. These are all hand-written, and have slightly different parameters and contracts depending on the instruction type: https://github.com/tom-seddon/b2/blob/m ... 502.c#L260. There's 76 for the standard NMOS and CMOS instructions, and 19 for the undocumented NMOS ones.

A 256-entry autogenerated table for each CPU type ties this all together, holding the instruction function and initial state function for each instruction.

I just ignored the question of duplicated states initially, but I think in fact the C++ linker can save me. Even when the code is like mine, and each state sets up the next state itself (meaning the code for each state is unique), the linker ought to be able to find and merge, at least in principle, any shared suffix: because if the final non-unique states of two sequences of states are in fact identical, which in my case they typically are, and they become merged, then the penultimate states then become potentially identical too - and so on.

But this is something I only realised while thinking about this thread, so I haven't actually checked yet that this is happening ;)

--Tom

Re: b2 - new emulator

Posted: Wed May 16, 2018 1:41 am
by tom_seddon
tom_seddon wrote:But this is something I only realised while thinking about this thread, so I haven't actually checked yet that this is happening ;)
I've checked now, albeit only briefly, and it looks like the linker is doing the right thing, at least with VC++. (Haven't checked gcc/clang yet.) Quite a lot of state functions get merged! I wasn't expecting a couple of the cases:

The end of an abs,Y instruction is the same as the end of a (zp),Y instruction, and the end of an abs instruction is the same as the end of a (zp,X) instruction, so a number of pairs of states got merged thanks to these.

Cycle 2 of a JMP is the same as cycle 5 of a JSR, and the last cycle of indirect JMP is the same as the last cycle of interrupt/BRK/reset.

Various combinations of the states that finish the CMOS BCD instructions (i.e., adding an extra cycle in BCD mode) turned out to be identical.

Maybe there's scope for a bit more merging, with some tweaks to the generated code? - I'll have to see if this stuff actually makes a difference to the performance first, though. My instinct is that it won't be noticeable, assuming it's even measurable, since b2 was written assuming a modern, fast PC... which is a nice way of saying it's not very efficient. If the CPU emulation is a bit faster or slower, it probably won't mean much...

--Tom

Re: b2 - new emulator

Posted: Fri May 25, 2018 10:00 am
by richmond62
I have just "had a bash" with the latest b2 Mac build

https://github.com/tom-seddon/b2#rolling-builds

on Mac OS 10.7.5 with no joy, I'm afraid.

I tried to run both "b2" and "b2 Debug".

Re: b2 - new emulator

Posted: Fri May 25, 2018 11:12 am
by Elminster
I think Tom might need more debug info. Works okay on my Mac.

Or if you are familiar with Docker you could use my Docker build, but I haven't finished documenting that yet. See the Acorn Docker thread, but I suspect you won't want to do that.

viewtopic.php?f=12&t=15031

Edit: it was working last week anyway; I've not tried running it natively on Mac this week.

Re: b2 - new emulator

Posted: Fri May 25, 2018 9:29 pm
by richmond62
Elminster wrote:Works okay on my Mac.
I am using a family of polycarbonate iMacs (the first Intel ones) from around 2006 running Mac OS 10.7.5.

Possibly our Macintosh computers may differ.

Re: b2 - new emulator

Posted: Fri May 25, 2018 9:32 pm
by Elminster
Ah, you will have problems then. I finally retired my Mum's 10.7.x iMac last week, and gave her my old iMac with High Sierra on it. There are so many security issues on 10.7 that I dare not let her connect it to the internet any more. And half the websites used to tell her her browser was too old. Most of the other browsers refused to install, she couldn't install any security software, no security updates from Apple etc. Very scary.

Good luck on that one. I gave up on 10.7.

Re: b2 - new emulator

Posted: Sat May 26, 2018 9:25 am
by richmond62
Well . . .

for anyone who can put up with one of my wibbles :)

1. Running Mac OS 10.7.5
2. Running Avast antivirus.
3. Using Waterfox as my browser [https://www.waterfoxproject.org/en-US/].

4. My wife has an identical machine in her study.

5. Run several more of these in my school.

6. Horizon 1.3.9 [http://www.bannister.org/software/horizon.htm]

I regularly access the internet with my polycarbonate G5 iMac running Mac OS 10.5!

Re: b2 - new emulator

Posted: Sat May 26, 2018 9:34 am
by Elminster
Oh, it will work, but you would give security professionals nightmares. I find the Intego podcast good for scaring me :)

The other issue is Apple generally only support N+1 versions of the OS. It was quite unusual when they rolled out the Spectre/Meltdown ‘fixes’ to N+3 (10.9 I think they went to). Also things like Xcode, key to building stuff on Mac, move on.

Also, very soon Apple will stop support for 32-bit apps, so old apps are likely to be recompiled with the latest Xcode as 64-bit.

Edit: But then I guess we are all using stuff not supported since the 80s anyway.

Re: b2 - new emulator

Posted: Sat May 26, 2018 10:33 pm
by tom_seddon
richmond62 wrote:
Fri May 25, 2018 10:00 am
I have just "had a bash" with B2 latest Mac build

https://github.com/tom-seddon/b2#rolling-builds

on Mac OS 10.7.5 with no joy, I'm afraid.

I tried to run both "b2" and "b2 Debug".
Thanks for the report. It's supposed to work on OS X 10.7+.

Does anything happen at all when you try to run it?

If you run the Console app, is anything printed there when you start b2?

Does GitHub release 0.0.4 (https://github.com/tom-seddon/b2/releases/tag/0.0.4) work for you? This is a bit outdated now but I built this version on my Mac rather than on the continuous integration server.

Thanks,

--Tom

Re: b2 - new emulator

Posted: Sat May 26, 2018 11:03 pm
by tom_seddon
Elminster wrote:
Sat May 26, 2018 9:34 am
The other issue is Apple generally only support N+1 version of OS. Was quite unusual when they ran out the spectre/meltdown ‘fixes’ to N+3 (10.9 I think they went to). Also things like Xcode, key to building stuff on Mac, moves on.

Also very soon Apple will stop support for 32 bit apps, so old apps are likely to be recompiled with latest Xcode as 64 bit.

Edit: But then I guess we are all using stuff not supported since 80s anyway.
b2 is a 64-bit app, so I don't need to worry about the 32-bit OS X cull.

As far as OS versions go, I plan on supporting whatever's easy to do, given that this is a project I'm doing for fun and I don't have that many devices to test on ;)

b2 currently supports OS X 10.7 because that's the earliest anybody's mentioned, and in theory it's just a couple of extra command line options for the compiler, making support easy enough to add. But at some point I assume Xcode will stop supporting it, or the CI server will stop supporting it, or one of b2's dependencies will stop supporting it... and then b2 will stop supporting it too.

--Tom