Why did Acorn ignore Motorola?

Post by Coeus »

arg wrote:
Tue Sep 14, 2021 1:00 pm
So really it's the decision to split the MMU from the CPU or not that forces the decision - MIPS couldn't have taken the Acorn approach given they wanted it on the main CPU, Acorn could have taken the MIPS approach but it would have been more awkward for them (especially if not actually wanting to implement demand-paged virtual memory).
By demand-paged VM are you referring specifically to pages that have been moved out to disc and are brought back into RAM as instructions try to use them?

Is there another consideration here of context switch overhead? If you are required to, or choose to, program the whole virtual to physical address mapping into the MMU each time the execution context changes isn't that going to introduce a noticeable overhead?

By comparison, if the OS simply has to change a register that points to a page table in RAM and the hardware then assembles the necessary cache automatically then context switching will be fast, though the first time a page is touched in a process that has just started running again will be slower.
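A toy sketch of what I mean (illustrative Python, not modelling any real CPU's table walker):

```python
# Hedged sketch: a hardware-walked TLB where a context switch only
# repoints the page-table base register.  Entries are refilled lazily
# from the in-RAM table on a miss, so switching is cheap, but the first
# touch of each page after a switch pays a miss penalty.

class WalkedTLB:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.entries = {}          # virtual page -> physical page (the TLB)
        self.page_table = {}       # the "in RAM" table for the current context
        self.misses = 0

    def switch_context(self, page_table):
        """Cheap: repoint the base register and invalidate the TLB."""
        self.page_table = page_table
        self.entries.clear()

    def translate(self, vpage):
        if vpage in self.entries:
            return self.entries[vpage]          # TLB hit
        self.misses += 1                        # miss: hardware walks the table
        ppage = self.page_table[vpage]
        if len(self.entries) >= self.capacity:  # crude eviction policy
            self.entries.pop(next(iter(self.entries)))
        self.entries[vpage] = ppage
        return ppage

tlb = WalkedTLB()
tlb.switch_context({0: 7, 1: 3})
assert tlb.translate(0) == 7     # first touch: miss, then a walk
assert tlb.translate(0) == 7     # second touch: a hit
assert tlb.misses == 1
```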
Post by paulb »

arg wrote:
Tue Sep 14, 2021 1:00 pm
Certainly both systems take a fault for software to patch things up when a target address isn't found, so both systems need restartable instructions in the CPU. There's really not so much difference between them at all; the MIPS TLB and Acorn's page tables store exactly the same info, just that the TLB is a cache onto a larger table stored (arbitrarily) elsewhere while Acorn keeps the whole table on-chip - but that chip isn't the CPU chip. So Acorn's approach uses more silicon area, but that may not be cost-prohibitive as it's spread across multiple chips.
I guess the MIPS TLB is more expensive to implement per entry because it is an arbitrary mapping between virtual and physical pages (or actually page pairs on MIPS, but anyway), whereas the MEMC merely provides a table of virtual page addresses for each of the physical pages, which as people may have noted at the time was the reverse of what they anticipated (and precluded several virtual pages referencing the same physical page). Then again, I think the largest TLBs on MIPS are something like 64 entries, and even more recent MIPS-based SoCs that I have used seem to have only 16 entries. And I think the software-based approach to populating the TLB seems to be able to work towards reducing the hardware demands, so that nobody misses a 128-entry TLB.
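As a toy illustration of the inversion (hypothetical Python, just to show the structural difference):

```python
# Hedged sketch of an MEMC-style inverted page table: one entry per
# *physical* page recording which virtual page it currently holds,
# searched associatively on every access.  Note the structural
# consequence mentioned above: since each entry names at most one
# virtual page, aliasing one physical page at two virtual addresses
# is impossible.

N_PHYS_PAGES = 128   # the size of MEMC's CAM

class InvertedPageTable:
    def __init__(self):
        # table[ppage] = virtual page held there (None = unmapped)
        self.table = [None] * N_PHYS_PAGES

    def map(self, ppage, vpage):
        self.table[ppage] = vpage

    def translate(self, vpage):
        # the CAM: in hardware, vpage is compared against all 128
        # entries in parallel; here we just scan
        for ppage, held in enumerate(self.table):
            if held == vpage:
                return ppage
        raise KeyError("translation fault")  # software patches things up

ipt = InvertedPageTable()
ipt.map(5, 0x300)
assert ipt.translate(0x300) == 5
```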

I haven't really looked too much into the trade-offs here. But my point was that one often hears virtues made of the simplicity of the hardware Acorn designed in this endeavour, which is not entirely unreasonable. Yet when those virtues are tied to a stated design goal of doing as much in software as possible, as I am sure I heard recently in some treatment of the ARM chipset's design, the approach chosen diverges from that software-driven philosophy in a way that can be contrasted with another design which is clearly more software-driven. From the Furber transcript:
At that time people were beginning to adopt fairly complex memory controllers. These were things that did memory address translation through two layers of tables and they produced quite complex hardware. And I thought about this and decided I could find a much – a much simpler way of doing this. If I sort of inverted the problem and used a small content addressable memory to store translations then the logic for the memory control would be much simpler. But this seemed a bit of a radical change. I mean, why wasn’t everybody else doing it? ... And unbeknown to me, I’d effectively just reinvented the very first memory management hardware that was developed for the Manchester Atlas machine, which again was based on associative memories.
Notably, MIPS wasn't doing "two layers of tables" in hardware, unless I misunderstand what he really means. But I agree that putting a 128-entry lookup table on another chip provides a solution to a problem, which then raises the question of what that problem actually was.
arg wrote:
Tue Sep 14, 2021 1:00 pm
Perhaps a point missed in this discussion is the dual uses of MMU hardware - for protection (which would potentially have been useful to Acorn) and for demand-paging (mandatory for later generations of Unix, but only just coming into widespread use at the time, hence the various 68000 Unix systems mentioned upthread that couldn't do VM). The ARM design did (eventually) intend to support VM, but it's clear that this was something of an afterthought just to keep up with 'state of the art' rather than something Acorn needed for themselves.
This is an interesting observation and possibly a hint at the nature of the "problem" mentioned above: namely, whether any given system needs virtual memory support. I imagine it might depend on one's background. I don't have any mainframe experience, personally, but browsing through old journals and the like, I definitely get the feeling that various mainframe systems could be rather prescriptive and quite unlike systems like Unix. If one were to ask people familiar with the mainframe business, one would get different answers from those keeping up with Unix-like trends, and this could potentially inform product development in a way that is not necessarily conducive to the products that will ultimately be needed.
arg wrote:
Tue Sep 14, 2021 1:00 pm
paulb wrote:
Mon Sep 13, 2021 11:28 pm
(In the Furber transcript, he notes the simplicity of his design, which I don't dispute, but from what you said earlier, the fixed size of the translation table was a hindrance. A "lazy" translation mechanism, however, would have been more capable.)
Not so much the fixed size of the translation table, as the small size of the translation table. Probably they could have put a bigger table (and hence smaller page sizes) in MEMC without it becoming an unduly large chip, but obviously it would have cost more and didn't give much advantage in Acorn's target styles of OS.
I think Acorn must have had Unix in mind, which I mentioned before (below), as well as the whole ARX endeavour. Maybe the simplest explanation is that they just needed "something".
arg wrote:
Tue Sep 14, 2021 1:00 pm
When searching yesterday, Google found me a contemporary Usenet posting from Roger Wilson extolling the benefits of 32K pages... (unfortunately, I can't find it again now).
It's this one, perhaps:
MEMC's Content Addressable Memory inverted page table contains 128 entries. This gives rather large pages (32KBytes with 4MBytes of RAM) and one can't have the same page at two virtual addresses. Our UNIX hackers revolted, but are now learning to love it (there's a nice bit in the standard kernel which goes "allocate 31 pages to start a new process"....)
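The arithmetic behind those figures, for what it's worth (the 128 entries come from the quote; the rest is just division):

```python
# With a fixed 128-entry translation table, the page size must scale
# with the physical RAM it has to cover.  These match the documented
# MEMC configurations (e.g. 4 MB of RAM forces 32 KB pages).
def memc_page_size(ram_bytes, entries=128):
    return ram_bytes // entries

assert memc_page_size(4 * 1024 * 1024) == 32 * 1024   # 4 MB -> 32 KB pages
assert memc_page_size(1 * 1024 * 1024) == 8 * 1024    # 1 MB -> 8 KB pages
```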
arg wrote:
Tue Sep 14, 2021 1:00 pm
In truth, 32K pages are more efficient than 4K ones, so long as you have enough of them (ie. lots of total RAM) and an efficient disc system so that you can stream in big chunks off disc to fill them. R140 was the perfect storm of barely enough RAM and a slow disc system. So slow in fact that they used the trick of compressing each 32K page of executable files so that it was smaller than 32K by some number of filesystem blocks, then expanding the compressed data when paging in. This not only saved disc space, but was reputedly faster than reading in the extra few K over the slow disc interface!
Yes, this is Mark Taunton's USENIX paper from 1991, "Compressed Executables: An Exercise in Thinking Small", also described in a newsgroup post. I think the reaction to this, even by those involved, was that it made the best out of a non-ideal situation where one might as well get the CPU involved in decompressing the executable pages because the system architecture was going to be involving it in the disk transfer, anyway.
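A toy model of the scheme (using zlib purely for illustration; Taunton's actual compressor and on-disc format were of course different):

```python
# Hedged sketch: store each executable page compressed, and trade CPU
# time for disc transfer time by expanding it during page-in.  Whether
# this wins depends entirely on the ratio of disc speed to CPU speed,
# as noted above; the data here is illustrative only.
import zlib

PAGE = 32 * 1024

def page_out(data):
    """Done once, when the executable is prepared: pad and compress."""
    return zlib.compress(data[:PAGE].ljust(PAGE, b"\0"))

def page_in(blob):
    """Done on every demand-paged fault: read fewer blocks, then expand."""
    page = zlib.decompress(blob)
    assert len(page) == PAGE
    return page

text = b"mov r0, r1 ; " * 2000            # repetitive "code" compresses well
blob = page_out(text)
assert len(blob) < PAGE                    # fewer filesystem blocks to read
assert page_in(blob)[:len(text)] == text   # page restored intact
```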
arg wrote:
Tue Sep 14, 2021 1:00 pm
arg wrote:
Mon Sep 13, 2021 2:53 pm
Ultimately the requirements will have been coloured by product priorities; Acorn didn't at the time have any product running a VM operating system, nor any immediate plans to make one - demand paging off floppies doesn't really make sense - so ensuring the MEMC spec gave best VM performance wouldn't have been a big consideration.
But wasn't Acorn supposed to be delivering Unix or something approaching that level of sophistication? The technical publishing system was supposed to be running "Acorn UNIX".
That came later, and was really a repackaging of the already-built R140/R260 (A440/A560).
I was referring to the A680 which was surely around before any of those: the manual is from 1988. From the introduction:
The Acorn Technical Publishing System is a high performance Acorn UNIX workstation specifically designed for the Computer Assisted Technical Publishing (CATP) market. The Acorn Technical Publishing System provides a complete system solution including computer, screen and laser beam printer together with systems and applications software.
At the time, Acorn had managed to pitch a laser printer card to Olivetti which then provided some additional revenue, so the "laser beam printer" thing is totally in character for that era. Maybe there were repackagings of the R140/R260, although Acorn's later efforts appear to have been much more focused on the publishing "bureau" market with the Archimedes bundled with Impression, and so on.
arg wrote:
Tue Sep 14, 2021 1:00 pm
This actually cuts to the heart of the problem: the business systems group always seemed to see their role as re-packaging the machines that had already been built to make a few extra sales in other markets, rather than having the "push" within the company to get a machine built specifically for them.

It started on the wrong foot with trying to sell BBC+Z802P as a business machine because "Business wants CP/M" - which was maybe true, but never made a convincing argument why if you did want a CP/M machine specifically you should buy an Acorn one. That led to mediocre sales, which in turn meant that group had little clout to influence new machines going forward.
It was remarkable at the time, and still remarkable when reviewing them today, that every issue of Acorn User for a good long time had a business section focusing on CP/M, even after it stopped being this fashionable thing. Of course, industry commentary made a big thing of CP/M in the early 1980s (arguably with some justification), prompting and/or responding to plenty of CP/M machines from all and sundry taking advantage of its supposed appeal. And of course, when DOS came along, CP/M fell out of fashion very quickly (arguably with some justification), making DOS the hot new thing everyone wanted to buy and/or sell. Acorn's DOS strategy was not entirely coherent, but it is remarkable to observe that despite the celebrated disdain for DOS, Gates and company, some kind of DOS-based product was always part of Acorn's portfolio after a certain point in the mid-1980s.
arg wrote:
Tue Sep 14, 2021 1:00 pm
Was there a cleaning out of old strategic projects at that point? Reading articles from the era, it comes across that Unix, workstations and such things were all championed by Chris Curry, as was the Communicator. The Communicator hung on for a while longer and, being a non-consumer product, has fascinated many people even to this day. On the one hand, people have expressed sentiments that it fitted in with a more conventional microcomputer strategy and could have been Acorn's own modest step up into 16-bit territory, but it was also along a technological path that didn't make much sense with the ARM chipset in the pipeline.
The Communicator followed on from the Electron-based terminals that had been sold in conjunction with BT. It was certainly Chris Curry's baby, but I think the reason it got funded to completion (compared to various other things) was that there were individual customers prepared to place large(ish) orders for it. The general business market was obviously much larger, but there was no one big contract to go for. Pickfords travel agents was the customer that was supposed to be placing the big order, though I'm not sure how many they actually bought in the end.

The technology of the Communicator was a bit of an orphan, with everybody good working on ARM stuff and not paying attention to the Communicator, but did to an extent feed into the Arthur OS and then RiscOS once the originally intended OS for the ARM was scrapped.
Looking at the hardware, the Communicator appears to be more or less an enhanced version of the BT M2105, merely with the 65816 instead of the 6502 and with updated peripheral chips. I do wonder about the memory architecture of the system, having reviewed some of its documentation, given that it uses the Electron's ULA or some re-spin of it. I see that there is the dedicated video RAM, which must be on the 4-bit bus via the ULA, but the arrangement to access the rest of the memory and just how it all fits together seems intriguing.
arg wrote:
Tue Sep 14, 2021 1:00 pm
One topic that the absence of the 68000 in Acorn's portfolio raises, especially in contrast to Torch's embrace of that architecture, is Acorn's attitude towards Torch, particularly after the companies went their separate ways. Was there ever any disdain towards Torch's products or a conviction that they were taking the wrong tack, or were people simply oblivious to what Torch (and other companies) were doing?
At some point in early 1982, we were suddenly told that we were not allowed to talk to Torch any more. As a very junior engineer, I wasn't privy to the boardroom bust-up behind this, but it was clear that there was one, and the name of Martin Vlieland-Boddy (Torch founder) seemed to be held in particular disregard.

My understanding of the resulting rumours was that originally Acorn and Torch was seen (at least from the Acorn side) as a close collaboration, with them carving up the market sectors between them and jointly developing technology to suit. After the bust-up, they were seen as competitors. V-B's departure from Torch in '84 then opened up the possibility of the Acorn takeover which would have made Torch into Acorn's business division, but Acorn's own financial problems then intervened.
Yes, this seems to dovetail with vague reporting and commentary from the era. Vlieland-Boddy was rather prominent but was then ousted for some reason that isn't widely reported. At one point, Torch were supposed to be trying to get financing from, or be acquired by, GEC, but GEC then went and bought Dragon Data for reasons that probably only made sense to that kind of company in that era. However, that could have been rather earlier. Guy Kewney, who was surely popular at Acorn, noted that at some point we might eventually find out what happened, but of course we never did.

I think the bust-up with Torch was blamed on Torch taking on the Graduate from Vlieland-Boddy's new company, Data Technologies, who had tried to bring it to market but had encountered technical difficulties. But that might have been a course taken in response to Acorn seemingly meddling with Torch, bringing in their own person as chief executive officer or whatever the role was. (I actually found an announcement of that appointment in Variety magazine of all places.)
arg wrote:
Tue Sep 14, 2021 1:00 pm
The subsequent Unix developments by Acorn may have been to some extent inspired by what Torch had been doing, and there was crossover on the marketing side, with I believe some ex-Torch staff ending up at Acorn, and the licensing of X related stuff from IXI (a company formed by some of the key technical people from Torch, Ray Anderson & Clive Feather, to develop and promote X desktop technology).
Yes, Torch's marketing manager jumped ship, presumably to pursue workstation marketing efforts at Acorn. This is one of the many things I managed to dig up to try and remedy the general lack of publicly reported and available knowledge about the company, beyond "didn't they make some VIC-20 stuff and a reboxed Beeb?" and "oh, their stuff was on a television drama series", which sort of put them at the "mostly harmless" level of historical importance, rather unfairly in my view, even though I never used any of their stuff.
Post by paulb »

litwr wrote:
Tue Sep 14, 2021 8:33 pm
IMHO the VAX was a very different case. The PDP-11 was just good and promising, but the VAX had some political background behind it. IMHO it was a way to lure the USSR into cloning the most expensive and unpromising technologies - it also helped to hide the real breakthroughs, like the 6502, personal computers, RISC, etc., from cloning. The VAX was too expensive and slow. DEC almost stopped supporting the PDP-11 after 1975 while it was still very popular...
My favorite processor is the ARM. IMHO the ARM-32 ISA is still the best. The 6502 and x86 are also good. The 680x, 68k and Z80 are robust but they have too many quirks.
I have limited experience with VAX systems in that I used one occasionally for a short period at university, and that "VAXcluster" (actually just two nodes) was retired in favour of an Alpha-based system running OpenVMS a couple of years later (which I didn't use at all, although I may have used its sibling running OSF/1). However, I think you underestimate the influence of the VAX in empowering numerous technology companies in providing computing infrastructure that let them pursue their own product development goals.

It is also the case with Sun workstations, even the early ones, that they appeared in all sorts of places and facilitated all sorts of new systems to be built. The likes of Acorn would probably have struggled without these kinds of bigger machines being available, even though there are plenty of tales of "beefed up" Beebs with second processors and extra memory.

Also, it is apparent that for many computer companies at a certain level in the marketplace, VAX systems were a competitive threat and/or the technology to measure one's own products by, hence the notion of "VAX MIPS" and, in Acorn's own promotional literature, repeated mentions of VAX systems. Plus, it opened up many opportunities for Unix system developers, both as a target for Unix but also as a vehicle to help get Unix ported onto other systems.

But having said all that, I do see your point. DEC probably didn't see the need to make VAX systems more affordable or more performant until the competition started to bring DEC's success with that product, along with its position in the industry, into question.
Post by 1024MAK »

I’m always amused and bemused when people compare different microprocessor architectures by clock speed. It’s as useful as comparing only the engine RPM in one gear, between a petrol engine, a diesel engine and an electric motor, and using this to determine the performance of the vehicle…

If you must compare some so called raw hardware figures, compare the instruction timing when all processors that are being compared are at the maximum memory bus speed of the slowest part under test. On a 68k, do this with no wait states - known as DTACK grounded (and yes, there was a publication called this).

Mark
Post by dominicbeesley »

1024MAK wrote:
Wed Sep 15, 2021 12:21 am
I’m always amused and bemused when people compare different microprocessor architectures by clock speed.
That was my point - I had my tongue in cheek, awarding a handicap a bit like in golf... and just as easily subverted! There are, of course, other problems comparing apples and oranges, and with benchmarks in general, but let's not go down that rabbit hole again!
Post by BigEd »

Nice links there paulb, thanks!

There's a 1986 PDF here about Digital's efforts to make a one-chip VAX, which includes some description of the challenge they found themselves facing, with the rise of the microprocessors.

(Edit: here's an archive of Sophie's post "Some facts about the Acorn RISC Machine")
Post by litwr »

dominicbeesley wrote:
Tue Sep 14, 2021 9:51 pm
I don't think the 68000 basic has been measured except for at 10Mhz (kernelthread's original) and mine at ~16MHz (well 20MHz with wait states) which give a combined speed of 22.35MHz.
Sorry, it seems I used the wrong figures for the 32016. :( The correct one for the 32016@6MHz is 4.71MHz
viewtopic.php?p=158642#p158642
But this result seems controversial, because we also have another figure, 3.48 MHz - http://cpu-ns32k.net/Acorn.html - which seems more plausible to me, because it is well known that Acorn started the ARM when they discovered that the 6502@4MHz outperformed the 32016@6MHz.
I also used the wrong figures for the 68000. The correct one is 16.44MHz for the 68000@10MHz. These figures give us the unexpected result that the 68000 is 2.1-2.8 times faster than the 32016!
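For clarity, the per-clock arithmetic behind that 2.1-2.8 range (using the figures quoted above - each ClockSp result is an effective-MHz figure, normalised here by the part's actual clock):

```python
# Per-clock throughput from the ClockSp effective-MHz figures above;
# the spread comes from the two conflicting 32016 measurements.
m68000    = 16.44 / 10   # 68000 @ 10 MHz
ns32016_a = 4.71 / 6     # 32016 @ 6 MHz, first figure
ns32016_b = 3.48 / 6     # 32016 @ 6 MHz, cpu-ns32k.net figure

assert round(m68000 / ns32016_a, 1) == 2.1
assert round(m68000 / ns32016_b, 1) == 2.8
```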
1024MAK wrote:
Wed Sep 15, 2021 12:21 am
I’m always amused and bemused when people compare different microprocessor architectures by clock speed. It’s as useful as comparing only the engine RPM in one gear, between a petrol engine, a diesel engine and an electric motor, and using this to determine the performance of the vehicle…

If you must compare some so called raw hardware figures, compare the instruction timing when all processors that are being compared are at the maximum memory bus speed of the slowest part under test. On a 68k, do this with no wait states - known as DTACK grounded (and yes, there was a publication called this).
Sorry, I missed your point. We compare processor speeds using ClockSp, which is quite a good benchmark. Indeed, there is no such thing as a perfect benchmark, so it is just a game of approximations and estimations. Now we have the 68000 showing itself to be much better for BBC BASIC code than the 32016. Maybe for some other task the 32016 would be faster.
The clock speed provides some basic background for the total performance, but this background is quite important. Better processors can do more operations per clock tick, worse ones fewer. That is the idea behind DMIPS/MHz or my ER measures.
paulb wrote:
Tue Sep 14, 2021 11:33 pm
However, I think you underestimate the influence of the VAX in empowering numerous technology companies in providing computing infrastructure that let them pursue their own product development goals.

It is also the case with Sun workstations, even the early ones, that they appeared in all sorts of places and facilitated all sorts of new systems to be built. The likes of Acorn would probably have struggled without these kinds of bigger machines being available, even though there are plenty of tales of "beefed up" Beebs with second processors and extra memory.

Also, it is apparent that for many computer companies at a certain level in the marketplace, VAX systems were a competitive threat and/or the technology to measure one's own products by, hence the notion of "VAX MIPS" and, in Acorn's own promotional literature, repeated mentions of VAX systems. Plus, it opened up many opportunities for Unix system developers, both as a target for Unix but also as a vehicle to help get Unix ported onto other systems.

But having said all that, I do see your point. DEC probably didn't see the need to make VAX systems more affordable or more performant until the competition started to bring DEC's success with that product, along with its position in the industry, into question.
My points about DEC, the PDP-11 and the VAX are as follows:

1) The PDP-11 was very popular. It was quite powerful for 1975: it offered users up to 4 MB of memory, and its performance was above that of cheap mainframes. The first versions of many important pieces of IT were made on the PDP-11: C, Unix, Oracle, ... Further PDP-11 development might have provided cheaper and faster systems. Stopping the PDP-11 was like killing the goose that laid the golden eggs.

2) Almost all the VAX's innovations were dead ends: 1) orthogonality; 2) two stacks; 3) CISC. The VAX was also rather late in adopting GUIs. IMHO the IBM mainframes always showed sounder directions in hardware design than the VAX.

3) Too many things around the VAX sound strange and illogical... DEC seriously weakened IBM with the PDP-11. IMHO, switching to the VAX looks like IBM simply broke DEC.
Post by arg »

Coeus wrote:
Tue Sep 14, 2021 10:15 pm
By demand-paged VM are you referring specifically to pages that have been moved out to disc and are brought back into RAM as instructions try to use them?
Yes (and likewise pages that haven't yet been brought into RAM for the first time).
Is there another consideration here of context switch overhead? If you are required to, or choose to, program the whole virtual to physical address mapping into the MMU each time the execution context changes isn't that going to introduce a noticeable overhead?
Yes, this is an overhead (rather bigger than the MIPS-style TLB miss, but occurring much less often). One conventional workaround is to tag the entries with a process ID so that you only need to change the process ID register rather than swapping out the entire table. Acorn actually have a vestigial implementation of this - the 1-bit "OS mode" in the MEMC (plus the override for supervisor mode).
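A sketch of the tagging idea (illustrative Python; MEMC's real mechanism is just that 1-bit mode, not a full address-space ID):

```python
# Hedged sketch: TLB entries carry an address-space ID (ASID), so a
# context switch only changes the current-ASID register rather than
# flushing or reloading the whole table.  Entries belonging to other
# processes simply stop matching.

class TaggedTLB:
    def __init__(self):
        self.entries = {}      # (asid, virtual page) -> physical page
        self.asid = 0          # the current-ASID register

    def switch_context(self, asid):
        self.asid = asid       # cheap: no flush, no table reload

    def insert(self, vpage, ppage):
        self.entries[(self.asid, vpage)] = ppage

    def translate(self, vpage):
        return self.entries.get((self.asid, vpage))  # None = fault

tlb = TaggedTLB()
tlb.insert(0, 42)                  # process 0 maps vpage 0 -> ppage 42
tlb.switch_context(1)
assert tlb.translate(0) is None    # process 1 doesn't see that entry...
tlb.switch_context(0)
assert tlb.translate(0) == 42      # ...but process 0's entry survived
```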
By comparison, if the OS simply has to change a register that points to a page table in RAM and the hardware then assembles the necessary cache automatically then context switching will be fast, though the first time a page is touched in a process that has just started running again will be slower.
Certainly the TLB scheme is more flexible (though in the MIPS case it's software that reassembles the cache, not hardware).

I'm definitely not trying to argue here that the Acorn approach was best/ideal, just exploring the various pros and cons. The main thesis that I'm arguing is that the reason what they built didn't make an ideal Unix machine is that there was nobody championing the requirements of Unix when the hardware was being defined - the VM support was an afterthought, based on general knowledge of what other people were doing and what might possibly be needed in the future, rather than "we are building a machine right now and it needs to do _this_".

Another thing we forget in the internet age is that back then it was much harder to know what the state of the art was elsewhere.
Post by arg »

paulb wrote:
Tue Sep 14, 2021 11:13 pm
I guess the MIPS TLB is more expensive to implement per entry because it is an arbitrary mapping between virtual and physical pages (or actually page pairs on MIPS, but anyway), whereas the MEMC merely provides a table of virtual page addresses for each of the physical pages, which as people may have noted at the time was the reverse of what they anticipated (and precluded several virtual pages referencing the same physical page). Then again, I think the largest TLBs on MIPS are something like 64 entries, and even more recent MIPS-based SoCs that I have used seem to have only 16 entries. And I think the software-based approach to populating the TLB seems to be able to work towards reducing the hardware demands, so that nobody misses a 128-entry TLB.
Well, you obviously do miss a bigger TLB the same way you miss a bigger cache, but as usual it's a cost trade-off. Arguably this is an advantage of the MIPS approach as it lets you scale that to suit the application without changing the architecture. Acorn get to scale the page size in the same way, but that's not a comparative advantage as MIPS can scale the page size too.

Ultimately I think we both agree that the MIPS approach ends up the overall winner, but there are disadvantages too.

Although the MIPS hardware seems simple, placing it before the memory system carries a performance penalty. The only MIPS design I have experience of at this level of programming is the 4KEc embedded core; that had only a 16-entry TLB, but even so it incurred an extra clock-cycle penalty to access it, which would have been a huge cost - so there were two caches (TLB caches, not data caches, one for I, one for D), each holding three entries out of the main TLB, that could be accessed in a single cycle (those being hardware-operated caches). So the seemingly simple 16-entry TLB had a very significant silicon cost in that design.

Acorn, by putting the MMU in the memory controller (and consequently having virtually-addressed cache) avoid the MMU's access time being additive: they can run it in parallel with the RAS cycle of the DRAM (and the cache is implicitly caching the address translations as well as the data). But as already discussed, this trades 'straight line' performance for context switch penalty.

Notably, MIPS wasn't doing "two layers of tables" in hardware, unless I misunderstand what he really means. But I agree that putting a 128-entry lookup table on another chip provides a solution to a problem, which then raises the question of what that problem actually was.
I suspect the "two layers of tables in hardware" refers to what some CISC machines were doing - the 32082 MMU for the 16032, for example.
arg wrote:
Tue Sep 14, 2021 1:00 pm
When searching yesterday, Google found me a contemporary Usenet posting from Roger Wilson extolling the benefits of 32K pages... (unfortunately, I can't find it again now).
It's this one, perhaps:
Yes, that's the one I was looking for.
{on-the-fly decompression}
I think the reaction to this, even by those involved, was that it made the best out of a non-ideal situation where one might as well get the CPU involved in decompressing the executable pages because the system architecture was going to be involving it in the disk transfer, anyway.
Indeed, I was citing it as an indication of just how bad the R140's disc system was in comparison to the CPU/memory performance, rather than an advantage for 32K pages!
I was referring to the A680 which was surely around before any of those: the manual is from 1988. From the introduction:
You are right that it was released before the R140 - I had forgotten that part of the timeline. But it was well after the might-have-been Xenix on the 32016, and after the ARM had been in existence for a while, and it's part of the same development as the R140 (the same Unix porting effort). It's not clear if the A680 came first in concept or the Unix work started with both classes of product in mind (or even if the A680 was a ploy to get Olivetti to fund the project, with R140-style workstations being more what people inside Acorn wanted to deliver). I will have to ask some of the people involved.
Looking at the hardware, the Communicator appears to be more or less an enhanced version of the BT M2105, merely with the 65816 instead of the 6502 and with updated peripheral chips. I do wonder about the memory architecture of the system, having reviewed some of its documentation, given that it uses the Electron's ULA or some re-spin of it. I see that there is the dedicated video RAM, which must be on the 4-bit bus via the ULA, but the arrangement to access the rest of the memory and just how it all fits together seems intriguing.
It was a bit of a bodge. Unfortunately I haven't kept any circuit diagrams, but as I remember the Electron ULA was just used as a kind of cheap video card, with the main CPU having its own RAM/ROM more conventionally attached. And then the super bodge of the separate teletext chip (accessed over I2C!) to give a real mode 7 display for the Prestel market.

So scraped together out of parts lying around rather than having any serious capital investment to make it a good machine hardware wise. The OS was quite good ironically because they weren't willing to spare any in-house people to work on it and so Paul Bond (architect of BBC MOS) got pulled in as a contractor to lead it.
I think the bust-up with Torch was blamed on Torch taking on the Graduate from Vlieland-Boddy's new company, Data Technologies, who had tried to bring it to market but had encountered technical difficulties. But that might have been a course taken in response to Acorn seemingly meddling with Torch, bringing in their own person as chief executive officer or whatever the role was. (I actually found an announcement of that appointment in Variety magazine of all places.)
That (I think) was the cause of the later bust-up causing V-B to be ousted from Torch. I won't say what rumours were circulating about the original Acorn/Torch bust-up, as I can't remember them accurately enough to be useful and they were probably libellous at the time! Suffice it to say that I was amused putting his name into Google yesterday that the top hit was a legal case concerning a fraud.
Yes, Torch's marketing manager jumped ship, presumably to pursue workstation marketing efforts at Acorn. This is one of the many things I managed to dig up to try and remedy the general lack of publicly reported and available knowledge about the company, beyond "didn't they make some VIC-20 stuff and a reboxed Beeb?" and "oh, their stuff was on a television drama series", which sort of put them at the "mostly harmless" level of historical importance, rather unfairly in my view, even though I never used any of their stuff.
Yes, Torch certainly had some good people and did some interesting stuff, sometimes diminished because they also knocked out some boring "boxshifter" products. I think it's relevant that Torch's founders and subsequent management were all accountants, while Acorn were much more technology-led.
Last edited by arg on Wed Sep 15, 2021 12:14 pm, edited 3 times in total.
Ramtop
Posts: 289
Joined: Tue Oct 23, 2018 1:40 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by Ramtop »

litwr wrote:
Wed Sep 15, 2021 9:55 am
These figures give us an unexpected result that the 68000 is 2.1-2.8 times faster than the 32016!
I remember reading an interview with Shiraz Shivji, the lead designer of the Atari ST, where he stated Atari originally built a prototype ST based on the 32016 but found the performance disappointing and switched to the 68000 instead. Those figures do seem very slow indeed.
Gary
dominicbeesley
Posts: 1649
Joined: Tue Apr 30, 2013 12:16 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by dominicbeesley »

litwr wrote:
Wed Sep 15, 2021 9:55 am
dominicbeesley wrote:
Tue Sep 14, 2021 9:51 pm
I don't think the 68000 basic has been measured except for at 10Mhz (kernelthread's original) and mine at ~16MHz (well 20MHz with wait states) which give a combined speed of 22.35MHz.
Sorry, it seems I used wrong figures for the 32016. :( The correct one for the 32016@6MHz is 4.71MHz
viewtopic.php?p=158642#p158642
But this result seems controversial, because we also have another figure, 3.48 MHz - http://cpu-ns32k.net/Acorn.html - which seems more plausible to me because it is well known that Acorn started the ARM when they discovered that the 6502@4MHz outperformed the 32016@6MHz.
I also confused figures for the 68000. The correct one is 16.44MHz for the 68000@10MHz. These figures give us an unexpected result that the 68000 is 2.1-2.8 times faster than the 32016!
No problem on the confusion. I looked through a lot of stuff last night and really struggled to work out what environments were being used. I suspect the sub-4MHz figure is probably closest to the mark? [I really didn't think they were that bad though!]

Again, caution should probably be exercised here - kernelthread's BASIC is highly optimised with cached addresses for GOTO/GOSUB/PROC that probably aren't present in BAS32, so the 68k is being given a bit of a leg up there. On the other side, I'm not sure what would have been a normal contemporary clock speed for a 68k at the time these comparisons occurred: 10 and 20 MHz parts for the plain 68000 are common now, but 8MHz might have been more common back then. [Aside: For my project I was hoping to find a 68010 substitute but note the maximum speed for these seems to be 12.5MHz? There are some on ebay marked 25MHz but they look rather sketchy!]
kernelthread
Posts: 25
Joined: Fri Aug 06, 2021 4:30 pm
Location: South London
Contact:

Re: Why did Acorn ignore Motorola?

Post by kernelthread »

dominicbeesley wrote:
Wed Sep 15, 2021 2:33 pm
... I'm not sure what would have been a normal contemporary clock speed for a 68k at the time these comparisons occurred, 10 and 20 MHz parts for the plain 68000 are common now but 8MHz might have been more common back then. [Aside: For my project I was hoping to find a 68010 substitute but note the maximum speed for these seems to be 12.5MHz? There are some on ebay marked 25MHz but they look rather sketchy!]
My recollection of the time is that 8MHz would have been a typical clock speed for an early 80's 68000 system. Towards the mid 80's you might see 12MHz on some boxes. I don't believe that any version of the 68010 was produced with a specified clock frequency above 16.7MHz, so if you find one marked 25MHz, it will be fake.

From this site: http://www.cpu-ns32k.net/CPUs.html it seems the 32016 had a minimum instruction time of 4 clock cycles, which is the same as the 68000. However it looks like it has a real 32 bit ALU whereas the 68000 has a 16 bit ALU (most of the 32 bit instructions take longer), which should give the 32016 somewhat of an advantage.
User avatar
BigEd
Posts: 4352
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Why did Acorn ignore Motorola?

Post by BigEd »

Ramtop wrote:
Wed Sep 15, 2021 12:08 pm
I remember reading an interview with Shiraz Shivji, the lead designer of the Atari ST, where he stated Atari originally built a prototype ST based on the 32016 but found the performance disappointing and switched to the 68000 instead.
I'd like to see that interview! I found this one - is it the same one?
kernelthread wrote:
Wed Sep 15, 2021 3:09 pm
From this site: http://www.cpu-ns32k.net/CPUs.html it seems the 32016 had a minimum instruction time of 4 clock cycles, which is the same as the 68000.
Also relevant, how fast the RAM needs to be to avoid wait states. See this thread:
A request for rare 32016 based hardware
CJE-4D
Posts: 110
Joined: Thu Jul 10, 2014 9:38 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by CJE-4D »

Coeus wrote:
Mon Sep 06, 2021 8:35 pm
According to Steve Furber, the issue was interrupt latency.
This comment, I believe, was in the context of investigations as to 'which processor should Acorn's next computer be based on', not a possible tube-connected co-pro.
Last edited by CJE-4D on Wed Sep 15, 2021 9:20 pm, edited 1 time in total.
litwr
Posts: 242
Joined: Sun Jun 12, 2016 9:44 am
Contact:

Re: Why did Acorn ignore Motorola?

Post by litwr »

kernelthread wrote:
Wed Sep 15, 2021 3:09 pm
From this site: http://www.cpu-ns32k.net/CPUs.html it seems the 32016 had a minimum instruction time of 4 clock cycles, which is the same as the 68000. However it looks like it has a real 32 bit ALU whereas the 68000 has a 16 bit ALU (most of the 32 bit instructions take longer), which should give the 32016 somewhat of an advantage.
Thank you very much. I made a comparison table that consists of numbers of clock ticks.

Code: Select all

             32016       68000
            R-R  M-M    R-R  M-M
move Byte     3   17      4   12
     Word     3   17      4   12
    DWord     3   17      4   20

add  Byte     4   20      4   n/a
     Word     4   20      4   n/a
    DWord     4   28      6   n/a

mul  Byte    38   43    n/a   n/a 
     Word    54   58  38-70   n/a
    DWord    86   96    n/a   n/a
I used (An),(An) addressing for the 68000 memory-memory instructions. So the 32016 has a shorter minimum memory access cycle and a more orthogonal ISA. However, the table shows that its advantages were actually marginal. It seems the 32016's shorter memory cycle was similar to the 68020/30 memory cycle, which was usually 4 clocks and very rarely 3...
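To make the cycle counts above a little more concrete, here is a quick sketch (Python, purely illustrative) that converts them into per-instruction times. The 6MHz and 8MHz clocks are assumed figures for typical parts discussed in this thread, not part of the table itself:

```python
# Cycle counts for 32-bit (DWord) operations from the table above,
# as (reg-reg, mem-mem) pairs. None = no direct mem-mem form timed.
CYCLES = {
    "32016": {"move.d": (3, 17), "add.d": (4, 28)},
    "68000": {"move.d": (4, 20), "add.d": (6, None)},
}
# Assumed clocks: 6 MHz 32016 and 8 MHz 68000 (illustrative only).
CLOCK_MHZ = {"32016": 6.0, "68000": 8.0}

def insn_time_us(cpu, insn, mode=0):
    """Microseconds per instruction (mode 0 = reg-reg, 1 = mem-mem)."""
    cycles = CYCLES[cpu][insn][mode]
    if cycles is None:
        return None
    return cycles / CLOCK_MHZ[cpu]

print(insn_time_us("32016", "move.d"))  # 0.5
print(insn_time_us("68000", "move.d"))  # 0.5
```

At those clocks a 3-cycle 32016 register move and a 4-cycle 68000 one come out identical at 0.5 microseconds, which is one way of seeing why the 32016's on-paper advantage was marginal in practice.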
Ramtop
Posts: 289
Joined: Tue Oct 23, 2018 1:40 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by Ramtop »

BigEd wrote:
Wed Sep 15, 2021 4:52 pm
I'd like to see that interview! I found this one - is it the same one?
My memory turns out to be a bit faulty on this! The quote from Shiraz Shivji is actually in the book "Faster Than Light" by Jamie Lendino, where he says he was "quite disappointed" by the performance of the NS chip, but he was talking about a 32032-based prototype rather than the 32016. Although I would assume if the '32 wasn't fast enough for Atari then the '16 would be even less attractive.
Gary
User avatar
BigEd
Posts: 4352
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Why did Acorn ignore Motorola?

Post by BigEd »

Ah, that's amusing - I'm pretty sure Faster than Light is quoting from that very article!

As you say, if the 32032 disappoints, then it's not looking good.
dominicbeesley
Posts: 1649
Joined: Tue Apr 30, 2013 12:16 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by dominicbeesley »

Out of interest as we are comparing contemporary(ish) systems what scores does one get running ClockSp on a 1st generation Archimedes?
kernelthread
Posts: 25
Joined: Fri Aug 06, 2021 4:30 pm
Location: South London
Contact:

Re: Why did Acorn ignore Motorola?

Post by kernelthread »

dominicbeesley wrote:
Wed Sep 15, 2021 10:43 pm
Out of interest as we are comparing contemporary(ish) systems what scores does one get running ClockSp on a 1st generation Archimedes?
You could try running it on the MAME emulation of Archimedes. The emulation of my 2nd processor board gives very accurate timing results.
paulb
Posts: 830
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by paulb »

arg wrote:
Wed Sep 15, 2021 12:05 pm
Looking at the hardware, the Communicator appears to be more or less an enhanced version of the BT M2105, merely with the 65816 instead of the 6502 and with updated peripheral chips. I do wonder about the memory architecture of the system, having reviewed some of its documentation, given that it uses the Electron's ULA or some re-spin of it. I see that there is the dedicated video RAM, which must be on the 4-bit bus via the ULA, but the arrangement to access the rest of the memory and just how it all fits together seems intriguing.
It was a bit of a bodge. Unfortunately I haven't kept any circuit diagrams, but as I remember the Electron ULA was just used as a kind of cheap video card, with the main CPU having its own RAM/ROM more conventionally attached. And then the super bodge of the separate teletext chip (accessed over I2C!) to give a real mode 7 display for the Prestel market.
The "Communicator Systems Manual" is available from the Centre for Computing History and it confirms the use of I2C for various peripherals, but "videotex" details are sparse. However, it would make sense to use I2C, I guess. I'd have to look at the materials I found to understand how all of this fitted together. Fortunately, with the Chain/M2105 some archived materials were uncovered that make the operation of that machine much easier to figure out.
arg wrote:
Wed Sep 15, 2021 12:05 pm
So scraped together out of parts lying around rather than having any serious capital investment to make it a good machine hardware wise. The OS was quite good ironically because they weren't willing to spare any in-house people to work on it and so Paul Bond (architect of BBC MOS) got pulled in as a contractor to lead it.
I found a nice source just a few days ago that indicated the frugal nature of the effort, with the technological side needing to effectively see what the company had to hand (in contrast to the article a few pages away in the same publication about the industrial design approach taken with the Torch Triple X). That said, I had the impression from Chris Curry's strategic musings that the ULA was meant to be used in different machines. It is just a shame they went for the 4-bit memory bus: a decision of the moment that had such an impact.
arg wrote:
Wed Sep 15, 2021 12:05 pm
That (I think) was the cause of the later bust-up causing V-B to be ousted from Torch. I won't say what rumours were circulating about the original Acorn/Torch bust-up, as I can't remember them accurately enough to be useful and they were probably libellous at the time! Suffice it to say that I was amused putting his name into Google yesterday that the top hit was a legal case concerning a fraud.
No wonder Kewney said no more, if he knew anything of substance!
arg wrote:
Wed Sep 15, 2021 12:05 pm
Yes, Torch certainly had some good people and did some interesting stuff, sometimes diminished because they also knocked out some boring "boxshifter" products. I think it's relevant that Torch's founders and subsequent management were all accountants, while Acorn were much more technology-led.
It seems that Torch went for the public sector in a substantial way, even getting traction within British Telecom, although perhaps a bit too late in the day and perhaps without significant differentiation over the competition.
dominicbeesley
Posts: 1649
Joined: Tue Apr 30, 2013 12:16 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by dominicbeesley »

Good point, I didn't think of Mame - but I did get Arculator working (I think). I've set it up to run RO 2.0 as I couldn't get Arthur further than a boot prompt
clocksp310.png
This certainly beats a 68000, but I suspect a 68030, which was more contemporaneous with an ARM2(?), might get a good deal closer?

Aside: Whilst I was looking I came across another benchmark program that I'd started developing a number of years back. The aim was to try and better narrow down what the benchmark was actually measuring - ClockSp says what it is doing, i.e. "String manipulation", but much of the time is actually spent doing floating point. Is there any mileage in pursuing it? I started it a couple of years ago. I'll try and run it on the 68000 tomorrow. It still has issues as to what it is measuring, but it tries to be a bit more specific.

I suspect the last thing that is needed is yet another benchmark but I did find it useful when tweaking my 6809 interpreter.
beebtst310.png
Coeus
Posts: 2305
Joined: Mon Jul 25, 2016 12:05 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by Coeus »

dominicbeesley wrote:
Thu Sep 16, 2021 1:12 am
This certainly beats a 68000 but I suspect a 68030 which was more contemporaneous to an ARM2(?) might get a good deal closer?
I had a look at Wikipedia to create a table of the three families:
processor.png
So even this from an emulated ARM evaluation system is a bit late for comparison, but maybe closer:
armclocksp.png
By the way, was there a trig optimisation between 1.00 and 1.04 of ARM BASIC V?
dominicbeesley wrote:
Thu Sep 16, 2021 1:12 am
Aside: Whilst I was looking I came across another benchmark program that I'd started developing a number of years back. The aim was to try and better narrow down what the benchmark was actually measuring - ClockSp says what it is doing, i.e. "String manipulation", but much of the time is actually spent doing floating point. Is there any mileage in pursuing it? I started it a couple of years ago. I'll try and run it on the 68000 tomorrow. It still has issues as to what it is measuring, but it tries to be a bit more specific.
In a sense, because the test harness and the test in question are both written in BASIC and included in the timing, there will always be some element of testing the test harness. I don't know how your benchmark program works, but one way around that, given that many of these things are run in a loop, is to time an empty loop first, then subtract that time from all the subsequent tests, leaving just the time for the code under test.
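The same calibration idea can be sketched in Python (a sketch of the technique itself, not of anyone's actual benchmark program):

```python
import time

def loop_time(fn, iterations=200_000):
    """Total wall-clock time to call fn() in a simple loop."""
    start = time.perf_counter()
    for _ in range(iterations):
        fn()
    return time.perf_counter() - start

def net_time(fn, iterations=200_000):
    """Time an empty loop first, then subtract that overhead,
    leaving roughly just the time for the code under test."""
    overhead = loop_time(lambda: None, iterations)
    return loop_time(fn, iterations) - overhead
```

The call overhead of `fn()` itself still gets counted, which is the same caveat as timing the BASIC interpreter's statement dispatch - this only ever approximately isolates the code under test.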
User avatar
BigEd
Posts: 4352
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country
Contact:

Re: Why did Acorn ignore Motorola?

Post by BigEd »

One of the problems with dating a new processor is that it will typically have an announcement date, a launch date, an availability date for design-in, and a mass availability date. Not to mention the successive fixing of bugs and ramping up of frequency. For the 68000, Wikipedia says
Formally introduced in September 1979, initial samples were released in February 1980, with production chips available over the counter in November. Initial speed grades are 4, 6, and 8 MHz. 10 MHz chips became available during 1981, and 12.5 MHz chips by June 1982.
In some ways it might be better to look for retail adverts of systems which used those chips (and keeping an eye on what speed they ran at too.) Although of course that's much more digging.

Edit: looks like you could get an ARM3 upgrade in 1990??

Edit: perhaps see also Sarah's benchmarking thread.
Coeus
Posts: 2305
Joined: Mon Jul 25, 2016 12:05 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by Coeus »

Coeus wrote:
Thu Sep 16, 2021 2:58 pm
By the way, was there a trig optimisation between 1.00 and 1.04 of ARM BASIC V?
Or maybe BASIC 1.04 takes advantage of the hardware multiply on ARM2? Looking at the titles, the trig functions may be the first to test FP multiply, as the loops are more likely to exercise add and compare.
dominicbeesley
Posts: 1649
Joined: Tue Apr 30, 2013 12:16 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by dominicbeesley »

Fairly close but I suppose the ARM2 is a bit earlier than a 68030 then!

Is that emulating ARM1 or ARM2? Are the differences to my test solely down to an improved interpreter, do you think?
Coeus wrote:
Thu Sep 16, 2021 2:58 pm
I don't know how your benchmark program works, but one way around that, given that many of these things are run in a loop, is to time an empty loop first, then subtract that time from all the subsequent tests, leaving just the time for the code under test.
That's pretty much it, though I've gone a bit further i.e. when testing

Code: Select all

REPEAT:A=A+B:UNTILx
it subtracts the time for

Code: Select all

REPEAT:A=A:UNTILx
so the time for an addition is timed as the time for (var.assign + var.lookup*2 + FP add) - (var.assign + var.lookup)

So it's timing FP add + one variable lookup - I didn't work out how to get the time for a lookup on its own yet, or work out a fudge for the number of characters parsed. It really started as a tool to help me optimise some stuff rather than being super accurate - more of a "is it better or worse" and "roughly by how much" when I was tinkering with BASIC's internals.

This all assumes that BASIC works in certain way though, i.e. it doesn't cache stuff.
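Written out with made-up numbers (the timings below are purely hypothetical, to illustrate the subtraction - not real measurements):

```python
# Hypothetical loop times in seconds - illustrative only, not measured.
t_add_loop    = 12.0  # REPEAT:A=A+B:UNTILx -> var.assign + var.lookup*2 + FP add
t_assign_loop = 7.0   # REPEAT:A=A:UNTILx   -> var.assign + var.lookup

# The subtraction leaves (FP add + one variable lookup), as described above:
t_fpadd_plus_lookup = t_add_loop - t_assign_loop
print(t_fpadd_plus_lookup)  # 5.0
```

As noted, a lone variable lookup can't be separated out without a further independent measurement, and all of this assumes the interpreter doesn't cache anything between iterations.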
paulb
Posts: 830
Joined: Mon Jan 20, 2014 9:02 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by paulb »

arg wrote:
Thu Sep 16, 2021 8:47 am
That said, I had the impression from Chris Curry's strategic musings that the ULA was meant to be used in different machines. It is just a shame they went for the 4-bit memory bus: a decision of the moment that had such an impact.
I think that may have been rationalisation after the fact. I wasn't deeply involved in Electron until the very end (when we were doing validation testing before the thing was signed off for production), but certainly there was no talk at that point about other uses and we did no testing other than in complete Electrons. Maybe there could have been a plan for variant ULAs for other machines or something.
The piece that comes to mind first is this one ("Chris Curry of Acorn", Practical Computing, October 1982), with this excerpt:
The likely date for Acorn to bring out its first portable machine is June 1983. "It won't look much like an Osborne but it will be a machine that includes its own display facilities. Because it will have limited interfaces and it must be as small as possible, it will be based on the Electron rather than the BBC machine. It will have a very strong emphasis on communications. If it is used in an office it will expect to see a local file station acting as its storage. It will have an Econet local area network interface and Modems for the British Telecom network."
I would have to look around for other quotes to support notions of some kind of strategy, but that article is more broadly informative about strategy, also touching on other topics mentioned in this thread.
arg wrote:
Thu Sep 16, 2021 8:47 am
The 4-bit didn't harm the Communicator materially, as the ULA was just being used as a video card and the CPU had its own memory, though obviously it did hurt the Electron significantly. Driven unfortunately by Electron being designed at the peak of a cycle of DRAM prices.
Yes, there was the infamous semiconductor price spike that is usually mentioned rather opaquely in the computing press of that era, along with the high initial pricing of the DRAM devices chosen. However, I rather feel that we don't really get the full picture. I only say this because retail pricing of 64-kilobit memory devices was already flattening out by mid-1982, and I somehow doubt that random parts suppliers were getting preferential pricing from the memory manufacturers, in the face of colossal orders from computer manufacturers, just so they could sell modest quantities to consumers.

Having seen management theory essays that have touched upon Acorn, I think there could easily be another one that weighs up the different factors impacting Acorn at that time, probably with observations about supply chain management and flexible manufacturing, particularly when considerable uncertainties were involved. One paper that covers many such interesting topics is this one:

"Managing Growth at Acorn Computers", Journal of General Management, March 1988.

Of course, one can always say that this is hindsight, but a management theorist would probably reply that if a company's management cannot adapt to circumstances, hindsight is the only thing that will inform their strategy, if that doesn't happen too late to make a difference.
Last edited by 1024MAK on Fri Sep 17, 2021 12:19 am, edited 1 time in total.
Reason: Edited to split off Communicator discussions
User avatar
1024MAK
Posts: 11006
Joined: Mon Apr 18, 2011 5:46 pm
Location: Looking forward to summer in Somerset, UK...
Contact:

Re: Why did Acorn ignore Motorola?

Post by 1024MAK »

Can you please post discussions about the Communicator and its video systems in the new topic Communicator Teletext Video Mode.

Thanks

Mark
Coeus
Posts: 2305
Joined: Mon Jul 25, 2016 12:05 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by Coeus »

dominicbeesley wrote:
Thu Sep 16, 2021 3:56 pm
Is that emulating ARM1 or ARM2 are the differences to my test solely down to an improved interpreter do you think?
This was B-Em emulating the ARM evaluation system. The comment at the top of the arm.c file says it is emulating ARM1 and that the emulation is by Sarah Walker and comes from Arculator.

As a double-check there does not seem to be a multiply instruction in this emulation which is one distinguishing feature of ARM2 over ARM1.

On the other hand, the Wikipedia article says that ARM1 ran at 6MHz and B-Em is running it at 4MHz, so possibly those figures should be scaled up by half. Interestingly, if we scale my overall result (29.74) by 6/4 we get 44.61, which is very close to the figure you got.
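The rescaling is just a linear clock ratio - a one-liner check (assuming performance scales linearly with clock, which ignores any memory wait states):

```python
clocksp_overall = 29.74           # overall ClockSp figure from the 4 MHz emulation
scaled = clocksp_overall * 6 / 4  # rescale to ARM1's nominal 6 MHz
print(round(scaled, 2))  # 44.61
```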

What is not close is the performance of the trig/log test, which is much faster in your result. That is why I was speculating that the way the trig/log functions are calculated may make much more use of floating point multiply than anything else in CLOCKSP. That would assume that, at least for the machine you're emulating in Arculator, it is emulating ARM2 and that ARM BASIC 1.04 takes advantage of the hardware multiply.
dominicbeesley
Posts: 1649
Joined: Tue Apr 30, 2013 12:16 pm
Contact:

Re: Why did Acorn ignore Motorola?

Post by dominicbeesley »

It seems I wasn't being quite fair to the ARM2 - I'd forgotten to RMFaster BASIC. That gives another boost. I'm not sure if there's an equivalent on the evaluation system, but I wouldn't be surprised if running from ROM is slower than RAM.
clocksp310_faster.png
I'm half tempted to build a 68020 or 68EC020 machine to compare against...I'm a bit put off by not having any known good chips to experiment with.

D
Post Reply

Return to “8-bit acorn hardware”