Why did Acorn ignore Motorola?

paulb
Posts: 842
Joined: Mon Jan 20, 2014 9:02 pm

Re: Why did Acorn ignore Motorola?

Post by paulb »

paulb wrote:
Tue Sep 21, 2021 10:01 pm
In that respect, it would arguably be more pertinent to contrast HP's adaptation to the microprocessor (and RISC) era with the likes of ICL, as opposed to Acorn who merely needed to find or develop something to make their significantly narrower product range more competitive.
In fact, there was one thing in the back of my mind that I neglected to mention but which connects these companies to each other and to Motorola. At the start of the 1980s, ICL was looking to get into the workstation market, so they started looking at machines they could sell and partnerships with emerging vendors (who might also be acquisition targets). According to this fascinating resource, some people were developing a 68000-based system in ICL, but when they saw that the Three Rivers PERQ was two years ahead of these internal developments, they decided to "buy Three Rivers or at the very least come to some arrangement with the company". Everyone with money in the UK research community appears to have been wildly excited about the PERQ. Just read around on that site if you want to see people throwing crazy sums of money around trying out hardware!

However, when checking out Three Rivers, ICL also checked out Apollo Computer whose products were 68000-based, and there seems to have been much jockeying for position, with contracts being negotiated and renegotiated, executives flying back and forth across the Atlantic, and so on. With the research community pushing for PERQ and with technical comparisons favouring it, ICL ended up going with Three Rivers and not Apollo. Despite the apparent technical edge and supposed suitability of the PERQ, there was then a long process of getting an acceptable Unix solution working on the hardware, with several different approaches including one featuring the Accent kernel which would be the forerunner of the Mach microkernel. As noted in this history, "The clear lead that ICL and Three Rivers had in the market had been completely eroded over the period 1981/2." The chosen Unix solution for the PERQ eventually arrived in 1984.

(I suspect that Accent will also have had an influence on the activities of Acorn's research division in Palo Alto.)

Where this crosses over to HP is that eventually HP acquired Apollo Computer in 1989. So, Apollo managed to last rather longer than Three Rivers (who closed in 1986), and it might be said that the attractions and benefits of "bit-slice microcoded systems" with "different microcodes... for different application areas" faded as microprocessors and their chipsets started to deliver constantly improving performance. By 1989, there were also Apollo workstations using Apollo's own RISC architecture, elements of which apparently fed into HP's PA-RISC architecture (and might have influenced the HP/Intel Itanium architecture).

ICL eventually brought 68000-series systems to market, as part of a parade of different solutions. An assessment of how things ended up makes this remark:
Some saw the future as the combined phone and PC on the desk (One Per Desk). Others still believed in the mainframe market surviving.
The mainframe mindset may well have inhibited ICL somewhat in this and other endeavours. The One Per Desk, being some kind of worked-over Sinclair QL, was something of a clumsy attempt to target what was arguably a specialised market at that time, but despite also being a 68000-series machine, it perhaps deserves its own thread, just like the Communicator did!
Bobbi
Posts: 683
Joined: Thu Sep 24, 2020 12:32 am

Re: Why did Acorn ignore Motorola?

Post by Bobbi »

Thank you for posting that link to the ICL / PERQ history. Fascinating stuff.

(As an aside, I used to work at ICL a very long time ago.)
paulb
Posts: 842
Joined: Mon Jan 20, 2014 9:02 pm

Re: Why did Acorn ignore Motorola?

Post by paulb »

Bobbi wrote:
Fri Sep 24, 2021 12:22 am
Thank you for posting that link to the ICL / PERQ history. Fascinating stuff.

(As an aside, I used to work at ICL a very long time ago.)
That sounds fascinating in itself! ICL is not quite "before my time" in that the company was obviously still around when we were all getting into microcomputing the first time, albeit as this serious business computer company that would be mentioned in the context of the biggest computers in the canonical mainframe, minicomputer, microcomputer hierarchy. But for people like me, ICL remained this abstract entity that didn't seem to bother with microcomputing, or at least not in the home and education sectors. (Acorn bought ICL's school computing division at one point during its acquisitive phase, however.)

That said, I did have a brief brush with ICL hardware in a very short work placement at a local government office. They had a system running VME - I imagine it was some kind of ICL 2900 system, but I don't remember - and the computer room had a load of Xerox printing hardware to produce things like bills that the council would send out. I spent part of a morning with the systems analyst whose job involved writing programs on cards to give to the data entry people, and we quickly realised that there was nothing for me to really see in "shadowing" her job. So I ended up shadowing the support and maintenance guys, which involved looking at ICL-branded PCs that had arrived and driving around between council offices troubleshooting their modem links.

The next time I went by that office was on the way to the annual school prizegiving event, which was held in the adjacent/attached civic centre building, and since the printing done there happened to involve the Poll Tax bills, there was a bunch of protesters making quite a fuss.
Bobbi
Posts: 683
Joined: Thu Sep 24, 2020 12:32 am

Re: Why did Acorn ignore Motorola?

Post by Bobbi »

ICL was my first proper job really. I worked at ICL Bracknell (BRA01 was the site code) for a year after school and before uni. That was 1989-1990. The main part of my job was as a sort of bug database administrator, but I managed to expand my role and ended up doing sysadmin work on ICL UNIX (on a DRS500) and even a bit of programming work. ICL was generous enough to sponsor me through my undergraduate years so I returned each summer for the next four years for 2 or 3 months, each time in a different part of the company (although always somewhere in Bracknell).

The ICL BRA01 "machine hall" was quite the sight to behold. I was told it was the biggest machine room in Europe at the time.

Even in the early 90s it was pretty obvious ICL was in trouble though, and in some ways it was a rather dispiriting place to work.

One memory I treasure is one of my colleagues (who was close to retirement) telling me how he began his career with English Electric LEO, which is pretty much the dawn of commercial computing!
BigEd
Posts: 4405
Joined: Sun Jan 24, 2010 10:24 am
Location: West Country

Re: Why did Acorn ignore Motorola?

Post by BigEd »

paulb wrote:
Thu Sep 23, 2021 10:28 pm
... according to this fascinating resource...
Great site. I love a bit of computer history.
By 1989, there were also Apollo workstations using Apollo's own RISC architecture...
I'd forgotten that - PRISM and the DN10000.
Bobbi wrote:
Fri Sep 24, 2021 1:58 am
... he began his career with English Electric LEO, which is pretty much the dawn of commercial computing!
I do recommend Georgina Ferry's book!
arg
Posts: 429
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge

Re: Why did Acorn ignore Motorola?

Post by arg »

That PERQ stuff is very interesting as an indication of the markets into which Acorn was/could have been selling.

So in 1983/4 there was still dispute as to whether Unix needed VM (with scientific markets clearly needing it for large datasets moving off mainframes, while commercial markets didn't). Even Sun1 didn't have VM (though they later retrofitted the machines with Sun2 CPU cards to fix this).

The relative capabilities of the PERQ machines in 1984 and the Acorn 16032 second processor make the Acorn product look quite good, but evidently the market wasn't huge and the politics of selling to it complex. So Acorn's approach of targeting a lower price market segment (that could maybe be bought out of discretionary budgets rather than grant funding) sounds logical.

PS. thanks also for the PA-RISC links. PA-RISC as an architecture rather passed me by, probably because HP only used it internally and didn't sell the chips on the general market.
Coeus
Posts: 2342
Joined: Mon Jul 25, 2016 12:05 pm

Re: Why did Acorn ignore Motorola?

Post by Coeus »

paulb wrote:
Thu Sep 23, 2021 10:28 pm
...some people were developing a 68000-based system in ICL, but when they saw that the Three Rivers PERQ was two years ahead of these internal developments, they decided to "buy Three Rivers or at the very least come to some arrangement with the company".
When I first read that, I assumed you meant that Three Rivers were ahead in developing a 68000-based system but, on reading the linked resources, it seems the PERQ was bit-slice.
paulb wrote:
Thu Sep 23, 2021 10:28 pm
...there was then a long process of getting an acceptable Unix solution working on the hardware
One of the articles you linked also comments on Sun being able to offer a BSD-based system with reasonable ease because they were able to make use of effort already put in by UCB in porting BSD to the 68000. Was the bit-slice processor in the PERQ proprietary, meaning that porting the compilers and the kernel would need to start from scratch?

It is interesting to read about issues with FORTRAN and Pascal compilers. I wonder who was active writing compilers at the time and what ICL already had internally. Ideally this should have consisted of writing the code generation for an existing compiler front-end.

And user-mode utilities? I am intrigued as to what the problems there were, though I know we sometimes had trouble porting C code from Unix to an IBM PC because, prior to ANSI C, it was common not to bother declaring functions. In a consistently 32-bit environment it did no harm for the compiler to think a function returned an integer when it actually returned a pointer, as these were the same size; on a PC they were often not, and with no useful warnings from the compiler, things would just crash.
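
To make that concrete, here is a minimal sketch of the trap as I understand it - my own reconstruction, with a hypothetical make_name() helper, not code from the period:

Code: Select all

/* implicit_int.c - minimal sketch of the pre-ANSI pitfall described
   above; make_name() is a hypothetical helper for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Pre-ANSI sources commonly omitted this declaration.  Without it, a
   K&R compiler assumed make_name() returned int, so the returned
   pointer was squeezed through an int: harmless where int and pointers
   were both 32 bits, a crash on a PC where int was 16 bits and
   (large-model) pointers were 32. */
char *make_name(void);

int main(void)
{
    char *name = make_name();
    printf("%s\n", name);
    free(name);
    return 0;
}

char *make_name(void)
{
    char *p = malloc(6);
    if (p != NULL)
        strcpy(p, "hello");
    return p;
}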
paulb wrote:
Thu Sep 23, 2021 10:28 pm
Where this crosses over to HP is that eventually HP acquired Apollo Computer in 1989.
Yes, I remember one of the earlier Apollo systems running AEGIS being used for CAD work, then some workstations that ran Domain/OS, Apollo's version of Unix, and also token ring networks. Post-acquisition, there was a bit of mix and match going on with both hardware and software: if I have got the names right, PA-RISC being HP-PA with bits of PRISM, and a merger of the commercial version of HP-UX, the workstation version of HP-UX and Domain/OS once PA-RISC was being used in both the workstations and servers.
Some saw the future as the combined phone and PC on the desk (One Per Desk). Others still believed in the mainframe market surviving.
There was another similar system, also 68000-based, I think, called Mezza. But the OPD concept was surely betting on the survival of the mainframe, or at least of the departmental computer. The idea was to embrace people having some computer power on their desks but not to go the typical IBM PC (or previous CP/M) route of complete independence, with the OPD or similar being networked back to a larger computer with plenty of storage, to which it could act as a terminal. This would allow the computer on the desk to be made more cheaply, though probably people didn't appreciate the effects of economy of scale and intense competition on the IBM PC clone market, and for some central control to be maintained, especially over an organisation's most critical applications.

Ideas from this time have resurfaced in various forms since: the X terminal, the Sun Network Computer, Citrix MetaFrame, modern Web-based applications and the Chromebook. As for the phone integration, I have no separate work phone and have not had one for some years, just a headset that plugs into the laptop. The software has changed over the years but the concept is the same: wherever I am sitting at my laptop, that's where my phone is, so it's now a personal number, not a geographical one.
paulb wrote:
Fri Sep 24, 2021 12:51 am
...the computer room had a load of Xerox printing hardware to produce things like bills that the council would send out.
I remember something very similar from my work experience with BT. We visited the (IBM) mainframe installation at one of the many computer centres and were shown, amongst other things, the new laser printer, a much bigger machine than the ones we came to have on desks. At the time an operator was unjamming it and explained that it very rarely reached the end of a box of 2,000 fanfold forms before it jammed but, when it did, it would complete the box in two and a half minutes, i.e. 800 pages/min. At the time, the next fastest printer, a drum printer, could manage 600 lines/min, so the laser printer (at the usual 66 lines per fanfold page, roughly 52,800 lines/min) was about 88 times faster and, despite the time spent unjamming it, was quicker overall.

Also, during the time I was at school, a friend's dad worked for ICL as a maintenance technician and reported that he spent the vast majority of his time working on printers.
paulb
Posts: 842
Joined: Mon Jan 20, 2014 9:02 pm

Re: Why did Acorn ignore Motorola?

Post by paulb »

arg wrote:
Fri Sep 24, 2021 3:28 pm
The relative capabilities of the PERQ machines in 1984 and the Acorn 16032 second processor make the Acorn product look quite good, but evidently the market wasn't huge and the politics of selling to it complex. So Acorn's approach of targeting a lower price market segment (that could maybe be bought out of discretionary budgets rather than grant funding) sounds logical.
Acorn's perennial £4,000 price point was ambitious, especially if you look at what was considered acceptable by this audience. As mentioned previously, the Whitechapel MG-1 with a 32016 CPU running National Semiconductor's own Unix variant cost around £10,000.

I think that Acorn probably needed to partner with a company like ICL or GEC to get into the institutional market in the UK, at least if the products were going to cost quite a bit and benefit from existing procurement arrangements. GEC apparently acted as some kind of agent for Sun in the UK, but both GEC and ICL seem to have been rather incoherent, with GEC also deciding to do their own VAX clone - the Series 63 - for instance. Maybe ICL's dithering explains how Acorn managed to get the Chain/M2105 into the BT Merlin range, when most of that range was provided to BT by ICL.

The notion of a very expensive single-user workstation was an odd one. The PERQ and the Apollo workstations appear to have carried on the paradigm defined at Xerox with the Alto, where each user has complete control over their own machine, and where one has networked facilities to offer multi-user services. The paradigm itself is understandable enough - one user per screen, interacting with their own system - but the prices involved were not so easy to justify outside high-spending research establishments and Xerox PARC.

If you read commentary from the early 1980s, there is a degree of enthusiasm for having microcomputer-level systems that can support multiple users - for a while, Guy Kewney appeared obsessed with the Rair Black Box, which was adopted by ICL as the basis of their own first-generation personal computer range - partly to mitigate the cost of each system, but also out of maintainability considerations. But even a workstation being used by one person can benefit from a software architecture like Unix that introduces different accounts and privilege levels for maintenance purposes and for robustness. (Interestingly, I think Plan 9 moved back towards the networked single-user system paradigm somewhat.)

Alongside the technological evolution of various Unix products, standardisation was evidently driving Unix adoption, which is one reason why people were so interested in getting Unix on the PERQ. It was no longer enough that you could buy a system with a fast processor and nice graphics that could compile and run, say, Pascal programs, particularly if there was limited interoperability and no real software ecosystem. This is where Acorn's Panos-based Cambridge Workstation seemed rather out of place. It was marketed almost like an accessory to a mainframe and a way of spreading the processing power around in an organisation, just as had happened organically when people found that microcomputers could run things like spreadsheets.

But the market was a lot more mature and discerning by the time the Cambridge Workstation came out. When less powerful systems have a greater range of software and more sophisticated software environments, why would anyone accept anything less from a workstation? The idea of a fast piece of hardware for a limited range of computational needs is perhaps acceptable if it is some kind of peripheral, but not if it is the thing you have to sit in front of.
arg wrote:
Fri Sep 24, 2021 3:28 pm
PS. thanks also for the PA-RISC links. PA-RISC as an architecture rather passed me by, probably because HP only used it internally and didn't sell the chips on the general market.
Although I only ever encountered HP/Apollo-branded PA-RISC systems running HP-UX, it is interesting to see that other vendors sold PA-RISC systems, though I think it follows the general pattern that one also sees with things like SPARC, Alpha, MIPS and POWER, each to a greater or lesser extent. Much was also made of Commodore working on a PA-RISC-based chipset that never came to market due to Commodore's demise.
arg
Posts: 429
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge

Re: Why did Acorn ignore Motorola?

Post by arg »

Coeus wrote:
Fri Sep 24, 2021 6:56 pm
paulb wrote:
Thu Sep 23, 2021 10:28 pm
...there was then a long process of getting an acceptable Unix solution working on the hardware
One of the articles you linked also comments on Sun being able to offer a BSD-based system with reasonable ease because they were able to make use of effort already put in by UCB in porting BSD to the 68000. Was the bit-slice processor in the PERQ proprietary, meaning that porting the compilers and the kernel would need to start from scratch?

It is interesting to read about issues with FORTRAN and Pascal compilers. I wonder who was active writing compilers at the time and what ICL already had internally. Ideally this should have consisted of writing the code generation for an existing compiler front-end.

And user-mode utilities? I am intrigued as to what the problems there were, though I know we sometimes had trouble porting C code from Unix to an IBM PC because, prior to ANSI C, it was common not to bother declaring functions. In a consistently 32-bit environment it did no harm for the compiler to think a function returned an integer when it actually returned a pointer, as these were the same size; on a PC they were often not, and with no useful warnings from the compiler, things would just crash.
If you dig into those PERQ documents, it explains that the original PERQ instruction set was approximately P-code in order to optimally accept the output of the widespread P-system Pascal compilers (which were used for the PERQ's native OS), and this P-code was not optimal for C. So when coming to port Unix, they had the choice of either making a new compiler that turned C into P-code, or, since it was a microcoded machine, re-writing the microcode to give an instruction set closer to those targeted by available C compilers. Re-writing the microcode also allowed the option of adding VM support, which the original didn't have. Alternatively, they could implement Unix as a microkernel on top of another OS (with its own microcode and VM support) that was being developed at CMU.

They apparently tried all three approaches... The Unix targeting the original P-code never achieved satisfactory performance; the version based on the CMU work initially showed most promise, but then CMU did a deal with IBM and Sun and stopped putting effort into the PERQ target. ICL tackled the new microcode version, but that group at ICL didn't believe in VM, and ICL sabotaged the development by moving the development team from Bracknell to Dunfermline, and then moving it again to Letchworth.

The bit about the utilities was that apparently these were heavily PDP-11 specific - although notionally all in C, some apparently had PDP-11 executable code just stored as literal constants in the C source! That seems rather surprising; evidently they were using raw AT&T V7 Unix, but even that was supposedly more portable: the book "The Design and Implementation of the 4.3BSD UNIX Operating System" claims that V7 Unix, while originally targeting the PDP-11, had already been ported to the Interdata 8/32 (a 32-bit machine, compared to the 16-bit PDP-11) and also to the VAX - the Berkeley work started from that VAX edition with the addition of VM support (in 1979). Possibly ports of V7 Unix were maintained separately rather than being integrated in a single distribution as we are accustomed to nowadays; I believe Berkeley did a lot of portability work turning what started out as a patch set for V7 into their renowned 'distribution' tapes. And probably similar portability work was done by USG in turning the AT&T research Unix into System V.
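
Out of curiosity, the trick of burying machine code in C source presumably looked something like the sketch below. The two octal instruction words are genuine PDP-11 code (MOV #5,R0 then RTS PC), but the framing is my guess at the idiom rather than anything quoted from the V7 utilities:

Code: Select all

/* pdp11_blob.c - illustrative only: machine code carried around as
   integer constants, as described above. */
short code[] = {
    0012700,    /* MOV #5,R0 - load the immediate that follows into R0 */
    0000005,    /*             (the immediate word itself)             */
    0000207     /* RTS PC    - return to caller; result left in R0     */
};

int main()
{
    /* Cast the data to a function pointer and jump into it.  This only
       ever worked on a PDP-11 (and without today's no-execute data
       protections), which is exactly why such sources defeated any
       port to another processor. */
    int (*f)() = (int (*)())code;
    return f();     /* exit status 5 - on a PDP-11 only */
}

One can see why a port to the PERQ's instruction set stood no chance against that sort of thing.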

So the PERQ Unix project probably started out at a particularly bad moment in history.
arg
Posts: 429
Joined: Tue Feb 16, 2021 2:07 pm
Location: Cambridge

Re: Why did Acorn ignore Motorola?

Post by arg »

paulb wrote:
Fri Sep 24, 2021 10:34 pm
The notion of a very expensive single-user workstation was an odd one. The PERQ and the Apollo workstations appear to have carried on the paradigm defined at Xerox with the Alto, where each user has complete control over their own machine, and where one has networked facilities to offer multi-user services. The paradigm itself is understandable enough - one user per screen, interacting with their own system - but the prices involved were not so easy to justify outside high-spending research establishments and Xerox PARC.
Yes, that was really my point - those machines costing ~£10K (and with a $300/month maintenance contract!) did have a nice spec, but the main extra thing they had was better displays. If you then read those same documents about the projects for which they were buying the PERQs, it was all about being able to run heavyweight processing jobs (Fortran being important, Lisp etc) and the nice graphics was really just a 'nice to have'. Those same documents reveal that windowing display software for most of these machines at that time was mostly vapourware and/or didn't perform.

We bought a Sun 3/50 in 1985, and it was a very nice machine, but we only got it because it was super cheap in a bankruptcy auction, and I never understood how anybody could justify paying full price for one.

All of which goes to show that it was a very peculiar market: you had to follow the politics to make sales, rather than winning by rationally having the best product.
paulb
Posts: 842
Joined: Mon Jan 20, 2014 9:02 pm

Re: Why did Acorn ignore Motorola?

Post by paulb »

arg wrote:
Fri Sep 24, 2021 10:48 pm
paulb wrote:
Fri Sep 24, 2021 10:34 pm
The notion of a very expensive single-user workstation was an odd one. The PERQ and the Apollo workstations appear to have carried on the paradigm defined at Xerox with the Alto, where each user has complete control over their own machine, and where one has networked facilities to offer multi-user services. The paradigm itself is understandable enough - one user per screen, interacting with their own system - but the prices involved were not so easy to justify outside high-spending research establishments and Xerox PARC.
Yes, that was really my point - those machines costing ~£10K (and with a $300/month maintenance contract!) did have a nice spec, but the main extra thing they had was better displays. If you then read those same documents about the projects for which they were buying the PERQs, it was all about being able to run heavyweight processing jobs (Fortran being important, Lisp etc) and the nice graphics was really just a 'nice to have'. Those same documents reveal that windowing display software for most of these machines at that time was mostly vapourware and/or didn't perform.
The "Methodology of Window Management" document is interesting from a historical perspective because you can see the beginnings of systems that would subsequently emerge like the X Window System and NeWS.

I agree with the assessment of the applications that this particular audience seemed to have. In many respects, it looks like they wanted to offload the mainframe, just like Acorn would promise, rather than play to the strengths of the paradigm: there is a reason why high-resolution workstations emerged at Xerox and, through cost reduction, eventually became microcomputers that were seen as good for desktop publishing work.

I suppose that there would be opportunities for better visualisation with graphical workstations and, for some audiences (but not the referenced high performance computing audience), workstations would become powerful enough to do all you really needed to do, which pretty much described the computer science department I encountered at university: labs of workstations and no mainframes. But that was a few years away when those documents were written.
arg wrote:
Fri Sep 24, 2021 10:48 pm
We bought a Sun 3/50 in 1985, and it was a very nice machine, but we only got it because it was super cheap in a bankruptcy auction, and I never understood how anybody could justify paying full price for one.

All of which goes to show that it was a very peculiar market and you had to follow the politics to make sales rather than having rationally the best product.
I don't see many mentions of Hewlett-Packard with respect to workstations on the Chilton site, at least in the mid-1980s era, which is probably a bit odd since HP were definitely in the Unix workstation business by then, so maybe that is another indication of the market's peculiarities and a degree of protectionism where you see UK companies importing US technology to sell into that sector.

Curiously, searching for Hewlett-Packard produces some correspondence presumably from a lawyer looking for prior art to defend Microsoft and HP against litigation from Apple.
Coeus
Posts: 2342
Joined: Mon Jul 25, 2016 12:05 pm

Re: Why did Acorn ignore Motorola?

Post by Coeus »

paulb wrote:
Fri Sep 24, 2021 10:34 pm
If you read commentary from the early 1980s, there is a degree of enthusiasm for having microcomputer-level systems that can support multiple users - for a while, Guy Kewney appeared obsessed with the Rair Black Box
Reading that link I am not quite sure how this black box fits in - it looks like a PC.

But I have commented before that, as the 8086-series developed, Intel seemed to be of the impression that running many small tasks rather than one big one was the way people wanted to go. The segmented architecture in the original 8086 leans that way and virtual 8086 mode in one of the later processors does too.

There were Unix machines based on the Z8000 processor - I remember seeing one in BT's offices at Bibb Way in Ipswich. I don't remember the exact details, except that it was of modest size - bigger than a PC but smaller than a domestic automatic washing machine - ran some form of Unix and had a number of terminals attached. The C8002 made by Onyx Systems (see https://en.wikipedia.org/wiki/Zilog_Z8000 under Z8000 CPU based systems) sounds like the kind of thing. Staff used it for typical office applications: documents, spreadsheets etc., which they could easily share, and everyone would have been running the single installed version of the office suite, so no version mismatch problems. Of course, with green-screen terminals it was not WYSIWYG, but neither were CP/M or early MS-DOS office applications.
paulb wrote:
Fri Sep 24, 2021 10:34 pm
But even a workstation being used by one person can benefit from a software architecture like Unix that introduces different accounts and privilege levels for maintenance purposes and for robustness.
Certainly my experience with a group of engineers who were writing programs to process research data was that we very much appreciated, when our software had bugs in its memory references, that the result was "core dumped" on the terminal and not a complete crash and a need to reboot.
arg wrote:
Fri Sep 24, 2021 10:40 pm
If you dig into those PERQ documents, it explains that the original PERQ instruction set was approximately P-code in order to optimally accept the output of the widespread P-system Pascal compilers (which were used for the PERQ's native OS), and this P-code was not optimal for C. ...
One of the things I have found fascinating about this discussion is the gradual encroachment of the microprocessor on the traditional domain of the mainframe. This highlights a big difference in general approach, though.

In the mainframe architectures of the day, the instruction sets seem to have been designed almost as a programmer's wishlist, without being constrained by what was easily possible in hardware. Furthermore, there was a deliberate intention that the instruction set would be stable over time, so it would include things that would be slow on the initial and cheapest hardware, using either complex microcode or even trapping to the OS to emulate the instruction, which would be much faster if you paid for the most expensive hardware or as time passed and even the cheaper machines were able to do more directly in hardware. The result of this is that binary software could be run on any machine in the series and would perform better on better hardware without needing to be updated. That would have been important when there was still a mix of compiled software and software written in assembler.

By comparison microprocessor instruction sets were more closely tied to what was implemented in hardware and evolved as the hardware improved. The CISC microprocessors were definitely moving in the mainframe direction, though, and the 68000 was a case in point by choosing a 32 bit logical architecture even though most of the implementation was initially only 16 bit. Just like a mainframe, later processors would execute the exact same code faster once they were fully 32 bit.

So the PERQ was firmly in the CISC, mainframe camp by the sounds of it but, unlike the mainframe (and probably 68000) instruction sets that were designed for an assembly language programmer, was optimised for the output of a compiler, which was, from what I remember, part of the argument for RISC. If people stopped writing assembler and wrote only in high level languages, the instruction set need not be convenient for a programmer by having powerful instructions that many compilers would never use; it could instead execute small, generally useful instructions very fast. That became even more true with explicitly parallel architectures, of which PRISM, already mentioned, was one. No-one would want to be working out what could execute in parallel on each execution unit by hand.

So the idea of using microcode to effectively run the intermediate code from a compiler is an interesting approach but also seems somewhat limiting in that it assumes that the entire software suite running at any point in time will all be from that same or a compatible compiler.

So this would be one way in which RISC, if available at the time, would have been a better bet. One could, of course, write a p-code interpreter in assembler in the RISC instruction set and doing the same for other intermediate codes would enable different processes under the same OS to run the output of different compilers on the same machine at the same time. But, presumably, once one has small, fast instructions it would also be quite reasonable to machine translate p-code into object code, almost by macro expansion, as a final step in code generation.
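
As a toy illustration of that macro-expansion idea, a translator really can be little more than a table of templates, one fixed target sequence per intermediate op. The simplified p-code ops and the RISC-flavoured assembly below are my inventions for the sketch, not the real UCSD or PERQ encodings:

Code: Select all

/* pcode_expand.c - toy "macro expansion" of stack-based intermediate
   code into RISC-style assembly, one fixed template per op. */
#include <stdio.h>

enum op { LDL, ADI, STL };              /* load local, add, store local */

struct pcode { enum op op; int arg; };

static void expand(struct pcode p)
{
    switch (p.op) {
    case LDL:                           /* push local variable #arg */
        printf("    ldw   r1, [fp, #%d]\n", p.arg * 4);
        printf("    push  r1\n");
        break;
    case ADI:                           /* add the two top-of-stack items */
        printf("    pop   r2\n");
        printf("    pop   r1\n");
        printf("    add   r1, r1, r2\n");
        printf("    push  r1\n");
        break;
    case STL:                           /* pop into local variable #arg */
        printf("    pop   r1\n");
        printf("    stw   r1, [fp, #%d]\n", p.arg * 4);
        break;
    }
}

int main(void)
{
    /* intermediate code for: c := a + b  (locals 0, 1 and 2) */
    struct pcode prog[] = { {LDL, 0}, {LDL, 1}, {ADI, 0}, {STL, 2} };
    for (int i = 0; i < 4; i++)
        expand(prog[i]);
    return 0;
}

A peephole pass over the output would then remove most of the redundant push/pop pairs, which is where a real translator would earn its keep.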

I have not written a compiler myself but I did read one of the standard reference books, and I have commented before that the academics seemed very interested in lexical analysis and parsing and then didn't really bother with code generation, perhaps because that was getting away from the general and had to target some real hardware. Then it seems, from some of the links about the PERQ, that connections with a university were seen as critical. Was there a lack of really skilled compiler writers in industry, with the talent preferring to stay in universities?
paulb wrote:
Fri Sep 24, 2021 10:34 pm
ICL sabotaged the development by moving the development team from Bracknell to Dunfermline, and then moving it again to Letchworth.
And if that is how industry treats people, can you blame the talented for staying in their academic institutions?
paulb
Posts: 842
Joined: Mon Jan 20, 2014 9:02 pm

Re: Why did Acorn ignore Motorola?

Post by paulb »

Coeus wrote:
Sat Sep 25, 2021 12:41 am
paulb wrote:
Fri Sep 24, 2021 10:34 pm
If you read commentary from the early 1980s, there is a degree of enthusiasm for having microcomputer-level systems that can support multiple users - for a while, Guy Kewney appeared obsessed with the Rair Black Box
Reading that link I am not quite sure how this black box fits in - it looks like a PC.
Yes, it is intended to run MP/M, which is a multi-user version of CP/M.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
But I have commented before that, as the 8086-series developed, Intel seemed to be of the impression that running many small tasks rather than one big one was the way people wanted to go. The segmented architecture in the original 8086 leans that way and virtual 8086 mode in one of the later processors does too.
It is clear that Intel were designing processors for more advanced systems, supporting process isolation and so on. As I may have noted before, there were systems like the AT&T 6300 Plus that allowed people to run DOS as a process within Unix on 80286 hardware, albeit with additional hardware for this kind of virtualisation.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
There were Unix machines based on the Z8000 processor - I remember seeing one in BT's offices at Bibb Way in Ipswich. I don't remember the exact details, except that it was of modest size - bigger than a PC but smaller than a domestic automatic washing machine - ran some form of Unix and had a number of terminals attached. The C8002 made by Onyx Systems (see https://en.wikipedia.org/wiki/Zilog_Z8000 under Z8000 CPU based systems) sounds like the kind of thing. Staff used it for typical office applications: documents, spreadsheets etc., which they could easily share, and everyone would have been running the single installed version of the office suite, so no version mismatch problems. Of course, with green-screen terminals it was not WYSIWYG, but neither were CP/M or early MS-DOS office applications.
I think Onyx may have been first to market with a Unix system beyond the minicomputer market, or at least some relevant "first" may have been claimed, and people consequently had big hopes for the Z8000. The Zilog strategy then unravelled somewhat. I think I may have read articles about problems getting the product to market, reliability, and so on. Also, customers may have been confused by the naming, thinking it to be in the same family as the Z80, so Zilog then made the Z800 to rectify that misconception. There was also a Z80000, but nobody seems to have been paying attention to Zilog by that point. They should probably have fired the people responsible for naming their products.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
One of the things I have found fascinating about this discussion is the gradual encroachment of the microprocessor on the traditional domain of the mainframe. This highlights a big difference in general approach, though.

In the mainframe architectures of the day, the instruction sets seem to have been designed almost as a programmer's wishlist, without being constrained by what was easily possible in hardware. Furthermore, there was a deliberate intention that the instruction set would be stable over time, so it would include things that would be slow on the initial and cheapest hardware, using either complex microcode or even trapping to the OS to emulate the instruction, which would be much faster if you paid for the most expensive hardware or as time passed and even the cheaper machines were able to do more directly in hardware. The result of this is that binary software could be run on any machine in the series and would perform better on better hardware without needing to be updated. That would have been important when there was still a mix of compiled software and software written in assembler.
Yes, there was considerable vertical integration within traditional computer manufacturers where they could presumably move things into hardware fairly readily. Naturally, this was not always a good idea, although your point about using hardware support for instructions as a way of "upselling" the customers is a good one.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
By comparison microprocessor instruction sets were more closely tied to what was implemented in hardware and evolved as the hardware improved. The CISC microprocessors were definitely moving in the mainframe direction, though, and the 68000 was a case in point by choosing a 32 bit logical architecture even though most of the implementation was initially only 16 bit. Just like a mainframe, later processors would execute the exact same code faster once they were fully 32 bit.
Being able to offer a narrower interface to memory and having a lower pin count has various cost advantages for manufacturers looking to move up to a more powerful processor. Similarly, the 32016 led to the 32032, and the 80286 led to the 80386, although the latter succession may have been more radical than the former.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
So the PERQ was firmly in the CISC, mainframe camp by the sounds of it but, unlike the mainframe (and probably 68000) instruction sets that were designed for an assembly language programmer, was optimised for the output of a compiler, which was, from what I remember, part of the argument for RISC. If people stopped writing assembler and wrote only in high level languages, the instruction set need not be convenient for a programmer by having powerful instructions that many compilers would never use; it could instead execute small, generally useful instructions very fast. That became even more true with explicitly parallel architectures, of which PRISM, already mentioned, was one. No-one would want to be working out what could execute in parallel on each execution unit by hand.
The difficulty in making compilers that target exotic instructions was definitely a motivation for eliminating those instructions. It also turned out to be difficult to get compilers to figure out how to use all the parallel instruction "slots" on architectures with wide instructions, as the PRISM's Wikipedia article notes (without citations, but this is generally agreed). Also, from what I read, Intel's i860, which got a lot of hype in the late 1980s as a "supercomputer on a chip" and such, turned out to be unsuitable for things like workstations because of the impact of context switching on all the pipelines and execution units, so it ended up being used in graphics accelerators and things that focus on doing one thing very quickly.

(Acorn appear to have dabbled with the i860, and I wonder if some of this work eventually made its way into Gnome Computing's product line. I bet there were a few internal Acorn projects that ended up being given away in such a fashion.)
Coeus wrote:
Sat Sep 25, 2021 12:41 am
So the idea of using microcode to effectively run the intermediate code from a compiler is an interesting approach but also seems somewhat limiting in that it assumes that the entire software suite running at any point in time will all be from that same or a compatible compiler.
The need to make sure that everything is on the same page with respect to the architecture being provided would be very worrying to me if I were trying to persuade people to buy this kind of system. It is like the dependency management nightmare of modern software that is constantly changing at multiple levels. Stability has some very significant advantages when deploying technology.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
So this would be one way in which RISC, if available at the time, would have been a better bet. One could, of course, write a p-code interpreter in assembler in the RISC instruction set and doing the same for other intermediate codes would enable different processes under the same OS to run the output of different compilers on the same machine at the same time. But, presumably, once one has small, fast instructions it would also be quite reasonable to machine translate p-code into object code, almost by macro expansion, as a final step in code generation.
I haven't looked at p-code, but there are virtual machines whose design may have made sense at a certain point in time, but which make less sense as software and hardware progress. Here, I am thinking of stack-based virtual machines that spend time pushing and popping an evaluation stack, where a register-based virtual machine instead names the equivalent slots directly in each instruction. Targeting the register-based model requires a bit more effort in software, and the hardware might also be more complicated, but it proves to be more efficient.
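
A minimal sketch of the difference, with both "machines" invented for illustration: the stack model takes four dispatched operations and four stack touches to add two numbers, where the register model names its operands directly and takes one:

Code: Select all

/* vm_styles.c - the same addition done stack-machine style and
   register-machine style; both machines are inventions. */
#include <stdio.h>

int main(void)
{
    int stack[8], sp = 0;               /* stack VM state */
    int regs[8] = { 0, 7, 35 };         /* register VM: r1 = 7, r2 = 35 */

    /* Stack VM: PUSH 7; PUSH 35; ADD; POP - four trips around the
       dispatch loop, each shuffling the evaluation stack. */
    stack[sp++] = 7;
    stack[sp++] = 35;
    sp--; stack[sp - 1] = stack[sp - 1] + stack[sp];
    int result = stack[--sp];

    /* Register VM: ADD r0, r1, r2 - one dispatched instruction with
       the operand slots encoded in the instruction itself. */
    regs[0] = regs[1] + regs[2];

    printf("stack VM: %d, register VM: %d\n", result, regs[0]);
    return 0;
}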
Coeus wrote:
Sat Sep 25, 2021 12:41 am
I have not written a compiler myself but I did read one of the standard reference books, and I have commented before that the academics seemed very interested in lexical analysis and parsing and then didn't really bother with code generation, perhaps because that was getting away from the general and had to target some real hardware. Then it seems, from some of the links about the PERQ, that connections with a university were seen as critical. Was there a lack of really skilled compiler writers in industry, with the talent preferring to stay in universities?
Despite delays in getting products to market, I would say that the people at ICL who were doing the work did seem to actually get things done, so I doubt that they lacked skilled developers, although some of the developers may have come from other companies and institutions. One could turn this around and note that a lot of effort was directed towards academic projects that were interesting and influential but not able to deliver a usable product, thinking of the Accent-based system that proved unsatisfactory for the PERQ.
Coeus wrote:
Sat Sep 25, 2021 12:41 am
paulb wrote:
Fri Sep 24, 2021 10:34 pm
ICL sabotaged the development by moving the development team from Bracknell to Dunfermline, and then moving it again to Letchworth.
and if that is how industry treats people then can you blame the talented for staying in their academic institutions.
Actually, arg wrote that part, but I also doubt that the corporate paper shuffling and its consequences helped retain key employees, especially when there would have been plenty of other opportunities outside the company for those people.