6502 Assembly language

Coding in Assembly for an Apple II

In my last post, I wrote about my attempt to port Conway’s Game of Life to my Apple II. This port was written in C and relied on a cross-compiler suite (cc65) generating binaries for the Apple’s CPU: the MOS 6502. The conclusion was that the port was viable. Unfortunately, it was also way too slow. Indeed, the 6502 is a primitive 8-bit CPU that is ill-suited as a target for a high-level compiled language such as C.

It was time to speed things up and re-write the inner loops in assembly!

I will begin by presenting the results I obtained, before explaining the “workflow” I followed to produce binaries, upload them and debug them.

The code can be cloned from my Github repository:

Results obtained

I concentrated my optimization efforts on the main “updating” function. It determines which cells will die and where new ones will spawn, and is therefore the most time-consuming part of the program.

  1. Rewriting the innermost function, “count_neighbours”, in assembly.
    ‱ In my previous post, I noted that the code generated by the compiler was around 220 instructions long, which was huge.
      After my rewrite, the function shrank to only about 30 instructions!
  2. Rewriting the C code of the outer loop to make it more 6502-friendly (see the sketch below).
    ‱ Using the __fastcall__ calling convention, so that parameters are passed in the accumulator and the X register instead of the “fake” stack.
    ‱ Eliminating two-dimensional array indexing (i.e. [x][y]), which is especially expensive.
  3. Rewriting the whole update function in assembly.
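To make point 2 concrete, here is a minimal sketch of the kind of code it leads to. The names and dimensions are mine, not the exact code from the repository:

    /* Sketch only -- array names and dimensions are assumptions,
     * and border handling is omitted for brevity. */
    #define COLS 38
    #define ROWS 21

    static unsigned char board[ROWS * COLS];   /* flattened: no [y][x] */
    static unsigned char next[ROWS * COLS];

    /* Implemented in ca65 assembly. __fastcall__ makes cc65 pass the
     * (single) parameter in the A/X registers instead of pushing it
     * onto its costly software stack. */
    unsigned char __fastcall__ count_neighbours(unsigned char *cell);

    static void update(void)
    {
        unsigned int i;                /* 798 cells: too big for a char! */
        unsigned char n;

        for (i = 0; i < ROWS * COLS; ++i) {
            n = count_neighbours(board + i);   /* walk a pointer, not [y][x] */
            next[i] = (n == 3) || (board[i] && n == 2);
        }
    }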

Incidentally, I also abandoned the ASCII display functions provided by the compiler suite and wrote some assembly functions to replace them. I then abandoned the original text mode altogether to draw the cells in color 😉

And here is the normalized execution time, over 30 iterations, of all these versions. Of course, the lower, the better.

Normalized execution time of the successive versions

An 18× speed-up! Writing assembly is not always easy, but it is definitely worth it!

The 6502 opcodes are few and “logical”, so they are quite easy to learn and use. However, as I mentioned in my previous post, this CPU is primitive by today’s standards. Its resources are scarce and I have to admit that I was not used to 8-bit arithmetic: hitting the 255 ceiling can be quite frustrating!
Especially when branching. The 6502 provides very efficient conditional branching instructions. The drawback is that they can only jump 128 bytes backwards or 127 bytes forwards. As most instructions are 2 or 3 bytes long, you have to code tight and lay out your instructions carefully if you want to use them!

Remember that most of the code (all the logic, the state machine, the textual displays…) is still written in C. This remaining code is less systematic and would be much harder to write in ASM. But it does not matter much, as the time spent in these parts is also much smaller.

To conclude, I would say that nowadays it is not too difficult to write a program for Apple II computers. Thanks to cc65, you can write the whole program in C and, if you need more performance, rewrite the most time-consuming functions in ASM. And thanks to modern editors and emulators, the process of writing, testing and debugging is far easier and more pleasant than it was thirty years ago!

Quick workflow

When coding such a small project in C, I could live without a debugger. That was no longer the case once I started rewriting parts of it in assembly.

  1. I was not familiar with the 6502 architecture and opcodes, so it was unthinkable that I could produce bug-free binaries on my first attempts.
  2. I didn’t know how to debug a low-level program without a proper debugger.

Thus I established my (quick and dirty) workflow.

The assembler

That’s the easy part, as the cc65 compiler suite provides a macro assembler simply called ca65. But I did not want to rewrite my whole program, only the inner loops. So I had to learn the compiler’s calling convention, i.e. how to call a subroutine written in ASM from the C code. And how to return from it… Fortunately, this part is well documented.
The only caveat I could not overcome was accessing the symbols declared in C from the assembly side. So I introduced a function, init_asm, to pass their addresses to my “ASM world”.
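The glue looks something like this sketch (the names are hypothetical, not the actual signatures from the repository):

    /* C side -- hypothetical names. Called once at start-up, it hands
     * the address of a C array to the assembly module, which stashes
     * it in a zero-page location it owns. With __fastcall__, the
     * pointer arrives in A (low byte) and X (high byte). */
    extern unsigned char board[];

    void __fastcall__ init_asm(unsigned char *board_addr);

    void setup(void)
    {
        init_asm(board);
    }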

Producing a disk image

The linker produces a perfectly viable Apple II executable file, but loading it on an emulator requires embedding it in a disk image. For that purpose, I used AppleCommander, a utility (unfortunately written in Java) whose purpose is to manipulate Apple II disk images. One bright spot is that it can be used from the command line, which allowed me to invoke it as a build step in the Makefile.

Command to add the executable to the disk image:
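Something along these lines, with placeholder file and image names, assuming AppleCommander’s -p command (which reads the file from standard input; the file type token and load address depend on your image and cc65 configuration):

    java -jar AppleCommander.jar -p life.dsk life bin 0x803 < life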

Command to remove the executable from the disk image:
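Again with placeholder names, this time assuming the -d (delete) command:

    java -jar AppleCommander.jar -d life.dsk life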

The debugger

In order to test and debug my program, I used an emulator: AppleWin. It is quite accurate but, most importantly, it also features a competent debugger! Of course, it is rougher to use than a modern one. But hey, if you’re coding in 6502 assembly that won’t stop you! 😉

At any moment, you can enter this debugger by pressing F7. It is then quite easy to place breakpoints, run until a specific address or inspect the memory. Unfortunately, after exiting the debugger the program often does not resume correctly, so I could not reliably place my breakpoints before launching my program :/ Instead, I often had to place an infinite loop at the beginning of my code. Once it was stuck there, I entered the debugger and manually modified the program counter to exit the loop. Then I could ask the debugger to run until my area of interest.
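In C, the trap can be as simple as this hypothetical sketch (the variable name is mine):

    /* Park the program in a loop right at start-up. From AppleWin's
     * debugger, move the program counter past the loop (or clear the
     * flag in memory) and resume execution. */
    static volatile unsigned char parked = 1;

    int main(void)
    {
        while (parked) ;   /* spin here until freed from the debugger */

        /* ... rest of the program ... */
        return 0;
    }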

AppleWin’s integrated debugger


Coding in C for an 8-bit 6502 CPU

As my French-speaking readers may know, I recently woke my Apple IIe from its long hibernation and carried out some minor repairs to make it usable again.

There are many many interesting games running on Apple II, but I’m a coder so I want to run some of my own code on it! 😉

CC65, a compiler suite targeting the 6502

Traditionally, these 8-bit personal computers were programmed in BASIC. Apple IIs came with AppleSoft, a quite powerful (for the time) BASIC from Microsoft. But BASICs are interpreted languages and thus quite slow. So if you wanted to do something serious, you had no choice but to program in assembly.

The Apple II is powered by a very simple MOS 6502 CPU. Although I have done my time programming in ASM (mainly for Motorola’s 68000 and 56000), I wanted to avoid plunging too deeply into the arcana of the 6502 architecture. So I was quite pleased to find that there exists an open-source cross-compiler suite targeting the 6502! It even provides limited support of the standard library on the Apple II, and an easy way to read inputs and draw ASCII characters on the screen!
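To give an idea, a minimal cc65 program for the Apple II looks like the sketch below (not code from this project). It builds with cl65 -t apple2 and uses the suite’s conio routines:

    #include <conio.h>

    int main(void)
    {
        clrscr();                           /* clear the text screen    */
        cputsxy(0, 0, "HELLO, APPLE II!");  /* print at column 0, row 0 */
        cgetc();                            /* wait for a keypress      */
        return 0;
    }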

Armed with this powerful suite, I decided to quickly implement the Game of Life. This “game” consists of placing so-called “cells” on a board and watching them evolve or die as they follow a few basic rules: a live cell survives if it has two or three neighbours, and a new cell is born on any empty square with exactly three.

Some cells evolving following the Game of Life’s rules.

Compilation was flawless. Putting the binary on a disk was not much of a hurdle either. But when I watched the cells evolve… It was SLOOOOOOOOOOOW! 😼

Investigating the code

I know that the 6502 in my Apple is an 8-bit CPU running at a paltry 1MHz. But the most demanding part of my code is the following function, called 798 times per screen (or 38 times per line).
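It is the classic neighbour count; here is a representative sketch (the array name and dimensions are assumptions, not the exact listing), with the two-dimensional indexing that will turn out to matter:

    extern unsigned char board[23][40];

    unsigned char count_neighbours(unsigned char x, unsigned char y)
    {
        unsigned char n = 0;

        /* sum the eight neighbours of the cell at (x, y) */
        n += board[y - 1][x - 1] + board[y - 1][x] + board[y - 1][x + 1];
        n += board[y    ][x - 1]                   + board[y    ][x + 1];
        n += board[y + 1][x - 1] + board[y + 1][x] + board[y + 1][x + 1];

        return n;
    }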

So we are talking about a grand total of around 10,000 8-bit additions and 20,000 8-bit memory accesses. That’s not negligible, but it should not take this long: drawing a cell on the screen is only a matter of milliseconds. So I know that this function is the main culprit.

I decided to have a look at the generated assembly file. That’s quite easy, as cc65 compiles the C into assembly before assembling it into the binary object.

WTF ??!!??

As I said, I’m not an expert in the 6502’s ISA, but more than 220 instructions, including many jumps to subroutines??? Basic operations such as additions and stack manipulations performed by subroutines (addqysp and pusha0)????
Clearly something is wrong. No wonder my Game of Life runs so slowly!

I read the coding hints in the cc65 documentation, but it appears that I did nothing too wrong. Besides, the cc65 compilation suite is well regarded, and is considered more effective than some C compilers of the 80s.

There must be something else.

And indeed, the culprit is the 6502 itself: its architecture is totally unsuited to high-level programming!

The MOS 6502

The 6502 is indeed a very unusual beast. If we compare it to the Z80, a very common 8-bit CPU of that time, the 6502 is even more of a true 8-bit architecture: it does not provide any 16-bit register, nor any support for 16-bit operations. As a matter of fact, it only comes with a single “true” register! The Z80 provided eight 8-bit registers that could be combined to form four 16-bit registers! Ouch!

If we look at the 6502 block diagram below, we can see that its 8-bit register is called the accumulator and can be a source or destination of ALU and load/store operations. There are also two more quasi-registers, X and Y, called the index registers. Operations on them are much more limited: they are mainly used to store and produce indexes for some of the indirect addressing modes.

The 6502 block diagram with the areas of interest highlighted

Some more limitations:

  ‱ The 6502 accesses data in 256-byte pages. If you want to access any address above the “zero page” (0x00 to 0xFF), you pay a penalty, as it requires computing a 16-bit address!
  ‱ The stack is fixed to the “first page” (addresses 0x100 to 0x1FF) and thus cannot hold more than 256 bytes.
  ‱ There is no multiply or divide instruction. In fact, no “complex” operation is supported at all, as there is no microcode! (See the snippet below.)
  ‱ Only the accumulator can be pushed to or pulled (popped) from the stack.
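That third point means that even a trivial C multiplication cannot compile to a single instruction; cc65 has to emit a call into a software multiplication routine in its runtime:

    /* With no hardware multiply, the `*` below compiles to a call
     * into cc65's runtime rather than to a single instruction. */
    unsigned char scale(unsigned char a, unsigned char b)
    {
        return a * b;
    }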

I could go on.
I’m sure you are beginning to understand that this is far from C-friendly.

In order to address these shortcomings, the compiler writers had no choice but to rely on inefficient workarounds. For instance, cc65 maintains its own “unofficial” stack, stored at the highest addresses and growing downwards. To use it, it has to rely on custom “push” and “pop” subroutines. It is the accumulation of such tricks that makes the generated code so inefficient.

But anyway, kudos to the cc65 developers! It was no small task to make C programming easy on such a target!

Any strong point?

If the 6502 was so primitive, how come it was such a tremendous success in its time? You can find it in the Apple II, but also in the NES, the C64, the BBC Micro and many others!

Well, I only stated that it was unsuited to high-level compiled languages but, during the 80s, home computers were not programmed like that!

When properly programmed in assembly, the 6502 can truly sing!

First, its small instruction set (only 56 instructions!) can be seen as a strength: decoding them is fast and cheap. Furthermore, the execution time is short compared to the other architectures of the time: almost always a single cycle once memory accesses are excluded. Some kind of primitive pipelining is even possible when combining certain addressing modes with certain operations: the next instruction can be fetched before the completion of the current one!

And the chip itself was cheap. Cheap to produce, thus cheap to buy: it is composed of only 3,510 transistors!!! As a matter of fact, the 6502 is considered by many as a precursor of the RISC architectures. And it is well known that it inspired the designers of one of the most famous RISC CPUs: the ARM!

Finally, the 6502 accesses its memory faster than its contemporaries: in one cycle! A programmer can therefore use the zero page as a pool of 256 8-bit registers! The 6502’s designers were not crazy: their CPU lacks registers because it does not need any!

With these strengths, the 6502 could compete with other 8-bit CPUs often clocked 2 to 4 times faster. It is a clean and elegant design that is, unfortunately, ill-equipped for modern programming.


So, after all, I’ll have to plunge into what I wanted to avoid: writing my core routines in assembly language! Stay tuned! 😉

Shadow and the future of Cloud Computing

Shadow

Introduction: The end of Moore’s Law

Moore’s Law will soon hit a wall. Or perhaps not.

But it does not really matter, as we entered a zone of diminishing returns more than a decade ago. Twenty years ago, things were simple. Thanks to the continuous progression of frequency and efficiency, computer processing power doubled every two years. In 1983 my personal computer was an Apple II with a mere 1MHz 8-bit CPU, in 1994 it was a 90MHz 32-bit Pentium and in 2004 a 2GHz 64-bit Athlon.

But the frequency progression came to a halt. Twelve years later, our desktop CPUs cannot easily sustain 4GHz. Efficiency gains also stalled: even a very wide CPU such as the Intel Core can only process about 1.5 instructions per cycle on average.

The industry transitioned to multi-core designs. Now even phone CPUs have at least two cores. But desktop CPUs seem to have hit a four-core ceiling. The reason? Except in very specific cases, it is very hard to develop an efficient multi-threaded application.

So what now? Some predict the advent of “elastic compute cloud cores”, which is a neat name for “Hardware as a Service” (HaaS). And that’s what I will discuss in this article, through the prism of Blade, a young French startup that claims to have achieved some breakthroughs in the field of cloud gaming!

Cloud Gaming?

Cloud gaming is a very peculiar subset of HaaS. Instead of running your video game on your home console or tablet, it runs on a distant server, in the “cloud”, and the resulting frames are streamed to you. Your device is thus only used for inputs and display.
Two of the most famous services are GeForce Now from NVIDIA and PlayStation Now from Sony.

But cloud gaming is also one of the most difficult areas of cloud computing. Games require a tremendous amount of processing power and, more specifically, a powerful GPU. And last but not least comes the latency. To be playable, a game requires a very low latency from the moment you input your orders to the moment the result is displayed on the screen. As the Internet was not designed with low latencies in mind, this aspect is not easily tamed.

The two services mentioned above almost manage to solve these problems. Almost…
PlayStation Now only runs older games, which require less computational power. And GeForce Now is capped at 1080p. Plus, according to reviews, it achieves latencies in the region of 150ms. Quite good: the games are playable. But even that latency can make a game feel a bit jerky if you are as attentive as some gamers claim to be.

Enter Blade

Blade is a young French startup claiming that, thanks to its brand new patented technologies, cloud gaming is now a solved problem! (Note, though, that those patents are still pending and not yet public.)

If true, their most impressive achievement is the latency induced by their solution: less than 16ms according to their website!

Concerning this latency thing

Let me define what “latency” refers to here. It is the time elapsed between your input (e.g. a button pressed on the controller) and its effect on the image displayed on the screen.

If your gaming system is a home console, such as an Xbox or a PlayStation, the latency is the sum of the time for your input to be interpreted by the software (negligible), the time spent rendering the resulting 3D image, and the time spent sending it to the TV (negligible).

But if you’re playing on a distant server, things are not so simple.
First, the data corresponding to your input travels to the server. It goes through your ISP, then many routers and servers, before arriving at its destination. The Internet was not designed as a low-latency network…
Then, just as on a home console, the software computes the corresponding image.
But before being sent back, it has to be compressed, or the bandwidth required to send the stream of images would be unreasonable…
Then back home, with more or less the same number of routers and servers lying on the way.
Then… the image has to be decompressed! Before finally being displayed on the screen.
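Summing these contributions gives the budget Blade has to fit under its 16ms claim. In my own rough notation (not Blade’s figures):

$$ t_{\text{total}} = t_{\text{uplink}} + t_{\text{render}} + t_{\text{encode}} + t_{\text{downlink}} + t_{\text{decode}} $$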


Sources of latency

Blade’s answers to latency

A few days ago, I was invited to Blade’s offices, where I could ask many questions in order to understand how they tackle these problems. I will explain my understanding of their technology and what I extrapolate from it. Of course, Blade did not share all their secrets, so I have to try to fill in the blanks…

The network

Blade will require its first customers to have fiber. Cable subscribers will come second. But there will be no support for xDSL users. And over 4G? I don’t know, but mobile access does not appear to be their priority right now. Fiber offers very low latencies compared to DSL, so it helps a lot.

But they are also restricting their very first user base to France. That allowed them to strike deals with the four major internet providers in the country and to link their network directly to those ISPs’. This limits the number of hops required for data to travel from most French subscribers to Blade’s servers.

They also claim to have some patents pending concerning the network side of things, but I could not gather much info there. As it is far from my field of expertise, I will not try to guess what those patents could cover.

The video compression

I was eagerly awaiting answers about video compression, as plain H264 encoding is not suited to low-latency streaming. Indeed, the main purpose of H264, when it was designed, was to compress movies. In that use case it does not matter much if, in order to achieve the best quality/bitrate ratio, the encoder has to buffer many frames, producing hundreds of milliseconds of latency.

In that field too, Blade claims yet-undisclosed pending patents. They told me they were using a heavily tuned H264 encoder. So heavily tuned that most hardware decoders are not flexible enough to handle the video.
I can also imagine that they use an open-source low-latency audio codec. For instance, Opus can go as low as 2.5ms.

To decode their stream in the best conditions, they will also sell a small “box”: the Shadow. Its size is comparable to a Raspberry Pi and it was designed by Blade using off-the-shelf components: no custom ASIC or FPGA here. The Shadow is powered by Linux and connects to the stream as soon as it has booted. This way, the user never sees the actual OS and is given the illusion of using a Windows box: the distant computer.

A software client is also available on Linux, Android and Windows. But the 16ms latency will only be sustained on the Shadow. As a matter of fact, buying the Shadow will be mandatory for subscribers!

The server

The server’s hardware is also crucial. If a game runs at 60fps on it, a frame takes 16ms to be computed, and Blade would already miss its 16ms target for the whole latency. The hardware therefore has to run your game way faster than 60fps: at maybe 120 or even 140fps!
As the most crucial element of a gaming computer is the GPU, Blade was not shy: its servers are equipped with the latest GeForce GTX 1080! What’s more, each user gets the full benefit of a GPU: they are not shared between users as on competing solutions!
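The frame-time arithmetic is simple (my own back-of-the-envelope figures):

$$ \frac{1}{60\,\text{fps}} \approx 16.7\,\text{ms} \qquad \frac{1}{120\,\text{fps}} \approx 8.3\,\text{ms} $$

At 120fps, rendering consumes only about half of the 16ms budget, leaving roughly 8ms for the network round trip plus encoding and decoding.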

The magic behind the curtain instantiates a virtual gaming machine on the fly, consisting of a virtualized CPU and its main memory, networked storage hosting the user’s private data, and a physical GPU for the user’s sole use.

Actually, as a customer, you rent such a virtual machine and the server will instantiate it when you need it. Your data are also kept on Blade’s servers and are, of course, persistent.

The demos

I admit that I have limited knowledge in the fields of networking and virtualization. So, although Blade’s developers could give me convincing answers about their claim to keep latency below 16ms, I was waiting for the demos to form my opinion.

They currently have two demos in place. The first involves two cheap netbooks: one running Blade’s solution, the other used as a plain netbook. The second consists of playing Overwatch on their Shadow hardware.

On the netbooks

The goal of this demo is to use the two netbooks side by side.

The user should not notice any difference on light workloads, such as handling local files in Explorer or checking email.
On this occasion I did manage to guess which computer was displaying the stream from the cloud: by scrolling like crazy, I could detect a few dropped frames. Admittedly, this is not a representative use case: I was actively trying to provoke those lags.

Next, they launched Photoshop. A Gaussian blur filter was applied to a hundred-megapixel photo. Of course, the real netbook struggled, while this (quite demanding) task was way faster on the distant computer.
On the plus side, the lag was low enough that I could not notice it when moving the mouse pointer and going through the menus. On the minus side, I could again notice some lag and some compression artifacts when I clicked undo/redo. This is a more realistic use case than my earlier attempt to provoke lags. On a powerful computer with plenty of RAM, undoing (or going back in history) should be near instantaneous, and when working on my photos it is part of my workflow to go back and forth to evaluate the effect of the various filters I apply.

Finally, I ran the latest Futuremark benchmark on both machines. Of course, the score of the “true netbook” was pitiful while that of the cloud computer was stellar!
But, once again, I could spot the difference: in the corners, there were some compression artifacts (lightly blurred blocks) on the streamed video.

Bear in mind that, although the cloud computer could not be 100% indistinguishable from a true local computer, the demo was convincing enough. Remote heavy applications such as Photoshop are quite usable! As I know a thing or two about video compression, I knew exactly how to stress their video encoder and where to look for the results. It is doubtful that an average user would spot the difference.

On the Shadow

Remember that Blade developed custom hardware. As they are working full-time on it, they told me that the software client running on the netbook was not up to date. So I was expecting an even better experience on the Shadow!

This time, the demo consisted of playing a game of Overwatch. First-person shooters are the most demanding games, so this is quite relevant. As before, I tried to provoke artifacts by jumping, turning around and teleporting like crazy. But this time I could not spot any!
As a matter of fact, I could not feel any difference from playing on my own computer! The graphics settings were 1080p / Ultra.

This time I was 100% convinced! 🙂

So much so that I unplugged the Ethernet cable to verify that the game was indeed streamed 😀

The future of cloud gaming?

Although I cannot certify that the latencies I experienced were indeed lower than 16ms, Blade’s devs gave me credible answers to my questions and I was convinced by the actual experience. On their hardware device, at least.

I’m quite sure that they’re technologically onto something.

I’m less sure about what they unveiled concerning their business model.

They plan to target hardcore gamers and sell them a monthly subscription to their service. The price is yet to be disclosed.
Blade will also require their customers to buy their hardware, including a Windows 10 license to run on the cloud computer instance. So expect something around 200€.
And don’t forget that all you get is access to a cloud computer: gamers will still have to buy their games. It is not a “Netflix-like” subscription offering a large library of games to stream.
I think that’s a lot of money, even if the gaming experience is premium compared to the much cheaper competing solutions.

But investors seem to believe in them, so I may well be wrong. Just wait and see…

The future of cloud computing?

While I have my doubts about the viability of their announced business plan focused on cloud gaming, their technical solution opens a window onto something else. Remember the demo where I tried Photoshop? It was quite usable indeed!

Nowadays, customers favour light and practical computing devices. Unfortunately, those are too weak to handle heavy computing tasks. And they will stay too weak for years to come because, as you may remember, Moore’s Law is dead!

Enter cloud computing. If, in times of need, those customers could easily instantiate a cloud computer such as the one I ran Photoshop on, the problem would be solved!
Of course, the pricing could not be the one Blade presented to me. Maybe a price depending on usage? And there is the problem of the Windows license, which is tied to the instantiated personal cloud computer…

But the prospects are quite fascinating!

Apple IIe: Building a cable for Le Chat Mauve

Here is the second post in a series documenting the few small repairs carried out to get my ancient Apple IIe running again after 25 years in a cupboard.

Today, I will show how I built a cable to connect a “Le Chat Mauve EVE” graphics card to a SCART (Péritel) socket.

“Le Chat Mauve”? “EVE”? What on earth is that??

To understand what a Chat Mauve is for, I will start by introducing the way the Apple II displays its colors.

At its release in 1977, the Apple II was ahead of its direct competitors in many respects, notably because it could display color images on a television! To do so, while keeping costs as low as possible, the brilliant Steve Wozniak had the idea of exploiting the flaws of NTSC, the American television standard. And flaws NTSC had! To the point of being nicknamed by some wags “Never Twice the Same Color”…

The principle is as follows: to display a 140×192 color image, the video circuitry sends the TV set a black and white image of double that definition, 280×192. This image, made of a succession of black and white dots, produces a signal which exceeds the frequency band reserved for luminance (the only one used by a B&W display) and “bleeds” into the frequencies encoding the color! If you lived through the analog television era, you may know that you had to avoid wearing a checked shirt in front of a camera, lest you generate unsightly red bars on the screen. This is the same principle, but mastered!

The owner of a black and white TV set will therefore see this:

Apple II - Gremlins in black and white

While here is what the owners of a color television will see!

Apple II - Gremlins in color

The color displayed depends on the sequence of B&W dots:

  ‱ Two lit dots give a white pixel,
  ‱ Two dark dots give a black pixel,
  ‱ Alternating dark and lit dots can give a magenta, green, cyan or orange pixel, depending on their position and on a possible “half-pixel” offset.

Apple II - Color generation

Woz had thus found a way to display a color image on the cheap, since the hardware differed little from what was needed to display a black and white image!

All of this is very clever, but… it only works on NTSC television sets! On a SECAM set, as used in France, you had to make do with black and white!

But around 1981, Jean-Louis Gassée, then CEO of Apple France, met in Silicon Valley a handful of French engineers tinkering with an expansion card that allowed them to display color on a French television. He convinced them to commercialize it, and thus the company “Le Chat Mauve” was born.

Jusqu’Ă  la sortie de l’Apple IIGS, leur gamme de cartes sera officiellement supportĂ©e par Apple France et connaĂźtra un grand succĂšs !

EVE was their first model, and it is the one I own.

Building the cable

The EVE card came with a cable that plugged into the SCART socket. The advantage: not only did it provide a composite SECAM signal, but also an RGB signal allowing incomparable display quality!

Unfortunately, having lost my cable, I had to build a new one!

I started from a male-to-male SCART cable with all its pins present. This detail is important because, on a SCART plug, not all signals are mandatory, and some cheap cables only carry the composite signal. Worse: some do have the RGB pins, but they are not connected!

Once the right SCART cable was found, it was fairly simple: I cut the wires at one end and soldered them onto a male DB9 connector, to be plugged into the female DB9 connector of the card.

Apple II - Chat Mauve - Wiring

The operation was made all the easier by the card’s documentation, which gives the correspondence between the pins of the two connectors. Ah, they don’t make documentation like that anymore!

Apple II - Chat Mauve - Documentation

Before connecting it to the TV set, check the voltage on pin 8, which must be 12V, and on pin 16, which must be around 3V. The latter tells the screen that an RGB signal is arriving on the SCART input.

Et voilà le travail ! 🙂

Choplifter in color!

Apple IIe: rĂ©paration de l’alimentation

I recently recovered my very first microcomputer: an Apple IIe that my father bought around 1983. Yes, I’m old :/

So many hours spent back then on Decathlon, Captain Goodnight, Choplifter and Lode Runner. But also “programming” a bit of BASIC. Well… copying listings published in magazines, without understanding much of anything in the end!

In short, my passion for computing certainly owes a lot to this machine!

This post is the first in a series illustrating the repairs needed to bring this old “PAL” Apple IIe back into service.

Before switching it on, I told myself that testing the power supply might be a good idea. And it was a good idea: the moment the ON button was pressed, it caught fire! After a moment of panic and some research on the Net, I learned that this is a “classic” failure after so many years in a cupboard: a filtering capacitor tends to catch fire. Very spectacularly, but usually causing little damage.

So I set about replacing the faulty capacitor, but also all the electrolytic capacitors, just in case…

The Apple II’s power supply is very clean and compact for its time.

On opening the sealed case, first surprise: the faulty filtering capacitor is soldered directly onto the 220V input! Rather crudely, with a resistor in parallel.

The capacitor is soldered directly onto the 220V!

After some research, it turns out this filter is not there to clean up the input voltage, but to protect the mains and the house’s electrical installation from the various interference the power supply itself can generate!

Removing this capacitor therefore poses no technical problem at all.

The culprit!

It was time to remove all the electrolytic capacitors too. But above all, to order new parts! Fortunately, as you can see in the first photo, components from the early 80s are big enough to be very readable. Moreover, plenty of markings are printed on the PCB. But as you can never be too careful, I set out to retrieve the official parts list. And there, I must say, I struggled a bit…

I quickly found information about the US 110V power supplies, but I wanted to be certain to find the documentation matching my own unit. It had to be slightly different, since it takes 220V as input…

To save you long searches, here is the technical documentation of the European power supplies of the Apple II and Apple III that I managed to retrieve. And even, for those who want to get straight to the point, the list of capacitors and the links to buy them from Conrad 😉

The rest was straightforward: ordering the components, a bit of soldering (easy even for a klutz like me), and voilà!

So you may be wondering why this article is announced as the first in a series when my Apple II now seems to work correctly? Well, that’s not quite the case. I have a few issues with the joystick and, above all, with its Le Chat Mauve EVE expansion card. This card can display color images on a TV fitted with an RGB SCART socket! Indeed, Apple IIs have always displayed color on NTSC TVs, but at the price of a very ingenious hack that does not transpose well to PAL. And let’s not even mention SECAM televisions. Unfortunately, I mislaid the card’s cable, so I will have to build a new one.

If you want to know more about the Apple II, I recommend this episode of “Very Hard” devoted to the machine. (Disclaimer: I co-wrote it…)


Very Hard, Épisode 29 – Apple II, le papy qui… by assomo5