Capturing Disk ][ data signals using logic analyzer

ven
Offline
Last seen: 1 year 3 months ago
Joined: Sep 11 2022 - 22:02
Posts: 8
Capturing Disk ][ data signals using logic analyzer

Hi,

I've got a logic analyzer hooked up to a Disk ][ card to capture the read signals when the machine boots DOS 3.3.  I want to turn the read signals into actual 1's and 0's so I can feed them to my program that decodes the disk data; the latter part I have already finished.

I was wondering if anyone can offer advice on converting the timing signals into actual 1's and 0's.  One question I have: there seem to be long gaps between signals. Do I treat those as zeros, or, after a maximum of two zeros, ignore everything until the next read pulse?

I have read chapter 9 in Sather's book and have some working knowledge of the Disk ][ system.

 

Thanks

-Ven

Offline
Last seen: 10 hours 2 min ago
Joined: Apr 1 2020 - 16:46
Posts: 1080
Some notes on compressing Apple II disk read signals.

You could count the number of 500ns intervals between the pulses and then store only that information.

This is equivalent to a 2 MHz sampling rate.

If you then run this through a software model of the state machine on the Disk II controller, you would get a "0" and "1" bit stream of the raw GCR data on the disk.
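A minimal sketch of that idea in Python (hypothetical helper name, not from this thread; it assumes you already have a sorted list of pulse timestamps in seconds from the analyzer, and it rounds each pulse-to-pulse gap to a whole number of 4 us bit cells):

```python
# Hedged sketch: turn raw read pulse timestamps into a GCR bit stream.
# Assumes pulse_times is a sorted list of rising-edge times in seconds.
BIT_CELL = 4e-6  # nominal Disk II bit cell, 4 us

def pulses_to_bits(pulse_times):
    bits = []
    for prev, cur in zip(pulse_times, pulse_times[1:]):
        gap = cur - prev
        # Round the gap to a whole number of bit cells, clamped to 1..3
        # (4 us, 8 us or 12 us nominal spacing).
        cells = max(1, min(3, round(gap / BIT_CELL)))
        bits.extend([0] * (cells - 1))  # each empty cell is a "0"
        bits.append(1)                  # the pulse itself is a "1"
    return bits
```

Note the first pulse only marks the start of the stream here; real data would of course need the margin handling discussed below, not just naive rounding.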

You could, of course, also look at the contents of the 74LS323 shift register on the DISK II card and measure the time between the MSBs being set.

For normal disk 'nybbles' (or whatever it was called) worth 8 disk bits, you would get a time interval of ~32 us (8 bit cells of 4 us each).

For SYNC bytes, the time interval until the MSB is set again is longer than that: ideally, ~40 us.

Alas, the write splice areas conspire against such an easy detection of SYNC bytes.

In any case you would need hefty postprocessing by appropriate software.

 

There are flux samplers out there like KryoFlux, Catweasel, and Greaseweazle (the latter is the cheapest one I've seen) which sample the read pulses at a much higher resolution, and for some of them you can find source code online to do the postprocessing for Apple II floppy disk raw read data. Last time I looked, Greaseweazle had no support for Apple II postprocessing yet, but it looks as if the hardware as such could read Apple II floppy disks using standard 5.25" floppy disk drives.

I did not try any of those as I have my own flux sampler, which, alas, I designed and built when ECP parallel printer ports were en vogue. Nowadays it's getting tedious to find old notebook computers that support ECP together with DMA and do not need wicked bit-banging deep in the motherboard chipset. This is why my own solution never could have been commercialized: there are too many different chipsets, and each of them would need tedious work to write the device driver for the flux engine, based on rudimentary or non-existent documentation for the particular chipset.

 

In the decades that have passed, USB got fast enough to do the work, and the Greaseweazle is so ridiculously cheap (and open source) that if I needed a flux engine to read old floppy disks today, I would go for it.

 

Why reinvent the wheel?

 

- Uncle Bernie

 

 

 

Offline
Last seen: 2 weeks 6 days ago
Joined: May 31 2022 - 18:18
Posts: 367
I've done something similar,

I've done something similar, except not the converting to 0/1; I just read the hex off the traces. That said, what's your objective? You are aware the data is encoded? What are you using for a logic analyzer? Is it something like a real o-scope, or something like a Saleae or Analog Discovery? If the latter, it should be easy enough to follow the data format decoder examples and spin your own.

 

Offline
Last seen: 15 hours 14 min ago
Joined: Mar 8 2009 - 21:53
Posts: 29
Example code

I've done this numerous times with various logic analysers. The problem I had with pre-USB analysers was that the sample time was fixed, so you could sample high-speed signals or slow-speed signals, but there was not enough memory to record both at the same time, which I needed for the disk data stream.

I cover this in my blog entries (this one is probably the best one to have a look at).  http://lukazi.blogspot.com/2016/12/archiving-convict-and-dealing-with-data.html

At the end of this blog I have attached tools (extensions for the Saleae logic analyser software) and example code for looking at the disk II data stream.

 

From what I can recall it was just a matter of working out the time between pulses. This needed to be fine tuned because the times varied slightly from disk to disk. It depended on the speed of disk drive that was writing the data. Not all disk drives were calibrated to the same speed. I can't recall what the variation was but possibly 5%, 10%?

 

These days I would just use a microcontroller to sample the data. Microcontrollers such as the Raspberry Pi Pico or BeagleBone Black would be my choice because they contain cores to easily handle the user interface / disk storage parts as well as internal state machines (PIO/PRU) that allow you to do dedicated communications using custom protocols (great for Apple II disk or video stream processing).

 

Good luck.

Alex. 

Offline
Last seen: 15 hours 14 min ago
Joined: Mar 8 2009 - 21:53
Posts: 29
More examples

Also my blog entry http://lukazi.blogspot.com/2012/06/ched-update.html gives an overview of the disk II signals.

Ta,

Alex.

Offline
Last seen: 10 hours 2 min ago
Joined: Apr 1 2020 - 16:46
Posts: 1080
Some comments on floppy disk bit stream recovery

In post #4, lukazi wrote:

 

"From what I can recall it was just a matter of working out the time between pulses. This needed to be fine tuned because the times varied slightly from disk to disk. It depended on the speed of disk drive that was writing the data. No all disk drives were calibrated to the same speed. I can't recall what the variation was but possibly 5%, 10%?"

 

Uncle Bernie comments:

 

Most 5.25" Floppy disk drives were adjusted to 300 RPM +/-3%, which is quite tight for DC motors and primitive controller electronics. Later floppy disk drives (those with direct drive, they have no drive belt) used a polyphase motor controlled by a PLL. These tend to keep their spindle speed in tighter tolerance bands, so they should be preferred for disk imaging work.

 

Still, there will be a lot of wow and flutter and bit jitter effects, and this is why you can't use a fixed timing window to do the bit stream recovery. Look into the topic of "floppy disk data separators". The old Western Digital datasheets and app notes for their 179X series of LSI floppy disk controllers are a treasure trove of information on how that works - but don't get confused: those controllers handle FM and MFM, not the GCR used in the Disk II. So you can simplify everything, as you don't need to separate clock and data bit streams when decoding GCR. In GCR there are no clock bits. These are replaced by rules about which code groups are allowed, i.e. no more than two "0" in a row.

 

In any case you need to oversample the raw read pulses by at least a factor of 8, and more is better, which Apple did when replacing the original Disk II controller with the IWM custom IC. You just measure the distance between the raw read pulses, and the longer intervals mean one or two "0" bits. The Disk II has ideal (theoretical only) intervals of 4us, 8us and 12us between raw read pulses. But due to the wow, flutter and jitter you have to allow for margins around the ideal values in your decoder. Over the years, these clock and data recovery circuits got more and more sophisticated and supported higher and higher bit rates and more and more complicated modulation schemes. This trend continued until the floppy disk storage systems went the way of the dinosaur.
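A hedged sketch of that interval "binning" with margins around the ideal 4/8/12 us spacings (the helper name and the ±25% tolerance are illustrative assumptions, not from this thread):

```python
# Classify a pulse-to-pulse gap into 1, 2 or 3 bit cells, allowing a
# margin around the ideal Disk II intervals to absorb wow/flutter/jitter.
NOMINAL = (4e-6, 8e-6, 12e-6)  # ideal gaps in seconds
TOLERANCE = 0.25               # illustrative +/-25% margin

def classify_gap(gap):
    """Return the number of bit cells (1..3) a gap spans, or None if the
    gap falls outside every tolerance band (probable corruption)."""
    for n, ideal in enumerate(NOMINAL, start=1):
        if abs(gap - ideal) <= ideal * TOLERANCE:
            return n
    return None
```

A gap classified as n cells decodes to (n-1) "0" bits followed by a "1"; a None result would be the cue for error handling in the postprocessing software.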

 

- Uncle Bernie

ven
Offline
Last seen: 1 year 3 months ago
Joined: Sep 11 2022 - 22:02
Posts: 8
Thanks for the info

Thanks Uncle Bernie and all for the info.  I do have a Greaseweazle or two.  My back story: I was working on a TRS-80 Model 1 redesign with new components, except for the floppy controller.  To become familiar with the FDC and hardware interfacing, I decided to interface an Arduino (ATmega) to a Shugart floppy.  I made some progress, but hit a few roadblocks.  I then thought: let me find something 'simpler' to implement to get some experience.

 

I thought (foolishly) that Apple II disks should be easier to read, as I had heard how simplified the circuitry was; I wasn't prepared for the software challenge.  Once I started, I realized that I should have just stayed with the TRS-80.  Anyway, I've been working backwards: I'm able to decode Apple ][ disks (via images), and the final step is to read a disk in real time.  I interfaced a Disk ][ to an Arduino, but reading the data is tough due to the limitations of the Arduino; I'm having to resort to assembly.  I'm not sure if I am reading the pulses correctly and needed a reference point.  So I used a KingST LA1010 to capture the read pulses during the boot process, wired to pins on the Disk ][ card.  I now have timing data, but need to generate 1's and 0's.  I'm too far in to give up.

 

-Ven

ven
Offline
Last seen: 1 year 3 months ago
Joined: Sep 11 2022 - 22:02
Posts: 8
More examples

Thanks for links Alex, I'll check them out.

-Ven

ven
Offline
Last seen: 1 year 3 months ago
Joined: Sep 11 2022 - 22:02
Posts: 8
lukazi wrote:From what I can
lukazi wrote:

From what I can recall it was just a matter of working out the time between pulses. This needed to be fine tuned because the times varied slightly from disk to disk. It depended on the speed of disk drive that was writing the data. Not all disk drives were calibrated to the same speed. I can't recall what the variation was but possibly 5%, 10%?

 

One thing I have noticed is that there are pulses of less than 1 us. Should I ignore pulses, say, less than 900 ns?

-Ven

Offline
Last seen: 15 hours 14 min ago
Joined: Mar 8 2009 - 21:53
Posts: 29
My process

I don't recall having to ignore any pulses, and I would not ignore them. If a pulse is less than 1 us, it is probably due to the sampling rate on the logic analyser and not the Disk II data signal. I sampled the data at 24 MHz, which was the maximum my logic analyser could handle. There was no need for me to reduce this sample rate to see how far I could push it before it gave incorrect results. Looking at my captured samples (from my blog), I can see that a valid pulse can be out by 1 us from where it is supposed to be and still be valid. That's a 25% error, i.e. 1 us out of a 4 us window. This disk had more variance than any other disk I had previously worked with.

 

Theoretically, if a pulse is 2 us from where it is meant to be then it is exactly in the middle between where two pulses can exist, so you cannot tell which one it is meant to be. In practice (or should I say, in my case) pulses tended to be further apart rather than closer together. Theoretically the gap between pulses should only be 3 us, 7 us or 11 us. Anything other than this is invalid and you have corruption. Looking at my code, my process went like this:

Step 1. A byte of data always starts with a pulse, so wait for a pulse. First bit = "1". The time between bytes is irrelevant. There are relatively large gaps between sync bytes but not between data bytes.

Step 2. If the gap between pulses is less than 5.6 us then there is no hole, i.e. the next bit is "1". 5.6 us was worked out by trial and error using my data set.

Step 3. If the next gap is less than 9.5 us (and not step 2) then we have one hole, i.e. the next two bits are "01" (or just "0" if end of byte - I think, but don't quote me).

Step 4. If the next gap is less than 12.8 us (and not step 2 or step 3) then we have two holes, i.e. the next three bits are "001" (or "00"/"0" if end of byte - I can't remember if this is correct).

Step 5. Repeat until you have processed 8 bits.

That's it. Easy hey?
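The steps above could be sketched in Python roughly like this (a hypothetical decode_byte helper using the 5.6/9.5/12.8 us thresholds from the post; the end-of-byte truncation follows the "or just '0'" caveat, which the poster himself was unsure about):

```python
# Sketch of the step-by-step byte decoder described above.
# `gaps` is the list of pulse-to-pulse times (seconds), starting at the
# gap after the first pulse of a byte.
def decode_byte(gaps):
    bits = [1]           # step 1: a byte always starts with a pulse
    i = 0
    while len(bits) < 8 and i < len(gaps):
        gap = gaps[i]
        i += 1
        if gap < 5.6e-6:       # step 2: no hole, next bit is "1"
            bits.append(1)
        elif gap < 9.5e-6:     # step 3: one hole, next bits are "01"
            bits.extend([0, 1])
        elif gap < 12.8e-6:    # step 4: two holes, next bits are "001"
            bits.extend([0, 0, 1])
        else:
            raise ValueError("gap too long: probable corruption")
    # step 5 ran until 8 bits; trailing zeros past the byte boundary
    # are truncated here (the "end of byte" caveat above).
    return bits[:8], i
```

For example, gaps of [8 us, 4 us, 8 us, 4 us, 4 us] would decode to the bit pattern 10110111 after consuming five gaps.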

Offline
Last seen: 10 hours 2 min ago
Joined: Apr 1 2020 - 16:46
Posts: 1080
You can add a PLL-ish algorithm

As a comment to the previous post #10 by lukazi: you could add a PLL-like algorithm which essentially keeps a running average of how far off the timing between the raw read pulses is from the ideal 4 us, 8 us and 12 us intervals. Based on this average (which acts much like the loop filter of a real PLL) you could adjust your decoder's interval time "binning" which yields the bit pattern chunks. This would remove the influence of different spindle RPMs between the floppy disk drive the recording was made on and the drive used for the analysis, greatly improving the reliability of your readback and decoding process.

 

- Uncle Bernie

Offline
Last seen: 15 hours 14 min ago
Joined: Mar 8 2009 - 21:53
Posts: 29
PLL will not help

A PLL will only help if the variance comes from the rotation speed, but this is only a tiny part of the overall variance. Here is a 24 MHz sample of a real disk. You can see that within a single byte you can get large variances from pulse to pulse. Pulse P1 is ~500 ns left of optimal, P2 is ~1 us right of optimal, but P3 and P4 are optimal. Gap G1 is nearly twice the length of G3, yet they both need to be treated as no hole between pulses. Gap G2 is only slightly larger than G1, yet G2 needs to be treated as one hole between pulses.

 

The flexibility of your algorithm will determine whether it can read only close-to-optimal disks or all disks that an Apple II drive can read.

 
