Friday, November 20, 2015

FM squelch and subaudible tone detection on the mcHF

In a previous installment ("Adding FM to the mcHF SDR Transceiver" - link) I described how demodulation of FM signals was added to the mcHF SDR transceiver, but being able to receive FM implies a few other features - namely, squelch "circuitry" of both the "carrier" and "tone" types.

Determining the "squelch" status of a received signal:

One of the most obvious ways to determine the presence of a signal is to compare the signal strength against a pre-set threshold:  If the signal is above that threshold it is considered to be "present" and an audio gate is opened so that it may be heard.

This sounds like a good way to do it - except that it isn't, really, at least not with FM signals.

If you were listening to an FM signal on 10 meters that was fading in and out (as it often does!) you would have to set the threshold just above the background noise - or "open" (e.g. disable) the squelch completely to prevent the signal from disappearing when its strength dove into a minimum during QSB.  If the background noise were to vary - as it can over the course of a day and with propagation - the squelch would be prone to opening and closing as well.

As it turns out, typical FM squelch circuits do not operate on signal strength as there are better methods for determining the quality of the signal being received that can take advantage of the various properties of the FM signals themselves.

Making "Triangle Noise" useful:

Mentioned in the previous entry on this topic was "Triangle Noise", so-called by the way it is often represented graphically.
Figure 1:
A graph representing the relative amplitude of noise with strong and weak FM signals.  It is the upward tilt of the noise energy to which "Triangle" noise refers - the angle getting "steeper" as the signal degrades.  Also represented is a high-pass filter that removes the modulated audio, leaving only the noise to be detected.
From this diagram one can begin to see why pre-emphasizing audio along a curve similar to the "weak signal noise" line can improve weak-signal intelligibility by boosting the high-frequency audio on transmit (and doing the inverse on receive) to compensate for the noise that encroaches on weak signals.

As can be seen in Figure 1 the noise in a recovered FM signal increases as the signal gets weaker - but notice something else:  The noise increases more quickly at higher audio frequencies than it does at lower ones.  Looking at Figure 1 you might make another observation:  Because there is typically some low-pass filtering of the transmitted audio to limit its occupied bandwidth, there is no actual (useful) audio content above that frequency from the distant station - but the noise is still there.

High-pass filtering to detect (only) "squelch noise":

From the drawing in Figure 1 it can be recognized that if we "listen" only to the high-frequency energy that passes through the "Squelch noise" high-pass filter, all we are going to detect is the noise level, independent of the modulated audio.  If we base our signal quality on the amount of noise detected at these high frequencies - typically above the hearing range (usually ultrasonic, above 10 kHz) - we can see that we don't need to know anything about the signal strength at all.

This method works owing to an important property of FM demodulators:  The amount of recovered audio does not change with the signal strength as the demodulator is "interested" only in the amount of frequency change, regardless of the actual amplitude.  What does change is the amount of noise in our signal as thermal noise starts to creep in, causing uncertainty in the demodulation.  In other words, we can gauge the quality of the signal by looking only at the amount of ultrasonic noise coming from our demodulator.
Figure 2: 
A representation of an analog squelch circuit with hysteresis.  The high-pass filter removes the "program" audio modulated onto the carrier (e.g. voice); the result is amplified as necessary and then rectified/filtered to DC to derive a voltage proportional to the amount of ultrasonic noise present:  The higher the voltage, the "weaker" and noisier the signal.
The resulting voltage is then fed to a comparator that includes hysteresis to prevent it from "flapping" when it is near the squelch threshold.
An analog representation of a squelch circuit may be seen in Figure 2.  For the simplest circuit, the high-pass filter could be as simple as an R/C differentiator followed by a single-transistor amplifier, and the same sorts of analogs (pun intended!) could be applied in software.
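A minimal software sketch of such a differentiator-style high-pass stage might look like the following - the function name and the value of alpha are hypothetical illustrations, not taken from the mcHF source:

```python
def highpass_differentiator(samples, alpha=0.95):
    """One-pole high-pass filter: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
    A software analog of the simple R/C differentiator; 'alpha' sets the
    "knee" frequency (0.95 is a hypothetical value, not from the mcHF
    source - closer to 1.0 moves the knee lower in frequency)."""
    out = []
    y_prev = x_prev = 0.0
    for x in samples:
        y = alpha * (y_prev + (x - x_prev))
        out.append(y)
        y_prev, x_prev = y, x
    return out
```

Fed a constant (DC) input this filter's output decays to zero, while rapidly-alternating (near-Nyquist) content passes nearly unattenuated - exactly the behavior wanted for picking off only the ultrasonic noise.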

After getting the mcHF FM demodulator functional I tried several different high-pass filter methods - including a very simple differentiator algorithm such as that described in the previous posting - except, of course, with the "knee" frequency shifted far upwards.  The absolute value of the high-pass filter's output was then taken, smoothed, and printed on the screen while the input signal - modulated with a 1 kHz audio sine wave and fed to a SINAD meter (read about SINAD here - link) - was varied in level:  In this way I could see how the output of the noise detection behaved under differing signal conditions.

In doing this testing I noted that a simple differentiator did not work as well as I'd hoped - likely due to the fact that unlike an analog circuit in which the high-frequency energy can continue to increase in intensity with frequencies well into the 10's or 100's of kHz, in the digital domain we have a "hard" limit enforced by Nyquist, stopping at 23 kHz or so on the mcHF with its 48 ksps rate.

With less high-frequency spectral noise energy (overall) to work with it is necessary to amplify the output of a simple differentiator more, but this also brings up the lower-frequency (audio) components, causing it to be more affected by speech and other content and requiring a better filter.  Ultimately I determined that a 6-pole IIR high-pass audio filter with a 15 kHz cut-off frequency - capable of reducing "speech" energy and its second and third harmonics below 9-10 kHz by 40-60 dB - worked pretty well:  In testing I also tried a similar filter with an 8 kHz cut-off, but it was more affected by voice modulation and its immediate harmonics.

Comment:
If the FM demodulation is working properly the result will be a low-distortion, faithful representation of the original audio source with little/no energy above the cut-off of the transmitter's low-pass filter.  If the signal is distorted in some way - such as by multipath, being off-frequency or having excess deviation - energy from this distortion can appear in the ultrasonic region, where it cannot easily be distinguished from "squelch" noise.
If this energy is high enough the squelch can close inadvertently since the signal may be "mistaken" as being weak:  This is referred to as "squelch clamping," so-called as it is often seen on voice peaks of signals degraded by multipath and/or being off-frequency.

Determining noise energy:

In short, the algorithm to determine the squelch energy was as follows:

loop:

   squelch_avg = (1-α) * squelch_avg + sqrt(abs(hpf_audio)) * α
   if(squelch_avg > MAX_VALUE)
      squelch_avg = MAX_VALUE

Where:
   α = "smoothing" factor
   hpf_audio = audio samples that have been previously high-pass filtered to remove speech energy
   squelch_avg = the "smoothed" squelch output

If you look at the above pseudocode example you'll notice several things:
  • The square root of the absolute value of the high-pass noise energy is taken.  It was observed that as the signal got noisier, the noise amplitude climbed very quickly:  If we didn't "de-linearize" the squelch reading based on the noise energy - which already has a decidedly non-linear relationship to the signal level - we would find that the majority of our linear squelch adjustment was "smashed" toward one end of the range.  By taking the square root our value increases "less quickly" with noisier signals than it otherwise would.
  • The value "squelch_avg" is integrated (low-pass filtered) to "smooth" it out - not surprising since it is a measurement of noise which, by its nature, is going to vary wildly - particularly since the instantaneous signal level can be anything from zero to peak values.  What we need is a (comparatively) long-term average.
  • The "squelch_avg" value is capped at "MAX_VALUE".  If we did not do this the value of "squelch_avg" would get very high during periods of no signal (maximum noise) and take quite a while to come back down when a signal did appear, causing a rather sluggish response.  The magnitude of "MAX_VALUE" was determined empirically by observing "squelch_avg" with a rather noisy signal - the worst that would be reasonably expected to open a squelch.
Obtaining a usable basis of comparison:

The above "squelch_avg" value increases as the quieting of the received FM signal decreases, which means that we must either invert this value or - if a higher "squelch" setting is to mean that a better signal is required to open the squelch - invert the sense of the "squelch setting" variable instead.

I chose the former approach, with a few additional adjustments:
  • The "squelch_avg" value was rescaled from its original range to approximately 24 representing no-signal conditions to 3 representing a full-quieting signal with modulation, with hard limits imposed on this range (e.g. it is not allowed to exceed 24 or drop below 3).
  • The above number was then "inverted" by subtracting it from 24, setting its range to 2 representing no signal to 22 for one that is full-quieting with modulation.
It is not enough to simply compare the derived "squelch_avg" number after scaling/inversion with the squelch setting, but rather a bit of hysteresis must also be employed or else the squelch is likely to "flap" about the threshold.  I chose a value of 10% of the maximum range, or a hysteresis range of +/-2 which seemed to be about right.
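A sketch of that threshold comparison might look like the following - the function and argument names are hypothetical, though the +/-2 hysteresis and the "setting of zero is always open" behavior follow the text:

```python
# Hysteresis of +/-2, about 10% of the rescaled/inverted quality range.
HYST = 2

def squelch_open(quality, setting, was_open):
    """Compare the rescaled/inverted signal-quality number against the
    squelch setting.  Hysteresis prevents "flapping" near the threshold;
    a setting of 0 unconditionally opens the squelch."""
    if setting == 0:
        return True
    if was_open:
        # Already open:  quality must fall well below the setting to close.
        return quality >= setting - HYST
    # Closed:  quality must rise well above the setting to open.
    return quality >= setting + HYST
```

Note that the decision depends on the previous state:  A signal hovering right at the threshold cannot toggle the squelch back and forth.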

The final step was to make sure that if the squelch was set to zero that it was unconditionally open - this, to guarantee that no matter what, some sort of signal could be heard without worrying about the noise threshold occasionally causing the squelch to close under certain conditions that might cause excess ultrasonic energy to be present.

The result is a squelch that seems to be reasonably fast in response to both weak and strong signals - very slightly slower for weak ones.  This slight asymmetry is actually advantageous as it somewhat reduces the rate-of-change that might occur under weak-signal conditions (e.g. squelch-flapping) - particularly during "mobile flutter."  The only downside noted so far is that despite the "de-linearization" the squelch setting is still somewhat compressed, with the highest settings devoted to fairly "quiet" signals and most of the range representing somewhat noisier signals - but in terms of intelligibility and usability, it "feels" pretty good.

Subaudible tone decoding:

One useful feature in an FM communications receiver is that of a subaudible tone (a.k.a. CTCSS) decoder.  For an article about this method of tone signalling, refer to the Wikipedia article here - link.

In short, this method of signalling uses a low-frequency tone, typically between 67 and 250 Hz, to indicate the presence of a signal; unless the receiver detects this tone on the received signal, that signal is ignored.  In the commercial radio service this was typically used to allow several different users to share the same frequency without (always) having to listen to the others' conversations.  In the amateur radio service it is often used as an interference mitigation technique:  The combination of carrier squelch and subaudible tone greatly reduces the probability that the receiver's squelch will falsely open when no signal is present or, possibly, when the wrong signal is present - as when a listener is in an area of overlapping repeaters - but this works only if a tone is being modulated onto the desired signal in the first place.

The Goertzel algorithm:

There are many ways to detect tones, but the method that I chose for the mcHF was the Goertzel algorithm.  Rather than explain exactly how this algorithm works I'll point the reader to the Wikipedia article on the subject here - link.  The use of the Goertzel algorithm has several distinct advantages:
  • Its math-intensive parameters may be calculated before-hand rather than on the fly.
  • It requires only simple addition/subtraction and one multiplication per iteration so it need only take a small amount of processor overhead.
  • Its detection bandwidth is very scalable:  The more samples that are accumulated, the narrower it is - but also slower to respond.
The Goertzel algorithm, as typically implemented, is capable of "looking" at only one frequency at a time - unlike an FFT which looks at many - but since it is relatively "cheap" in terms of processing power (e.g. the most intensive number-crunching is done before-hand) it is possible that one could implement several of them and still use fewer resources than an FFT.

The Goertzel algorithm, like an FFT, will output a number that indicates the magnitude of the signal present at/about the detection frequency, but by itself this number is useless unless one has a basis of comparison.  One approach sometimes taken is to look at the total amount of audio energy, but this is only valid if it can be reasonably assured that no other content will be present such as voice or noise, which may be generally true when detecting DTMF, but this cannot be assured when detecting a subaudible tone in normal communications!

"Differential" Goertzel detection:

I chose to use a "differential" approach in which I set up three separate Goertzel detection algorithms - one operating at the desired frequency, another at 5% below it and the third at 4% above it - and processed the results as follows:
  • Sum the amplitude results of the -5% Goertzel and +4% Goertzel detections.
  • Divide that sum by two to obtain the average off-frequency amplitude.
  • Divide the amplitude result of the on-frequency Goertzel by the above average.
  • The result is a ratio, independent of amplitude, that indicates the amount of on-frequency energy.  In general, a ratio higher than "1" would indicate that "on-frequency" energy was present.
By having the two additional Goertzel detectors (above and below the target frequency) we accomplish several things at once:
  • We obtain a "reference" amplitude that indicates how much energy there is that is not on the frequency of the desired tone as a basis of comparison.
  • By measuring the amplitude of adjacent frequencies the frequency discrimination capability of the decoder is enhanced without requiring narrower detection bandwidth and the necessarily "slower" detection response that this would imply.
In the case of the last point, above, if we were looking for a 100 Hz tone and a 103 Hz tone was present, our 100 Hz decoder would weakly-to-moderately detect the 103 Hz tone, but the +4% decoder (at 104 Hz) would detect it more strongly - and since that value is averaged into the denominator it reduces the ratiometric output and prevents false detection.
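A minimal sketch of the differential detection - a textbook Goertzel magnitude calculation plus the ratio described above (function names are hypothetical, not from the mcHF source):

```python
import math

def goertzel_mag(samples, freq, fs):
    """Magnitude of 'samples' at 'freq' Hz via the Goertzel algorithm.
    The coefficient is pre-computable; per sample there is only one
    multiply plus an add and a subtract, as noted above."""
    coeff = 2.0 * math.cos(2.0 * math.pi * freq / fs)
    s_prev = s_prev2 = 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return math.sqrt(s_prev ** 2 + s_prev2 ** 2 - coeff * s_prev * s_prev2)

def tone_ratio(samples, freq, fs):
    """'Differential' detection:  on-frequency energy divided by the
    average of the detections 5% below and 4% above the target."""
    on = goertzel_mag(samples, freq, fs)
    below = goertzel_mag(samples, freq * 0.95, fs)
    above = goertzel_mag(samples, freq * 1.04, fs)
    return on / ((below + above) / 2.0)
```

With a 12500-sample block at 48 ksps, a clean on-frequency tone yields a ratio well above unity, while a tone sitting at one of the "side" frequencies is pulled below the detection threshold by the large denominator - amplitude cancels out of the ratio entirely.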

Setting the Goertzel bandwidth:

One of the parameters not easily determined in reading about the Goertzel algorithm is that of the detection bandwidth.  This parameter is a bit tricky to discern without using a lot of math, but here is a "thought experiment" to understand the situation when it comes to being able to detect single-frequency (tone) energy using any method.


Considering that the sample rate for the FM decoder is 48 ksps and that the lowest subaudible tone frequency that we wish to detect is 67.0 Hz, we can see that at this sample rate it would take at least 717 samples to fully represent just one cycle at 67.0 Hz.  Logic dictates that we can't just use a single cycle of 67 Hz to reliably detect the presence of such a tone so we might need, say, 20 cycles of the 67 Hz tone just to "be sure" that it was really there and not just some noise at a nearby frequency that was "near" 67 Hz.  Judging by the very round numbers, above, we can see that if we had some sort of filter we might need around 15000 samples (at 48 ksps) in order to be able to filter this 67 Hz signal with semi-reasonable fidelity.

As it turns out, the Goertzel algorithm is somewhat similar.  Using the pre-calculated values for the detection frequency, one simply does a multiply and a few adds and subtracts on each incoming sample:  Too few samples (fewer than 717 in our example, above) and one does not have enough information with which to work at low frequencies to determine anything at all about our target frequency of 67 Hz, but with a few more samples one can start to detect on-frequency energy with greater resolution.  If you let the algorithm run for too many samples it will not only take much longer to obtain a reading, but the effective "detection bandwidth" becomes increasingly narrow.  The trick, therefore, is to let the Goertzel algorithm operate for just enough samples to get the desired resolution, but not so many that it takes too long to obtain a result!  In experimentation I determined that approximately 12500 samples provided a good tradeoff between adequately-narrow frequency resolution and reasonable response time.
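A quick sanity check of these round numbers (assuming the usual sample-rate-divided-by-block-length resolution of a rectangular-windowed, block-based detector):

```python
fs = 48000.0    # FM decoder sample rate, samples/second
f_tone = 67.0   # lowest subaudible tone of interest, Hz

# One full cycle of 67 Hz spans about 716.4 samples - hence "at least 717".
samples_per_cycle = fs / f_tone

# Twenty cycles - enough to "be sure" - is on the order of 15000 samples.
twenty_cycles = 20 * samples_per_cycle

# With N = 12500 samples the effective resolution is roughly fs/N (~3.84 Hz),
# and one new reading arrives per block - just under 4 times per second.
n = 12500
resolution_hz = fs / n
updates_per_second = fs / n
```

The 5% and 4% detector offsets around a 100 Hz tone (5 Hz and 4 Hz) sit just outside that ~3.84 Hz resolution, which is what makes the differential scheme effective at this block length.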

This is part of the reason for the "differential" Goertzel energy detection in which we detect energy at, above and below the desired frequency:  This allows us to use a somewhat "sloppier" - but faster - tone detection algorithm while, at the same time, getting good frequency resolution and, most importantly, the needed amplitude reference to be able to get a useful ratiometric value that is independent of amplitude.

Debouncing the output:

While an output greater than unity from our differential Goertzel detection generally indicates on-frequency energy, one must use a significantly higher value than that to reduce the probability of false detection.  At this point one can treat the output of the tone detector as a sort of noisy pushbutton switch and apply a simple debouncing algorithm:

loop:

   if(goertzel_ratio >= threshold)   {
      debounce++
      if(debounce > debounce_maximum)
         debounce = debounce_maximum
   }
   else   {
      if(debounce > 0)
         debounce--
   }
   if(debounce >= detect_threshold)
      tone_detect = 1
   else
      tone_detect = 0

where:

   "goertzel_ratio" is the value "f/((a+b)/2)" described above where:
      f = the on-frequency Goertzel detection amplitude value
      a = the above-frequency Goertzel detection amplitude value
      b = the below-frequency Goertzel detection amplitude value
   "threshold" is the ratio value above which it is considered that tone detection is likely.  I found 1.75 to be a nice, "safe" number that reliably indicated on-frequency energy, even in the presence of significant noise.

   "detect_threshold" = the number of "debounce" hits that it will take to consider a tone to be valid.  I found 2 to be a reasonable number.
   "debounce_maximum" is the highest value that the debounce count should attain:  Too high and it will take a long time to detect the loss of tone!  I used 5 for this, which causes a slight amount of effective hysteresis and a faster "attack" than "decay" (e.g. loss of tone).

With the above algorithm - called approximately once every 12500 samples (e.g. just under 4 times per second at a 48 ksps sample rate) - the detection is adequately fast and quite reliable, even with noisy signals.
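The debounce logic above, rendered as a runnable sketch (function name is hypothetical; the constants 1.75, 2 and 5 are the values given in the text):

```python
RATIO_THRESHOLD = 1.75   # Goertzel ratio above which a tone is "likely"
DETECT_THRESHOLD = 2     # debounce counts needed to declare a valid tone
DEBOUNCE_MAX = 5         # cap: keeps tone-loss detection reasonably fast

def debounce_step(debounce, goertzel_ratio):
    """One update (roughly four per second) of the tone debouncer.
    Returns the new counter value and the tone-detect flag."""
    if goertzel_ratio >= RATIO_THRESHOLD:
        debounce = min(debounce + 1, DEBOUNCE_MAX)
    elif debounce > 0:
        debounce -= 1
    return debounce, debounce >= DETECT_THRESHOLD
```

Note the asymmetry:  Two consecutive good readings are enough to declare a tone, but from a saturated counter it takes four bad readings to declare its loss - the slight "hysteresis" mentioned above.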

Putting it all together:

Figure 3:
An FM signal with a subaudible tone being detected, indicated by
the "FM" indicator in red.
If tone decoding is not enabled, the Goertzel algorithms are not called at all (to save processor overhead) and the variable "tone_detect" is set to 1 all of the time.  For gating the audio a logical "AND" is used requiring that both the tone detect and squelch be true - unless the squelch setting is 0, in which case the audio is always enabled.
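That gating logic can be sketched in a few lines (names hypothetical):

```python
def audio_gate(squelch_is_open, tone_detect, squelch_setting):
    """Gate the receive audio:  both the (noise) squelch and the tone
    detector must agree, except that a squelch setting of 0 passes audio
    unconditionally.  When tone decoding is disabled the caller simply
    holds tone_detect true, as described above."""
    if squelch_setting == 0:
        return True
    return squelch_is_open and tone_detect
```

When this returns False the demodulated audio samples are simply zeroed.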

Finally, if the squelch is closed (audio is muted) the audio from the FM demodulator is "zeroed".


* * *

In a future posting I'll describe how the modulation part of this feature was accomplished on the mcHF, along with the pre-emphasis of audio, filtering and the generation of burst and subaudible tones.


[End]

This page stolen from "ka7oei.blogspot.com".

Monday, November 9, 2015

Repairing the TUNE capacitor on the Heathkit HL2200 (SB-220) amplifier

Figure 1:
The front panel of the HL-2200 amplifier - which is really just a slightly
modernized version of the SB-220.
Click on the image for a larger version.
Earlier this year I picked up a Heathkit HL2200 amplifier (the newer, "brown" version of the SB-220) at a local swap meet for a reasonable price.  What made it particularly attractive was that it not only had a pair of new graphite-anode 3-500Z tubes in it (Chinese-made, but RF-Parts Inc. tested/branded) but it also had a Peter Dahl "Hypersil" tm power transformer rather than the "just-adequate" original Heathkit transformer, and an already-installed circuit that allowed low-voltage, milliamp-level keying rather than the 100-ish volts of the original.

Obligatory Warning:
The amplifier/repair described on this page presents lethal voltages in its circuits during normal operation.  Be absolutely certain to take any and all precautions before working on this or any piece of equipment that contains dangerous voltages!
This amplifier was unplugged, and the built-in high-voltage safety shorting bar - operated when the top cover is removed - was verified to be doing its job.
DO NOT work on equipment like this unless you have experience in doing so!
Problems with the amplifier:

While it was serviceable as-is, it did have a few known issues - namely a not-quite-working "10 meter" modification (the parts are all there, apparently having been pulled from an old SB-220 or obtained by the previous owner as a "10 meter" kit) - but my interest at this time was the tendency of the "Tune" capacitor in the output network to arc over at maximum RF output power.

Figure 2: 
Some "blobs" on several of the rotor plates of the TUNE capacitor.
 Click on the image for a larger version.
If I operated the amplifier in the "CW" position, with just 2.4 kV or so on the plates (at idle), everything was fine - but if I switched to the "SSB" position, with 3.2 kV (at idle), the capacitor arced over, causing signal distortion and high grid current - not to mention a loud hissing and the smell of ozone.  In popping the cover (with the power removed and the shorting bar doing its job!) I could see a few "blobs" on some of the capacitor plates, which meant that when this had happened to the previous owner it had probably been during sustained operation - long enough to melt parts of some of the aluminum plates, further decreasing the distance between plates and increasing the likelihood of even more arcing!

Figure 3: 
In the center of the picture, a rather serious "blob" on one of the stator
plates.
Click on the image for a larger version.
After having had this amplifier for several months and operating it only at reduced power I finally got around to taking a closer look at what it would take to extract the TUNE capacitor and effect a repair.  Even though access is slightly cramped, it wasn't that difficult to do:  Remove the front-panel knob and the left-hand tube, disconnect the blocking cap from the TUNE capacitor, remove the rear screw and nut holding the capacitor down, loosen the front screw and nut, and pull out the capacitor.



Disassembling the capacitor:

Fortunately, the capacitors used in these amplifiers are constructed from lots of small pieces rather than, like some "high-end" capacitors, being press-fit onto finely-machined rods and brazed.  What this meant was that simply by undoing a few bolts and screws the entire tuning capacitor could be reduced to a large pile of spacers and plates!

Figure 4:  A pile of parts from the disassembled rotor.
The still-intact stator is in the background.
Click on the image for a larger version.
The capacitor itself was disassembled in a shallow cookie sheet that I also use for assembling SMD-containing circuits:  It was fairly likely that any small part would be trapped in this pan rather than wander off elsewhere, such as onto my (messy!) workbench or, even worse, disappear into the carpeted floor!  Because this capacitor has several small parts and many spacers I felt it prudent to take this precaution - particularly with respect to the small ball bearings on the main shaft and the single bearing at the back end of the capacitor:  These smallest of parts were carefully sequestered in a small container while I was working on the capacitor.

Once the capacitor was "decompiled" all of the plates were very carefully examined for damage:  There were two rotor plates and just one stator plate with large-ish blobs, plus some very minor damage to one or two other plates.  As is the nature of these things, the blob on the stator plate was the most serious as it was the "weakest link" in terms of breakdown voltage - always the smallest distance between two points, no matter the setting of the rotor itself.

"Fixing" the damage:
Figure 5: 
The most badly-damaged capacitor plates, with an undamaged stator plate
(upper-left) for comparison.  The surfaces show evidence of oxidation
due to arcing.
Click on the image for a larger version.

If the damage is comparatively minor, as was the case here, then the "fix" is fairly simple:
  • Identify all plates that have any sort of "blob" or sharp edges.
  • Grind down any raised surface so that it is flush with the rest of the plate.
  • Using very fine sandpaper, eliminate any sharp edges or points.
If the plates are hopelessly melted you have the option of finding another capacitor on EvilBay, making your own plates, or simply cutting away the mangled portion and living with somewhat reduced maximum capacitance:  It is unlikely that the loss of even one entire plate would make the amplifier unusable on the lowest band, and it is also unlikely that more than two or three plates would have sustained significant damage, as this sort of damage tends to be somewhat self-limiting.

Placing each damaged plate on a piece of scrap wood, I used a rotary tool with a drum sanding bit to flatten out the "blob".  Once this was done the plate was flat but not particularly smooth, the rather coarse sandpaper having left marks, so I attacked the plates that had been "repaired" with 1200 grit wet-dry sandpaper and achieved a very nice luster where the grinding had taken place.  I also took special care to "ease over" the edges of the plates to remove any sharp edges - whether from the original manufacturing process (stamping) or from the grinding done to remove the blob:  This is important as sharp edges are particularly prone to ionization and subsequent arcing!

Because many of the plates showed some oxidation I decided that, while I had the capacitor apart, to polish every single plate - both rotor and stator - against 1200 grit "wet/dry" paper and, in the process, discovered several small "burrs" - either from minor arcing or from the plate having been stamped out of a sheet of metal.  I also took the trouble of "easing over" all edges of the capacitor plates in the process:  Again, sharp edges or points can be prone to arcing so it is important that this be considered!

Once I was done I piled the plates into an ultrasonic cleaner containing hot water and a few drops of dishwasher soap and cleaned them, removing the residual aluminum powder and oxide.  After 2 or 3 cycles in the cleaner the plates were removed and dried yielding pristine-looking plates - except, of course, for the three that had been slightly damaged.

Reassembly:

Figure 6: 
A rotor and stator plate having had the "blobs" ground off, but not yet
having been polished with 1200 grit sandpaper.  A bit of lost
plate material is evident on the left-hand side of the round rotor plate
as evidenced by its asymmetry.
Click on the image for a larger version.
I first reassembled the stator, stacking the plates and spacers in their original locations and making sure that none of them got "hung up" on the rods, with the last stator plate to be installed being the one that had been damaged.  The rotor was then reassembled, the job being fairly easy since its shaft is hexagonal, "keying" the orientation of the plates.  Because two rotor plates had been damaged, I placed these on the ends so they were the first and last to be installed:  There is one more rotor plate than stator plate which means that when fully meshed, the two "end" (outside) plates are on the rotor.  Even though I was not particularly worried about it, by placing the "repaired" plates at the ends it would be possible to bend them and increase the distance slightly if they happened to be prone to arc, without significantly affecting the overall device capacitance.

Having degreased the bearing mounts and the ball bearings themselves I used some fresh, PTFE-based grease to hold the bearings to the shaft while it was reinstalled, using more of the same grease to lubricate the rear bearing and contact, aligning it carefully with the back plate and finger-tightening the screws and nuts.  Once proper positioning was verified, the screws and nuts holding the end plates in place were fully tightened.

Both the rotor and stator plates are mounted on long, threaded rods with jam nuts on each end, and by loosening one side and tightening the other it is possible to shift the position of the rotor and/or stator plates.  Upon reassembly it was noted that, unmeshed, the rotor plates were not exactly in the centers of the stator plates overall, so the nuts on the rotor were loosened and retightened as appropriate to remedy this.  On fully meshing the plates it was then observed that the stator plates were very slightly diagonal to the rotor plates overall, so the appropriate nuts were adjusted to shift the positions of those as well.  The end result was that the majority of the rotor plates were centered amongst the stator plates - the desired result, as the capacitor's breakdown voltage is dictated by the least amount of spacing at just one plate.
Figure 7: 
The reassembled TUNE capacitor with a slightly foreshortened
and "repaired" rotor plate at the far end.
Click on the image for a larger version.

Inevitably there will be a few plates that are closer/farther and/or off center from the rest and that was the case here so a few minutes were taken to carefully bend rotor and/or stator plates, using a small blade screwdriver, as needed to center them throughout the rotation.  When I was done all plates were visually centered, likely accurate to within a fraction of a millimeter.

The capacitor was reinstalled quite easily with the aid of a very long screwdriver.  The only minor complication was that the solder joint for the high-frequency end of the tank coil - the portion that consists of silver-plated tubing - broke loose from the rest of the coil, but this was easily soldered by laying the amplifier on its left side so that any drips fell there and not into the amplifier.

"Arc" testing:

After reinstalling the top cover, verifying that it pushed the safety shorting bar out of the way, and installing the many screws that held it and the other covers in place I fired up the amplifier into a 50 ohm dummy load and observed that at maximum plate voltage and with as much input and output power as I could muster, the TUNE capacitor did not arc!

One of these days I need to figure out why the 10 meter position on the band switch isn't making proper contact, but that will be another project!

[End]

This page stolen from "ka7oei.blogspot.com".

Sunday, November 1, 2015

Adding FM to the mcHF SDR transceiver

This has been one of those "Rabbit Hole" features.

In the past I have stated several times on the Yahoo Group that I would not be adding FM to the mcHF (a completely stand-alone QRP SDR HF transceiver) any time soon, mostly because I was quite certain that there was simply not enough processor horsepower to do it properly.  Quite recently, however, I (sort of) got obsessed with making it work, going from "0 to 60" in just a few evenings of code "hacking".

While it took only an hour or so to get the FM demodulation and modulation working, it ultimately took much longer than that to integrate the other features (subaudible tone encode/decode, etc.) - and especially the GUI - with everything else!  By the time I was done I'd spent more time than I had hoped (not unexpected!) but at least I had something to show for it!

FM on an HF transceiver:

First of all, FM is one of those modes that I have used on HF only a few dozen times in my 30+ years of being a ham - most often (albeit rarely) when 10 meters is open and I hear a repeater booming in.  Other than that I haven't really had any reason to use it, particularly since there are no local 10 meter repeaters and local activity on "10 FM" is quite sparse.

Figure 1:
Reception of an FM signal as displayed on the waterfall (using the
"blue" palette) showing the sidebands from the 1 kHz tone
modulated onto the carrier being received.  The white background
on the "FM" indicator shows that the (noise) squelch is open.
Click on the image for a larger version.
What might be a practical reason to add FM to the mcHF other than for 10 meter openings, or for use with a transverter (or a modified radio) on a higher band such as 6, 4 or 2 meters where FM is common?  I asked this question on the mcHF Yahoo group and several people noted that in various parts of Europe, FM repeater and simplex operation on 10 meters is (apparently) more common than here in the U.S. - not to mention the fact that I suspect that some mcHF users like to hang out on "CB" frequencies where FM is used in Europe!

Background:

As it turns out I was recently adding and fixing some of the current mcHF features and streamlining some code when I decided to look into what, exactly, it would take to demodulate FM.

Before I begin the description it should be noted that the mcHF, in FM receive mode, must utilize "frequency translation" which shifts the local oscillator by 6 kHz - this, to get away from the "0 Hz hole" intrinsic to many SDR implementations that down-convert the RF to baseband:  If we did not do this the FM carrier would land in this "hole" and be hopelessly distorted!

With the signal to be received centered at 6 kHz at the input of the A/D converter, a software frequency conversion shifts it back to "0 Hz" where, in the digital domain, the hole does not exist.  True, the FM signal now swings above and below zero and includes "negative" frequencies, but since the signals are in quadrature and it is just math at this point we can get away with it!

(Note:  The "0 Hz hole" still exists but is now 6 kHz removed from the center of the carrier so it has no effect at all on demodulated signals in receive bandwidths up to 12 kHz.)
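To illustrate this software frequency conversion, here is a minimal Python/NumPy sketch of my own (not the mcHF's fixed-point code - all names here are mine):  multiplying the incoming I/Q stream by a -6 kHz complex "NCO" slides a carrier sitting at +6 kHz back down to 0 Hz.

```python
import numpy as np

fs = 48000           # sample rate, Hz
shift = 6000         # translation frequency, Hz
n = np.arange(4800)  # 100 ms of samples

# A test "carrier" sitting at +6 kHz in the I/Q stream, as it would
# arrive from the A/D converter after the 6 kHz LO offset.
iq = np.exp(2j * np.pi * 6000 * n / fs)

# Software frequency conversion:  multiply by a -6 kHz complex NCO
# to move the carrier back to "0 Hz" in the digital domain.
nco = np.exp(-2j * np.pi * shift * n / fs)
baseband = iq * nco

# The result is (to rounding error) a constant phasor - i.e. "DC":
print(np.allclose(baseband, np.ones_like(baseband)))  # True
```

In the real firmware this runs block-by-block on the A/D samples, but the principle - one complex multiply per sample - is the same.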

The PLL method of demodulating FM:

The most common way to do this is via a PLL, implemented in software.
Figure 2:  PLL implementation of an FM demodulator

Depicted in Figure 2, this works by "tracking" the variations of the input signal's frequency using the PLL:  Differences in instantaneous phase are detected and applied to the VCO's tuning line via the loop filter.  The loop filter removes the energy at the original carrier frequency, effectively leaving only the audio modulation behind, a sample of which is high-pass filtered to remove the DC content and used as the demodulated audio.

In hardware this type of scheme is used in PLL ICs such as the NE564 and NE565, or even the good old 4046 PLL IC:  Many implementations of hardware-based FM demodulators may be found on the internet using these chips!  (Hint:  Google "NE565 SCA decoder").

In software this method would typically be applied in those instances where the "carrier" frequency was high compared to the highest modulated frequency contained in the FM signal to be demodulated, and the "VCO" would be an NCO (Numerically Controlled Oscillator) - essentially a software-based DDS (Direct Digital Synthesis) "oscillator."  In a typical SDR application the implementation would appear more complex than in the above block diagram, using both the I and Q channels as part of the phase detector (both are needed anyway to resolve ambiguity and avoid the "frequency image") - but the intent is quite clear.

Out of curiosity I decided to implement this on the mcHF but, as expected, it was too "expensive" in terms of processing power - plus, the "carrier" frequency (in software) was only 6 kHz - not too far above the highest modulated frequency.  I could hear the audio, but it was somewhat distorted.

The "arctangent" method of demodulating FM:

Applying another tactic I went for the "arctangent" method.  If you recall your trigonometry, the arctangent takes the ratio of the two sides of a right triangle and yields the corresponding angle.  Related to the arctangent is the "atan2" function that appeared in computer languages, in which not just the ratio but the actual lengths of the sides of the triangle (y, x) are input, the angle being computed from those.

If we consider the instantaneous amplitudes of our I and Q signals to be vectors, we can see how we can use this function to determine the angle between those two vectors at any given instant - and since FM consists of rapidly changing phases (angles), we can therefore derive the frequency modulation:  The more the angle changes, the more deviation!  (More or less...)

Figure 3:  Speech-modulated audio being received and displayed
on the "Spectrum Scope" showing the width of a typical
voice-modulated, pre-emphasized FM signal.  The red "FM"
indicator shows that the subaudible tone is being received and
properly decoded.
Click on the image for a larger version.


In order to do this you must know how the vector has changed from one sample to the next, so a bit of math involving the previous sample is used first, as described in the following code snippet:

loop:
   y = Q * I_old - I * Q_old
   x = I * I_old + Q * Q_old
   audio = atan2(y, x)
   I_old = I
   Q_old = Q

Where:
   "I" and "Q" are our quadrature input signals
   "audio" is the demodulated output, prior to de-emphasis (see below.)


Because we supply it with both x and y, the "atan2" function has the convenient feature of knowing, without ambiguity, the quadrant from which the two sides of the triangle are derived - something not possible with the normal "atan" function.  For example, if our vector is +1, +1 - a ratio of 1 - we know that our angle is 45 degrees in the first quadrant, but if our vector is -1, -1 our ratio is still 1, yet this angle is clearly in the third quadrant:  The normal "atan" function would give us a completely bogus answer, but "atan2" would faithfully yield the correct answer of 225 (-135, actually) degrees.
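To see the snippet above in action, here is a Python/NumPy sketch of my own (illustrative only - the mcHF does this in fixed point) that synthesizes an FM-modulated 1 kHz tone at +/- 2.5 kHz deviation and demodulates it with the differential atan2 method:

```python
import numpy as np

fs = 48000               # sample rate, Hz
f_tone = 1000.0          # modulating audio tone, Hz
deviation = 2500.0       # peak deviation, Hz ("narrow" FM)
n = np.arange(fs // 10)  # 100 ms of samples

# Synthesize baseband FM:  phase is the integral of instantaneous frequency.
inst_freq = deviation * np.sin(2 * np.pi * f_tone * n / fs)
phase = 2 * np.pi * np.cumsum(inst_freq) / fs
i_sig = np.cos(phase)
q_sig = np.sin(phase)

# The demodulator from the text, vectorized:
#   y = Q*I_old - I*Q_old;  x = I*I_old + Q*Q_old;  audio = atan2(y, x)
y = q_sig[1:] * i_sig[:-1] - i_sig[1:] * q_sig[:-1]
x = i_sig[1:] * i_sig[:-1] + q_sig[1:] * q_sig[:-1]
audio = np.arctan2(y, x)     # phase change per sample, in radians

# The peak phase step corresponds to the peak deviation:
print(int(round(float(audio.max()) * fs / (2 * np.pi))))  # 2500
```

Scaling the phase step by fs/(2*pi) converts radians-per-sample back to Hz, recovering the deviation directly.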

If you are coming from the analog world you know that one of the necessary steps in properly demodulating an FM signal is to apply limiting after bandpass filtering, but before the actual FM demodulation - this being a process where the incoming signal is typically amplified and then strongly clipped - often several times - to assure that the amplitude of the resulting signal is constant regardless of the strength of the original signal.  The reason for this is that most analog FM schemes are either amplitude-sensitive to a degree (e.g. slope detection, the "Foster-Seeley" discriminator - link, or even the so-called "ratio detector" - link) or can operate over only a somewhat limited amplitude range and still maintain "linearity" in their frequency-dependent demodulation.

While the aforementioned analog schemes are amplitude-insensitive only with this limiting applied, the "atan2" method can be considered the ultimate "ratio detector" in that all it really cares about is the ratio of the vectors from the I and Q channels - and not a whit about their amplitudes!  If the signal is reasonably strong, both channels (I and Q) will reflect the instantaneous angle-change of the received signal with respect to the previous sample.  As with any other method of detecting frequency modulation, as signals get weaker noise begins to intrude, causing uncertainty:  The calculation of the instantaneous angle becomes contaminated with random bursts of energy which naturally show up as "popcorn" and/or "hiss" in the recovered audio.

Rather than using the compiler's rather slow, built-in floating-point "atan2()" function I decided to use the "Fixed point Atan2 with Self-Normalization" function (in the public domain) attributed to Jim Shima, posted (among other places) on the DSP Guru site (link here).  Actually, this algorithm was quite familiar to me as I'd unwittingly used a very similar method many years ago on a PIC-based project in which I needed an atan2-like function using integer math to derive the bearing in a direction-finding (DF) system.  (The DF system is described here - link)

Needless to say, this algorithm is blazing fast compared to the built-in floating-point "atan2" and, with the limited horsepower available, demodulating FM simply would not be possible without its "cost savings" - yet the accuracy of the result easily yields sub-1% THD (distortion), more than adequate for communications-grade FM.
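For reference, here is a floating-point Python rendering of that algorithm (the mcHF uses an integer version; the constants below are the ones from the commonly published coarse variant, good to roughly 0.07 radians):

```python
import math

def fast_atan2(y, x):
    """Self-normalizing atan2 approximation, after Jim Shima's
    "Fixed point Atan2" routine.  Far cheaper than a true atan2()
    and accurate to roughly 0.07 radians (about 4 degrees)."""
    ONEQTR_PI = math.pi / 4.0
    THRQTR_PI = 3.0 * math.pi / 4.0
    abs_y = abs(y) + 1e-10                  # avoid division by zero
    if x >= 0.0:
        r = (x - abs_y) / (x + abs_y)
        angle = ONEQTR_PI - ONEQTR_PI * r   # first/fourth quadrants
    else:
        r = (x + abs_y) / (abs_y - x)
        angle = THRQTR_PI - ONEQTR_PI * r   # second/third quadrants
    return -angle if y < 0.0 else angle     # restore the sign

# Worst-case error over a sweep of the full circle:
worst = max(abs(fast_atan2(math.sin(a), math.cos(a)) -
                math.atan2(math.sin(a), math.cos(a)))
            for a in (i * 0.01 - 3.14 for i in range(628)))
print(worst < 0.08)  # True
```

A few hundredths of a radian of smoothly-varying angle error is small enough to be consistent with the sub-1% THD figure mentioned above.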

De-emphasizing the demodulated audio:

The audio that spits out of the "atan2" function is proportional to the magnitude of the frequency deviation at any modulated frequency (within reason, of course!) - but in amateur radio, at least for voice communications, we do not use "FM" per se, but rather "phase" modulation (PM).  Without going into the math, the only thing that you need to know is that in order to use an FM modulator to generate a PM-compatible signal one must "pre-emphasize" the frequency response of the audio at a rate of 6 dB/octave.  In other words, if you were to set a signal generator to produce +/- 2 kHz of deviation with an audio signal of 1 kHz and then changed that audio signal to 2 kHz - keeping the audio level the same - the deviation would increase to +/- 4 kHz when using PM.

If you did not do this pre-emphasis, your audio would sound muffled on an ordinary amateur "FM" receiver.  Conversely, if you use an "FM" receiver for the reception of PM you must apply a 6 dB/octave de-emphasis to it:  Without this the audio tends to sound a bit "sharp" and tinny.

There is a practical reason for doing this and it is in the form of "triangle noise":  As an FM signal gets weaker the recovered audio does not get quieter - it gets noisier, with the noise appearing in the high frequencies first as high-pitched hiss.  By using "PM" (or, more typically, "true" FM with pre-emphasis on transmit and de-emphasis on receive) we reduce the intensity of this high-pitched noise on weaker signals:  Since we boost the "highs" on transmit and then reduce them back to normal on receive, the audio frequencies that are first affected by noise as the signal weakens are transmitted at higher levels while, at the same time, the receive de-emphasis reduces the amplitude of the high-frequency noise itself.  This prevents the high-frequency components of the voice from being the first to disappear into the noise on a weak signal, and the end result is that with PM the signal can be weaker - and still seem to be noise-free - than is possible with "straight FM".

For an explanation of noise in FM signals, in general, read the page "Pre-Emphasis (FM) Explained" - link.

In an analog circuit this de-emphasis is as simple as an "R/C" (resistor-capacitor) low-pass filter (series resistor followed by a capacitor to ground) and it may be simulated in code as follows:

loop:
  filtered = old + α * (input - old)
  old = filtered

Where:
   "α" is the "smoothing" parameter
   "input" is the new audio sample
   "filtered" is the low-pass (integrated) audio output
   "old" is the previous output sample

In the above, each output sample is a blend of the previous output and the new input, which means that as the rate-of-change of the input increases, the output decreases - effectively forming a single-pole low-pass filter with a 6 dB/octave rolloff.

If you were to implement this with real components (resistor, capacitor) you would not select them for a really low frequency as this would mean that by the time you got to speech frequencies (1-2 kHz) your audio would be rolled off by many dB - and it would also mean that low-frequency components (subaudible tones, AC hum, even the low-frequency components of noise) would seem to be artificially amplified and could "blast" the speaker/amplifier.  Instead, one would select a "knee" frequency above which one would start to roll off the audio - typically just below the bottom end of the "speech" range of 300 Hz or so - and by doing this the low frequencies are not as (seemingly) amplified as they would be otherwise.  As it turns out, with an "α" setting of 0.05 or so we can achieve a reasonable (low frequency) "knee" at a sample rate of 48 kHz.
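As a sanity check on that α value, the response of a one-pole filter of this form can be computed directly from its transfer function.  In this Python sketch (mine, not firmware code) the -3 dB "knee" for α = 0.05 works out to roughly 390 Hz, with the expected 6 dB/octave rolloff well above it:

```python
import cmath
import math

fs, alpha = 48000, 0.05

def gain_db(freq):
    """Magnitude response of:  filtered = old + alpha * (input - old),
    which has the transfer function H(z) = alpha / (1 - (1 - alpha)/z)."""
    z = cmath.exp(2j * math.pi * freq / fs)
    return 20 * math.log10(abs(alpha / (1 - (1 - alpha) / z)))

print(round(gain_db(392)))                   # -3  (the "knee")
print(round(gain_db(4000) - gain_db(2000)))  # -6  (dB per octave above it)
```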

Even if we appropriately select a "knee" frequency as above, our audio amplifier/speaker will still get blasted by low-frequency noise since we must still amplify the signal by well over 10 dB to get reasonable amplitudes at speech frequencies - but with a "differentiator" - the inverse of (but similar to) the integrator described above - we can knock off these low-end components.  In software this differentiator function (which, in the analog world, is a series capacitor followed by a resistor to ground) is performed as demonstrated below:

loop:
  filtered = α * (old_filtered + input - old_input)
  old_filtered = filtered
  old_input = input

Where:
  "α" is the equivalent of the time constant in that a "small" α implies an R/C circuit with a fast time-constant, strongly affecting "low" frequencies.
  "input" is the new audio sample.
  "filtered" is the high-pass filtered (differentiated) audio.

Setting "α" to 0.96 (with a sample rate of 48 kHz) put the "knee" roughly in the area of 300-ish Hz.  With the "low-pass" (integrator) and "high-pass" (differentiator) cascaded, the low-frequency speech components were minimally affected, but the combination of the integrator's "knee" frequency and the nature of the differentiator meant that the very low components (below approximately 200 Hz) were attenuated at a rate of around 12 dB/octave - all by using simple filtering algorithms that take little processing power!  Again, it is important that we do this or else the very low frequencies (subaudible tones, the "rumble" of open-squelch noise) would be of very high amplitude and easily saturate both the audio amplifier and speaker, causing clipping/distortion at even low audio levels.
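The differentiator's response can be checked the same way; with α = 0.96 at 48 kHz the -3 dB point of this single-pole high-pass indeed lands near the 300-ish Hz knee (again, a Python sketch of mine, not firmware code):

```python
import cmath
import math

fs, alpha = 48000, 0.96

def hp_gain_db(freq):
    """Magnitude response of:
    filtered = alpha * (old_filtered + input - old_input),
    which has the transfer function H(z) = alpha*(1 - 1/z) / (1 - alpha/z)."""
    z = cmath.exp(2j * math.pi * freq / fs)
    return 20 * math.log10(abs(alpha * (1 - 1 / z) / (1 - alpha / z)))

print(round(hp_gain_db(310)))                   # -3  (the "knee")
print(round(hp_gain_db(50) - hp_gain_db(100)))  # -6  (dB per octave below it)
```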

At some point I may attempt to design an FIR or IIR filter that will both de-emphasize the audio at 6 dB/octave and filter out the low frequencies used by subaudible tones, but I wanted to at least try the above method which was pretty quick, easy to do, and had a fairly low processor burden.

Pre-filtering:

One factor not mentioned up to this point - but extremely important, especially with FM - is that one must bandwidth-limit the I and Q channels before demodulation.  Even more than in the case of AM, off-frequency signals will contribute noise and nonlinearity to the demodulation process and it is easy to see why:  If we are simply using vectors with our "atan2" function to recover the frequency modulation, anything that contaminates that information will distort and/or add noise to the resulting audio, so it is important that we feed only enough bandwidth to the demodulator to pass "enough" of the signal.

Defining the bandwidth of a modulated FM signal is rather tricky because it is, in theory, infinitely wide.  In practice the energy drops off rather quickly, so the "far out" sidebands soon disappear into the noise - but how closely can we "clip off" the "close in" sidebands?  Clearly, we must have at least enough bandwidth to pass all of our audio, and since the FM signal is symmetrical about its center it's twice as wide as that.  There is also the issue of the amount of frequency deviation that is used:  If we take our example of +/- 2.5 kHz deviation in "narrow" mode we know that, because of that alone, our signal must be at least 5 kHz wide!  It would make sense, therefore, that the actual bandwidth of the signal is related to both the amount of deviation and the audio imposed on it - and it is.  This relationship is called "Carson's Rule" and you can read about it here - link.  This rule is:

 occupied bandwidth = 2 * (highest audio frequency) + 2 * (deviation)

If we have +/- 2.5 kHz deviation and our audio is limited to 2.6 kHz the calculated bandwidth would be 10.2 kHz.  It should be remembered that this is considered to be the occupied bandwidth of the signal and generally indicates the minimum spacing between similar signals, but it turns out that if we are willing to put up with minor amounts of signal degradation our receive bandwidth may be narrower.  By cutting off a few extra sidebands the result is a bit of added distortion, since part of the signal that forms the vectors presented to the "atan2" function has been removed and the representation is understandably less precise.
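Carson's Rule is simple enough to state as a one-liner.  In this Python snippet the first line is the example from the text; the second assumes the same 2.6 kHz audio limit applied to "wide" +/- 5 kHz deviation:

```python
def carson_bandwidth(deviation_hz, max_audio_hz):
    """Carson's Rule:  occupied BW = 2*(highest audio freq) + 2*(deviation)."""
    return 2 * (max_audio_hz + deviation_hz)

print(carson_bandwidth(2500, 2600))  # 10200 - "narrow" FM, as above
print(carson_bandwidth(5000, 2600))  # 15200 - "wide" +/- 5 kHz deviation
```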

In the case of the mcHF, filtering for the FM demodulator is done by the same Hilbert transformers that are already present for "wide" SSB and AM demodulation, where they can also be configured to provide a low-pass function.  For example, there exist 3.6, 5 and 6 kHz low-pass versions of the Hilbert transformers that provide the 90 degree I/Q phase shift and, since these operate both above and below the center frequency, they yield approximate detection bandwidths of 7.2, 10 and 12 kHz, respectively.  Using a filter wider than 12 kHz (+/- 6 kHz) is problematic in this implementation because, as noted earlier, we are shifting our signal by 6 kHz, so at a 12 kHz bandwidth one edge of the filter just reaches the "zero Hz hole" of the hardware.  This is not a problem at the 12 kHz bandwidth itself, but wider bandwidths would place the hole inside the passband and can result in significant distortion.

While receive bandwidths more than 12 kHz could be obtained using a shift greater than 6 kHz, testing (both on-the-bench and on-air) has shown that "wide" +/- 5 kHz deviation signals may be received with no obvious distortion with the 12 kHz bandwidth setting - and even the 10 kHz setting is very "listenable".

Surprisingly, if you cram a +/- 5 kHz deviation signal through the 7.2 kHz filter, the results are generally quite usable although distortion and frequency restriction are becoming evident and there may be the risk of clamping if the squelch is set too tight.  One advantage of a 7.2 kHz filter over a 12 kHz filter is that the former, being only 60% as wide, will intercept commensurately less noise on a weaker signal which means that it may be possible to gain an extra dB or two of receiver sensitivity by switching to the narrower bandwidth - if one is willing to accept the trade-off of lower fidelity!
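That "extra dB or two" can be estimated directly:  the noise power intercepted by the detector scales with its bandwidth, so narrowing from 12 kHz to 7.2 kHz is worth about 2.2 dB:

```python
import math

# Noise power is proportional to detection bandwidth, so the sensitivity
# gained by narrowing the filter is simply the bandwidth ratio, in dB:
print(round(10 * math.log10(12.0 / 7.2), 1))  # 2.2 (dB)
```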

In a later posting I'll talk about the squelch "circuit", subaudible tone detection as well as frequency modulation.


[End]
