Friday, November 20, 2015

FM squelch and subaudible tone detection on the mcHF

In a previous installment ("Adding FM to the mcHF SDR Transceiver" - link) I described how demodulation of FM signals was added to the mcHF SDR transceiver, but being able to receive FM implies the addition of a few other features - namely, squelch "circuitry" of both the "carrier" and "tone" types.

Determining the "squelch" status of a received signal:

One of the most obvious ways to determine the presence of a signal is to compare the signal strength against a pre-set threshold:  If the signal is above that threshold it is considered to be "present" and an audio gate is opened so that it may be heard.

This sounds like a good way to do it - except that it isn't, really, at least not with FM signals.

If you were listening to an FM signal on 10 meters that was fading in and out (as it often does!) you would have to set the threshold just above that of the background noise - or "open" (e.g. disable) the squelch completely - to prevent the signal from disappearing when its strength dove into a minimum during QSB.  If the background noise were to vary - as it can over the course of a day and with propagation - the squelch would be prone to opening and closing as well.

As it turns out, typical FM squelch circuits do not operate on signal strength as there are better methods for determining the quality of the signal being received that can take advantage of the various properties of the FM signals themselves.

Making "Triangle Noise" useful:

Mentioned in the previous entry on this topic was "Triangle Noise", so-called because of the way it is often represented graphically.
Figure 1:
A graph representing the relative amplitude of noise with strong and weak FM signals.  It is the upward tilt of the noise energy to which "Triangle" noise refers - the angle getting "steeper" as the signal degrades.  Also represented is a high-pass filter that removes the modulated audio, leaving only the noise to be detected.
From this diagram one can begin to see why pre-emphasizing audio along a curve similar to the "weak signal noise" line can improve weak-signal intelligibility by boosting the high-frequency audio on transmit (and doing the inverse on receive) to compensate for the noise that encroaches on weak signals.

As can be seen in Figure 1 the noise in a recovered FM signal increases as the signal gets weaker - but notice something else:  The noise increases more quickly at higher audio frequencies than it does at lower audio frequencies.  Looking at Figure 1 you might make another observation:  Because there is typically some low-pass filtering of the transmitted audio to limit its occupied bandwidth, there is no actual (useful) audio content above that frequency from the distant station - but the noise is still there.

High-pass filtering to detect (only) "squelch noise":

From the drawing in Figure 1 it can be recognized that if we "listen" only to the high-frequency audio energy that passes through the "Squelch noise" high-pass filter, all we are going to detect is the noise level, independent of the modulated audio.  If we base our signal-quality measurement on the amount of noise detected at these frequencies - typically ultrasonic, above 10 kHz and outside the hearing range - we can see that we don't need to know anything about the signal strength at all.

This method works owing to an important property of FM demodulators:  The amount of recovered audio does not change with the signal strength as the demodulator is "interested" only in the amount of frequency change, regardless of the actual amplitude.  What does change is the amount of noise in our signal as thermal noise starts to creep in, causing uncertainty in the demodulation.  In other words, we can gauge the quality of the signal by looking only at the amount of ultrasonic noise coming from our demodulator.
Figure 2: 
A representation of an analog squelch circuit with hysteresis.  The high-pass filter removes the "program" audio modulated onto the carrier (e.g. voice); the remaining noise is then amplified as necessary and rectified/filtered to DC to derive a voltage proportional to the amount of ultrasonic noise present:  The higher the voltage, the "weaker" and noisier the signal.
The resulting voltage is then fed to a comparator that includes hysteresis to prevent it from "flapping" when it is near the squelch threshold.
An analog representation of a squelch circuit may be seen in Figure 2.  For the simplest circuit, the high-pass filter could be as simple as an R/C differentiator followed by a single-transistor amplifier, and the same sorts of analogs (pun intended!) could be applied in software.

After getting the mcHF FM demodulator functional I tried several different high-pass filter methods - including a very simple differentiator algorithm such as that described in the previous posting - except, of course, with the "knee" frequency shifted far upwards.  The absolute value of the high-pass filter's output was then taken, smoothed and printed on the screen while the input signal, modulated with a 1 kHz audio sine wave and fed to a SINAD meter (read about SINAD here - link), was varied in level:  In this way I could see how the output of the noise detection behaved under differing signal conditions.

In doing this testing I noted that a simple differentiator did not work as well as I'd hoped - likely due to the fact that unlike an analog circuit, in which the high-frequency energy can continue to increase in intensity at frequencies well into the 10's or 100's of kHz, in the digital domain we have a "hard" limit enforced by Nyquist - 24 kHz on the mcHF with its 48 ksps rate.
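For reference, such a differentiator is nothing more than a first difference between successive samples - a minimal sketch in C (the function and variable names are illustrative, not from the mcHF source):

```c
/* First-difference differentiator:  y[n] = x[n] - x[n-1].  This acts as
 * a gentle (6 dB/octave) high-pass whose response keeps rising with
 * frequency - but only up to the Nyquist limit, which is why it has
 * less ultrasonic noise energy to work with than its analog counterpart. */
float differentiate(float sample, float *prev)
{
    float out = sample - *prev;
    *prev = sample;   /* remember this sample for the next call */
    return out;
}
```

Note that a steady (DC or very low frequency) input produces nearly zero output, while rapid sample-to-sample changes - i.e. high-frequency noise - pass through.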

With less high-frequency spectral noise energy (overall) to work with it is necessary to amplify the output of a simple differentiator more, but this also brings up the lower-frequency (audio) components, causing it to be more affected by speech and other content and requiring a better filter.  Ultimately I determined that a 6-pole IIR high-pass audio filter with a 15 kHz cut-off frequency, capable of reducing "speech" energy and its second and third harmonics below 9-10 kHz by 40-60 dB, worked pretty well:  In testing I also tried a similar filter with an 8 kHz cut-off, but it was more affected by voice modulation and its immediate harmonics.

If the FM demodulation is working properly the result will be a low-distortion, faithful representation of the original audio source with little/no energy above the cut-off of the transmitter's low-pass filter.  If the signal is degraded in some way - such as by multipath distortion, being off-frequency, or having excess deviation - energy from this distortion can appear in the ultrasonic region where it cannot easily be distinguished from "squelch" noise.
If this energy is high enough, the squelch can close inadvertently since the signal may be "mistaken" as being weak:  This is referred to as "squelch clamping", so-called as it is often seen on the voice peaks of signals degraded by multipath and/or off-frequency operation.

Determining noise energy:

In short, the algorithm to determine the squelch energy was as follows:


   squelch_avg = (1-α) * squelch_avg + sqrt(abs(hpf_audio)) * α
   if(squelch_avg > MAX_VALUE)
      squelch_avg = MAX_VALUE

   α = "smoothing" factor
   hpf_audio = audio samples that have been previously high-pass filtered to remove speech energy
   squelch_avg = the "smoothed" squelch output

If you look at the above pseudocode example you'll notice several things:
  • The square root of the absolute value of the high-pass noise energy is taken.  It was observed that as the signal got noisier, the noise amplitude climbed very quickly:  If we didn't "de-linearize" the squelch reading based on the noise energy - which already has a decidedly non-linear relationship to the signal level - we would find that the majority of our linear squelch adjustment was "smashed" toward one end of the range.  By taking the square root our value increases "less quickly" with noisier signals than it otherwise would.
  • The value "squelch_avg" is integrated (low-pass filtered) to "smooth" it out - not surprising since it is a measurement of noise which, by its nature, is going to vary wildly - particularly since the instantaneous signal level can be anything from zero to peak values.  What we need is a (comparatively) long-term average.
  • The "squelch_avg" value is capped at "MAX_VALUE".  If we did not do this the value of "squelch_avg" would get very high during periods of no signal (maximum noise) and take quite a while to come back down when a signal did appear, causing a rather sluggish response.  The magnitude of "MAX_VALUE" was determined empirically by observing "squelch_avg" with a rather noisy signal - the worst that would be reasonably expected to open a squelch.
Obtaining a usable basis of comparison:

The above "squelch_avg" value increases as the quieting of the received FM signal decreases, which means that we must either invert this value or invert the sense of the "squelch setting" variable itself so that a higher squelch setting requires a better signal to open the squelch.

I chose the former approach, with a few additional adjustments:
  • The "squelch_avg" value was rescaled from its original range to approximately 24 representing no-signal conditions down to 3 representing a full-quieting signal with modulation, with hard limits imposed on this range (e.g. it is not allowed to exceed 24 or drop below 3).
  • The above number was then "inverted" by subtracting it from 24, setting its range to 2 representing no signal to 22 for one that is full-quieting with modulation.
It is not enough to simply compare the derived "squelch_avg" number, after scaling/inversion, with the squelch setting:  A bit of hysteresis must also be employed or else the squelch is likely to "flap" about the threshold.  I chose a value of 10% of the maximum range - a hysteresis of +/-2 - which seemed to be about right.
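A sketch of such a comparison with +/-2 counts of hysteresis (the function and variable names are hypothetical, not taken from the mcHF source):

```c
#define SQL_HYSTERESIS 2   /* +/-2 counts - about 10% of the full range */

/* Returns nonzero if the squelch should be open.  "quieting" is the
 * scaled/inverted signal-quality value (higher = quieter signal),
 * "setting" is the user's squelch setting, and "currently_open" is
 * the present squelch state. */
int squelch_is_open(int quieting, int setting, int currently_open)
{
    if (currently_open)
        return quieting > (setting - SQL_HYSTERESIS);  /* must fall well below to close */
    else
        return quieting > (setting + SQL_HYSTERESIS);  /* must rise well above to open */
}
```

The asymmetric thresholds are what keep the squelch from "flapping" when the signal quality hovers right at the user's setting.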

The final step was to make sure that if the squelch was set to zero that it was unconditionally open - this, to guarantee that no matter what, some sort of signal could be heard without worrying about the noise threshold occasionally causing the squelch to close under certain conditions that might cause excess ultrasonic energy to be present.

The result is a squelch that seems to be reasonably fast in response to signals, weak or strong, but very slightly slower in response to weak signals.  This slight asymmetry is actually advantageous as it somewhat reduces the rate-of-change that might occur under weak-signal conditions (e.g. squelch-flapping) - particularly during "mobile flutter."  The only downside noted so far is that despite the "de-linearization" the squelch setting is still somewhat compressed, with the highest settings devoted to fairly "quiet" signals and most of the range representing somewhat noisier signals - but in terms of intelligibility and usability, it "feels" pretty good.

Subaudible tone decoding:

One useful feature in an FM communications receiver is that of a subaudible tone (a.k.a. CTCSS) decoder.  For an article about this method of tone signalling, refer to the Wikipedia article here - link.

In short, this method of signalling uses a low-frequency tone, typically between 67 and 250 Hz, to indicate the presence of a signal:  Unless the receiver detects this tone on the received signal, the signal is ignored.  In the commercial radio service this was typically used to allow several different users to share the same frequency without (always) having to listen to the others' conversations.  In the amateur radio service it is often used as an interference mitigation technique:  The use of carrier squelch plus subaudible tone greatly reduces the probability that the receiver's squelch will falsely open when no signal is present or, possibly, when the wrong signal is present - as in the case where a listener is in an area of overlapping repeaters - but this works only if there is a tone being modulated onto the desired signal in the first place.

The Goertzel algorithm:

There are many ways to detect tones, but the method that I chose for the mcHF was the Goertzel algorithm.  Rather than explain exactly how this algorithm works I'll point the reader to the Wikipedia article on the subject here - link.  The use of the Goertzel algorithm has several distinct advantages:
  • Its math-intensive parameters may be calculated before-hand rather than on the fly.
  • It requires only simple addition/subtraction and one multiplication per iteration, so it takes only a small amount of processor overhead.
  • Its detection bandwidth is very scalable:  The more samples that are accumulated, the narrower it is - but also slower to respond.
The Goertzel algorithm, as typically implemented, is capable of "looking" at only one frequency at a time - unlike an FFT which looks at many - but since it is relatively "cheap" in terms of processing power (e.g. the most intensive number-crunching is done before-hand) it is possible that one could implement several of them and still use fewer resources than an FFT.

The Goertzel algorithm, like an FFT, will output a number that indicates the magnitude of the signal present at/about the detection frequency, but by itself this number is useless unless one has a basis of comparison.  One approach sometimes taken is to look at the total amount of audio energy, but this is only valid if it can be reasonably assured that no other content will be present such as voice or noise, which may be generally true when detecting DTMF, but this cannot be assured when detecting a subaudible tone in normal communications!

"Differential Goertzel" detection:

I chose to use a "differential" approach in which I set up three separate Goertzel detection algorithms:  One operating at the desired frequency, another operating at 5% below the desired frequency and the third operating at 4% above the desired frequency and processed the results as follows:
  • Sum the amplitude results of the -5% Goertzel and +4% Goertzel detections.
  • Divide that sum by two, yielding the average off-frequency amplitude.
  • Divide the amplitude result of the on-frequency Goertzel by that average.
  • The result is a ratio, independent of amplitude, that indicates the amount of on-frequency energy.  In general, a ratio higher than "1" would indicate that "on-frequency" energy was present.
By having the two additional Goertzel detectors (above and below the desired frequency) we accomplish several things at once:
  • We obtain a "reference" amplitude that indicates how much energy there is that is not on the frequency of the desired tone as a basis of comparison.
  • By measuring the amplitude of adjacent frequencies the frequency discrimination capability of the decoder is enhanced without requiring narrower detection bandwidth and the necessarily "slower" detection response that this would imply.
In the case of the last point, above, if we were looking for a 100 Hz tone and a 103 Hz tone was present, our 100 Hz decoder would weakly detect the 103 Hz tone, but the +4% decoder (at 104 Hz) would detect it more strongly - and since its value is averaged into the denominator of the ratio, it would reduce the ratiometric output and prevent a false detection.

Setting the Goertzel bandwidth:

One of the parameters not easily determined in reading about the Goertzel algorithm is that of the detection bandwidth.  This parameter is a bit tricky to discern without using a lot of math, but here is a "thought experiment" to understand the situation when it comes to being able to detect single-frequency (tone) energy using any method.

Considering that the sample rate for the FM decoder is 48 ksps and that the lowest subaudible tone frequency that we wish to detect is 67.0 Hz, we can see that at this sample rate it would take at least 717 samples to fully represent just one cycle at 67.0 Hz.  Logic dictates that we can't just use a single cycle of 67 Hz to reliably detect the presence of such a tone so we might need, say, 20 cycles just to "be sure" that it was really there and not just some noise at a nearby frequency that was "near" 67 Hz.  Judging by the very round numbers, above, we can see that if we had some sort of filter we might need around 15000 samples (at 48 ksps) in order to be able to filter this 67 Hz signal with semi-reasonable fidelity.

As it turns out, the Goertzel algorithm is somewhat similar.  Using the pre-calculated values for the detection frequency, one simply does a multiply and a few adds and subtractions of each of the incoming samples:  Too few samples (fewer than 717 in our example, above) and one does not have enough information with which to work at low frequencies to determine anything at all about our target frequency of 67 Hz, but with a few more samples one can start to detect on-frequency energy with greater resolution.  If you let the algorithm run for too many samples it will not only take much longer to obtain a reading, but the effective "detection bandwidth" becomes increasingly narrow.  The trick is, therefore, to let the Goertzel algorithm operate for just enough samples to get the desired resolution, but not so many that it will take too long to obtain a result!  In experimentation I determined that approximately 12500 samples were required to provide a tradeoff between adequately-narrow frequency resolution and reasonable response time.
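As a rough rule of thumb the effective detection bandwidth works out to be on the order of the sample rate divided by the number of samples accumulated - with the numbers above, roughly 48000/12500, or about 3.84 Hz:

```c
/* Approximate Goertzel detection bandwidth ("bin width"):  the more
 * samples accumulated, the narrower - and slower - the detection. */
float goertzel_bin_width(float sample_rate, int num_samples)
{
    return sample_rate / (float)num_samples;
}
```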

This is part of the reason for the "differential" Goertzel energy detection in which we detect energy at, above and below the desired frequency:  This allows us to use a somewhat "sloppier" - but faster - tone detection algorithm while, at the same time, getting good frequency resolution and, most importantly, the needed amplitude reference to be able to get a useful ratiometric value that is independent of amplitude.

Debouncing the output:

While an output of greater than unity from our differential Goertzel detection generally indicates on-frequency energy, one must use a significantly higher value than that to reduce the probability of false detection.  At this point one can treat the output of the tone detector as a sort of noisy pushbutton switch and apply a simple debouncing algorithm:


   if(goertzel_ratio >= threshold)   {
      debounce = debounce + 1
      if(debounce > debounce_maximum)
         debounce = debounce_maximum
   }
   else   {
      if(debounce > 0)
         debounce = debounce - 1
   }

   if(debounce >= detect_threshold)
      tone_detect = 1
   else
      tone_detect = 0


   "goertzel_ratio" is the value "f/((a+b)/2)" described above where:
      f = the on-frequency Goertzel detection amplitude value
      a = the above-frequency Goertzel detection amplitude value
      b = the below-frequency Goertzel detection amplitude value
   "threshold" is the ratio value above which it is considered that tone detection is likely.  I found 1.75 to be a nice, "safe" number that reliably indicated on-frequency energy, even in the presence of significant noise.

   "detect_threshold" = the number of "debounce" hits that it will take to consider a tone to be valid.  I found 2 to be a reasonable number.
   "debounce_maximum" is the highest value that the debounce count should attain:  Too high and it will take a long time to detect the loss of tone!  I used 5 for this which causes a slight amount of effective hysteresis and a faster "attack" than "decay" (e.g. loss of tone).

With the above algorithm - called approximately once every 12500 samples (e.g. just under 4 times per second with a 48ksps sample rate) - the detection is adequately fast and quite reliable, even with noisy signals.
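The debounce logic above can be expressed compactly in C (a sketch, with the constants taken from the values quoted in the text and a hypothetical function name):

```c
#define DEBOUNCE_MAXIMUM  5   /* cap on the debounce count */
#define DETECT_THRESHOLD  2   /* count needed to declare a valid tone */

/* Called once per Goertzel block (just under 4 times/second at 48 ksps);
 * returns 1 while the subaudible tone is considered valid, 0 otherwise. */
int tone_debounce(float goertzel_ratio, float threshold, int *debounce)
{
    if (goertzel_ratio >= threshold) {
        (*debounce)++;
        if (*debounce > DEBOUNCE_MAXIMUM)
            *debounce = DEBOUNCE_MAXIMUM;
    } else if (*debounce > 0) {
        (*debounce)--;
    }
    return (*debounce >= DETECT_THRESHOLD) ? 1 : 0;
}
```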

Putting it all together:

Figure 3:
An FM signal with a subaudible tone being detected, indicated by
the "FM" indicator in red.
If tone decoding is not enabled, the Goertzel algorithms are not called at all (to save processor overhead) and the variable "tone_detect" is set to 1 all of the time.  For gating the audio a logical "AND" is used requiring that both the tone detect and squelch be true - unless the squelch setting is 0, in which case the audio is always enabled.

Finally, if the squelch is closed (audio is muted) the audio from the FM demodulator is "zeroed".

* * *

In a future posting I'll describe how the modulation part of this feature was accomplished on the mcHF along with the pre-emphasis of audio, filtering and the generation of burst and subaudible tones.


Monday, November 9, 2015

Repairing the TUNE capacitor on the Heathkit HL2200 (SB-220) amplifier

Figure 1:
The front panel of the HL-2200 amplifier - which is really just a slightly
modernized version of the SB-220.
Click on the image for a larger version.
Earlier this year I picked up a Heathkit HL2200 amplifier (the newer, "brown" version of the SB-220) at a local swap meet for a reasonable price.  What made it particularly attractive was that it not only had a pair of new graphite 3-500Z tubes in it (Chinese-made, but RF-Parts Inc. tested/branded) but it also had a Peter Dahl "Hypersil" tm power transformer rather than the "just-adequate" original Heathkit transformer, and an already-installed circuit that allowed low-voltage, milliamp-level keying rather than the 100-ish volts of the original.

Obligatory Warning:
The amplifier/repair described on this page presents lethal voltages in its circuits during normal operation.  Be absolutely certain to take any and all precautions before working on this or any piece of equipment that contains dangerous voltages!  This amplifier was unplugged and the built-in high-voltage safety shorting bar operated by removing the top cover was verified to be doing its job.

Problems with the amplifier:

While it was serviceable as-is, it did have a few known issues - namely a not-quite-working "10 meter" modification (the parts are all there, apparently having been pulled from an old SB-220 or from the previous owner having obtained a "10 meter" kit) - but my interest at this time was the tendency of the "TUNE" capacitor of the output network to arc over at maximum RF output power.

Figure 2: 
Some "blobs" on several of the rotor plates of the TUNE capacitor.
 Click on the image for a larger version.
If I operated the amplifier in the "CW" position with just 2.4 kV or so on the plates (at idle) everything was fine, but if I switched to the "SSB" position with 3.2 kV (at idle) then the capacitor arced over, causing signal distortion and high grid current - not to mention a loud hissing and the smell of ozone.  In popping the cover (with the power removed and the shorting bar doing its job!) I could see a few "blobs" on some of the capacitor plates which meant that when this had happened to the previous owner, it had probably been in sustained operation - obviously long enough to cause parts of some of the aluminum plates to be melted, further decreasing the distance between plates and increasing the likelihood of even more arcing!

Figure 3: 
In the center of the picture, a rather serious "blob" on one of the stator
Click on the image for a larger version.
After having had this amplifier for several months and operating it only at reduced power I finally got around to taking a closer look at what it would take to extract the TUNE capacitor and effect a repair.  Even though it is slightly cramped in there, it wasn't that difficult to do:  Remove the front-panel knob and the left-hand tube, disconnect the blocking cap from the TUNE capacitor, remove the rear screw and nut holding the capacitor down, loosen the front screw and nut, and pull out the capacitor.

Disassembling the capacitor:

Fortunately, the capacitors used in these amplifiers are constructed from lots of small pieces rather than, like some "high-end" capacitors, being press-fit onto finely-machined rods and brazed.  What this meant was that simply by undoing a few bolts and screws the entire tuning capacitor could be reduced to a large pile of spacers and plates!

Figure 4:  A pile of parts from the disassembled rotor.
The still-intact stator is in the background.
Click on the image for a larger version.
The capacitor itself was disassembled in a shallow cookie sheet that I also use for assembling SMD-containing circuits:  It was fairly likely that any small part would be trapped in this pan rather than wander off elsewhere, such as onto my (messy!) workbench or, even worse, disappear into the carpeted floor!  Because this capacitor has several small parts and many spacers I felt it prudent to take this precaution - particularly with respect to the small ball bearings on the main shaft and the single bearing at the back end of the capacitor:  These smallest of parts were carefully sequestered in a small container while I was working on the capacitor.

Once the capacitor was "decompiled" all of the plates were very carefully examined for damage and it was found that there were two rotor plates and just one stator plate with large-ish blobs, plus some very minor damage to one or two other plates.  As is the nature of these things, it was the blob on the stator plate that was the most serious as it was the "weakest link" in terms of breakdown voltage, representing the smallest distance between two points no matter the setting of the capacitor (rotor) itself.

"Fixing" the damage:
Figure 5: 
The most badly-damaged capacitor plates, with an undamaged stator plate
(upper-left) for comparison.  The surfaces show evidence of oxidation
due to arcing.
Click on the image for a larger version.

If the damage is comparatively minor, as was the case here, then the "fix" is fairly simple:
  • Identify all plates that have any sort of "blob" or sharp edges.
  • Grind down any raised surface so that it is flush with the rest of the plate.
  • Using very fine sandpaper, eliminate any sharp edges or points.
If the plates are hopelessly melted you have the option of finding another capacitor on EvilBay, making your own plates, or simply cutting away the mangled portion and living with somewhat reduced maximum capacitance:  It is unlikely that the loss of even one entire plate would make the amplifier unusable on the lowest band, and it is also unlikely that more than two or three plates would have sustained significant damage, as this sort of damage tends to be somewhat self-limiting.

Placing a damaged plate on a piece of scrap wood, a rotary tool with a drum sanding bit was used to flatten out the "blob" on each of the three damaged plates.  Once this was done the plate was flat, but it was not particularly smooth, the rather coarse sandpaper having left marks on the plate, so I attacked the plates that had been "repaired" with 1200 grit wet-dry sandpaper and achieved a very nice luster where the grinding had taken place.  I also took special care to "ease over" the edges of the plates to remove any sharp edges - either from the original manufacturing process (stamping) or from the grinding that was done to remove the blob:  This is important as sharp edges are particularly prone to leading to ionization and subsequent arcing!

Because many of the plates showed some oxidation I decided that, while I had the capacitor apart, to polish every single plate - both rotor and stator - against 1200 grit "wet/dry" paper and, in the process, discovered several small "burrs" - either from minor arcing or from the plate having been stamped out of a sheet of metal.  I also took the trouble of "easing over" all edges of the capacitor plates in the process:  Again, sharp edges or points can be prone to arcing so it is important that this be considered!

Once I was done I piled the plates into an ultrasonic cleaner containing hot water and a few drops of dishwasher soap and cleaned them, removing the residual aluminum powder and oxide.  After 2 or 3 cycles in the cleaner the plates were removed and dried yielding pristine-looking plates - except, of course, for the three that had been slightly damaged.


Figure 6: 
A rotor and stator plate having had the "blobs" ground off, but not yet
having been polished with 1200 grit sandpaper.  A bit of lost
plate material is evident on the left-hand side of the round rotor plate
as evidenced by its asymmetry.
Click on the image for a larger version.
I first reassembled the stator, stacking the plates and spacers in their original locations and making sure that none of them got "hung up" on the rods, with the last stator plate to be installed being the one that had been damaged.  The rotor was then reassembled, the job being fairly easy since its shaft is hexagonal, "keying" the orientation of the plates.  Because there had been two rotor plates that had been damaged, I placed these on the ends so they were the first and last to be installed:  There is one more rotor plate than stator plate which means that when fully meshed, the two "end" (outside) plates are on the rotor.  Even though I was not particularly worried about it, by placing the "repaired" plates at the ends it would be possible to bend them and increase the distance slightly if they happened to be prone to arc, without significantly affecting the overall device capacitance.

Having degreased the bearing mounts and the ball bearings themselves I used some fresh, PTFE-based grease to hold the bearings to the shaft while it was reinstalled, using more of the same grease to lubricate the rear bearing and contact, aligning it carefully with the back plate and finger-tightening the screws and nuts.  Once proper positioning was verified, the screws and nuts holding the end plates in place were fully tightened.

Both the rotor and stator plates are mounted on long, threaded rods with jam nuts on each end and by loosening one side and tightening of the other it is possible to shift the position of the rotor and/or stator plates.  Upon reassembly it was noted that, unmeshed, the rotor plates were not exactly in the centers of the stator plates overall so the nuts on the rotor were loosened and tightened to remedy this.  On fully meshing the plates it was then observed that the stator plates were very slightly diagonal to the rotor plates overall so the appropriate nuts were adjusted to shift the positions of those as well.  The end result was that the majority of the rotor plates were centered amongst the stator plates - the desired result as the capacitor's breakdown voltage is dictated by the least amount of spacing on just one plate.
Figure 7: 
The reassembled TUNE capacitor with a slightly foreshortened
and "repaired" rotor plate at the far end.
Click on the image for a larger version.

Inevitably there will be a few plates that are closer/farther and/or off center from the rest and that was the case here so a few minutes were taken to carefully bend rotor and/or stator plates, using a small blade screwdriver, as needed to center them throughout the rotation.  When I was done all plates were visually centered, likely accurate to within a fraction of a millimeter.

The capacitor was reinstalled quite easily with the aid of a very long screwdriver.  The only minor complication was that the solder joint at the high-frequency end of the tank coil - the portion that consists of silver-plated tubing - broke loose from the rest of the coil, but this was easily re-soldered by laying the amplifier on its left side so that any drips of solder fell clear of, rather than into, the amplifier.
"Arc" testing:

After reinstalling the top cover, verifying that it pushed the safety shorting bar out of the way, and installing the many screws that held it and the other covers in place I fired up the amplifier into a load and observed that at maximum plate voltage and with as much input and output power as I could muster, the TUNE capacitor did not arc!

One of these days I need to figure out why the 10 meter position on the band switch isn't making proper contact, but that will be another project!

Sunday, November 1, 2015

Adding FM to the mcHF SDR transceiver

This has been one of those "Rabbit Hole" features.

In the past I have stated several times on the Yahoo Group that I would not be adding FM to the mcHF (a completely stand-alone QRP SDR HF transceiver) any time soon, mostly because I was quite certain that there was simply not enough processor horsepower to do it properly, but quite recently I (sort of) got obsessed with making it work, going from "0 to 60" in just a few evenings of code "hacking".

While it took only an hour or so to get the FM demodulation and modulation working it ultimately took much longer than that to integrate the other features (subaudible tone encode/decode, etc.) and especially the GUI interface with everything!  By the time I was done I'd spent more time than I had hoped (not unexpected!) but at least had something to show for it!

FM on an HF transceiver:

First of all, FM is one of those modes that I have used on HF only a few dozen times in my 30+ years of being a ham - using it most often (albeit rarely) when 10 meters is open and I hear a repeater booming in, but other than that I haven't really had any reason to do so, particularly since there are no local 10 meter repeaters and local activity on "10 FM" is quite sparse.

Figure 1:
Reception of an FM signal as displayed on the waterfall (using the
"blue" palette) showing the sidebands from the 1 kHz tone
modulated onto the carrier being received.  The white background
on the "FM" indicator shows that the (noise) squelch is open.
Click on the image for a larger version.
What might be a practical reason to add FM to the mcHF other than for 10 meter openings, or with a transverter (or a modified radio) on a higher band such as 6, 4 or 2 meters where FM is common?  I asked this question on the mcHF Yahoo group and several people noted that in various parts of Europe, FM repeater and simplex operation on 10 meters is (apparently) more common than here in the U.S.


As it turns out, I was recently adding to and fixing some of the current mcHF features and streamlining some code, and I decided to look into what, exactly, it would take to demodulate FM.

Before I begin the description it should be noted that the mcHF, in FM receive mode, must utilize "frequency translation" which shifts the local oscillator by 6 kHz - this, to get away from the "0 Hz hole" intrinsic to many SDR implementations that down-convert the RF to baseband:  If we did not do this the FM carrier would land in this "hole" and hopelessly distort it!

With the signal to be received centered at 6 kHz at the input of the A/D converter, a software frequency conversion shifts it back to "0 Hz" where, in the digital domain, the hole does not exist.  True, the FM signal is now modulated +/- zero and includes "negative" frequencies, but since the signals are quadrature and it is just math at this point we can get away with it!

(Note:  The "0 Hz hole" still exists but is now 6 kHz removed from the center of the carrier so it has no effect at all on demodulated signals in receive bandwidths up to 12 kHz.)
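In the digital domain this frequency translation is nothing more than multiplying the I/Q stream by a complex "software local oscillator".  The following is a minimal Python sketch of the idea (plain Python, not the mcHF firmware - the variable names are mine):

```python
import cmath
import math

fs = 48000.0      # sample rate, Hz
shift = 6000.0    # translation frequency (the mcHF shifts by 6 kHz)

# A test "carrier" arriving centered at 6 kHz in the quadrature (I/Q) stream:
carrier = [cmath.exp(2j * math.pi * shift * n / fs) for n in range(4800)]

# Shift it back to "0 Hz" by multiplying with a software local oscillator
# running at -6 kHz - pure math, no "0 Hz hole" here:
baseband = [s * cmath.exp(-2j * math.pi * shift * n / fs)
            for n, s in enumerate(carrier)]

# At baseband the phase no longer rotates - every sample is the same vector:
print(abs(baseband[0] - baseband[-1]))   # essentially 0.0
```

An FM signal would, of course, have modulation riding on that carrier, but the shift operation itself is exactly the same.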

The PLL method of demodulating FM:

The most common way to do this is via a PLL, implemented in software
Figure 2:  PLL implementation of an FM demodulator

Depicted in Figure 2, this works by "tracking" the variations of the input signal's frequency using the PLL:  Differences in instantaneous phase are detected and applied to the VCO's tuning line via the loop filter.  The loop filter removes the energy of the original carrier frequency, effectively leaving only the audio modulation behind, a sample of which is high-pass filtered to remove the DC content, yielding the demodulated audio.

In hardware this type of scheme is used in PLL ICs such as the NE564 and NE565, or even the good old 4046 PLL IC:  Many implementations of hardware-based FM demodulators may be found on the internet using these chips!  (Hint:  Google "NE565 SCA decoder").

In software this method would typically be applied in those instances where the "carrier" frequency was high, compared to the highest modulated frequency contained in the FM signal to be demodulated and the "VCO" would be an NCO (Numerically Controlled Oscillator) - essentially a software-based DDS (Direct Digital Synthesis) "oscillator."  In a typical SDR application the implementation would appear more complex than in the above block diagram, using both the I and Q channels as part of the phase detector (and to avoid ambiguity since both are needed to avoid the "frequency image" anyway) - but the intent is quite clear.

Out of curiosity I decided to implement this on the mcHF but, as expected, it was too "expensive" in terms of processing power - plus, the "carrier" frequency (in software) was only 6 kHz - not far above the highest modulated frequency.  I could hear the audio, but it was somewhat distorted and aliased.

The "arctangent" method of demodulating FM:

Applying another tactic I went for the "arctangent" method.  If you recall your trigonometry, the arctangent takes the ratio of two sides of a right triangle and yields the corresponding angle.  Related to the arctangent is the "atan2" function that appears in many computer languages, in which not just the ratio is passed to the function but the actual lengths of the two sides (y, x), and the angle is computed from those.

If we consider the instantaneous amplitudes of our I and Q signals to be the two components of a vector, we can see how we can use this function to determine the angle of that vector at any given instant - and since FM consists of rapidly changing phase (angle) we can therefore derive information about the frequency modulation:  The more the angle changes from sample to sample, the more deviation!  (More or less...)

Figure 3:  Speech-modulated audio being received and displayed
on the "Spectrum Scope" showing the width of a typical
voice-modulated, pre-emphasized FM signal.  The red "FM"
indicator shows that the subaudible tone is being received and
properly decoded.
Click on the image for a larger version.

In order to do this you must know how the vector has changed from one sample to the next, so a bit of math using the previous sample's values is applied first, as described in the following code snippet:

   y = Q * I_old - I * Q_old
   x = I * I_old + Q * Q_old
   audio = atan2(y, x)
   I_old = I
   Q_old = Q

   "I" and "Q" are our quadrature input signals
   "audio" is the demodulated output, prior to de-emphasis (see below.)
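To convince yourself that the snippet above really does recover the modulation, it can be exercised on a synthetic FM signal.  This is a minimal Python sketch (the function name "fm_demod" is mine, not from the mcHF code):

```python
import math

def fm_demod(i_samples, q_samples):
    """Recover FM audio from quadrature samples using the atan2 method."""
    audio = []
    i_old, q_old = 1.0, 0.0               # previous sample's vector
    for i, q in zip(i_samples, q_samples):
        y = q * i_old - i * q_old
        x = i * i_old + q * q_old
        audio.append(math.atan2(y, x))    # phase change, radians/sample
        i_old, q_old = i, q
    return audio

fs = 48000.0     # sample rate
dev = 2500.0     # +/- 2.5 kHz deviation
tone = 1000.0    # 1 kHz modulating tone

# Synthesize an FM signal centered at "0 Hz":  The phase is the
# running integral of the instantaneous frequency deviation.
phase, i_sig, q_sig = 0.0, [], []
for n in range(4800):
    phase += 2 * math.pi * dev * math.cos(2 * math.pi * tone * n / fs) / fs
    i_sig.append(math.cos(phase))
    q_sig.append(math.sin(phase))

audio = fm_demod(i_sig, q_sig)

# The recovered audio peaks at 2*pi*dev/fs radians per sample:
print(max(audio), 2 * math.pi * dev / fs)
```

The demodulated output comes out in units of radians-per-sample, so it needs only a constant scaling to become audio.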

Because we supply it with both x and y, the "atan2" function has the convenient feature of knowing, without ambiguity, the quadrant in which the vector lies - something not possible with the normal "atan" function.  For example, if our vector is +1, +1 - a ratio of 1 - we know that our angle is 45 degrees in the first quadrant, but if our vector is -1, -1 our ratio is still 1, yet this angle is clearly in the third quadrant:  The normal "atan" function would give us the same (completely bogus) 45 degree answer, but "atan2" faithfully yields the correct answer of 225 (-135, actually) degrees.
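This quadrant behavior is easy to demonstrate with the standard library functions:

```python
import math

# atan only sees the ratio y/x, so (1, 1) and (-1, -1) look identical to it:
print(math.degrees(math.atan(1 / 1)))      # 45.0
print(math.degrees(math.atan(-1 / -1)))    # 45.0 - wrong quadrant!

# atan2 gets the two signed sides and resolves the quadrant correctly:
print(math.degrees(math.atan2(1, 1)))      # 45.0
print(math.degrees(math.atan2(-1, -1)))    # -135.0 (i.e. 225 degrees)
```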

If you are coming from the analog world you know that one of the necessary steps in properly demodulating an FM signal is to apply limiting after bandpass filtering but before the actual FM demodulation - a process in which the incoming signal is typically amplified and then strongly clipped - often several times - to assure that the amplitude of the resulting signal is constant regardless of the strength of the original signal.  The reason for this is that most analog FM detection schemes are either amplitude-sensitive to a degree (e.g. slope detection, the "Foster-Seeley" discriminator - link, or even the so-called "ratio detector" - link) or can operate over only a somewhat limited amplitude range while still maintaining "linearity" in their frequency-dependent demodulation.

While both of the aforementioned schemes are amplitude-insensitive once this limiting is applied, the "atan2" method can be considered to be the ultimate "ratio detector" in that all it really cares about is the ratio of the I and Q values and not a whit about their amplitudes!  If the signal is reasonably strong, both channels (I and Q) will reflect the instantaneous angle-change of the received signal with respect to the previous sample.  As with any other method of FM detection, as signals get weaker noise begins to intrude, the calculation of the instantaneous angle becomes contaminated with random bursts of energy, and this naturally shows up as "popcorn" and/or "hiss" in the recovered audio.

Rather than using the compiler's rather slow, built-in floating-point "atan2()" function I decided to use the "Fixed point Atan2 with Self-Normalization" function (in the public domain) attributed to Jim Shima posted (among other places) on the DSP Guru site (link here).  Actually, this algorithm was quite familiar to me as I'd unwittingly used a very similar method many years ago on a PIC-based project in which I needed to do an ATAN2-like function using integer math to derive the bearing on a direction-finding (DF) system.  (The DF system is described here - link)

Needless to say, this algorithm is blazing fast compared to the built-in floating-point "atan2" function - demodulating FM would simply not be possible on a processor with this little horsepower to spare without its "cost savings" - and the accuracy of the result easily yields sub-1% THD (distortion), more than adequate for communications-grade FM.
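For reference, the floating-point form of that self-normalizing approximation looks something like the following - a sketch based on the commonly-circulated version of Shima's algorithm, not a copy of the mcHF's fixed-point code:

```python
import math

def fast_atan2(y, x):
    """Self-normalizing atan2 approximation (after the algorithm Jim Shima
    posted on DSP Guru).  Maximum error is roughly 0.07 radian - plenty
    accurate for FM demodulation."""
    if x == 0.0 and y == 0.0:
        return 0.0
    abs_y = abs(y) + 1e-12          # tiny offset avoids division by zero
    if x >= 0.0:
        r = (x - abs_y) / (x + abs_y)
        angle = math.pi / 4 - (math.pi / 4) * r      # first/fourth quadrants
    else:
        r = (x + abs_y) / (abs_y - x)
        angle = 3 * math.pi / 4 - (math.pi / 4) * r  # second/third quadrants
    return -angle if y < 0.0 else angle

# Worst-case error around a full circle of unit test vectors:
worst = max(abs(fast_atan2(math.sin(a), math.cos(a)) -
                math.atan2(math.sin(a), math.cos(a)))
            for a in [i * 0.001 - math.pi for i in range(6284)])
print(worst)   # about 0.07 radian
```

The fixed-point version trades the two divisions for integer operations, which is where the real speed-up on the mcHF's processor comes from.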

De-emphasizing the demodulated audio:

The audio that spits out of the "atan2" function is proportional to the magnitude of the frequency deviation at any modulated frequency (within reason, of course!) - but in amateur radio, at least for voice communications, we do not use "FM" per se, but rather "phase" modulation (PM).  Without going into the math, the only thing that you need to know is that in order to use an FM modulator to generate a PM-compatible signal one must "pre-emphasize" the frequency response of the audio at a rate of 6 dB/octave.  In other words, if you were to set a signal generator to produce +/- 2 kHz of deviation with a 1 kHz audio tone and then changed that tone to 2 kHz - keeping the audio level the same - the deviation would increase to +/- 4 kHz when using PM.

If you did not do this pre-emphasis, your audio would sound muffled on an ordinary amateur "FM" receiver.  Conversely, if you use an "FM" receiver for the reception of PM you must apply a 6 dB/octave de-emphasis to it:  Without this the audio tends to sound a bit "sharp" and tinny.

There is a practical reason for doing this and it has to do with "triangle noise":  As an FM signal gets weaker the recovered audio does not get quieter - it gets noisier, with the noise appearing at the high frequencies first as a high-pitched hiss.  By using "PM" (or, more typically, "true" FM with pre-emphasis on transmit and de-emphasis on receive) we reduce the intensity of this high-pitched noise on weaker signals:  Since the "highs" are boosted on transmit and reduced back to normal on receive, the audio frequencies that would be the first to be affected by noise are transmitted at higher levels, while the receive de-emphasis simultaneously cuts the amplitude of the high-frequency noise itself.  This prevents the high-frequency components of the speech from being the first to disappear into the noise on weak signals.  The end result is that with PM the signal can be weaker - and still seem to be noise-free - than is possible with "straight" FM.

For an explanation of noise in FM signals, in general, read the page "Pre-Emphasis (FM) Explained" - link.

In an analog circuit this de-emphasis is as simple as an "R/C" (resistor-capacitor) low-pass filter (series resistor followed by a capacitor to ground) and it may be simulated in code as follows:

  filtered = old + α * (input - old)
  old = filtered

   "α" is the "smoothing" parameter
   "input" is the new audio sample
   "filtered" is the low-pass (integrated) audio output

In the above, each output sample is a blend of the previous output and the new input, which means that as the rate-of-change of the input increases, the output's ability to follow it decreases - effectively forming a single-pole low-pass filter rolling off at 6 dB/octave.

If you were to implement this with real components (resistor, capacitor) you would not select them for a really low corner frequency as this would mean that by the time you got to speech frequencies (1-2 kHz) your audio would be rolled off by many dB - but it would also mean that low-frequency components (subaudible tones, AC hum, even the low-frequency components of noise) would seem to be artificially amplified and could "blast" the speaker/amplifier.  Instead, one would select a "knee" frequency above which one would start to roll off the audio - typically just below the bottom end of the "speech" range of 300 Hz or so - and by doing this the low frequencies are not as (seemingly) amplified as they would be otherwise.  As it turns out, with an "α" setting of 0.05 or so we can achieve a reasonable (low frequency) "knee" at a sample rate of 48 kHz.
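If you want to check where that "knee" actually lands, the response of the standard single-pole low-pass can be evaluated numerically - a quick Python sketch, assuming a 48 kHz sample rate:

```python
import cmath
import math

def lowpass_knee(alpha, fs):
    """Numerically locate the -3 dB point of the single-pole low-pass
    y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    def mag(f):
        # H(z) = alpha / (1 - (1 - alpha) * z^-1), evaluated on the unit circle
        z_inv = cmath.exp(-2j * math.pi * f / fs)
        return abs(alpha / (1 - (1 - alpha) * z_inv))
    f = 1.0
    while mag(f) > 1 / math.sqrt(2):   # walk up until we hit -3 dB
        f += 1.0
    return f

print(lowpass_knee(0.05, 48000.0))   # lands a bit below 400 Hz
```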

Even if we appropriately select a "knee" frequency as above, our audio amplifier/speaker will still get blasted by low-frequency noise since we must still amplify the signal by well over 10 dB to get reasonable amplitudes at speech frequencies - but we can knock off these low-end components with a "differentiator", the inverse of (but similar to) the integrator described above.  In software this differentiator function (which, in the analog world, is a series capacitor followed by a resistor to ground) is performed as demonstrated below:

  filtered = α * (old_filtered + input - old_input)
  old_filtered = filtered
  old_input = input

  "α" is the equivalent of the time constant in that a "small" α implies an R/C circuit with a fast time-constant, strongly affecting "low" frequencies.
  "input" is the new audio sample.
  "filtered" is the high-pass filtered (differentiated) audio

Setting "α" to 0.96 (with a sample rate of 48kHz) put the "knee" roughly in the area of 300-ish Hz and with the "low-pass" (integrator) and "high-pass" (differentiator) cascaded the low-frequency speech components were minimally affected, but the combination of the "knee" frequency of the integrator and the nature of the differentiator meant that the very low components (below approximately 200 Hz) were being attenuated at a rate of around 12dB/octave - all by using simple filtering algorithms that take little processing power!  Again, it is important that we do this or else the very low frequencies (subaudible tones, the "rumble" of open-squelch noise) would be of very high amplitude and easily saturate both the audio amplifier and speaker, causing clipping/distortion at even low audio levels.
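The same numerical approach works for locating the differentiator's knee - another quick sketch, again assuming a 48 kHz sample rate and the standard one-pole high-pass form:

```python
import cmath
import math

def highpass_knee(alpha, fs):
    """Numerically locate the -3 dB point of the single-pole high-pass
    y[n] = alpha * (y[n-1] + x[n] - x[n-1])."""
    def mag(f):
        # H(z) = alpha * (1 - z^-1) / (1 - alpha * z^-1) on the unit circle
        z_inv = cmath.exp(-2j * math.pi * f / fs)
        return abs(alpha * (1 - z_inv) / (1 - alpha * z_inv))
    top = mag(fs / 2)                  # passband gain, up near Nyquist
    f = fs / 2
    while mag(f) > top / math.sqrt(2):  # walk down until we hit -3 dB
        f -= 1.0
    return f

print(highpass_knee(0.96, 48000.0))   # lands in the low 300s of Hz
```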

At some point I may attempt to design an FIR or IIR filter that will both de-emphasize the audio at 6dB/octave and filter out the low frequencies used by subaudible tones but I wanted to at least try the above method which was pretty quick, easy to do, and had a fairly low processor burden.


One factor not mentioned up to this point, but extremely important - especially with FM - is that one must bandwidth-limit the I and Q channels before demodulation.  Even more than in the case of AM, off-frequency signals will contribute to noise and nonlinearity in the demodulation process and it is easy to see why:  If we are simply using vectors with our "atan2" function to recover the frequency modulation, anything that contaminates that information will distort and/or add noise to the resulting audio, so it is important that we feed the demodulator with just enough bandwidth to pass "enough" of the signal.

Defining the bandwidth of a modulated FM signal is rather tricky because it is, in theory, infinitely wide.  In practice the energy drops off rather quickly so the "far out" sidebands soon disappear into the noise, but how quickly can we "clip off" the "close in" sidebands?  Clearly, we must have at least enough bandwidth to pass all of our audio, and since the FM signal is symmetrical about its center, it's twice as wide as that.  There is also the issue of the amount of frequency deviation that is used:  If we take our example of +/- 2.5 kHz deviation in "narrow" mode we know that, because of that alone, our signal must be at least 5 kHz wide!  It would make sense, therefore, that the actual bandwidth of the signal is related to both the amount of deviation and the highest audio frequency imposed on it - and it is:  This relationship is called "Carson's Rule" and you can read about it here - link.  This rule is:

 occupied bandwidth = 2 * (highest audio frequency) + 2 * (deviation)

If we have +/-2.5 kHz deviation and our audio is limited to 2.6 kHz the calculated bandwidth would be 10.2 kHz.  It should be remembered that this is considered to be the occupied bandwidth of the signal and generally indicates the minimum spacing between similar signals, but it turns out that if we are willing to put up with minor amounts of signal degradation our receive bandwidth may be narrower.  By cutting off a few of the outer sidebands the result is a bit of added distortion, since part of the signal that represents the vectors presented to the "atan2" function has been removed and the representation is understandably less precise.
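The arithmetic is trivial, but as a sanity check the rule can be expressed as a one-line function:

```python
def carson_bandwidth(max_audio_hz, deviation_hz):
    """Occupied bandwidth of an FM signal per Carson's Rule, in Hz."""
    return 2 * max_audio_hz + 2 * deviation_hz

print(carson_bandwidth(2600, 2500))   # 10200 - the "narrow" case above
print(carson_bandwidth(3000, 5000))   # 16000 - a "wide" +/- 5 kHz signal
```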

In the case of the mcHF, filtering for the FM demodulator is done by the same Hilbert transformers that are already present for "wide" SSB and AM demodulation, where they can also be configured to provide a low-pass function.  For example, there exist 3.6, 5 and 6 kHz low-pass versions of the Hilbert transformers that provide the 90 degree I/Q phase shift and, since these operate both above and below the center frequency, they yield approximate detection bandwidths of 7.2, 10 and 12 kHz, respectively.  Using a filter wider than 12 kHz (+/- 6 kHz) is problematic in this implementation because, as noted earlier, we are shifting our signal by 6 kHz, and at more than a 12 kHz bandwidth one edge of the filter falls into the "zero Hz hole" of the hardware:  This is not a problem at the 12 kHz bandwidth, but it is at wider bandwidths and can result in significant distortion.

While receive bandwidths more than 12 kHz could be obtained using a shift greater than 6 kHz, testing (both on-the-bench and on-air) has shown that "wide" +/- 5 kHz deviation signals may be received with no obvious distortion with the 12 kHz bandwidth setting - and even the 10 kHz setting is very "listenable".

Surprisingly, if you cram a +/- 5 kHz deviation signal through the 7.2 kHz filter, the results are generally quite usable although distortion and frequency restriction are becoming evident and there may be the risk of clamping if the squelch is set too tight.  One advantage of a 7.2 kHz filter over a 12 kHz filter is that the former, being only 60% as wide, will intercept commensurately less noise on a weaker signal which means that it may be possible to gain an extra dB or two of receiver sensitivity by switching to the narrower bandwidth - if one is willing to accept the trade-off of lower fidelity!

In a later posting I'll talk about the squelch "circuit", subaudible tone detection as well as frequency modulation.

Tuesday, October 27, 2015

6 meter cycloid dipole for circular polarization

A 2-meter version of this type of antenna (the Cycloid Dipole) has been discussed here before - see the August 5, 2013 entry, "A Circularly-polarized 'Omnidirectional' antenna" - link.

Way back in 2000 or thereabouts I slaved over a hot keyboard and bruised my cranium with the voluminous numerical output from the NEC2 program - a decidedly user-unfriendly antenna analysis and simulation tool - and derived the dimensions of a 6 meter "cycloid dipole".  I wasn't shooting for 6 meters, specifically, but the initial "stab" at dimensions seemed to indicate via simulation that, in this general frequency range, the structure that I'd inputted exhibited a vague semblance of the desired characteristics - namely, omnidirectional properties near the horizon and circular polarity with a reasonable axial ratio - so I ended up with an antenna at that frequency.
Figure 1: 
The "Ring-and-Stub" form of the Cycloid dipole.

This is a strange-looking antenna in either its original round ("ring-and-stub") form (Figure 1) or the easier-to-build "square" shape seen in Figure 2.  As noted in the earlier article, the round version had been used for FM broadcast use but the bending of round elements (not to mention inputting the model into NEC2 manually!) was deemed to be too difficult for "amateur" construction, so it was worth the extra effort to crunch some numbers and run a lot of simulations to "empirically" derive the optimal dimensions for a "square" antenna that seemed, on computer, to function identically to the round one.

Figure 2:
 The "square" version of the antenna along with the matching network.
The ultimate result is the form of the antenna seen in Figure 2.

As can be seen, the form is basically the same, but it may be built with things that you can find at any hardware store - namely copper pipe, couplers, caps and elbows.

Once I had been able to derive the 6 meter dimensions I did a linear rescaling to 2 meters - the frequency range of interest.  According to NEC2 the desired properties (omnidirectionality, axial ratio) were not well preserved so a bit more tweaking of the various dimensions was required to "dial" it in.

This 2 meter antenna was then implemented in hardware in the form of copper water pipe using standard pieces of hardware soldered together.  Because the antenna's feedpoint is a complex match (e.g. not 50 ohms and highly reactive) a 1/2-wave matching line was used, fed with a 200 ohm balanced source constructed using a 1/2 wave section of coaxial cable:  This sort of arrangement is not only very low loss using a "balanced line" consisting of copper pipe as the tuning section, but being fed with a balanced feed it is also quite symmetrical.  Finally, noting that it was very susceptible to detuning, an acrylic plastic shield was formed over the top of the matching network to keep it free of snow and rain.

This antenna was installed in about 2001 on a "temporary" mount consisting of ABS pipe at the mountain cabin belonging to Glen, WA7X, a site at an elevation of approximately 8500 feet (2600 meters) in central Utah, about 75 "air" miles from Salt Lake City.

The antenna seemed to work very well.  Those who had heard the 2 meter beacon when it was using a vertical J-pole and were using horizontally-polarized antennas for reception reported an increase in signal strength.  As of the time of this writing (October, 2015) this "temporary" installation is still in place and no maintenance has been done on the antenna and in the years since, the 2 meter beacon has been heard all over Utah and various parts of the western U.S. via Meteor and, possibly, Auroral and tropospheric propagation.

Shortly after the 2 meter antenna was constructed a 6 meter version was also built, but it was too large and heavy to support itself so it (literally) sat around for over a decade.

Earlier this year the 6 meter J-pole to which that beacon was connected seemed to have failed, exhibiting a high VSWR (around 5:1) and signals were down by 1-2 "S" units.  Rather than repair the J-pole it was decided that the 6 meter Cycloid should be (finally!) put into service - but first, the wobbly 1/2-inch copper pipe structure had to be stabilized.

That was the job of WA7X, the beacon owner.  Since it had held up well on the 2 meter antenna, ABS was used again to support the antenna structure - with more pieces than before.  As with the 2-meter Cycloid a 1/2 wave matching network consisting of parallel sections of copper pipe was used, fed with a 200 ohm coax balun and to keep the various parts of this mechanically stable some Delrin (tm) plastic sheets were obtained at a local distributor, cut, holes drilled into them and used to maintain the spacing.

The end result can be seen in Figure 3, below.  A diagonal piece of ABS is used to support the "vertical" elements.  The bottom section of the matching network is attached to the ABS pipe without worry of losses as it is "beyond" the active section and inert at RF, and it is to that section that a ground wire is attached.
Figure 3:  
 The installed 6 meter Cycloid dipole along with its smaller 2-meter cousin.
Click on the image for a larger version.
As with the 2 meter version, the matching network is very sensitive to changes in velocity factor or reactance and it was observed that as a piece of the Delrin (tm) that was used to maintain the spacing was moved around, the tuning was changed so three extra pieces were cut - one on the section above where the feed was attached and two more on the section above that.  When the antenna was finally completed, these pieces were slid back-and-forth to obtain a 1:1 VSWR and then secured in place with blobs of RTV (Silicone (tm)) adhesive.

Finally, a "rain shield" was installed over the top of the matching network, attached to a piece of ABS pipe via a right-angle connector attached to the top of the pipe supporting the antenna.  Getting the antenna "up there" was a challenge as it weighs quite a bit, but with a bit of rope and the grunts of three people it was hoisted to its final destination, the cables connected and...

The VSWR was terrible - around 5:1.

As it turned out, the J-Pole was fine all along, but the connection of the outer shield of the 1/4" Heliax (tm) to its RF connector had work-hardened due to vibration from wind and broken loose.  Replacing that connector with a carefully-constructed splice using a short length of RG-8X (it's only 50 MHz!) as a jumper restored a 1:1 VSWR, and a quick call to an amateur located near Salt Lake City revealed that on a horizontally-mounted Yagi the signal was at least an S-unit higher than before.

Since then, more people have had the opportunity to check out the signal from the beacon.  As expected, those that have horizontally-polarized antennas have reported noticeably stronger signals while those with vertically-polarized antennas reported slightly weaker signals as there is a predicted 3 dB loss (1/2 S-unit) due to polarization losses between the vertical and the circular.

It will be interesting to gauge by the reports during the next 6 meter season how well this antenna works, particularly since the signal that it radiates is now agnostic to the polarization of the antenna being used for reception and the vagaries of propagation  - and also to see how this antenna holds up compared to its smaller, lower wind-load 2-meter relative.

For dimensions of the 6 and 2 meter versions refer to the August, 2013 article linked above.

Wednesday, September 30, 2015

Gate current in a JFET! (The development of a very sensitive, speech-frequency optical receiver.)

Back in 2007-2008 I was working on equipment for a "new" ham band - new for me, at least - the one that is now labeled as "...above 275 GHz" in the FCC rules.  As you might expect, the most accessible portion of this infinity of electromagnetic spectrum is that containing visible light, and that is where I was directing my interest.

At this time "high power" LEDs were starting to appear on the market at reasonable prices, and by "high power" I mean LEDs that were capable of dissipating up to 5 watts, each.  What this meant was that from a single emitting die of rather small dimensions one could pump into it enough current and, with the good efficiency of the device, obtain a quantity of light that was suitable for long-distance optical communications.

To be sure, I was building on the fine pioneering work of others, including that of two Australians, Dr. Mike Groth (VK7MJ) and Chris Long (now VK3AML) who had determined that it was the noncoherent light produced by LEDs that offered the greatest probability of practical, very long-distance atmospheric optical communications.  (As a primer as to why this is the case, read the article Optical Communications Using Coherent and Noncoherent Light - link).

Optical receiver needed:

In the midst of producing the various pieces of equipment required for experiments in optical communications (e.g. optical transmitters, modulators, receivers, support equipment, etc.)  I was investigating the different circuit topologies of practical optical detectors.  My goal was not to achieve extremely high data speeds, but rather to use audio-frequency signalling (speech, tones) to start with and, perhaps, work up from there.

One of the most common such detectors is the phototransistor - but I quickly dismissed that owing to its very small photoactive area and the fact that the various pieces of literature relating to weak-signal optical detection noted that phototransistors are very noisy in comparison to practically any other device.  (CdS cells - article here - were not seriously considered because they are too slow to respond, even for audio frequencies.)

One option was the venerable Photomultiplier Tube (article here) and while this was technically possible and, in theory, the best choice, it was ruled out because of its fragility (electrically and mechanically), its large size, and the need for a high voltage supply (around 1000 volts).

While these technical difficulties are surmountable I could not overlook the fact that available literature on these devices - and advice from the Australians, who'd actually used them - pointed out that there were but a few photomultipier tube types that have good sensitivity in the "red" end of the optical spectrum where there is also good atmospheric transparency - and even fewer of these rare types, in known-usable condition, available for a reasonable price on the surplus market!
Figure 1: 
The transimpedance amplifier in its simplest form. 
This circuit converts the photodiode currents
into a proportional output voltage.

The Transimpedance Amplifier:

This left me with the photodiode (article here) and the most commonly-seen circuit using this device is the "TIA" - TransImpedance Amplifier (article here).  As can be seen from Figure 1 this is very simple, consisting of just an operational amplifier with a feedback loop and with the photodiode connected directly to the inverting input.  In this circuit the photodiode currents are converted directly to voltage (hence the name) with the gain set by the feedback resistor, the added capacitor being used to assure stability by compensating for photodiode and op-amp capacitance.

This particular circuit has the advantage that it is very predictable, the frequency response being determined by the combination of the bandwidth of the op amp and the intrinsic capacitance of the photodiode.  To a degree, one can even increase the frequency response for a given set of devices by reducing the feedback resistance, but this comes at the expense of gain and ultimate sensitivity.

In other words:  With photodiodes you can have high sensitivity, or you can have wide bandwidth - but not both!
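To put rough numbers on that trade-off, consider an idealized TIA.  The component values below are hypothetical, chosen only for illustration - they are not from any circuit in this article:

```python
import math

# Hypothetical values chosen only for illustration:
r_f = 1e6       # 1 megohm feedback resistor
c_f = 5e-12     # 5 pF compensation/stray capacitance across it
i_pd = 10e-9    # 10 nanoamps of photodiode current

# The ideal TIA converts current to voltage via the feedback resistor...
v_out = i_pd * r_f                    # 10 mV out for 10 nA in

# ...while that same feedback network sets the -3 dB bandwidth:
bw = 1 / (2 * math.pi * r_f * c_f)    # about 31.8 kHz

print(v_out, round(bw))
```

Doubling the feedback resistor doubles the gain but, with the capacitance unchanged, halves the bandwidth - which is the sensitivity-versus-bandwidth trade-off in a nutshell.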
Figure 2:
A practical, daylight-tolerant TIA optical receiver circuit.  This has good sensitivity in both darkness and light and does not suffer from "saturation" in high ambient light conditions because of a built-in "servo" that self-adjusts the photodiode's virtual ground to offset photon-induced bias currents.  Because of this "servo" action this receiver does not have DC response like the circuit of Figure 1, with the low-end frequency being limited by the values of R104 and C106.
Click on the image for a larger version.

While very simple (there are even single-chip solutions such as the "OPT101" that include the photodiode, amplifier, and even feedback resistor in a clear package) there are some very definite, practical limitations to the ultimate sensitivity of this sort of circuit if the goal is to detect extremely weak, low-frequency currents.  When you get to very low frequencies, "1/f" noise (a.k.a. "flicker noise") becomes dominant from a number of sources and there are various other types of noise sources (thermal, shot, etc.) that can be produced by the various components.

As it turns out, this circuit - with practical op amps - has very definite limitations when it comes to trying to divine the weakest signals at low-ish frequencies (audio, sub-audio).  For an article on why this is so - and some of the means of mitigation - see Robert Pease's January, 2001 Electronic Design article, "What's All This Transimpedance Amplifier Stuff, Anyhow?" - link.

Figure 3: 
The VK7MJ optical receiver using TIA and cascode techniques - used as the "reference" optical detector.
The optional "daylight" circuit provides AC coupling to prevent saturation of the circuit under high ambient
light conditions at the expense of low-light performance.
Click on the image for a larger version.
One can construct transimpedance amplifiers using discrete components that outperform most of the integrated-circuit based designs and as a reference circuit I constructed and used one devised by Dr. Groth, VK7MJ, depicted in Figure 3.  In this circuit one may see the feedback path via R3/R4 with compensating capacitor Cf.  In this particular circuit Q1, the input FET, is rather heavily biased to increase its "bulk current" (a term used in the referenced Robert Pease article) with Q2 operating as a cascode - link - (e.g. current-based) stage, followed by follower stages.  Additionally, the photodiode itself (D1) is reverse-biased, reducing its capacitance significantly and thereby improving high frequency response.  By hand-selecting the quietest JFETs one can obtain excellent performance with this circuit and, since it is discrete, there is room for adjusting values as necessary to accommodate component variations and for experimentation.

This particular circuit is quite good across the audio range from a few 10's of Hz to several kHz, but above this range it is largely the capacitance of the photodiode that quashes the high frequency response.  Even though the photodiode's capacitance - and that of stray wiring and the JFET itself - may be only in the 10's of picofarads, at hundreds of k-ohms (or megohms) even small amounts of capacitance quickly become dominant - another good reason to implement the aforementioned cascode circuit and its tendency to minimize the "Miller Effect" - link to help optimize frequency response.
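The benefit of the cascode can be illustrated numerically:  In a plain common-source stage the gate-drain capacitance is effectively multiplied by the stage's voltage gain (the "Miller Effect"), while a cascode holds the drain voltage nearly constant so that multiplication all but disappears.  A rough sketch with made-up - but plausible - numbers:

```python
def miller_input_capacitance_pf(c_gd_pf, voltage_gain):
    # Effective input capacitance contributed by the gate-drain
    # capacitance of a common-source stage: C_in = C_gd * (1 + Av)
    return c_gd_pf * (1.0 + voltage_gain)

# Hypothetical 3 pF of gate-drain capacitance:
print(miller_input_capacitance_pf(3.0, 50.0))  # plain stage with Av = 50
print(miller_input_capacitance_pf(3.0, 1.0))   # cascoded: drain swing ~1x
```

With the cascode the Miller contribution drops from over 150 pF to a few pF - though, as noted above, the photodiode's own capacitance still dominates the input node.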

The K3PGP circuit and variations:
Figure 4: 
The K3PGP Optical receiver.
Click on the image for a larger version.

Having built the above circuit as a "reference", I began testing on a "Photon Range" - a darkened room in my basement with a red LED affixed to the ceiling - where I characterized the various receiver topologies.  In this environment a small and adjustable amount of current (10's of microamps, typically) would be fed to the LED, modulated at an audio frequency, and the receiver under test would be placed on the floor below with its output connected to a computer in an adjacent room running an audio analysis program such as "Spectran" or "Spectrum Lab" to measure the signal-noise ratio at different frequencies.  Before and after each session I would measure the performance of my "standard" optical receiver - the VK7MJ circuit - and use it as a basis of comparison.

The next receiver to be tested was the K3PGP circuit, so-named after its designer (see his web site - link).  This receiver is much more sensitive than the VK7MJ receiver - at least at very low audio frequencies (<200 Hz) - and, as may be seen in Figure 4, it is devoid of a feedback mechanism and the connection between the photodiode and JFET is made directly, with no external biasing components of any kind.

While a seemingly simple circuit, there is more going on here than one might first realize:  Without any feedback or any other components between the FET and photodiode the opportunity to introduce noise from such components or reduce the signal from the photodiode in any way is minimized:  In fact, when constructing this circuit there is the strong admonition that the photodiode-gate connection to the JFET be done in mid-air (and that one clean both components with alcohol to remove residue!) as leakage paths on circuit board material can cause significant signal degradation!

Effectively, the K3PGP circuit acts as a charge integrator with the energy slowly (in relative terms) bleeding off due to the leakage of the photodiode, photoconductivity, and the gate-source leakage currents of the FET itself.  While extremely sensitive at low frequencies - specifically those below 200 Hz - above this the sensitivity and output suffer due to the rather long R/C constant associated with the high gate-photodiode leakage resistance and capacitance and, to a lesser degree, the Miller effect.  This circuit also functions only in total and near-total darkness conditions:  With more light than that, the voltage across the photodiode reaches equilibrium while turning the FET "on", effectively quashing the signal.
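The 6 dB/octave behavior of such a single-pole R/C rolloff can be sketched as follows - the 200 Hz knee being the approximate figure from the text, with each octave above it costing roughly another 6 dB:

```python
from math import log10

def single_pole_rolloff_db(f_hz, knee_hz):
    # Relative response of a single-pole low-pass, in dB:
    # flat below the knee, falling 6 dB/octave above it.
    return -10.0 * log10(1.0 + (f_hz / knee_hz) ** 2)

for f in (50, 200, 400, 800, 1600):
    print(f"{f:5d} Hz: {single_pole_rolloff_db(f, 200.0):6.1f} dB")
```

By the middle of the speech range the response is already down well over 10 dB, which is why the circuit's usefulness is confined to very low frequencies.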

Inspired by the above circuit I made the modification indirectly depicted in Figure 5, below:

Figure 5:
 The version "2.02" optical receiver, used as a test bed for various circuit configurations - see text.
For the "K3PGP" configuration the photodiode would be reversed from what is shown
in the drawing above and the anode grounded with nothing else connected at point "C".
Click on the image for a larger version.

This circuit was devised as a "test bed" and although not shown in the diagram, it was configured by connecting the cathode of the photodiode to the gate and grounding the anode and having no other photodiode-gate connections present - just as in the K3PGP receiver.

In this circuit one has a FET input and a cascode circuit - just like that of the VK7MJ circuit - to reduce the Miller effect, but this particular cascode circuit has a modification:  Q3 forms a current source, in parallel with the cascode, that supplies the bulk of the drain current for the JFET - several milliamps.  The amount of current provided by the current source - which has a high operating impedance and is largely "invisible" - is fixed (but adjustable by varying R4 to suit specific characteristics of Q1) and it is left up to the cascode to supply the remaining drain current, which varies depending on the gate voltage.  In this particular circuit, due to the "cascode action", the voltage at the drain of Q1 and emitter of Q2 varies very little while the cascode - which is allowed to bias itself at DC, but is bypassed at AC with C3 - produces the recovered modulation at the collector of Q2, greatly amplified.  From the collector of Q2, noninverting amplifier U1a amplifies the signal further and presents a low-impedance output.

In other words, it is mostly the K3PGP circuit, but with a cascode amplifier and much higher FET drain current:  By reducing Miller capacitance with the cascode the frequency response should be improved somewhat and by increasing the drain current the noise contribution of the FET itself should be reduced, as noted in the Pease article mentioned above.

In testing it was observed that this particular circuit was, in fact, several dB more sensitive than the original K3PGP circuit and that the frequency response was slightly better - but not by as much as one might first think, mostly owing to the fact that it is the photodiode capacitance that limits the response rather than the Miller effect - but every little bit helps!

I then rewired the circuit using the "Standard Config" noted in Figure 5 which, if you draw in the lines, converts it into a TIA circuit like that of the VK7MJ design with both adjustable reverse bias of the photodiode and adjustable feedback.  In this configuration the performance at very low frequencies was reduced, likely due to the noise contribution of the feedback resistor, increased leakage currents from the photodiode at reverse bias and also signal attenuation caused by the feedback submerging the lowest-level, low-frequency signals into the noise.  At "speech" frequencies it was slightly better than that of the VK7MJ receiver - probably due to the higher JFET current or, perhaps, random component variances - and the frequency response was also comparable to that of the VK7MJ circuit, the parameters varying according to the amount of applied feedback and compensation.

Improving the receiver:

My goal was a circuit that offered the sensitivity of the K3PGP circuit, but usable speech response - the latter not being available from the K3PGP circuit due to the R/C rolloff.  A quick check revealed that this was the typical 6dB/octave rolloff so I reconfigured the circuit, again, as a K3PGP-like circuit and followed it with an op-amp differentiator circuit with a breakpoint calculated to compensate for the measured "knee" frequency (e.g. that at which the 6dB/octave rolloff of the K3PGP circuit began) - the result being that I now had a fairly flat frequency response.  Not unexpectedly, while the signal-noise ratio was quite good at the very low frequencies, it decreased fairly quickly with increasing frequency as that energy was simply submerged in the circuit noise.
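The compensation works because the differentiator's +6 dB/octave rise is the mirror image of the detector's rolloff; cascading the two yields a flat response up to the point where the differentiator's boost is deliberately limited.  A sketch of the idea (the 200 Hz knee and the 6 kHz boost limit are assumptions for illustration):

```python
from math import sqrt

def detector_gain(f_hz, knee_hz):
    # Single-pole (6 dB/octave) rolloff of the charge-integrating detector
    return 1.0 / sqrt(1.0 + (f_hz / knee_hz) ** 2)

def differentiator_gain(f_hz, knee_hz, limit_hz):
    # Op-amp differentiator: +6 dB/octave above the same knee, with the
    # boost levelling off above limit_hz so HF noise isn't amplified.
    return sqrt(1.0 + (f_hz / knee_hz) ** 2) / sqrt(1.0 + (f_hz / limit_hz) ** 2)

# With matched breakpoints the cascade is essentially flat through
# the speech range:
for f in (100, 500, 1000, 2500):
    combined = detector_gain(f, 200.0) * differentiator_gain(f, 200.0, 6000.0)
    print(f"{f:5d} Hz: combined gain = {combined:5.3f}")
```

Note that flattening the amplitude response does nothing for the noise: the differentiator boosts the circuit noise along with the signal, which is why the signal-noise ratio still falls with frequency.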

In staring at the circuit, with the grounded anode of the photodiode, I wondered about reverse-biasing the photodiode to reduce the capacitance - but if I did this, how would I keep the voltage at the gate from rising without needing to add another (noise generating, signal-robbing) component to clamp it to ground?  Knowing that the gate-source junction of a JFET was much like that of a bipolar transistor in that there would be an intrinsic diode present, I knew also that the gate-source voltage would limit itself to 0.4-0.6 volts, but how would the FET behave?

Using JFET Gate current for "good":

In doing a bit of research on the GoogleWeb when I derived this circuit I could not come up with any sort of useful answer to the "gate current" question, so I simply tried it:  The photodiode was reverse-biased with the minute leakage and photoconduction currents being sunk by the gate-source junction.  As expected, the drain current increased noticeably, but the circuit worked extremely well, with both frequency response and apparent gain increasing dramatically!

Putting this "new" circuit back on the photon range I observed that although its low frequency (<200 Hz) sensitivity was somewhat worse than that of the K3PGP circuit (see comment below), the higher speech-range frequencies (300-2500 Hz) were, on average, 10-12dB better than the VK7MJ circuit and approximately 20 dB better than the best, low-noise op-amp based TIA circuit that I'd built to date!

In analyzing the circuit, there are several things happening:
  • Reverse bias of the photodiode:  This reduces the capacitance - typically by a factor of 3-6, depending on the specific device and voltage applied.
  • The photodiode will produce current in the presence of light.
  • Being reverse-biased, the photodiode will also operate in a photo-conductive mode, passing current from the bias supply in response to light.
  • With the gate-source junction conducting, the reverse bias across the photodiode is maintained since the gate-source voltage will never exceed 0.4-0.6 volts.
  • As described above, the amplifier is connected in "cascode" configuration to minimize Miller effects.
  • There are NO other components or signal paths connected to the photodiode-gate junction that can contribute noise or attenuate the signals.
  • In parallel with the cascode circuit is a high-impedance current source which increases the JFET's bulk current, further reducing its noise.
 About the gate-source conductivity of the JFET, two things surprised me:
  • The "diode action" of the gate-source clamping seems not to be a significant contributor of noise - at least at "dark" currents of the photodiode.
  • There is little or no documentation about using a JFET this way, anywhere else!

It is likely that the main reason that this doesn't perform quite as well as the K3PGP circuit at low (<200 Hz) frequencies is the intrinsic leakage current noise endemic to the reverse biasing of the photodiode, particularly in a "1/F" manner:  At higher frequencies, it performed far better.

In "photon range" testing it was difficult to tell at which frequencies the K3PGP receiver performed better.  My K3PGP exemplar receiver was certainly better at, say, 20 Hz, but even at 60 or 100 Hz it was a difficult call to make.  At such frequencies and under such conditions careful selection of the "quietest" photodiode and FET can make a significant difference and with most of these circuits, reducing their temperature - while somehow managing to avoid condensation - can help even more!

Plotting Gate current versus Drain and Gate voltages:

Later, I constructed a test fixture to analyze the gate-source voltage and gate-source current response of a 2N5457 JFET and plot this against the drain current - see Figure 6 below.
Figure 6:
 Gate-source voltage and Gate current plotted against drain current for a typical,
real-life JFET - not a simulation!  Note the logarithmic scale of the gate current and
also that the drain current continues to increase linearly with gate-source voltage,
even after the gate-source junction is conducting.
Click on the image for a larger version.
As can be seen, as the gate-source voltage increases, the drain current increases linearly - even after the gate-source diode junction starts to conduct:  In fact, there does not appear to be an inflection in the drain current curve when this happens!  Following the other line representing gate current we can see that once our gate-source "diode" starts to conduct, the gate current follows the classic exponential curve that one associates with diodes - which should not come as a surprise.
Equation 1:
Id = Idss * (1 - Vgs/Vp)^2
The relationship between drain current and
gate-source voltage, where:
Vgs = Gate-source voltage
Vp = FET pinch-off voltage
Idss = Zero gate voltage drain current
According to typical JFET models, in the saturation region the drain current is generally independent of the drain voltage - note that Vds does not appear in Equation 1 at all - and the graphs in Figure 6 indicate that this seems to be true even when the gate-source junction is conducting.
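Equation 1 is the familiar square-law JFET model and, as the measurements show, it keeps holding even with Vgs slightly positive.  A quick numerical sketch (the Vp and Idss values are hypothetical figures typical of a small JFET like the 2N5457, not measured ones):

```python
def jfet_drain_ma(vgs_v, vp_v=-2.0, idss_ma=5.0):
    # Square-law model, saturation region: Id = Idss * (1 - Vgs/Vp)^2
    # Vp is negative for an N-channel JFET; Vds does not appear at all.
    return idss_ma * (1.0 - vgs_v / vp_v) ** 2

# Drain current rises smoothly - with no inflection - as Vgs passes
# through zero and the gate-source "diode" begins to conduct:
for vgs in (-1.0, -0.5, 0.0, 0.3, 0.5):
    print(f"Vgs = {vgs:+.1f} V: Id = {jfet_drain_ma(vgs):5.2f} mA")
```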

So, now we know what is happening.  At first glance one might presume that with this diode in conduction its nonlinear response would make the circuit unsuitable for general audio recovery - but this is not so:  At very low light levels the detector has less than 1% harmonic distortion.

Figure 7:
Test circuit used to derive the curves in Figure 6.
For measuring the voltage at "Vgate Monitor" it will be
required that the negative lead of the voltmeter be referenced
to a regulated, negative (with respect to ground)
voltage source.
In case you are interested, Figure 7 shows the circuit that was used to derive the curves in Figure 6, above.  10.0 volts was used for V+ and the drop across source-follower Q2 was easily characterized so that the drop across R1 - and thus the gate current in Q1 - could be determined.  The drain current was determined by measuring the voltage across R2.  Different values of R1 were used to achieve the measurement range depicted in Figure 6 which accounts for the very slight bend in the "Gate Current" curve.

Putting this into practice:

The following circuit was developed for speech-bandwidth optical communications use:

As can be seen, this looks very similar to the circuit of Figure 4 with the exceptions that the photodiode connected to the JFET is reverse-biased and that there is an added circuit, U1b, that forms a bandwidth-limited differentiator - the component values chosen to approximately correlate with the low-frequency "knee" of the BPW34 photodiode and also to cease its frequency boost above 5-8 kHz.  (The "Flat" audio output, uncompensated by the differentiator for the 6dB/octave rolloff, is provided to allow both very low frequency - below 200 Hz - and high frequency - above 5 kHz - signals to be applied to a computer for analysis.)

The circuit in Figure 5 - and minor variations of it - have been replicated many times over the years using different components.  The important considerations are that both Q2 and Q3 be low-noise, high-beta transistors such as the MPSA18 (or 2N5089) and that the JFET used for Q1 be capable of rather high drain current.  In the original design, the 2N5457 was specified as this device is better-characterized than many other, similar FETs and is capable of quite low-noise operation:  The more common MPF102, with its extremely wide variation of parameters, might be suitable if an appropriate device is "cherry picked" from amongst several based both on high zero gate-source voltage drain current and tested "noisiness".  A more modern JFET is the BF862 - available in surface-mount only - that is even better for this application than the 2N5457 and capable of much higher drain ("bulk") current, to the point where utilizing its full potential might compromise battery life!
Figure 8:  
Version "3" of the optical receiver.  This receiver must always be operated on its own, completely isolated power supply to avoid feedback.  V+ is 8-15 volts and is typically a 9-volt battery.  D4 and TH1 prevent damage should the applied polarity of the power source be accidentally reversed.  After Q1's drain current has been measured and adjusted, jumper "J1" is closed.
A version of this circuit by the author of this page also appeared in an article published in the SPIE proceedings (#6878) which was presented at the 2008 "Photonics West" conference by another one of the paper's co-authors, Chris Long.
Click on the image for a larger version.

In a circuit such as Figure 8, above, the drain-source voltage will be much lower than one might initially expect - on the order of 0.2-1.0 volts for a JFET such as a 2N5457 and between 0.1 and 0.5 volts for the BF862 - but this is normal operation.  While setting the Q3 current source via R5 (120 ohms in Figure 8) is suitable for most 2N5457 devices, the current may need to be reduced (e.g. R5 increased in value to 180 or 220 ohms) for some "lower 0 Vgs current" devices such as the MPF102.  In general, the higher the drain current, the lower the noise contribution from the FET - but if you exceed the "magic" value and attempt to force too much current, the circuit will suddenly stop working:  Overall it is better to have a bit less drain current than optimal - and a little bit more noise - than to have too much drain current!  (Don't forget that the current source and the JFET itself will also change slightly with temperature.)
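To put rough numbers on the R5 adjustment: if Q3 is configured as a Vbe-referenced current source (an assumption made here for illustration - consult the schematic for the actual topology), the emitter resistor sets the current at roughly 0.65 volts divided by its value, which lands in the "several milliamps" region described earlier:

```python
def current_source_ma(r_emitter_ohms, v_ref_v=0.65):
    # For a Vbe-referenced current source the emitter resistor drops
    # approximately one diode voltage, so I = Vref / Re.  (The exact
    # topology of Q3 is an assumption - see the schematic.)
    return 1000.0 * v_ref_v / r_emitter_ohms

for r5 in (120, 180, 220):
    print(f"R5 = {r5} ohms -> roughly {current_source_ma(r5):.1f} mA")
```

Raising R5 from 120 to 220 ohms nearly halves the current, matching the advice above for "lower 0 Vgs current" devices.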

Interestingly, the circuit depicted in Figure 8 also works in daylight, albeit with some caveats.  When very high levels of light are present, photoconductivity will shunt the reverse bias to the gate-source junction; the frequency rolloff "knee" associated with the photodiode capacitance will shift upwards due to this photoconductive shunting, causing the audio to become "tinny"; and the audio will become somewhat distorted owing to the different light-to-audio transfer curve that occurs under such conditions - in which case the frequency response of the "flat" output is more suitable than otherwise.  In such situations one does not really need the high sensitivity of this type of receiver, anyway, and a typical TIA circuit with AC coupling such as that depicted in Figure 2 or Figure 3 could be used - or one could apply optical attenuation in front of the detector to reduce the light level.

Practical use:
Figure 9:
An as-built "Version 3" optical receiver, constructed using
prototyping techniques and enclosed in a shielded, light-tight
enclosure using pieces of printed circuit board material.  For this
unit "feedthrough" capacitors are used for power and audio
connections to prevent the incursion of RF energy on
the connecting leads.
Click on the image for a larger version.

Entire web pages could be written (and have been - see the Modulated Light web site - link) about through-the-air, free-space optical communications over long distances (well over 100 miles, 160km) using both LEDs and low-power lasers, but even the most sensitive receiver - no matter the underlying technology - requires supporting optics (lenses!) in order to function properly:  It is through such lenses that 10's of dB of noiseless signal gain may be achieved, not to mention directionality and the implied rejection of off-axis light sources.
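The "noiseless gain" from optics is simply the ratio of light gathered by the lens aperture to that falling on the bare detector.  As a back-of-the-envelope sketch (the 100 mm lens is an arbitrary example and the ~7 mm^2 active area is a typical datasheet figure for a BPW34-class photodiode):

```python
from math import pi, log10

def lens_gain_db(lens_diameter_mm, detector_area_mm2):
    # Optical power gain from a lens: ratio of the lens aperture area
    # to the detector's bare active area, expressed in dB.
    lens_area_mm2 = pi * (lens_diameter_mm / 2.0) ** 2
    return 10.0 * log10(lens_area_mm2 / detector_area_mm2)

print(f"{lens_gain_db(100.0, 7.0):.1f} dB")  # ~30 dB from a modest lens
```

Even a modest lens thus provides tens of dB of gain that no amplifier can match, since the extra light arrives with no added circuit noise.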

The circuits described on this page are likely to be suitable only for speech frequencies and low-rate data - due, in part, to the medium involved (the atmosphere) and the method of transmission - but at the extreme distances that have been achieved with the above equipment (>173 miles, 278km) the signals are weak enough that only low-rate signalling techniques would likely be feasible under typical conditions at safe, practical optical power levels.

Additional web pages on related topics:
The above web pages also contain links to other, related pages on similar subjects.