Wednesday, June 29, 2022

The use case for a blinking infrared LED

Many years ago at a company where I worked, we had two sets of computer systems:  The ones that we used every day for engineering purposes, and the "corporate" computers that were used for things like "official" email and interfacing with accounting.

Figure 1:
The IR (clear) LED and the red blinking LED.
The red LED was uncovered for this picture.  The IR
LED is positioned to stick up, inside the mouse's lens
assembly when it's placed atop the pad.
One day, the edict came from on-high that the corporate computers would log themselves off the network after a ridiculously short amount of time (it may have been as little as 5 minutes) if no mouse or keyboard activity was detected.

This was particularly bothersome to the local accountant person who would have to turn away from the corporate computer for a few minutes at a time to do something else (paperwork, answer a phone, etc.) only to find that it had logged off and out of whatever application was running.  To make matters worse, it took several minutes to log back in as the authentication was painfully slow:  This was way back in the late 1990s/early 2000s, you know!


It should go without saying that absurd and draconian "security" measures like those described above are usually self-defeating:  They add unnecessary frustration for those using the system, prompting "creative" means of circumvention that can completely defeat the intent of the measures.  It had already been the practice of the person using this computer to log off (and lock it!) when stepping out of the office, but we heard of other "interesting" ways that others within the company came up with to circumvent this.
Fortunately, it was only a few months later that the computer security folks came up with a much more sensible plan and the device described here was no longer "needed".
After several weeks of being frustrated by this - and being denied the request to lengthen the auto-logoff to something more reasonable like 10-30 minutes - I was asked if there was something that I could do.  The first thing that I thought of was some motorized doohickey that would move the mouse just a little bit to make it "look" like the computer was in use - but then something else occurred to me:  Interfere with the optical mouse in some way.

Modern optical mouses (mice?) literally take a picture of the desktop - many times a second - and divine movement by tracking the very small features beneath them.  Fortunately, most surfaces have small-scale features that make this possible - but if you've ever wondered why an optical mouse doesn't work well on a piece of clean glass, now you know!

Figure 2:
The optical mouse atop the IR LED.  The IR LED fits
up inside the lens cavity.  The entire circuit was inserted
and built into a piece of scrap mouse pad.
So, how would one make the computer think the mouse is moving - without actually moving it?  It occurred to me that a flashing red LED might accomplish this.  A quick test, sticking a blinking LED up into the lens assembly, showed that as it blinked, the cursor would move one "Mickey" (the unit of mouse movement - look it up!) up and to the right on this particular mouse, satisfying the computer that the mouse was actually being moved.  It didn't seem to matter that the mouse cursor would inevitably end up in the top-right corner of the screen - it stopped the time-out nonetheless.

This worked well - but what if a blinking LED bothers you?  The answer is a blinking infrared LED.

Where does one get a blinking Infrared LED?

Of course, no one makes such a thing (why would they?) - but the work-around is simple:  Place an ordinary infrared LED in series with a visible blinking one, and keep the latter LED out of sight.

Infrared LEDs - which may be harvested from defunct/surplus infrared remotes - come in two flavors:  Those that operate at 850 nanometers, and those that operate at 940 nanometers.  The 850 nanometer versions are just visible to the human eye (in a dark room) and work best in this situation as they are well within the response curve of the sensor in the mouse:  In fact, some optical mouses (mice?) use Infrared LEDs to be more "stealthy".  I didn't try a 940 nanometer LED, but I know from experience that if something operates on a visible (red) wavelength, it will likely work just fine with an 850 nanometer LED.

The circuit to do this was very simple, and is as follows:

Figure 3:
Diagram of the circuit - pretty simple, actually!
As noted, the voltage can be anywhere between 9 and 15 volts DC - 12 volts nominal.

The power supply used was a random "wall wart":  The one that I'd grabbed was marked 9 volts at 100 milliamps and it put out about 13 volts DC under no load, but any DC voltage between 9 and 15 volts ought to be fine:  5 volts from a USB charger is simply too low!

The way this works is that with the two LEDs in series, the current in the two LEDs MUST be identical (Kirchhoff's law and all of that...), which means that when the blinking LED is on, more current also flows through the other LED, making it brighter.  When the blinking LED is off, the other LED doesn't go completely dark, but it gets noticeably dimmer - which is enough to make the mouse detect "movement".  The 470 ohm resistor limits the current to a safe value and the 100 uF capacitor provides a bit of bypassing that helps assure that the blinking LED will function properly:  It may work without it - but not all blinking LEDs do.  Because they are in series, it doesn't matter in which order the LEDs are placed - just that they are in series and connected correctly in terms of polarity.
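As a quick sanity check of the resistor value, Ohm's law gives the series current directly.  The forward-voltage figures below are typical assumed values, not measurements from the actual circuit:

```javascript
// Rough current through the series LED string.
// The forward drops here are assumed "typical" values, not measured ones.
const vSupply = 12.0;   // nominal supply voltage
const vfBlink = 1.8;    // assumed drop across the red blinking LED
const vfIr    = 1.5;    // assumed drop across an 850 nanometer infrared LED
const r       = 470;    // series current-limiting resistor, ohms

// In a series circuit the same current flows everywhere:
const amps = (vSupply - vfBlink - vfIr) / r;
console.log((amps * 1000).toFixed(1) + " mA");  // → "18.5 mA"
```

At around 18 mA, the current stays comfortably within the ratings of garden-variety LEDs.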

If you are unsure that the infrared LED is blinking, check it with your cell-phone camera as it will respond adequately to Infrared, particularly up-close, and with 850 nanometer LEDs.

This trick also works with other LEDs:  If you have a cheap, red blinking LED but not a blinking one of the color that you might want (say, a white or blue LED), that LED could be substituted for the "IR LED".  Again, the "other" (non-blinking) LED may not extinguish completely during the "off" portion, and if this bothers you, a resistor could be placed across it to "bridge" some of the current around it and drop the voltage below its illumination threshold:  The value would have to be experimentally determined.

There you go:  A use-case for a blinking infrared LED!

* * *

This page stolen from


Saturday, May 28, 2022

Fixing a TS-570G (The tuner couldn't find a match, timing out...)

The TS-570D's front panel

A couple of months ago I happened to be at a swap meet in Northern Utah, talking to a gentleman - with whom I had a passing acquaintance - as he was unloading his vehicles.  One of the things that he placed on his table was a Kenwood TS-570D, in its original box, with a price tag on it that seemed to be too good to be true.

Asking about it, he said that it worked fine, but that the "tuner wouldn't stop", so it had to be used with the antenna tuner bypassed.  Visually inspecting it, it looked to be in "good nick" (a 4 out of 5) so I shut up and gave him the money.

After digging out from underneath a few other projects, I finally took a look at it and, sure enough, pressing the AT TUNE button started a bout of furious clicking that didn't stop for about 30 seconds, ending with the radio beeping an error.  I couldn't help but notice, however, that there was no SWR or power output indication while the tuner was doing its thing - but if I bypassed the tuner, both indications were present.

Going into the menu (#11 - "Antenna tuner operation while receiving") I set that to "on" and noticed that the receiver went mostly dead - a sure sign that something was amiss with the signal path through the tuner.  Popping the covers, I whacked on the relays with the handle of a screwdriver while the radio was connected to an antenna and could hear signals come and go.  This attempt at "percussive repair" quickly narrowed the culprit to relay K1, the relay that switches the antenna tuner in and out of the signal path.

A few weeks later, after having ordered and received a new relay, I cleared enough space on the workbench to accommodate the radio and commenced the repair.

The repair:

The antenna tuner is on the same large circuit board as the finals and low-pass filter, which meant that not only were there a zillion screws to take out, but I also had to remove the white thermal heat-sink compound from several devices, un-clip the back panel connectors and un-plug a few signal cables.  Using my trusty Hakko DFR-300 desoldering gun, I was able to cleanly remove both K1 and - because I had two relays, and they were identical - K3 as well, soldering in the replacements.

When I'd pulled the board, I also noticed that component "D10" - a glass discharge tube across Antenna connector #2 - had some internal discoloration, possibly indicating that it had seen some sort of stress, so I rummaged about, found two 350 volt Bourns gas discharge tubes and replaced both "D10" and "D11" - the latter being the unit on the Antenna #1 connector.  Unlike the originals - which are glass - these are metal and ceramic, requiring that I put a piece of polyimide (a.k.a. Kapton) tape on the board to insulate them from the traces underneath.  The leads of these new devices were also much heavier and would not fit through the board (drilling the holes larger would remove the through-plating!) so I soldered short lengths of #24 tinned wire through the holes and used these to attach the straight leads of the new discharge tubes.

After cleaning the board of flux with denatured alcohol and an old toothbrush, I put an appropriately sparse amount of heat sink compound on the required devices, loosely started all of the screws and with everything fitting, I snugged them all down, finishing with the RF output transistors - and then re-checking everything again to make sure that I didn't miss anything.

After plugging the connecting cables back in, I noted that the receiver now worked through the tuner.  Pressing the AT Tune button, I was greeted with lots of clicking and varying VSWR - but still, it continued and eventually errored out.

Figuring that the radio's computer may have been messed up, I did a complete CPU reset - but to no avail.  Because the SWR and power indication were working correctly, I knew that this wasn't likely to be a component failure like the reverse power detection circuit - it had to be something amiss with the configuration - so I referred to the service manual's section about the "Service Adjustment Mode".

Going through the Service Adjustment Mode Menu:

Like most modern radios, this one has a "Service Menu" where electronic calibration and adjustments are performed.  To get to it, I inserted a wire between pins 8 and 9 of the ACC2 jack and powered up the radio while holding the N.R. and LSB/USB keys - and having done this, a new menu appeared.  On a hunch, I quickly moved to menu #18 - the adjustment for the 100 watt power level.

What is supposed to happen is that if you key the radio, it will transmit a 100 watt carrier on 14.2 MHz, but instead, I got about 60 watts, and checking the related settings for 50, 25, 10 and 5 watts, I got very low power levels for each of those as well.  To rule out an amplifier failure, I went back to the 100 watt set-up and pressed the DOWN button, eventually getting over 135 watts of output power, indicating that there was nothing wrong with the finals, but rather that the entire "soft calibration" procedure would have to be followed.

Starting at the beginning of the procedure, which begins with receiver calibration, I found everything to be "wrong" in the software calibration, indicating that either it was improperly done, or the original calibration had somehow been lost and replaced with default values.  I checked a few of the hardware adjustments, but found them to be spot on - the exception being the main reference oscillator, which was about 20 Hz off at 10 MHz.  I dialed this back in, chalking it up to aging of the crystal.

During the procedure, I was reminded of a few peculiarities - and noticed some likely errors.  Here they are, in no particular order:

  • Many of these menu items are partially self-calibrating, which is to say that you establish the condition called out in the procedure and push the UP or DOWN button.  For example, on menu item #16, where the Squelch knob is calibrated, one merely sets it to the center of rotation - the voltage is shown on the screen in hexadecimal - and presses the button, and the displayed value is stored temporarily in memory.
  • I'm a bit OCD when it comes to S-meter calibration, preferring my S-units to be 6 dB apart, S-9 to be represented by a -73 dBm signal as noted in the IARU recommendations, and for "20 over" to actually be "20 over S-9", or around -53 dBm.  Neither the procedure in the manual nor the radio itself permits this, exactly.
    • Setting the "S1" signal level (menu item #3) would require a signal level of -121 dBm, but the receiver's AGC doesn't track a signal below around -113 dBm.  Instead, I noted the no-signal level on the display when menu #3 was selected, set the signal level to an amplitude that just caused the hexadecimal number to increase and then pushed the button, setting "S1" to be equivalent to the lowest-possible signal level to which the AGC reacts.
    • To set the "S9" signal level (menu item #4) I set the signal generator to -73dBm and pressed the button.
    • To set the "Full scale" level (menu item #5) I set the signal generator to -23 dBm and pressed the button.  If you have followed the math, you'll note that "Full Scale" - which is represented as "60 over" - should really be -13 dBm, but I observed that the AGC seemed to compress a bit at this signal level and the "20 over" and "40 over" readings came out wrong:  Using a level of -23 dBm got the desired results.
    • NOTE:  The service menu forces the pre-amp to be enabled when doing the S-meter calibration (e.g. you can't disable it when in the service menu) so the S-meter calibration only holds when the pre-amp is turned on.
  • For setting menu item #1, "ALC Voltage", I was stumped for a bit.  It mentions measuring "TP1" - but this is not the "TP1" on the transmitter board, but rather the one on the TX/RX unit (the board underneath the radio).
  • I noticed that if step #7 was followed to set the 100 watt power level, it was difficult to properly set menu items 23-28 (the "TGC" parameters).  These adjustments are also made at 100 watts - but if you have already set menu item #18 to 100 watts, you can't be sure that you've done them properly.
    • The work-around is that, prior to step #6 in the procedure, you go to menu item #18 and adjust for higher than 100 watts - say, 125 watts.  If this is done, you can adjust menu items 23-28 (noting that menu #27 is adjusted out-of-order in procedure step #6) to 100 watts.
    • Once procedure steps 6, 7 and 8 are done (but skipping the adjustment for menu #18 in step 7) you can go back to menu #18 and adjust for 100 watts.
  • For procedure steps 16 and 17, I didn't have a 150 ohm dummy load, but I did have several 50 ohm loads, so I put three of them in parallel - which yields 16.67 ohms, which is also a 3:1 VSWR - and completed these steps.  It's worth noting that Yaesu uses 16.67 ohms for the equivalent step in its alignment procedures.  To set the "40 watts" called out in step 17 I used the front-panel power meter, which would have already been calibrated in the procedure.
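As an aside, for a purely resistive load the VSWR in a 50 ohm system is simply the ratio between the load and 50 ohms, taken whichever way gives a number greater than one - which is why paralleled 50 ohm loads make such convenient test loads.  A quick sketch (the function name is mine):

```javascript
// VSWR of a purely resistive load in a 50 ohm system:
// the ratio of the two resistances, expressed as >= 1
function vswr(rLoad, z0 = 50) {
  return rLoad > z0 ? rLoad / z0 : z0 / rLoad;
}

console.log(vswr(50 / 3).toFixed(2));  // three 50 ohm loads in parallel (16.67 ohms) → "3.00"
console.log(vswr(150).toFixed(2));     // a single 150 ohm load → "3.00"
```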

The result:

As mentioned, the "hardware" calibration seemed to be fine and only the "soft" calibration was off - and after following this procedure, the tuner worked exactly as it should.  What I suspect was occurring was a combination of the output power being too low to calculate an SWR (e.g. setting the radio to "5 watts" yielded less than 2) and the SWR meter calibration itself being incorrect - a combination of factors that prevented the tuner from being able to find a match.

Since the repair, the TS-570 has been used several times per week and it is working just as it should!



Saturday, April 30, 2022

Implementing the (functional equivalent of a) Hilbert Transform with minimal overhead

I recently had need to take existing audio and derive a quadrature pair of audio channels from this single source (e.g. the two channels being 90 degrees from each other) in order to do some in-band frequency conversion (frequency shifting).  The "normal" way to do this is to apply a Hilbert transformation using an FIR algorithm - but I needed to keep resources to an absolute minimum, so throwing a 50-80 tap FIR at it wasn't going to be my first choice.  

Another way to do this is to apply cascaded "allpass" filters.  In the analog domain, such filters are used not to provide any sort of band-filtering effect, but to cause a phase change without affecting the amplitude - and by carefully selecting several different filters and cascading them, a consistent phase difference between two signal paths can be maintained.  This is often done in "phasing" type radios, where it is accomplished with 3 or 4 cascaded op amp sections (often biquads) - with another, similar branch of op amps providing the other channel.  By careful selection of values, a reasonable 90 degree phase shift between the two audio channels can be obtained over the typical 300-3000 Hz "communications" bandwidth such that 40+ dB of opposite-sideband attenuation is obtainable.


One tool that allows this to be done in hardware using op amps is Tonne Software's "QuadNet" program - an interactive tool that allows the input and analysis of parameters to derive component values.

I wished to do this in software, so a bit of searching led me to an older blog entry by Olli Niemitalo of Finland, which, in turn, references several other sources.

This very same technique is also used in the "csound" library - a collection of tools that allow manipulation of sound in various ways.

My intent was for this to be done in Javascript, where I was processing audio in real time (hence the need for it to be lightweight) - and this fit the bill.  Olli's blog entry provided suitable information to get this "Hilbert" transformation working.  Note the quotes around "Hilbert":  It performs the function - providing a quadrature signal - but not via the method of a "real" Hilbert transform.

The beauty of this code is that only a single multiplication is required for each channel's filter - a total of eight multiplications in all for each iteration of the two channels - each with four sections - something that is highly beneficial when it comes to keeping CPU and memory utilization down!

As noted above, this code was implemented in Javascript and the working version is represented below:  It would be trivial to convert this to another language - particularly C:

* * *

Here comes the code!

First, here are the coefficients used in the allpass filters themselves - the "I" and the "Q" channels being named arbitrarily:

  // Biquad coefficients for "Hilbert" - "I" channel
  var ci1=0.47940086558884;  //0.6923878^2
  var ci2=0.87621849353931; //0.9360654322959^2
  var ci3=0.97659758950819; //0.9882295226860^2
  var ci4=0.99749925593555; //0.9987488452737^2
  // Biquad coefficients for "Hilbert" - "Q" channel
  var cq1=0.16175849836770; //0.4021921162426^2
  var cq2=0.73302893234149; //0.8561710882420^2
  var cq3=0.94534970032911;  //0.9722909545651^2
  var cq4=0.99059915668453;  //0.9952884791278^2

Olli's page gives the un-squared values as it is a demonstration of derivation - a fact implied by the comments in the code snippet above.

In order to achieve the desired accuracy over the half-band (e.g. half of the sampling rate) a total of FOUR all-pass sections are required, so several arrays are needed to hold the working values as defined here:

  var tiq1=[0,0,0];  // array for input for Q channel, filter 1
  var toq1=[0,0,0];  // array for output for Q channel, filter 1
  var tii1=[0,0,0];  // array for input for I channel, filter 1
  var toi1=[0,0,0];  // array for output for I channel, filter 1
  var tiq2=[0,0,0];  // array for input for Q channel, filter 2
  var toq2=[0,0,0];  // array for output for Q channel, filter 2
  var tii2=[0,0,0];  // array for input for I channel, filter 2
  var toi2=[0,0,0];  // array for output for I channel, filter 2
  var tiq3=[0,0,0];  // array for input for Q channel, filter 3
  var toq3=[0,0,0];  // array for output for Q channel, filter 3
  var tii3=[0,0,0];  // array for input for I channel, filter 3
  var toi3=[0,0,0];  // array for output for I channel, filter 3
  var tiq4=[0,0,0];  // array for input for Q channel, filter 4
  var toq4=[0,0,0];  // array for output for Q channel, filter 4
  var tii4=[0,0,0];  // array for input for I channel, filter 4
  var toi4=[0,0,0];  // array for output for I channel, filter 4


The general form of the filter as described in Olli's page is as follows:

 out(t) = coeff*(in(t) + out(t-2)) - in(t-2)

In this case, our single multiplication is the coefficient multiplied by the sum of the current input sample and the output from two samples previous; from this product we then subtract the input from two samples previous.

The variables "tiq"/"toq" and "tii"/"toi" refer to the input and output values of the Q and I channels, respectively.  As you might guess, these arrays must be static as they must contain the results of previous iterations.

The algorithm itself is as follows, with a few notes embedded on each section:

  tp0++;         // advance the circular-buffer index:  0, 1, 2, 0, ...
  if(tp0>2) tp0=0;
  tp2=(tp0+1)%3; // index of the entries from two samples previous

// The code above wraps the indices so that the three-entry working arrays are accessed in the correct (circular) order.  There are any number of ways that this could be done, so knock yourself out!

// The audio sample to be "quadrature-ized" is found in "tempa" - which should be a floating point number in the implementation below.  Perhaps unnecessarily, the output values of each stage are passed in variable "di" and "dq" - but this was convenient for initial testing.

  // Biquad section 1
  tii1[tp0]=tempa;                            // new input sample, I channel
  di=ci1*(tii1[tp0] + toi1[tp2]) - tii1[tp2];
  toi1[tp0]=di;                               // save the output for later iterations

  tiq1[tp0]=tempa;                            // new input sample, Q channel
  dq=cq1*(tiq1[tp0] + toq1[tp2]) - tiq1[tp2];
  toq1[tp0]=dq;

  // Biquad section 2 - fed from the output of section 1
  tii2[tp0]=di;
  di=ci2*(tii2[tp0] + toi2[tp2]) - tii2[tp2];
  toi2[tp0]=di;

  tiq2[tp0]=dq;
  dq=cq2*(tiq2[tp0] + toq2[tp2]) - tiq2[tp2];
  toq2[tp0]=dq;

  // Biquad section 3
  tii3[tp0]=di;
  di=ci3*(tii3[tp0] + toi3[tp2]) - tii3[tp2];
  toi3[tp0]=di;

  tiq3[tp0]=dq;
  dq=cq3*(tiq3[tp0] + toq3[tp2]) - tiq3[tp2];
  toq3[tp0]=dq;

  // Biquad section 4
  tii4[tp0]=di;
  di=ci4*(tii4[tp0] + toi4[tp2]) - tii4[tp2];
  toi4[tp0]=di;

  tiq4[tp0]=dq;
  dq=cq4*(tiq4[tp0] + toq4[tp2]) - tiq4[tp2];
  toq4[tp0]=dq;

// Here, at the end, our quadrature values may be found in "di" and "dq"
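For testing, the same eight-filter structure can be packaged as a compact factory function - a sketch that is functionally equivalent to the unrolled code above (the names here are mine, not from the original implementation):

```javascript
// Each channel is four cascaded allpass sections of the form
//   out(t) = coeff*(in(t) + out(t-2)) - in(t-2)
function makeAllpassChain(coeffs) {
  // Per-section history: the previous two inputs and outputs
  const state = coeffs.map(() => ({ x1: 0, x2: 0, y1: 0, y2: 0 }));
  return function (sample) {
    let v = sample;
    for (let i = 0; i < coeffs.length; i++) {
      const s = state[i];
      const y = coeffs[i] * (v + s.y2) - s.x2;  // the single multiplication
      s.x2 = s.x1; s.x1 = v;  // age the input history
      s.y2 = s.y1; s.y1 = y;  // age the output history
      v = y;                  // feed the next section
    }
    return v;
  };
}

// The same squared coefficients as above:
const iChan = makeAllpassChain([0.47940086558884, 0.87621849353931,
                                0.97659758950819, 0.99749925593555]);
const qChan = makeAllpassChain([0.16175849836770, 0.73302893234149,
                                0.94534970032911, 0.99059915668453]);
// Call iChan(sample) and qChan(sample) once per incoming sample.
```

Because each section is allpass, a handy sanity check is that a steady sine wave emerges - after the initial transient dies down - at exactly the amplitude it went in.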

* * *

Doing a frequency conversion:

The entire point of this exercise was to produce quadrature audio so that it could be linearly shifted up or down while suppressing the unwanted image - this being done using the "phasing method" (also called the "Hartley modulator") in which the quadrature audio is mixed with a quadrature local oscillator and, through addition or subtraction, a single sideband of the resulting mix is preserved.

An example of how this may be done is as follows:

  i_out = i_in * sine + q_in * cosine;
  q_out = q_in * sine - i_in * cosine;

In the above, we have "i_in" and "q_in" - the I and Q audio inputs, which could be our "di" and "dq" samples from our "Hilbert" transformation - along with an oscillator having both sine and cosine outputs (e.g. 90 degrees apart).

These sine and cosine values would typically be produced using an NCO - a numerically-controlled oscillator - running at the sample rate of the audio system.  In this case, I used a 1k (1024) entry sine wave table, with the cosine being generated by adding 256 (exactly 1/4th of the table size) to its index pointer.
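A minimal sketch of such a table-lookup NCO - the table size matches the description above, while the phase handling (a simple integer index) is an illustrative simplification:

```javascript
// 1024-entry sine table; the cosine is the same table read 256 entries ahead
const TABLE_SIZE = 1024;
const sineTable = new Float32Array(TABLE_SIZE);
for (let i = 0; i < TABLE_SIZE; i++) {
  sineTable[i] = Math.sin(2 * Math.PI * i / TABLE_SIZE);
}

let phase = 0;  // current table index

// Returns [sine, cosine] and advances the phase; "increment" sets the
// frequency:  freq = increment * sampleRate / TABLE_SIZE
function ncoStep(increment) {
  const s = sineTable[phase];
  const c = sineTable[(phase + TABLE_SIZE / 4) % TABLE_SIZE];  // +256 = 90 degrees
  phase = (phase + increment) % TABLE_SIZE;
  return [s, c];
}
```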

If I needed just one audio output from my frequency shifting efforts, I could use either "i_out" or "q_out" so one need not do both of the operations, above - but if one wanted to preserve the quadrature audio after the frequency shift, the code snippet shows how it could be done.
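With ideal quadrature inputs, the image cancellation falls straight out of the angle-sum identities:  Feeding i_in = cos(a) and q_in = sin(a) into the equations above yields sin(a+b) - a single tone at the sum frequency.  A quick sketch using the same variable names (the specific frequencies are arbitrary):

```javascript
// The phasing-method mixer from the snippet above, as a function
function shift(i_in, q_in, sine, cosine) {
  const i_out = i_in * sine + q_in * cosine;  // cos(a)sin(b) + sin(a)cos(b) = sin(a+b)
  const q_out = q_in * sine - i_in * cosine;  // sin(a)sin(b) - cos(a)cos(b) = -cos(a+b)
  return [i_out, q_out];
}

const wAudio = 2 * Math.PI * 0.05;  // arbitrary audio frequency, cycles/sample
const wLo    = 2 * Math.PI * 0.02;  // arbitrary shift (LO) frequency
const [i_out] = shift(Math.cos(wAudio * 3), Math.sin(wAudio * 3),
                      Math.sin(wLo * 3), Math.cos(wLo * 3));
// i_out equals Math.sin((wAudio + wLo) * 3):  only the sum frequency
// survives; the image at the difference frequency has cancelled.
```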

* * *

Does it work?

Olli's blog indicates that the "opposite sideband" attenuation - when used with a mixer - should be on the order of -43 dB at worst - and actual testing indicated this to be so, down to nearly DC.  This value isn't particularly high when it comes to the "standard" for communications/amateur receivers, where the goal is typically greater than 50 or 55 dB, but in casual listening the leakage is inaudible.

One consequence of the attenuation being "only" 43 dB or so is that if one does frequency shifting, a bit of the local oscillator used to accomplish this can bleed through - and even at -43 dB, a single, pure sine wave can often be detected by the human ear amongst the noise and audio content - particularly if there is a period of silence.  Because this tone's frequency is precisely known, it can be easily removed with the application of a moderately sharp notch filter tuned to the local oscillator frequency.



Sunday, February 27, 2022

High power Tayloe (a.k.a. Wheatstone) absorptive bridge for VSWR indication and rig protection.

Figure 1:  The completed absorptive VSWR bridge.
Last year, I was "car camping" with a bunch of friends - all of whom happened to be amateur radio operators.  Being in the middle of nowhere, where mobile phone coverage was not even available, we couldn't resist putting together a "portable" 100 watt HF station.  While the usual antenna tuner + VSWR meter would have worked fine, I decided to build a different piece of equipment that would facilitate matching the antenna and protecting the radio - but more on this in a moment.

A bit about the Wheatstone bridge:

The Wheatstone bridge is one of the oldest-known types of electrical circuits, having originated around 1833 - but popularized about a decade later by Mr. Wheatstone himself.  Used for detecting electrical balance between the halves of the circuit, it is useful for indirectly measuring all three components represented by Ohm's law - resistance, current and voltage.

Figure 2:  Wheatstone bridge (Wikipedia)
It makes sense, then, that an adaptation of this circuit - its use popularized by Dan Tayloe (N7VE) - can be used for detecting when an antenna is matched.  To be fair, this circuit has been used for many decades for RF measurement in instrumentation - and variations of it are represented in telephony - but it has some properties, not directly related to its use for measurement, that make it doubly useful - more on that shortly.

Figure 2 shows the classic implementation of a Wheatstone bridge.  In this circuit, balance of the two legs (R1/R2 and R3/Rx) results in zero voltage across the center - represented by "Vg" - which can only occur when the ratio between R1 and R2 is the same as the ratio between R3 and Rx.  For operation, the actual values of these resistors are not particularly important as long as the ratios are preserved.

If you think of this as a pair of voltage dividers (R1/R2 and R3/Rx) its operation makes sense - particularly if you consider the simplest case where all four values are equal.  In this case, the voltage between the negative lead (point "C") and point "D" - and between points "C" and "B" - will be half that of the battery voltage, which means the voltage between points "D" and "B" will be zero, since they must be at the same voltage.
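Putting numbers to this two-divider view makes the balance condition obvious - a small sketch, with the resistor roles as labeled in Figure 2 (the function name is mine):

```javascript
// Voltage across the bridge's centre ("Vg"), treating each leg as a divider
function bridgeVg(vBattery, r1, r2, r3, rx) {
  const vLeft  = vBattery * r2 / (r1 + r2);  // mid-point of the R1/R2 leg
  const vRight = vBattery * rx / (r3 + rx);  // mid-point of the R3/Rx leg
  return vRight - vLeft;                     // zero when the ratios match
}

console.log(bridgeVg(12, 50, 50, 50, 50));   // balanced:  prints 0
console.log(bridgeVg(12, 50, 50, 50, 100));  // unbalanced:  prints 2
```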

Putting it in an RF circuit:

While useful at DC, there's no reason why it couldn't be used at AC - or RF - as well.  What, for example, would happen if we made R1, R2, and R3 the same value (let's say, 50 ohms), substituted a transmitter for the battery - and for the "unknown" value (Rx) connected our antenna?

Figure 3:  The bridge, used in an antenna circuit.

This describes a typical RF bridge - known, when placed between the transmitter and antenna, as the "Tayloe" bridge - the simplified diagram of which is represented in Figure 3.

Clearly, if we used a 50 ohm load as a stand-in for our antenna, the RF sensor will detect nothing at all as the bridge would be balanced - so it makes sense that a perfectly-matched 50 ohm antenna would be indistinguishable from a 50 ohm load.  If the "antenna" were open or shorted, voltage would appear across the RF sensor and be detected - so you would be correct in presuming that this circuit can be used to tell when the antenna itself is matched.  Further extending this idea, if the "unknown antenna" were to include an antenna tuner, watching for the output of the RF sensor to go to zero would indicate that the antenna was properly matched.

At this point it's worth noting that this simple circuit cannot directly indicate the magnitude of mismatch (e.g. VSWR) - but it can tell you when the antenna is matched.  It is possible to do this with additional circuitry (as is done with many antenna analyzers) but for this simplest case, all we really care about is finding when our antenna is matched.  (A somewhat similar circuit to that depicted in Figure 3 has been at the heart of many antenna analyzers for decades.)

Antenna match indication and radio protection:

An examination of the circuit of Figure 3 also reveals another interesting property of this circuit used in this manner:  The transmitter itself can never see an extreme VSWR.  For example, if the antenna is of very low resistance, we will present about 33 ohms to the transmitter (e.g. the two series 50 ohm resistors on the left side will be in parallel with the 50 ohm resistor on the right side) - which represents a VSWR of about 1.5:1.  If you were to forget to connect an antenna at all, we end up with only the two resistors on the left in series (100 ohms), so our worst-case VSWR would, in theory, be 2:1.
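Those two worst cases are easy to verify numerically (the helper names here are mine):

```javascript
const parallel = (a, b) => (a * b) / (a + b);
const vswrOf = (r, z0 = 50) => (r > z0 ? r / z0 : z0 / r);

// Shorted antenna:  the right-hand leg collapses to its 50 ohm resistor,
// which appears in parallel with the two series 50 ohm resistors (100 ohms)
const rShorted = parallel(50 + 50, 50);    // about 33.3 ohms
// No antenna at all:  only the two series 50 ohm resistors remain
const rOpen = 50 + 50;                     // 100 ohms

console.log(vswrOf(rShorted).toFixed(2));  // → "1.50"
console.log(vswrOf(rOpen).toFixed(2));     // → "2.00"
```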

In context, any modern, well-designed transmitter will be able to tolerate even a 2.5:1 VSWR (probably higher) so this means that no matter what happens on the "antenna" side, the rig will never see a really high VSWR.

If modern rigs are supposed to have built-in VSWR protection, why does this matter?

One of the first places that the implementation of the "Tayloe" bridge was popularized was in the QRP (low power) community where transmitters have traditionally been very simple and lightweight - but that also means that they may lack any sophisticated protection circuit.  Building a simple circuit like this into a small antenna tuner handily solves three problems:  Tuning the antenna, being able to tell when the antenna is matched, and protecting the transmitter from high VSWR during the tuning process.

Even in a more modern radio with SWR protection there is good reason to do this.  While one is supposed to turn down the transmitter's power when tuning an antenna, if you have an external, wide-range tuner and are quickly setting things up in the field, it would be easy to forget to do so.  The way that most modern transmitters' SWR protection circuits work is by detecting the reflected power and, when it exceeds a certain value, reducing the output power - but this measurement is not instantaneous:  By the time excess reflected power is detected, the transmitter has already been exposed - if only for a fraction of a second - to a high VSWR, and it may be that that brief instant was enough to damage an output transistor.

In the "old" days of manual antenna tuners with variable capacitors and roller inductors, this may not have been as big a deal:  The VSWR seen by the transmitter couldn't change too quickly (assuming that the inductor and capacitors didn't have intermittent connections).  But consider a modern, automatic antenna tuner full of relays:  Each time the internal tuner configuration is changed to determine the match, these "hot-switched" relays will inevitably "glitch" the VSWR seen by the radio - and with modern tuners, this can occur many times a second, far faster than the internal VSWR protection can react.  The VSWR can go from low, with the transmitter at high power, to suddenly high before the power can be reduced - something that is potentially damaging to a radio's final amplifier.  While this may seem to be an unlikely situation, it's one that I have personally experienced in a moment of carelessness - and it put an abrupt end to the remote operation using that radio - but fortunately, another rig was at hand.

A high-power Tayloe bridge:

It can be argued that these days, the world is lousy with Tayloe bridges as they are seemingly found everywhere - particularly in the QRP world - but far fewer of them are intended to be used with a typical 100 watt mobile radio.  One such example may be seen below:

Figure 4:  As-built high-power Tayloe bridge

Figure 4 shows a variation of the circuit in Figure 2, but it includes two other features:  An RF detector, in the form of an LED (with RF rectifier) and a "bypass" switch, so that it would not need to be manually removed from the coax cable connection from the radio.

In this case, the 50 ohm resistors are thick-film, 50 watt units (about $3 each) which means that they are capable of handling the full power of the radio for at least a brief period.  Suitable resistors may be found at the usual suppliers (Digi-Key, Mouser Electronics) and the devices that I used were Johanson P/N RHXH2Q050R0F4 (A link to the Mouser Electronics page is here) - but there is nothing special about these particular devices:  Any 50-100 watt, TO-220 package, 50 ohm thick-film resistor with a tolerance of 5% or better could have been used, provided that its tab is insulated from the internal resistor itself (most are). 

How it works:

Knowing the general theory behind the Wheatstone bridge, the main point of interest here is the indicator - in this case, an LED circuit placed across the middle of the bridge in lieu of the meter shown in Figure 1.  Because RF is present across these two points - and because neither side of this indicator is ground-referenced - this circuit must "float" with respect to ground.
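The general theory is easy to check with a few lines of arithmetic.  The sketch below assumes the classic resistive-bridge topology - three fixed 50 ohm arms with the antenna as the fourth - and does not use the actual component values of the as-built circuit:

```python
# Sketch of a resistive ("Tayloe") bridge: three fixed 50 ohm arms with
# the antenna (impedance z) as the fourth.  The detector sits between the
# midpoints of the two voltage dividers.
R = 50.0

def detector_fraction(z):
    """Detector voltage as a fraction of the transmitter's drive voltage."""
    v_ref = R / (R + R)       # fixed divider: two 50 ohm arms
    v_ant = z / (R + z)       # divider formed by the third arm and the antenna
    return abs(v_ref - v_ant)

def z_in(z):
    """Impedance seen by the transmitter: (R+R) in parallel with (R+z)."""
    return (2 * R) * (R + z) / (2 * R + R + z)

# A matched 50 ohm antenna nulls the detector (the LED goes out)...
assert detector_fraction(50.0) < 1e-12
# ...while even a dead short or an open circuit leaves the transmitter
# seeing 33.3 or 100 ohms respectively - no worse than a 2:1 VSWR:
print(round(z_in(0.0), 1), round(z_in(50.0), 1), round(z_in(1e9), 1))
```

This bounded input impedance is precisely why the bridge protects the transmitter during tuning, no matter how badly mismatched the antenna is.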

If we presume that there will be 25 volts across the circuit - which would be in the ballpark of 25 watts into a 2:1 VSWR - we see that the current through the 2k resistor could not exceed 12.5 mA, a reasonable current for lighting an LED.  To rectify the RF, a 1N4148 diode is used - both cheap and suitably fast (a garden-variety 1N4000 series diode is not recommended) - along with a capacitor across the LED.  An extra 2k resistor is present to reduce the magnitude of the reverse voltage across the diode:  Probably not necessary, but I used it anyway.  QRP versions of this circuit often include a transformer to step up the low RF voltage to a level high enough to reliably drive the LED, but at a minimum of 5-10 watts, this is simply not an issue.
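As a sanity check on those indicator values - a sketch assuming a single 2k series resistor and typical diode/LED forward drops, which are assumptions rather than measured values from the actual circuit:

```python
# LED indicator arithmetic for the bridge detector.  Assumed values:
# ~25 V peak across the detector (about 25 W into a 2:1 VSWR), a single
# 2k series resistor, ~2 V LED drop and ~0.7 V rectifier drop.
V_PEAK = 25.0      # volts across the bridge midpoint
R_SERIES = 2000.0  # series resistor
V_LED = 2.0        # typical red-LED forward drop (assumption)
V_DIODE = 0.7      # 1N4148 forward drop (assumption)

i_max = V_PEAK / R_SERIES                       # upper bound, ignoring drops
i_led = (V_PEAK - V_LED - V_DIODE) / R_SERIES   # with the diode and LED drops

print(f"{i_max * 1000:.1f} mA maximum through the 2k series resistor")
```

Either way, the current lands comfortably in the "visibly lit, not destroyed" range for a common LED, with no step-up transformer needed at these power levels.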

While there are many examples of this sort of circuit - all of them with DPDT switches to bypass the bridge - every one that I saw wired the switch in such a way that if one were inadvertently transmitting while the switch was operated, there would be a brief instant when the transmitter was disconnected (presuming that the switch itself is a typical "break-before-make" type), exposing the transmitter to a brief high-VSWR transient.  In Figure 4, this switch is wired differently:

  • When in "Bypass" mode, the "top" 50 ohm resistor is shorted out and the "ground" side of the circuit is lifted.
  • When in "Measure" mode, the switch across the "top" 50 ohm resistor is un-bridged and the bottom side of the circuit is grounded.
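The two states above - plus the two transient states a break-before-make switch passes through between them - can be enumerated numerically.  This is a sketch under an assumed topology (the "top" 50 ohm resistor in series with the line, the remaining pair of 50 ohm arms forming a shunt branch to the switchable ground; the 2k detector chain is ignored as it loads the bridge only slightly), not a simulation of the actual as-built circuit:

```python
# Impedance seen by the transmitter in each state of the DPDT switch,
# assuming the series-resistor bridge topology described in the text.
R = 50.0

def z_seen(top_shorted, ground_connected, z_ant):
    series = 0.0 if top_shorted else R   # the "top" resistor in the line
    if ground_connected:
        branch = 2 * R                   # remaining two 50 ohm arms to ground
        return (series + z_ant) * branch / (series + z_ant + branch)
    return series + z_ant                # branch floats: pure series path

def vswr(z):
    gamma = abs((z - R) / (z + R))
    return (1 + gamma) / (1 - gamma)

# With a matched antenna, every state - including the two transient
# break-before-make states - keeps the transmitter's VSWR at 2:1 or better:
for shorted in (False, True):
    for grounded in (False, True):
        print(shorted, grounded, round(vswr(z_seen(shorted, grounded, 50.0)), 2))
```

The worst transient case (top resistor in line, ground branch floating) is simply 50 ohms in series with the antenna - a 2:1 VSWR with a matched load, rather than an open circuit.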

Figure 5:  Inside the bridge.
Wired this way, there is no possible condition during the operation of the switch where the transmitter will be exposed to an extraordinarily high VSWR - except, of course, if the antenna itself has an extreme mismatch, which the transmitter would see anyway in "bypass" mode.

An as-built example:

I built my circuit into a small die-cast aluminum box as shown in Figure 5.  Inside the box, the 50 ohm resistors are bolted to the box itself using countersunk screws and heat-sink paste for thermal transfer.  To accommodate the small size of the box, single-hole UHF connectors were used and the circuit itself was point-to-point wired within the box itself.

Figure 6:  The "switch" side of the bridge.
For the "bypass" switch (see Figure 6) I rescued a 120/240 volt DPDT switch from an old PC power supply, choosing it because it has a flat profile and a recessed handle with a slot:  By filing a bevel around the square hole (which was itself produced using the "drill-then-file" method) one may use a fingernail to change the switch position.  I chose the "flush handle" type of switch to reduce the probability of it accidentally being switched, but also to prevent the switch itself from being broken when it inevitably ends up at the bottom of a box of other gear.
On the other side of the box (Figure 7) the LED is nearly flush-mounted, secured initially with cyanoacrylate (e.g. "Super") glue - but later bolstered with some epoxy on the inside of the box.
It's worth noting that even though the resistors are rated for 50 watts, the power that they must dissipate is unlikely to approach that in the worst-case condition - and even if it does, the circuit is perfectly capable of handling 100 watts for a few seconds.  The die-cast box itself, being quite small, has rather limited power dissipation on its own (10-15 watts continuous, at most) but it is perfectly capable of withstanding an "oops" or two if one forgets to turn down the power when tuning and dumps full power into it.  It will, of course, not withstand 100 watts for very long - but you'll probably smell it before anything is too badly damaged!

As one might gather from the description, the operation of this bridge is as follows:

  • Place this device between the radio and the external tuner.
  • Turn the power of the radio down to 10-15 watts and select FM mode.
  • Disable the radio's built-in tuner, if it has one.
  • If using a manual tuner, do an initial "rough" tuning to peak the receive noise, if possible.
  • Switch the unit to "Bridge" (e.g. "Measure") mode.
  • Key the transmitter.
  • If you are using an automatic tuner, start its auto-tune cycle.  There should be enough power coming through the bridge for it to operate (most will work reliably down to about 5 watts - which is why you'll need the 10-15 watts from the radio for this.) 
  • If you are using a manual tuner, look at both its SWR meter (if it has one) and the LED brightness and adjust for minimum brightness/reflected power.  A perfect match will result in the LED being completely extinguished.
  • After tuning is complete, switch to "Bypass" mode.
 * * *
This page stolen from


Saturday, January 22, 2022

Testing the FlyDog SDR (KiwiSDR "clone")

As noted in a previous entry of this blog where I discussed the "Raspberry Kiwi" SDR - another clone of the KiwiSDR - there is also the "FlyDog" receiver - yet another clone - that has made the rounds.  As with the Raspberry Kiwi, it would seem that the sources of this hardware are starting to dry up, but it's still worth taking a look at it.

I had temporary loan of a FlyDog SDR to do an evaluation, comparing it with the KiwiSDR - and here are results of those tests - and other observations.

Figure 1:
The Flydog SDR.  On the left are the two "HF" ports and
the port for the GPS antenna.  Note the "bodge" wires
going through the shielded area in the upper left.
The dark squares in the center and to its right are the A/D
converter and the FPGA.  The piece of aluminum attached
to the oscillator is visible below the A/D converter.
Click on the image for a larger version.

How is this different from the Raspberry Kiwi?

Because of its common lineage, the FlyDog SDR is very similar to the Raspberry Kiwi SDR - including the use of the same Linear Technology 16 bit A/D converter - and unlike the Raspberry SDR that I reviewed before, it seems to report a serial number, albeit in a far different range (in the 8000s) than the "real" KiwiSDRs, which seem to be numbered, perhaps, into the 4000s.

The most obvious difference between the FlyDog and the original KiwiSDR (and the Raspberry Kiwi) is the addition of a second HF port - one for "up to 30 MHz" and another for "up to 50 MHz" - and therein lies a serious problem, discussed below.

Interestingly, the FlyDog SDR has some "bodge" wires connecting the EEPROM's leads to the bus - and, unfortunately, these wires, connected to the digital bus, appear to run right through the HF input section, under the shield!  These wires might escape initial notice because they were handily covered with "inspection" stickers.  (Yes, there were more than two, covering each other - which was suspicious in its own right!  To be fair, there's no obvious digital "noise" resulting from the unfortunate routing of these bodge wires.) 

Why does it exist?

One might reasonably ask why the FlyDog exists in the first place - and this isn't quite clear.  I'm guessing that part of this was the challenge/desire to offer a device for the more common, less-expensive and arguably more capable Raspberry Pi (particularly the Pi 4) - but this is only a guess.

Another reason would have been to improve on the performance of the KiwiSDR by using a 16 bit A/D converter running at a higher sampling rate, both to improve dynamic range and to extend frequency coverage - thus offering usable performance up through the 6 meter amateur band.  Unfortunately, the Flydog does neither of these very well - the dynamic range problem being the same as the Raspberry Kiwi's (discussed in the linked article), compounded by the amplitude response variances and frequency stability issues discussed later on.


Getting immediately to one of the more problematic aspects of this receiver - the two HF ports - their implementation can be stated in two words:  Badly implemented.

When I first saw the FlyDog online with its two HF ports, I wondered how they selected between them - a small relay, PIN diodes, or some sort of analog MUX switch?  The answer is none of these:  The two ports are simply "banged" together at a common point.

When I learned this, I was surprised - not because of its simplicity, but because it's such a terrible idea.  As a few moments with a circuit simulator will show, simply paralleling two L/C networks that cover overlapping frequency ranges does not result in a combined network sharing the features/properties of the two, but in a terrible, interacting mess with wildly varying impedances and the potential for wild variations of insertion loss.
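The interaction is easy to demonstrate with a few lines of complex-impedance arithmetic.  The sketch below uses an illustrative 3-pole pi low-pass filter (the FlyDog's actual filters are higher order and their exact values are not known to me):  Seen from the common node, the open-ported filter's series inductor and far-end capacitor form a series-resonant circuit that acts as a near-short across the preamplifier input.

```python
import math

# Illustrative 3-pole Butterworth-style pi LPF: 30 MHz cutoff, 50 ohms.
# (The FlyDog's real filters are higher order; values here are examples.)
R0 = 50.0
FC = 30e6
C = 1 / (2 * math.pi * FC * R0)      # ~106 pF shunt capacitors
L = 2 * R0 / (2 * math.pi * FC)      # ~530 nH series inductor

def z_unused_filter(f):
    """Impedance the open-ported filter presents at the common node."""
    w = 2 * math.pi * f
    zc = 1 / (1j * w * C)
    # series inductor plus the far-end shunt capacitor (far port open),
    # all in parallel with the near-end shunt capacitor
    series = 1j * w * L + zc
    return (zc * series) / (zc + series)

# At the L-C series resonance, the dangling filter is close to a dead
# short across the preamplifier input - the origin of the deep notch:
f_res = 1 / (2 * math.pi * math.sqrt(L * C))
print(f"{f_res/1e6:.1f} MHz: |Z| = {abs(z_unused_filter(f_res)):.3f} ohms")
```

With higher-order filters (more L-C pairs) there are more such resonances, and with the FlyDog's actual values one of them evidently lands near 10 MHz.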

The result of this is that the 30 MHz input is, for all practical purposes, unusable.  Additionally, if one checks the band-pass response of the receiver using a calibrated signal generator against the S-meter reading, you will soon realize that the resulting frequency response across the HF spectrum is anything but flat.

For example, one will see a "dip" in response (e.g. excess loss) around 10 MHz on the order of 20 dB if one puts a signal into the 50 MHz port, effectively making it unusable for the 30 meter amateur band and the 31 meter shortwave broadcast band.  Again, there is nothing specifically wrong with the low-pass filter networks themselves - just the way that they were implemented:  You can have only one such network connected to the receiver's preamplifier input at a time without some serious interaction!


Having established that, out of the box, the FlyDog has some serious issues when used as intended on HF, one might wonder what can be done about it - and there are two things that may be done immediately:

  • Do microsurgery and disconnect one of the HF input ports.  If you have the skills to do so, the shield over the HF filter may be unsoldered/removed and the circuit reverse-engineered enough to determine which component(s) belong to the 30 MHz and 50 MHz signal paths - and then remove those component(s).  Clearly, this isn't for everyone!
  • Terminate the unused port.  A less-effective - but likely workable alternative - would be to attach a 50 ohm load to the unused port.  On-bench testing indicated that this seemed to work best when the 50 MHz port was used for signal input and the 30 MHz port was connected to a 50 ohm load:  The frequency of the most offensive "null" at about 10 MHz shifted down by a bit more than 1 MHz and reduced in depth, allowing still-usable response (down by only a few dB) at 10 MHz, and generally flattening response across the HF spectrum:  Still not perfect, but likely to be adequate for most users.  (In testing, the 30 MHz port was also shorted, but with poorer results than when terminated.) 

In almost every case, the performance (e.g. sensitivity) was better on the 50 MHz port than the 30 MHz port, so I'm at a loss to find a "use case" where its use might be better - except for a situation where its lower performance was outweighed by its lower FM broadcast band rejection - more on that later.

The other issue - which is shared with the RaspberryKiwi SDR - is that the low-pass filter (on the 50 MHz port) is insufficient to prevent the incursion of aliases of even moderately strong FM broadcast signals which appear across the HF spectrum as broad (hundreds of kHz wide) swaths of noise with a hint of distorted speech or music.  This is easily solved with an FM broadcast band filter (NooElec and RTL-SDR blog sell suitable devices) - and it is likely to be a necessity.
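A quick check of the arithmetic shows why FM broadcast energy lands squarely on HF:  With a 125 MHz sampling rate, a signal in the second Nyquist zone folds back down as (125 MHz - f).

```python
# With a 125 MHz sampling clock, the first Nyquist zone ends at 62.5 MHz;
# an FM broadcast station at frequency f aliases down to (125 MHz - f).
FS = 125e6

def alias(f):
    return FS - f   # second-Nyquist-zone image (valid for FS/2 < f < FS)

# The 88-108 MHz FM broadcast band folds onto 17-37 MHz:
lo, hi = alias(108e6), alias(88e6)
print(f"FM band aliases to {lo/1e6:.0f}-{hi/1e6:.0f} MHz")
```

This is why an external FM band-stop filter ahead of the antenna port is effectively mandatory anywhere near a broadcast transmitter.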

Other differences:

  • Lower gain on the FlyDog SDR:  Another difference between the FlyDog and KiwiSDR is the RF preamplifier.  On the KiwiSDR and Raspberry Kiwi, a 20 dB gain amplifier (the LTC6401-20) is used, but the FlyDog uses a 14 dB gain amplifier (LTC6400-14) instead - a gain reduction of about 6 dB, or one S-unit - and the effects of this are evident in the performance described below.  Was this intentional, a mistake, or was it because the 14 dB version was cheaper/more available?
From a purely practical standpoint, this isn't a huge deal as gain may be added externally - and it's generally better to have too little gain in a system and add more externally than to try to figure out how to reduce gain without impacting noise performance.  As it is, the gain of the receiver is insufficient to hear the noise floor of an antenna at a "rural quiet" station on 20 meters and above (when the bands are closed) without amplification.  This also means that it is simply deaf on 10 and 6 meters, requiring additional filtering and amplification if one wishes to use it there for weak-signal work.  The KiwiSDR and Raspberry SDRs have a similar issue, of course, but the 6 dB gain deficit of this receiver exacerbates the problem.
  • "X1.5/X1.0" jumper:  There is, on the silkscreen, indication of a jumper that implies changing the gain from "1.5" to "1.0" when J1 is bridged.  I didn't reverse-engineer the trace, but it appears to adjust the gain setting of the A/D converter's LNA - and sure enough, when jumpered, the gain drops by about 4 dB, close to the 3.5 dB that a "1.5x" voltage factor would indicate.  Despite the gain reduction, the absolute receiver sensitivity was unchanged, implying that the system's noise floor is set either by the LNA itself (the LTC6400-14) or by noise internal to the A/D converter.  If there's any beneficial effect at all, I would expect it to occur during high-signal conditions, in which case the "1.0" setting might make it slightly less susceptible to overload.
  •  "Dith/NA" jumper:  Also on the board is a jumper with this nomenclature marked J2 - and this (apparently) disables the A/D converter's built-in "dither" function - one designed to reduce spurious/quantization effects of low-level signals on the A/D converter, which defaults to "on" with the jumper removed as shipped.   Although extensive testing wasn't done, there was no obvious difference with this jumper bridged or not - but then, I didn't expect there to be on a receiver where the noise limit is likely imposed by the LNA rather than the A/D converter itself.
  • Deaf GPS receiver:  I don't know if it's common to these units, but I found the Flydog being tested to be quite insensitive to GPS signals as compared to other devices (including KiwiSDRs and Raspberry SDRs) that I have around, requiring the addition of gain (about 15 dB) in the signal path to get it to lock reliably.  This has apparently been observed with other FlyDog units, and it is suspected that a harmonic of a clock signal on the receive board may land close enough to the GPS frequency to effectively jam it - but this is only a guess.

Clock (in)stability:

The Flydog SDR uses a 125 MHz oscillator to clock the receiver (A/D converter) - but there is a problem reported by some users:  It's a terrible oscillator - bad enough that it is UNSUITABLE for almost any digital mode - WSPR, FT-8, and FT-4, to name but a few - unless the unit is in still air and in an enclosure that is very temperature-stable.

Figure 2:
Stability of the "stock" oscillator in the Flydog at 125 MHz in "still" air, on the workbench.  The
amount of drift - which is proportional to the receive frequency - makes it marginally usable for
digital modes and is too fast/extreme to be GPS-corrected.
Click on the image for a larger version.

Figure 2, above, is an audio plot from a receiver (a Yaesu FT-817) loosely coupled and tuned to the 125 MHz oscillator on the Flydog's receive board:  Due to the loose coupling (electrical and acoustic), other signals/noises are present in the plot that are not actually from the Flydog.  The horizontal scale near the top has 10 Hz minor divisions, and the red hash marks along the left side of the waterfall represent 10 seconds.

From this plot we can see over the course of about half a minute the Flydog's main receiver clock moved well over 50 Hz, representing 5 Hz at 12.5 MHz or 1 Hz at 2.5 MHz.  With this type of instability, it is probably unusable for WSPR on any band above 160 meters much of the time - and it is likely only marginally usable on that band as WSPR can tolerate only a slight amount of drift, and that's only if its change occurs in about the same time as the 2 minute WSPR cycle.  The drift depicted above would cause a change of 1 Hz or more on bands 20 meters and above within the period of just a few WSPR - or FT8 - symbols, rendering it uncopiable.
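Because the drift scales directly with receive frequency, the observed 50 Hz movement of the 125 MHz clock can be translated to any band (the WSPR dial frequencies below are approximate):

```python
# The A/D clock drift scales down proportionally with receive frequency.
F_CLOCK = 125e6
CLOCK_DRIFT_HZ = 50.0   # observed over roughly half a minute

def drift_at(f_rx):
    return CLOCK_DRIFT_HZ * f_rx / F_CLOCK

# WSPR tolerates only a fraction of a Hz of drift per 2-minute cycle:
for f in (1.8366e6, 14.0956e6, 50.293e6):   # ~160 m, 20 m and 6 m WSPR
    print(f"{f/1e6:7.4f} MHz: {drift_at(f):.2f} Hz per half-minute")
```

Even on 160 meters the drift approaches a Hertz per half-minute; on 20 meters and up it is several times worse, matching the "marginal at best" assessment above.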

"The Flydog has GPS frequency correction - won't this work?"

Unfortunately not - this drift is far too fast for that to work, as the GPS frequency correction operates over periods of seconds. 

What to do?

While replacing the 125 MHz clock oscillator with another device (I would suggest a crystal-based oscillator rather than a MEMs-based unit owing to the former's lower jitter) is the best option, one can do a few things "on the cheap" to tame it down a bit.  While on the workbench, I determined that this instability appeared to be (pretty much) entirely temperature-related, so two strategies could be employed:

  • Increase the thermal mass of the oscillator.  With more mass, the frequency drift would be slowed - and if we can slow it down enough, large, fast swings might be slowed enough to allow the GPS frequency correction to compensate.  With a slow enough drift, the WSPR or FT-8 decoders may even be able to cope without GPS correction.
  • Thermally isolate the oscillator.  Because it's soldered to the board, this is slightly difficult so our goal would be to thermally isolate the mass attached to the oscillator.

To add thermal mass I epoxied a small (12x15mm) piece of 1.5mm thick aluminum to the top of the oscillator itself.  The dimensions were chosen to overlap the top of the oscillator while not covering the nearby voltage regulator, FPGA or A/D converter and the thickness happens to be that of a scrap piece of aluminum out of which I cut the piece:  Slightly thicker would be even better - as would it being copper.

The epoxy that I used was "JB Weld" - a metal-filled epoxy with reasonable thermal conductivity, but "normal" clear epoxy would probably have been fine:  Cyanoacrylate ("CA" or "Super" glue) is NOT recommended as it is neither a good void filler nor a good thermal conductor.

Comment:  If one wishes to remove a glued-on piece of metal from the oscillator during experimentation, do not attempt to pry it off, as doing so would likely tear the oscillator from the circuit board and damage it.  Instead, slowly heat it with a soldering iron:  The adhesive should give way before the solder melts.

The "thermal isolation" part was easy:  A small piece of foam was cut to cover the piece of aluminum - taking care to avoid covering either the FPGA or the A/D converter, but because it doesn't produce much heat - and is soldered to the board itself - the piece of foam also covered the voltage regulator.

The result of these two actions may be seen in the plot below:

Figure 3:
The stability of the oscillator after the addition of the thermal mass and foam.  Still not great,
but more likely to be usable.
Click on the image for a larger version.
Figure 3, above, shows the result, the signal of interest being that around 680-700 Hz and again, the loose coupling resulted in other signals being present besides the 125 MHz clock.
Over the same 30 second period the drift was reduced to approximately 10 Hz - but more importantly, the period of the frequency shift was significantly lengthened, making it more likely that drift correction of the onboard GPS frequency stabilization and/or the WSPR/FT8 decoding algorithm would be able to cope.
Not mentioned thus far is that adding a cooling fan may dramatically impact the frequency stability of the Flydog:  I did not put the test unit in an enclosure or test it with a fan blowing across it - with or without the added thermal mass and isolation - so that is territory yet to be explored.
Is the Flydog SDR usable?

Out-of-the-box and unmodified:  Only marginally so.  While the issue with frequency stability is unlikely to be noticed unless you are using digital modes, the deep "notch" around 10 MHz and lower sensitivity are likely to be noticed - particularly in a side-by-side comparison with a KiwiSDR.

If you are willing to do a bit of work (removing the components under the shield connecting the 30 MHz receiver input, modifying/replacing the 125 MHz oscillator) the Flydog can be a useful device, provided a bit of gain and extra filtering (particularly to remove FM broadcast signals' ingress past the low-pass filter) is appropriately applied.

Finally, it must be noted that the Flydog - like the Raspberry Kiwi (which works fine, out of the box, by the way) is a "clone" of the original KiwiSDR.  Like the Raspberry Kiwi, there are factors related to the support available to it as compared to the KiwiSDR:  The latter is - as of the time of posting - an ongoing, actively-supported project and there are benefits associated with this activity whereas with the clones, you are largely on your own in terms of software and hardware support.

For more information about this aspect, see a previous posting:  Comparing the "KiwiSDR" and "RaspberrySDR" software-defined receivers - link.
I have read that the Flydog SDR is no longer being manufactured - but a quick check of various sites will show it (or a clone of it) still being available.  The Flydog is easily identified by the presence of three SMA connectors (30 MHz, 50 MHz and GPS) while the more-usable Raspberry Kiwi SDR has just two and comes in a black case with a fan. 
Unless you absolutely must have 6 meter coverage on your Kiwi-type device (doing so effectively would be an article by itself) I would suggest seeking out and obtaining a Raspberry Kiwi - but if you don't care about 6 meters, the original KiwiSDR is definitely the way to go.

Friday, December 3, 2021

The case of the Clicky Carrier (that can sometimes clobber the upper part of 20 meters)

Note:  As of 9 February, 2022, this signal is still there, doing what it was doing when this post was originally written.

* * *

Listening on 20 meters, as I sometimes do, I occasionally noticed a loud "click" that seemed to pervade the upper portion of the band.  Initially dismissing it as static or some sort of nearby electrical discharge, my attention was brought to it again when I also noticed it while listening on the Northern Utah WebSDR - and then, other WebSDRs and KiwiSDRs across the Western U.S.  Setting a wide waterfall, I determined that the source of this occasional noise was not too far above the 20 meter band, occasionally being wide/strong enough to be heard near the top of the 20 meter band itself.

Figure 1:
The carrier in question - with a few "clicks".  In this case,
the signal in question was at 14.390 MHz.
Click on the image for a larger version.

During the mornings in Western North America, this signal is audible in Colorado, Alberta, Utah, Oregon, Idaho, Washington - and occasionally in Southern California.  It is only weakly heard at some of the quieter receive sites on the eastern seaboard and the deep southeast, indicating that its source is likely in the midwest of the U.S. or Canada, putting much of the continent inside the shadow of the first "skip" zone. 

From central Utah, a remote station with a beam indicates that the bearing at which this carrier peaks is somewhere around northeast to east-northeast, but it's hard to tell for certain because of the normal QSB (fading) and the fact that the antenna's beamwidth is, as are almost all HF beams, 10s of degrees wide.  Attempts were made to use the KiwiSDR "ARDF" system, but because it is effectively unmodulated, the results were inconclusive.

What is it?

The frequency of this signal appears to vary, but it has been spotted on 14.378 and 14.390 MHz - although your mileage may vary.  If you listen to this signal, it sounds perfectly stable at any given instant - with the occasional loud "click" that results in what looks like a "splat" of noise across the waterfall display (see Figure 1), with it at the epicenter.

Comment:   If you go looking for this signal, remember that it will be mostly unmodulated - and that it will be subject to the vagaries of HF propagation. 

When a weird signal appears in/near the amateur bands - particularly 20 meters - the first inclination is to presume that it is an "HFT" transmitter - that is, "High Frequency Trading" - a name that refers not to the fact that these signals are on the HF bands, but that they convey market trades over a medium (the ionosphere) that has less latency/delay than conventional data circuits, taking advantage of this fact to eke margins out of certain types of financial transactions.  Typically, the signals conveying this information appear to be rather conventional digital signals with obvious modulation - but this particular signal does not fit that profile.  Why blame HFT?  Such signals have, in the past, encroached on the 20 meter band and disrupted communications - see the previous blog post "Intruder at the top of the 20 meter amateur band?" - link.

Why might someone transmit a (mostly) unmodulated carrier?  The first thing that comes to mind would be to monitor propagation:  The amplitude and phase of a test carrier could tell something about the path being taken, but an unmodulated signal isn't terribly useful in determining the actual path length as there is nothing about it that would allow correlation between when it was transmitted, and when it was received.

Except that this signal isn't quite unmodulated:  Those very wideband "clicks" could help provide a reference for making such a measurement.

What else could it be?  A few random thoughts:

  • Something being tested.  It could be a facility testing some sort of HF link - but if so, why the frequency change from day to day?  The "clicks"?  Perhaps some sort of transmitter/antenna malfunction (e.g. arcing)?
  • Trigger for high-frequency trading (HFT).  Many high-frequency trading type signals are fairly wide (10 kHz or so) - possibly some sort of OFDM - but any sort of coding imposes serialization delays, which can negate some of the propagation-delay advantage gained by using HF rather than other means of conveying data over long distances.  Likely far-fetched, but perhaps the "clicks" represent some sort of trigger for a transaction, perhaps arranged beforehand by more "conventional" means.  After all, what simpler means of conveying that "something should happen" exists than a wide-bandwidth "click" over HF?  Again, unlikely - but so, seemingly, was something like HFT in the first place!

A bit of analysis:

A bit of audio of this carrier, complete with "clicks" was recorded via a KiwiSDR.  To do this, the AGC and audio compression were disabled, the receiver set to "I/Q" mode and tuned 1 kHz below the carrier and the bandwidth set to maximum (+/- 6 kHz) and the gain manually set to be 25 dB or so below where the AGC would have been.  Doing this assures that we capture a reference level from the signal itself (the 1 kHz tone from the carrier) at a low enough level to allow for a very much stronger burst of energy (the "click") to be detected without worrying too much about clipping of the receive signal path.

The result of this is the audio file (12 kHz stereo .WAV) that you may download from HERE.

Importing this file into Audacity, we can zoom in on the waveform and at time index 13.340, we can see this:

Figure 2:
Zoomed-in view of the waveform from the off-air recording linked above.
These "clicks" seem to come in pairs, approximately 1 msec apart, and have an apparent
amplitude hundreds of times higher than the carrier itself.
Click on the image for a larger version.

Near the baseline (amplitude zero) we see the 1 kHz tone at a level of approximately 0.03 (full-scale being normalized to 1.0), but we can also see the "clicks" represented by large single-sample excursions, one of which is at about 0.83.  Ignoring the fact that the true amplitude and rise-time of this "click" are likely higher than indicated owing to band-pass filtering and the limited sample rate, we see that the ratio between the peak of the "click" and the sine wave is a factor of 27.7:1 - or, converted to a power relationship, almost 29 dB above the CW carrier.

This method of measuring the peak power is not likely to be very accurate, but it is, if anything, under-representing the amplitude of the peak power of this signal.  It's interesting to note that these clicks seem to come in pairs, separated by 12-13 samples (approximately 1 millisecond - about the time that it takes a radio signal to travel 300 km/186 miles) - and this "double pulse" has been observed over several days.  This double pulse might possibly be an echo (ionospheric, ground reflection), but it seems to be too consistent for that.  Perhaps - related to the theoretical possibility of this being some sort of HFT transmission - it may be a means of validation/identification showing that this pulse is not just some random ionospheric event.
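The arithmetic above is easy to reproduce.  The 0.03 and 0.83 amplitudes are simply read off the waveform plot, so treat the results as rough:

```python
import math

# Numbers read off the Audacity plot (Figure 2): carrier ~0.03 full scale,
# click peak ~0.83, clicks paired 12-13 samples apart at a 12 kHz rate.
CARRIER, PEAK = 0.03, 0.83
FS = 12000.0
C_KM_PER_S = 299792.458   # speed of light

ratio = PEAK / CARRIER
ratio_db = 20 * math.log10(ratio)            # voltage ratio -> dB
gap_ms = [n / FS * 1e3 for n in (12, 13)]    # pulse-pair spacing in ms
path_km = [t / 1e3 * C_KM_PER_S for t in gap_ms]

print(f"{ratio:.1f}:1 = {ratio_db:.1f} dB above the carrier")
print(f"gap {gap_ms[0]:.2f}-{gap_ms[1]:.2f} ms ~ {path_km[0]:.0f}-{path_km[1]:.0f} km of path")
```

The ~300 km equivalent path is what makes a consistent single-hop or ground-reflection echo seem plausible at first glance - and the consistency of the spacing is what argues against it.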

Listening to it yourself:

Again, if you wish to listen for it, remember that it is an unmodulated CW carrier (except for the "clicks") and that you should turn all noise blanking OFF.  Using an SSB filter, these clicks are so fast that they may be difficult to hear, particularly if the signal is weak.  So far, it has been spotted on 14.378 and 14.390 MHz (try both frequencies) which means that in USB, you should tune 1 kHz lower (e.g. 14.377 and 14.389) to hear a 1 kHz tone.  Once you have spotted this signal, switching to AM may make hearing the occasional "click" easier. 

Remember that depending on propagation, your location - and your local noise floor - you might not be able to hear this signal at all.  Keep in mind that the HF bands are pretty busy, and there are other signals near these two frequencies with other types of signals (data, RTTY, etc.) - but the one in question seems to be an (almost!) unmodulated carrier.

It's likely that this carrier really isn't several hundred kHz wide, so it may not actually be getting into the top of 20 meters, but the peak-to-average power is so high that it may be audible on software-defined radios:  Because the total signal power across 20 meters may be quite low, the "front end AGC" may increase the RF signal level to the A/D converter and when the "click" from this transmitter occurs, it may cause a brief episode of clipping, disrupting the entire passband.

* * * * *

If anyone has any ideas as to what this might be, I'd be interested in them.  If you have heard this signal and have other observations - particularly if you can obtain a beam heading for this signal, please report them as well in the comments section, below.


This page stolen from


Thursday, November 25, 2021

Fixing the CAT Systems DL-1000 and AD-1000 repeater audio delay boards

Figure 1:
The older DL-1000 (top) and the newer
AD-1000, both after modification.
Click on the image for a larger version.

A few weeks ago I was helping one of the local ham clubs go through their repeaters, the main goal being to equalize audio levels between the input and output to make them as "transparent" as possible - pretty much a matter of adjusting the gain and deviation appropriately using test equipment.  Another task was to determine the causes of noises in the audio paths and other anomalies which were apparent to a degree at all of the sites.

All of the repeater sites in question use CAT-1000 repeater controllers equipped with audio delay boards to help suppress the "squelch noise" and to ameliorate the delay resulting from the slow response of a subaudible tone decoder.  Between the sites, I ran across the older DL-1000 and the newer AD-1000 - but all of these boards had "strange" issues.

The DL-1000:

This board uses the MX609 CVSD codec chip, which turns audio into a single-bit serial stream at 64 kbps using a 4-bit encoding algorithm.  This stream is fed into a CY7C187-15 64k x 1 bit RAM, the "old" audio data being read from the RAM and converted back to audio just before the "new" data is written.  To adjust the amount of delay in a binary-weighted fashion, a set of DIP switches is used to select how much of this RAM is used by enabling/disabling the higher-order address bits.
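The arithmetic behind the delay adjustment is straightforward; a quick sketch (the function name and the assumption that all 16 address bits are switchable are illustrative - the actual DIP switch assignments may differ):

```python
# Delay available from the DL-1000's 64k x 1 RAM at a 64 kbps CVSD bit rate.
RAM_BITS = 65_536   # CY7C187-15: 64k x 1 bit
BIT_RATE = 64_000   # MX609 single-bit serial stream, bits per second

def delay_seconds(enabled_address_bits: int, total_address_bits: int = 16) -> float:
    """Delay when the top (total - enabled) address bits are switched out.

    Each high-order address bit disabled by a DIP switch halves the
    usable RAM - and therefore the delay - in binary-weighted fashion.
    """
    usable_bits = RAM_BITS >> (total_address_bits - enabled_address_bits)
    return usable_bits / BIT_RATE

print(delay_seconds(16))  # full RAM: 1.024 seconds maximum
print(delay_seconds(15))  # top bit switched out: 0.512 seconds
```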

The problem:

It was noticed that the audio from the repeater had a bit of an odd background noise - almost a squeal, much like an amplifier stage that is on the verge of oscillation.  For the most part, this odd audio property went unnoticed, but if an "A/B" comparison was done between the audio input and output - or if one inputted a full-quieting, unmodulated carrier and listened carefully on a radio to the output of the repeater, this strange distortion could be heard.

Figure 2:
The location of C5 on the DL-1000.  A 0.56 uF capacitor was
used to replace the original 0.1 uF (I had more of those than
I had 0.47's) - either one would probably have been fine.
As noted below, I added another to the bottom of the board.
Click on the image for a larger version.

This issue was most apparent when a 1 kHz tone was modulated on a test carrier and strange mixing products could be heard in the form of a definite "warble" or "rumble" in the background, superimposed on the tone. Wielding an oscilloscope, it was apparent that there was a low-frequency "hitchhiker" on the sine wave coming out of the delay board that wasn't present on the input - probably the frequency of the low-level "squeal" mixing with the 1 kHz tone.  Because of the late hour - and because we were standing in a cold building atop a mountain ridge - we didn't really have time to do a full diagnosis, so we simply pulled the board, bypassing the delay audio pins with a jumper.

On the workbench, using a signal tracer, I observed the strange "almost oscillation" on pin 10 of the MX609 - the audio input - but not on pin 7 of U7B, the op-amp driver.  This implied that there was something amiss with the coupling capacitor - a 0.1uF plastic unit, C5, but because these capacitors almost never fail, particularly with low-level audio circuits, I suspected something fishy and checked the MX609's data sheet and noted that it said "The source impedance should be less than 100 ohms.  Output channel noise levels will improve with an even lower impedance."  What struck me was that with a coupling capacitor of just 0.1uF, this 100 ohm impedance recommendation would be violated at frequencies below 16 kHz - hardly adequate for voice frequencies!

Figure 3:
The added 2.2uF tantalum capacitor on the bottom of
the board across C5.  The positive side goes toward
the MX609, which is on the right.
Click on the image for a larger version.

Initially, I bridged C5 with a 0.1uF plastic unit and the audible squealing almost completely disappeared.  I then bridged it with a 0.47uF capacitor, which squashed the squealing sound and moved the 100 ohm point to around 4 kHz, so I replaced C5 with a 0.56uF capacitor - mainly because I had more of those than small 0.47uF units.

Not entirely satisfied, I bridged C5 with a 10uF electrolytic capacitor, moving the 100 ohm impedance point down to around 160 Hz - a frequency below the nominal frequency response of the audio channel - and it caused a minor but obvious quieting of the remaining noise, particularly at very low audio frequencies (e.g. the "hiss" sounded distinctly "smoother").   Because I had plenty of them on hand, I settled on a 2.2 uF tantalum capacitor (100 ohms at 723 Hz) - the positive side toward U2, tacked to the bottom side of the board - which gave a result audibly indistinguishable from the 10 uF.  In this location, a good-quality electrolytic rated at 6.3 volts or higher would probably work as well, but for small-signal applications like this a tantalum is an excellent choice, particularly in harsh temperature environments.
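For reference, the frequency at which each candidate capacitor's reactance falls to the MX609's recommended 100 ohm maximum source impedance follows directly from Xc = 1/(2*pi*f*C); a quick calculation covering the values discussed here:

```python
import math

def freq_at_impedance(c_farads: float, z_ohms: float = 100.0) -> float:
    """Frequency at which a capacitor's reactance equals z_ohms:
    Xc = 1/(2*pi*f*C)  =>  f = 1/(2*pi*Xc*C)."""
    return 1.0 / (2.0 * math.pi * z_ohms * c_farads)

# Capacitor values tried or considered in the text, in microfarads:
for uf in (0.1, 0.47, 0.56, 2.2, 10.0):
    print(f"{uf:5.2f} uF -> 100 ohms at {freq_at_impedance(uf * 1e-6):8.0f} Hz")
```

This confirms the figures quoted above: the original 0.1 uF doesn't reach 100 ohms until about 16 kHz, while 2.2 uF gets there at 723 Hz and 10 uF at about 159 Hz.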

At this point I'll note that any added capacitance should NOT be done with ceramic units.  Typical ceramic capacitors in the 0.1uF range or higher are of the "Z5U" type or similar and their capacitance changes wildly with temperature meaning that extremes may cause the added capacitance to effectively "go away" and the squealing noise may return under those conditions.  Incidentally, these types of ceramic capacitors can also be microphonic, but unless you have strapped your repeater controller to an engine, that's probably not important.

Were I to do this to another board I would simply tack a small tantalum capacitor - anything from 1 to 10 uF, rated for 6 volts or more - on the bottom side of the board, across the still-installed, original C5 (as depicted in Figure 3) with the positive side of the capacitor toward U2, the MX609.


One of the repeater sites also had a "DL-1000A" delay board - apparently a later revision of the DL-1000.  A very slight amount of the "almost oscillation" was noted on the audio output of this delay board, too, but between its low level and having limited time on site, we didn't investigate further. 
This board appears to be similar to the DL-1000 in that it has many of the same chips - including the CY7C187 RAM - but it doesn't have a socketed MX609 on the top of the board and likely has a surface-mount codec on the bottom.  It is unknown whether this is a revision of the original DL-1000 or closer to the DL-1000C, which has a TP4057 - a codec functionally similar to the MX609.

The question arises as to why this modification is necessary at all.  Clearly, the designers of this board didn't pay close enough attention to the data sheet of the MX609 codec, otherwise they would probably have fitted C5 with a larger value - 0.47 or 1 uF would probably have been "good enough".  I suspect that there is enough variation among MX609s - and that the level of this instability is low enough - that it would largely go unnoticed by most, but to my critical ears it was quite apparent when an A/B comparison was done while the repeater was passing a full-quieting, unmodulated carrier, and it was made very apparent when a 1 kHz tone was applied.

* * * * * * * * * * * * * * *

The AD-1000:

This is a newer variant of the delay board that includes audio gating.  It uses a PT2399 - a chip commonly used for audio echo/delay effects in guitar pedals and other musical instrument accessories - which is an integrated audio delay line that includes 44 kbits of internal RAM.

The problems:

This delay board had two problems:  An obvious audio "squeal" - very similar to that of the older DL-1000, but much more audible - and a less obvious problem:  Something that sounded like the "wow" and flutter of an old record on a broken turntable, in that the pitch of the audio through the repeater would warble randomly.  This problem wasn't immediately obvious on speech, but the pitch variation pretty much corrupted any DTMF signalling that one attempted to pass through the system, making remote control of links and other repeater functions difficult.

RF Susceptibility:

Figure 4:
The top of the modified AD-1000 board where the
added 1k resistor is shown between C11/R13 and
pin 2 of the connector, the board trace being severed.
Near the upper-right is R14, replaced with a 10 ohm resistor,
but simply jumpering this resistor with a blob of solder
would likely have been fine.
Click on the image for a larger version.
This board, too, was pulled from the site and put on the bench.  There, the squealing problem did not occur - but this was not unexpected:  The repeater site is in the near field of a fairly powerful FM broadcast transmitter and high-power public safety transmitters, and it was noticed that the squealing changed based on wire dressing and by moving one's hand near the circuit board.  This, of course, wasn't easy to recreate on the bench, so I decided to take a look at the board itself to see if there were obvious opportunities to improve the situation.

Tracing the audio input, it passes through C1, a decoupling capacitor, and then R2, a 10k resistor - and this type of series resistance generally provides pretty good immunity to RF ingress, mainly because a 10k resistor like this has several k-ohms of impedance even at VHF - far higher than any piece of ferrite material could provide!

The audio output was another story:  R13, another 10k resistor, is across the output to discharge any DC that might be there, but the audio then goes through C11, directly to pin 1 of U2, the output of an op-amp.  While this may be common practice under "normal" textbook circumstances, sending the audio out from an op-amp into a "hostile" environment must be done with care:  The coupling capacitor will simply pass any stray RF - such as that from a transmitter - into the op amp's circuitry, where it can cause havoc by interfering/biasing various junctions and upsetting circuit balance.  Additionally, having just a capacitor on the output of an op amp can be a hazard if there also happens to be an external RF decoupling capacitor - or simply a lot of stray capacitance (such as a long audio cable) as this can lead to amplifier instability - all issues that anyone who has ever designed with an op amp should know!

Figure 5:
The added 1000pF cap on the audio gating lead.
A surface-mount capacitor is shown, soldered to the
ground plane on the bottom of the board, but a small disk-
ceramic of between 470 and 1000 pF would likely be fine.
Click on the image for a larger version.
An easy "fix" for this, shown in Figure 4, is simply to insert some resistance on the output lead, so I cut the board trace between the junction of C11/R13 and connector P1 and placed a 1k resistor between these two points:  This will not only add about 1k of impedance at RF, but it will decouple the output of op amp U2 from any destabilizing capacitive loading that might be present elsewhere in the circuit.  Because C11, the audio output coupling capacitor is just 0.1uF, the expected load impedance in the repeater controller is going to be quite high, so the extra 1k series resistance should be transparent.
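The claim that the extra 1k of series resistance is "transparent" can be checked with a quick voltage-divider calculation; a minimal sketch (the load impedances below are assumptions for illustration - the controller's actual input impedance isn't known, but C11 being only 0.1uF implies that it must be fairly high):

```python
import math

def divider_loss_db(r_series: float, r_load: float) -> float:
    """Loss, in dB, of the voltage divider formed by the added series
    resistor and the load impedance that follows it."""
    return 20.0 * math.log10(r_load / (r_series + r_load))

# Assumed load (controller input) impedances, in ohms:
for r_load in (10_000.0, 47_000.0, 100_000.0):
    print(f"{r_load:9.0f} ohm load: {divider_loss_db(1_000.0, r_load):5.2f} dB")
```

Even into a pessimistic 10k load, the added 1k costs less than 1 dB of audio - easily absorbed by the controller's level adjustments.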

Although not expected to be a problem, a 1000pF chip cap was also installed between the COS (audio gate) pin (pin 5) and ground - just in case RF was propagating into the audio path via this control line - this modification being depicted in Figure 5.

Of course, it will take another site visit to reinstall the board to determine if it is still being affected by the RF field and take any further action.

And no, the irony of a repeater's audio circuitry being adversely affected by RF is not lost on me!

The "wow" issue:

On the bench I recreated the "wow" problem by feeding a tone into the board, causing the pitch to "bend" briefly as the level was changed, indicating that the clock oscillator for the delay was unstable as the sample frequency was changing between the time the audio entered and exited the RAM in the delay chip.  Consulting the data sheet for the PT2399 I noted that its operating voltage was nominally 5 volts, with a minimum of 4.5 volts - but the chip was being supplied with about 3.4 volts - and this changed slightly as the audio level changed.  Doing a bit of reverse-engineering, I noted that U4, a 78L05, provided 5 volts to the unit, but the power for U2, the op amp and U3, the PT2399, was supplied via R14 - a 100 ohm series resistor:  With a nominal current consumption of the PT2399 alone being around 15 milliamps, this explained the 1.6 volt drop.

The output at resistor R14 is bypassed with C14, a 33 uF tantalum capacitor, likely to provide a "clean" 5 volt supply by decoupling U3's supply from the rest of the circuit - but 100 ohms is clearly too much when 15 mA is being drawn!  While testing, I bridged (shorted) R14 and the audio frequency shifting stopped with no obvious increase in background noise, so simply shorting across R14 is likely to be an effective field repair.  Because I had some on hand, however, I replaced R14 with a 10 ohm resistor as depicted in Figure 4:  The resulting voltage drop is only a bit more than 100 millivolts, retaining a modicum of power supply decoupling while maintaining the stability of the delay line.
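The arithmetic behind the R14 change is simple Ohm's law; a quick sketch (the 15 mA figure is the PT2399's nominal draw per its data sheet - the op amp and MUX share the same rail and add to this, which is why the measured voltage was nearer 3.4 than 3.5 volts):

```python
SUPPLY_V = 5.0    # output of U4, the 78L05 regulator
I_DRAW_A = 0.015  # ~15 mA nominal for the PT2399 alone; actual total is higher

def volts_at_chip(r14_ohms: float) -> float:
    """Supply voltage reaching U3 after the drop across series resistor R14."""
    return SUPPLY_V - I_DRAW_A * r14_ohms

print(volts_at_chip(100.0))  # original 100 ohm R14: 3.5 V - well below
                             # the PT2399's 4.5 V minimum
print(volts_at_chip(10.0))   # replacement 10 ohm R14: comfortably in spec
```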

Figure 6:
Schematic of the AD-1000, drawn by inspection and with the aid of the PT2399 data sheet.
Click on the image for a larger version.

Figure 6, above, is a schematic drawn by inspection of an AD-1000 board, with parts values supplied by the manual for the AD-1000.  As for a circuit description, the implementation of the PT2399 delay chip is straight from the data sheet, with a dual op-amp (U2) added for both input and output audio buffering; U1, a 4053 MUX, along with Q1 and associated components, implements an audio gate triggered by the COS line.

As can be seen, all active circuits - the op-amp, the mux chip and delay line - are powered via R14 and suffer the aforementioned voltage drop, explaining why the supply voltage to U3 varied with audio content - causing instability in audio frequencies and difficulty in decoding DTMF tones passed through this board - and why, if you have one of these boards, you should make the recommended change to R14!


What about the "wow" issue?  I'm really surprised that the value of R14 was chosen so badly.  Giving the designers the benefit of the doubt, I'll ignore the possibility of inattention and chalk this mistake up, instead, to accidentally using a 100 ohm resistor instead of a 10 ohm resistor - something that might have happened at the board assembly house rather than being part of the original design. 

After a bit of digging around online I found the manual for the AD-1000 (found here) which includes a parts list (but not a schematic) that shows a value of 100 ohms for R14, so no, the original designers got it wrong from the beginning!

While the RF susceptibility issue will have to wait until another trip to the site to determine if more mitigation (e.g. addition of ferrite beads on the leads, additional bypass capacitance, etc.) is required, the other major problems - the audio instability on the DL-1000 and the "wow" issue on the AD-1000 have been solved.

* * * * * * * * * * * * * * *

Comments about delay boards in general:

  • Audio delay boards using the PT2399 are common on EvilBay, so it would be trivial to retrofit an existing CAT controller with one of these inexpensive "audio effects" boards to add/replace a delay board - the only changes being a means of mechanically mounting the new board and, possibly, the need to regulate the controller's 12 volt supply down to whatever voltage the "new" board might require.  Unlike its predecessor, the AD-1000 has an audio mute pin which, if needed at all, could be accommodated by simple external circuitry.
  • In bench testing, the PT2399 delay board is very quiet compared to the MX609 delay board - the former having a rated signal-to-noise ratio of around 90 dB (I could easily believe 70+ dB after listening) while the latter, being based on a lossy, single-bit codec, has a signal-to-noise ratio of around 45 dB - about the same as you'd get with a PCM audio signal path using 8 bit A/D and D/A converters.

A signal-to-noise ratio of around 45 dB is on par with a "full quieting" signal on a typical narrowband FM communications radio link, so the lower S/N ratio of the MX609 as compared with the PT2399 would likely go unnoticed.  Were I to implement a repeater system with these delay boards I would preferentially locate the MX609-based delay boards where their noise contribution would be minimized (e.g. the input of the local repeater) while placing the quieter PT2399-based boards in signal paths - such as a linked system - where one might end up with multiple, cascaded delay lines as the audio propagates through the system.  Practically speaking, only the person with a combination of a critical ear and OCD is likely to even notice the difference!
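The "8-bit PCM" comparison can be sanity-checked against the standard ideal quantization-SNR formula; a minimal sketch (the ~45 dB quoted above is a practical figure, a few dB below the theoretical ideal for a full-scale signal):

```python
def pcm_snr_db(bits: int) -> float:
    """Ideal quantization SNR for an n-bit PCM path driven by a
    full-scale sine wave:  SNR = 6.02*n + 1.76 dB."""
    return 6.02 * bits + 1.76

print(pcm_snr_db(8))   # ~49.9 dB ideal - real-world converter losses easily
                       # bring this down to the ~45 dB quoted above
print(pcm_snr_db(16))  # ~98.1 dB - CD-quality, for comparison
```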
