Thursday, October 22, 2020

Using the jt9 executable to receive FST4W signals


The following post is specific to WSJT-X v2.3.0-rc1.  It may not apply to subsequent versions.

* * *

As a heavy user of K1JT's WSPR mode, operating on the 2200 and 630 meter bands, I have noted with interest the introduction of the "FST4W" mode in the recent (v2.3.0-rc1) WSJT-X release.  When operated in its 120 second mode, FST4W occupies the same detection bandwidth as WSPR while offering a theoretical 1.4 dB improvement in detection sensitivity.

Being involved with wsprdaemon - an open-source project that automates and optimizes reception of WSPR signals on all bands, particularly when multiple receivers and antennas are used - we have been watching this development with interest, particularly since FST4W is likely to supplant conventional WSPR operation on the lowest amateur bands (2200, 630 and possibly 160 meters) where minimal Doppler shift is expected.

Internally, WSJT-X uses the subordinate wsprd program as its WSPR decoding (and encoding) engine.  As a stand-alone program, wsprd may be invoked from the command line to decode signals contained within a .wav file captured during the standard two minute interval - aligned with the even UTC minutes - producing a text file containing the decoded signals.

Why use the executable rather than the entire WSJT-X suite?  The GUI-based suite does not lend itself to script-driven, bare-minimum, lightweight implementations in which further processing of the decoded data is desired - for example, removing duplicate decodes from multiple receivers and antennas, or using the same data for further signal-to-noise analysis.

The "jt9" executable:

After a bit of digging about, it was "discovered" that FST4W - being an offshoot of the JT9 protocol - is handled not by the wsprd executable, but by the jt9 executable.  Simply executing this program with no arguments yields a list of command-line arguments which, on the face of it, made it appear that updating wsprdaemon would be a relatively simple matter.

Except that it didn't work.

Initial testing with strong, off-air FST4W signals that were known to be decodable (because farther-flung stations were able to decode the very same transmissions) yielded no results when the .wav file was fed to the jt9 program - yet automatic execution over many hours yielded the occasional decode.  Confused by this, I sought help on the WSJT-X forum.  Fortunately, Joe Taylor and several of the developers offered a clue:  The "-f" parameter of the jt9 executable, described minimally as "Receive Frequency Offset".

Apparently, the default center frequency of the jt9 executable - at least in FST4W mode (and perhaps others) - is 1500 Hz, a fact implied by the displayed command-line arguments.  What is not so clear - and only alluded to in the available documentation - is that the apparent decoding bandwidth, at least in the 120 second mode, is on the order of 40 Hz (+/- 20 Hz).

From a quick glance through the source code (file "jt9.f90"), this bandwidth setting appears to be hard-coded into a shared variable called "ntol" (apparently accessible by other programs in the WSJT-X suite, and likely corresponding to the "frequency tolerance" setting in the GUI) that is not adjustable from the jt9 command line - at least, not without modification of the source code.  (The possibility of directly accessing these shared variables exists - but this is platform-specific, a bit messy and somewhat dangerous!)

Unfortunately, this fixed +/- 20 Hz bandwidth does not appear to be compatible with the way that the FST4W mode has (already!) found use on 2200 and 630 meters, where it operates alongside the WSPR mode in the 200 Hz subbands.

A hell of a kludge:

This fact implies that in order to use something other than the GUI version of the wsjt-x software, a work-around must be invoked.  The following is a bare-minimum example of how one might do this via the command line:

jt9 -W -p 120 -f 1420 <wav file to be processed> 

jt9 -W -p 120 -f 1460 <wav file to be processed>

jt9 -W -p 120 -f 1500 <wav file to be processed>

jt9 -W -p 120 -f 1540 <wav file to be processed>

jt9 -W -p 120 -f 1580 <wav file to be processed>

(One might include the -H, -L and -d parameters in actual practice.)

In other words, in order to cover the entire 200 Hz WSPR subband, the jt9 executable (v2.3.0-rc1) must be run against the same .wav file at least five times, the results of each decoding pass being written to the file "decoded.txt".  If one wishes to implement an equivalent of the -w parameter of the wsprd executable (e.g. the +/- 150 Hz "wideband" mode), even more invocations are needed.

The result of the above mess will be five different decoding results, each of which must be saved (e.g. renamed) between executions to prevent each run's output from being overwritten by the next.  After this, the five results must be concatenated into a single file - but there is a catch:  It is likely - particularly if the signal is strong - that the same signal will be decoded more than once.  Apparently, the "+/- 20 Hz" limit isn't the result of a "brick wall" filter:  Signals beyond this frequency range may be decoded, but the reported S/N values will be reduced as the distance of the received signal from the specified center frequency increases.  In short, this means that the concatenated "decoded" file(s) must be sorted and all but the single, strongest decode for each station discarded.
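The invocations, renaming and de-duplication described above can be scripted.  The following Python sketch is one way to do it; note that the decoded.txt column layout it assumes (S/N as the second whitespace-delimited field, callsign as the fifth) is an assumption for illustration and should be verified against your own jt9 output:

```python
import shutil
import subprocess

# -f center frequencies, 40 Hz apart, covering the 200 Hz WSPR subband
CENTERS = [1420, 1460, 1500, 1540, 1580]

def best_decodes(lines):
    """Keep only the strongest decode per callsign.  ASSUMPTION: S/N is
    the second whitespace-delimited field, the callsign the fifth."""
    best = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 5:
            continue
        try:
            snr = int(fields[1])
        except ValueError:
            continue
        call = fields[4]
        if call not in best or snr > best[call][0]:
            best[call] = (snr, line)
    return [entry[1] for entry in best.values()]

def decode_wav(wav, jt9="jt9"):
    """Run jt9 once per center frequency, renaming decoded.txt after each
    run so the next invocation does not clobber it."""
    lines = []
    for f in CENTERS:
        subprocess.run([jt9, "-W", "-p", "120", "-f", str(f), wav],
                       check=True)
        shutil.move("decoded.txt", f"decoded_{f}.txt")
        with open(f"decoded_{f}.txt") as fh:
            lines.extend(fh.read().splitlines())
    return best_decodes(lines)
```

In actual practice one would add the -H, -L and -d parameters to the jt9 invocation, as noted above.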

If one wishes to integrate the FST4W decodes into the existing WSPR captures for processing, yet another step must be undertaken:  "Fixing" the formatting.  Not surprisingly, the output in "decoded.txt" is not formatted the same as the output of the wsprd executable, meaning that one will need to do a few things, after the fact, to "fix" it - particularly if the spots are to be forwarded elsewhere (e.g. to a WSPR spot database) - including:

  • Supply the date.  "decoded.txt" includes the time - but not the date.  Because the date of the .wav file may not be the same as the system date (e.g. later processing of the .wav files, or the processed interval ending just before a new day) one must use the actual date of the recording.  The obvious place to obtain this is the name of the .wav file being processed.
  • Calculate the carrier frequency.  An uploaded spot must include the carrier frequency of the received signal, but the "decoded" file contains only the audio frequency:  One must obtain the LO (dial) frequency of the receiver being used from "somewhere else" and calculate the carrier frequency on the fly.
  • Supply missing information.  The "decoded.txt" file does not contain all of the information fields that one might supply when uploading WSPR spots, so this information must be added as necessary.
  • Arrange the fields in the proper order.  Once the needed information is applied, one will probably want to use "awk" or similar to produce the same field order as the wsprd data - assuming this wasn't already done in the process.
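As a sketch of those four steps, the fragment below converts one decoded.txt line into a wsprd-like record.  The .wav filename convention (YYMMDD_HHMM.wav), the decoded.txt column layout and the output field order are all assumptions chosen for illustration - adapt them to whatever your own capture scripts actually produce:

```python
import re

def fixup(line, wav_name, dial_freq_hz):
    """Add the date (taken from the .wav filename rather than the system
    clock) and convert the audio offset into a carrier frequency in MHz."""
    m = re.match(r"(\d{6})_(\d{4})\.wav$", wav_name)
    if not m:
        raise ValueError("unexpected .wav filename: " + wav_name)
    date = m.group(1)              # YYMMDD, from the recording itself
    fields = line.split()          # time, snr, dt, audio_freq, message...
    carrier_mhz = (dial_freq_hz + float(fields[3])) / 1e6
    # Rearrange into a wsprd-like order: date, time, snr, dt, freq, message
    return " ".join([date, fields[0], fields[1], fields[2],
                     f"{carrier_mhz:.6f}"] + fields[4:])
```

For example, a 630 meter decode at an audio offset of 1492 Hz with a dial frequency of 474200 Hz becomes a spot at 0.475692 MHz.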

* * *

There you have it:  The germ of what would be needed if one wishes to supplement the existing WSPR decodes with the newer FST4W mode using just the bare executables.  If one wishes to decode other than the 120 second FST4W mode, things get even more complicated!

* * *

P.S.:  While it would be pretty trivial to tweak the code to allow modification of the ntol variable via the command line, maintaining such a patch would complicate the ongoing maintenance of the wsprdaemon code.  We can only hope that the current authors see fit to include a means by which the entire WSPR subband can be monitored with a single invocation of the jt9 executable.



Saturday, September 26, 2020

Revisiting the "Limited Attenuation High Pass" filter - again.

In several previous posts (see "A Limited Attenuation High Pass Filter" and "Revisiting the Limited Attenuation High Pass Filter") I described a "high pass" filter that offered low attenuation at the higher HF frequencies, but a controlled amount of attenuation at lower frequencies - this, to accommodate a fundamental fact about both HF propagation and direct-sampling Software Defined Radios (SDRs):  The two don't play nicely with each other!

As noted in the previous post(s), the problem is two-fold when it comes to broad-band SDRs that are intended to cover the entire HF spectrum all at once:
  • HF noise power and signal level is (generally) inversely proportional to frequency.  At lower frequencies - say, 2-8 MHz - the noise power is far higher than it typically is at around 20-30 MHz.
  • A direct-sampling SDR - or any receiver, for that matter - can tolerate only so much RF power at its front end.  Traditionally, this is mitigated with the use of narrow-band RF band-pass filters, but that can't be done if one intends to cover the amateur bands from 160 through 10 meters (1.8-30 MHz) all at once.
Added to the aforementioned issues is yet another one:  Because the noise floor at 10 meters when the band is "quiet" is so much lower than at 80 meters (perhaps 40-50 dB during noisy nighttime conditions, 25 dB or so during quiet daytime conditions) there is an intrinsic disparity between the amount of sensitivity needed to "hear everything" at the opposite ends of the HF spectrum - but since a typical direct-sampling SDR is pretty much "flat", we end up with what might seem like a pair of intractable problems:
  •  To accommodate the very strong signals and high noise levels at lower HF frequencies, the RF signal gain in front of the A/D converter must be carefully set to prevent overload.
  • In order to "hear" the noise floor at 10 meters, the system gain must be set fairly high.

What these two factors, together, imply is that if we have enough gain to comfortably detect the noise floor at 10 meters, our receiver will be badly overloaded during strong-signal conditions on the lower bands.  Conversely, if we scale (e.g. attenuate) the input to accommodate the very large signal excursions, the receiver will simply be unable to detect signals at/near the "quiet" 10 meter noise floor.

There will (hopefully) be the day that the upper HF propagation conditions improve greatly with the arrival of solar cycle 25 and at that time, strong signals will appear on the bands >=15 MHz.  When this happens, we will likely be faced with a problem similar to that which we are trying to solve here (e.g. very strong signals overloading the A/D converter).  At this time, the only recourse will likely be a means of using an external device to adjust the gain/attenuation in front of the receiver, probably using the existing I/O lines under receiver control.

A revised circuit:

Why talk about this issue a THIRD time?  I decided to make a version that provides a better 50 ohm match across all frequencies than the previous ones.  This revised circuit may be seen in the figure below:

Figure 1:
Generic pre-emphasis network set for about 50 ohms.
Click on the image for a slightly larger version.

Some readers will recognize the topology of the circuit in Figure 1 as the classic pre-emphasis network found in the signal path of FM video transmitters.  Whereas those circuits are typically designed for 75 ohms, this one is intended for a 50 ohm system - but careful observers will notice that 47 ohm resistors are used instead:  For receive-only purposes, I have chosen standard-value components at the expense of a slight increase in mismatch - but the VSWR of these circuits, when terminated in 50 ohms, is likely to be no more than about 1.1:1.

This circuit - compared with the previous versions - has the advantage that it presents a consistent source and load impedance across the frequency range, making it a bit more "friendly" in systems that may be impedance sensitive (e.g. following a band-pass filter, long coaxial cable runs, following/preceding conditionally-stable RF amplifiers.)  The obvious trade-off is that as compared to the previous version (which was based on a high-pass filter and some resistive bypassing) this circuit has definite limitations on how sharp and deep the "knee" may be at any given frequency as only a single inductor and capacitor are used.

By tweaking the values of R1, R4, C1 and L1 we can adjust both the amount of low-frequency attenuation and the frequency of the "knee" where the attenuation takes place - but for our purposes, we will be placing the center of that "knee" around 10 MHz to provide both the minimal loss at 30 MHz and adequate attenuation at and below 7 MHz.

Here are a few examples of values of R1, R4, C1 and L1 using standard-value components and approximate attenuation values at various frequencies:

R1 = 68 ohms    R4 = 39 ohms
C1 = 390 pF     L1 = 1 uH
DC attenuation:  7.3 dB
@ 2 MHz: 7.0 dB     @ 4 MHz: 6.0 dB
@ 7 MHz: 4.6 dB     @ 10 MHz: 3.4 dB
@ 14 MHz: 2.3 dB    @ 28 MHz: 0.8 dB

R1 = 120 ohms   R4 = 20 ohms
C1 = 330 pF     L1 = 0.82 uH
DC attenuation:  10.8 dB
@ 2 MHz: 9.8 dB     @ 4 MHz: 8.1 dB
@ 7 MHz: 5.6 dB     @ 10 MHz: 3.9 dB
@ 14 MHz: 2.5 dB    @ 28 MHz: 0.8 dB

R1 = 120 ohms   R4 = 20 ohms
C1 = 270 pF     L1 = 0.68 uH
DC attenuation:  10.8 dB
@ 2 MHz: 10.1 dB    @ 4 MHz: 8.7 dB
@ 7 MHz: 6.5 dB     @ 10 MHz: 4.8 dB
@ 14 MHz: 3.3 dB    @ 28 MHz: 1.2 dB

R1 = 100 ohms   R4 = 27 ohms
C1 = 270 pF     L1 = 0.68 uH
DC attenuation:  9.4 dB
@ 2 MHz: 8.9 dB     @ 4 MHz: 8.0 dB
@ 7 MHz: 6.3 dB     @ 10 MHz: 4.8 dB
@ 14 MHz: 3.6 dB    @ 28 MHz: 1.3 dB

Figure 2:
Table showing some possible values for the circuit of Figure 1 and the example attenuation values.
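As an aside, the values in the table are consistent with a constant-resistance bridged-T network (R1*R4 close to Z0 squared, and L1 close to Z0 squared times C1), in which case the insertion loss reduces to 20*log10|1 + Zb/Z0| with Zb being R1 in parallel with C1.  Since Figure 1 isn't reproduced here, treat that topology as an inference from the component values rather than a statement of the actual schematic; the Python sketch below reproduces the table's numbers to within about 0.1 dB (the small difference in DC attenuation comes from the 47 ohm resistors versus an ideal 50):

```python
import math

def loss_db(f_hz, r1_ohms, c1_farads, z0=50.0):
    """Insertion loss of a constant-resistance bridged-T whose bridge arm
    is R1 || C1 (the shunt arm R4 + L1 is implied by R1*R4 = Z0^2 and
    L1 = Z0^2 * C1, so it drops out of the loss expression)."""
    w = 2 * math.pi * f_hz
    zb = 1 / (1 / r1_ohms + 1j * w * c1_farads)  # R1 in parallel with C1
    return 20 * math.log10(abs(1 + zb / z0))

# First column of the table: R1 = 68 ohms, C1 = 390 pF
for f in (2e6, 4e6, 7e6, 10e6, 14e6, 28e6):
    print(f"@ {f/1e6:>2.0f} MHz: {loss_db(f, 68, 390e-12):.1f} dB")
```

Adjusting R1 and C1 (and the implied R4 and L1) moves both the amount of low-frequency attenuation and the "knee" frequency, as described above.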


In practice, several of these sections will likely need to be cascaded to achieve the desired amount of attenuation at the lower HF frequencies, which brings up the question:  Could one not choose components to do this with a single section?  Theoretically, yes - but practical inductors - particularly the molded type - are quite lossy, and achieving large amounts of low-frequency attenuation in a single stage can become problematic, so it is probably better to cascade several of these networks instead.

A practical example:

Figure 3:
The exterior of the four channel filter network.
Click on the image for a larger version

A practical example of such a network is one that is to be installed in the KFS (Half Moon Bay, CA) KiwiSDR/WSPRDaemon system.  There, four wideband antennas are available to feed the KiwiSDRs on site, so a box was constructed with four, identical pre-emphasis networks, each to feed its own receiver stack.

As is the case at the Northern Utah WebSDR, noise and signals at the lower end of the HF spectrum are often very much stronger than at the high end:  If amplification is added to allow detection of the noise floor at 10 meters, there is a very high probability that the receiver will be badly overloaded by HF signals from the lower end of the spectrum.

Each "channel" of the device depicted in Figure 3 is identical, consisting of two cascaded sections:  One from the upper-left quadrant of the table (R1 = 68 ohms, C1 = 390 pF) and one from the upper-right quadrant (R1 = 120 ohms, C1 = 330 pF).  Rather than using molded chokes, the individual inductors were wound using 30 AWG wire on T25-2 toroids:  17 and 15 turns for the 1 uH and 0.82 uH inductors, respectively.
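As a sanity check on the winding counts:  For a powdered-iron toroid, L is approximately AL * N^2, and the published AL of a T25-2 core is about 3.4 nH per turn squared (34 uH per 100 turns).  Treat that figure as nominal - core-to-core variation and winding spacing easily account for a turn either way:

```python
import math

def turns_for(l_nh, al_nh_per_t2=3.4):
    """Turns needed for a target inductance: N = sqrt(L / AL),
    with L in nH and AL in nH per turn squared."""
    return math.sqrt(l_nh / al_nh_per_t2)

print(round(turns_for(1000)))   # about 17 turns for 1 uH
print(round(turns_for(820)))    # 15-16 turns for 0.82 uH
```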

Figure 4:
The interior of the four-channel network.
The circuit is simple enough to be wired "Manhattan"
style on glass-epoxy PC board material between the
two center pins of the BNC connectors.
Click on the image for a larger version

As can be seen in Figure 4, the construction is very simple, requiring no circuit board at all when using standard, through-hole components.  The circuit was built into a die-cast aluminum box with the BNC connectors holding the piece of PCB material in place.

To secure the components - particularly the small, toroidal inductors - RTV sealant (white) was used to hold components in place and to prevent adjacent wires of C1/R1 and R2/R3 from coming into contact with each other.

This method of construction is very simple and effective, offering good performance into the VHF range when reasonable care is taken.  With the 20mm high dividers between the sections installed as shown, the channel-to-channel isolation exceeded 85dB (the limit of convenient measurement) at 30 MHz.

Figure 5, below, shows the typical response of the sections:

Figure 5:
The response of one of the sections as measured on a DG8SAQ VNA.
Click on the image for a larger version.

Because it can be a bit difficult to read, the values of attenuation and VSWR in the upper-left corner are reproduced below:

Frequency (MHz)   Insertion Loss (dB)   VSWR
 0.474            21.4                  1.09
 1.812            19.9                  1.09
 3.592            16.6                  1.08
 5.324            13.4                  1.08
 7.038            10.8                  1.08
10.12              7.4                  1.07
14.06              4.7                  1.07
18.16              3.2                  1.07
21.08              2.4                  1.07
24.94              1.8                  1.09
28.18              1.4                  1.10
50.0               0.4                  1.19
Figure 6:
Attenuation and VSWR of the network at amateur band frequencies.

Practical usage:

For large, broadband antennas the tendency will be toward a relatively "flat" frequency response - but with a small, active E-field whip antenna, the typical high-frequency roll-off can exacerbate the aforementioned low-HF-band overload issue, making a filter network such as this one even more indispensable.  While an attenuation value of about 17 dB at 80 meters may seem rather extreme, unless your antenna system has severe roll-off at the low end, the noise floor on 80 meters - even during a quiet winter day when the band is dead - should still be at least several dB above the receiver's noise floor.

 For specifics relating to a wideband direct-sampling SDR like the KiwiSDR or Red Pitaya, refer to the earlier article linked above - "A Limited Attenuation High Pass Filter".


As mentioned above, a direct-sampling receiver like the KiwiSDR does not have enough sensitivity to "hear" the 10 meter quiet band noise floor at a very quiet receive site. In terms of overall system gain adjustment, a few comments are warranted:
  • A good test is to see if, on 10 meters when it is "dead", you are hearing your local noise floor.  Note the S-meter with the antenna connected and disconnected - preferably, with the input to the receive system terminated with a 50 ohm load when disconnected.  If you do not see an increase in the S-meter reading and on the waterfall by 3-5 dB, the overall system gain is too low to allow the receiver to see the noise floor at your antenna system.
  • If you do not see an increase in noise when the receiver is connected to an antenna, a bit of extra gain is recommended.  Given an ideal isotropic antenna at a very quiet receive site, it will probably take about 12 dB of gain to comfortably "see" the antenna's noise floor - assuming no other losses (coax, splitter, etc.)
  • The preferred location of an amplifier is after the filter described above as it, too, will be protected against the very strong lower-frequency HF signals - even though a device like the above will increase the loss by about 1.4dB.
  • In cases where there are splitting losses (e.g. feeding multiple receivers) it may be beneficial to split the gain.  A modest-gain amplifier (10-14dB) might precede the splitters - the modest gain being enough to overcome splitting losses and to maintain system noise figure.
  • In the case of a low noise level receive site, the splitting losses may put the 10 meter noise floor below the detection threshold of the receiver and, if necessary, another amplifier may be placed just after the filter described above to make up for it.
  • It's worth noting that if you can detect a 3-5 dB increase in the noise floor with the antenna connected (versus disconnected) then even more gain will NOT further improve system performance:  On the contrary, more gain than necessary will increase the probability of receiver overload - particularly on a direct-sampling SDR like the KiwiSDR that has no AGC in its signal path.  If there is more than a 3-5 dB noise floor increase with the antenna connected on 10 meters when the band is quiet, it's suggested that several dB of attenuation be added.  The preferred place for this attenuation is in front of the amplifier to maximize its strong-signal handling - but only if one can still detect the antenna's noise floor after doing so.  If one has a very high-gain amplifier (say, 20-25 dB) and the gain is excessive, judicious addition of attenuation at both the input and the output of the amplifier may be required.
  • When an amplifier is to be considered for HF use, it should have clearly-defined ratings - one of the most important of these is the output power capability (often "P1dB" which is the output power at 1dB compression) which should be in excess of +20dBm.  Second to this would be the 3rd order intercept point, which should be stated as being in excess of +30dBm - and the higher the better.  Both of these parameters are indicative of an amplifier that can deal with multiple, strong signals that may be present at the antenna.
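The bullet points above amount to a simple gain budget:  Add just enough gain that the antenna's noise floor lands 3-5 dB above the receiver's own.  A minimal sketch, with illustrative (not measured) figures of roughly -156 dBm/Hz for the receiver's noise floor and -164 dBm/Hz for quiet 10 meter noise at the antenna:

```python
def required_gain_db(rx_floor_dbm_hz, antenna_noise_dbm_hz,
                     margin_db=4.0, losses_db=0.0):
    """Gain needed so the antenna noise sits margin_db above the
    receiver's own noise floor, after coax/splitter losses."""
    return max(0.0, rx_floor_dbm_hz + margin_db + losses_db
               - antenna_noise_dbm_hz)

# Illustrative figures only - substitute your own measurements:
print(required_gain_db(-156.0, -164.0))                  # no losses
print(required_gain_db(-156.0, -164.0, losses_db=7.0))   # with splitter loss
```

With these assumed figures the no-loss case works out to about 12 dB, in line with the figure mentioned above; splitter losses simply add to the requirement.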



Tuesday, September 15, 2020

Comparing the "KiwiSDR" and the "RaspberrySDR" software-defined receivers

Update (20201002): 

The RaspberrySDR schematic and a fork of the source code is now available - see the end of this article for additional analysis.

Any reader who has perused these blog pages will be aware that I have been using the KiwiSDR for some time now (I personally own four of them and I manage two more!) and have been happy with their performance, finding various ways to maximize their usefulness.  I was intrigued when a "similar" device appeared that might prove to be useful - the "RaspberrySDR".

Figure 1:
The exterior of the RaspberrySDR.
The case is well-built and compact, housing both the
SDR board and a Raspberry Pi 3+.
Click on the image for a larger version.


For those not familiar with the KiwiSDR, it is a Linux-based, stand-alone software-defined radio capable of receiving from (nearly) DC to at least 30 MHz using a variety of modes (SSB, AM, FM, Synchronous AM) and has several "extensions" that allow reception of several digital modes - including CW, RTTY, and WSPR - as well as provide a means of viewing FAX transmissions and SSTV.  It also includes a provision for TDOA (Time Difference of Arrival) determination of transmitter location in conjunction with other similarly-equipped receivers.

This receiver does not have a front panel, but rather it is entirely used via a web interface.  What this means is that it may be used remotely, by several people, simultaneously - each person getting their own, virtual receiver that they may independently tune.

Originally introduced as a Kickstarter project around 2016, the hardware has been augmented with continually-improved open-source software, the lion's share of the work having been done by John Seamons.  Using a 14 bit A/D converter clocked at about 66.66 MHz, an FPGA (Field Programmable Gate Array) and a GPS front-end chip, most of the number-crunching is done before the data is handed off to a single-board computer - originally the BeagleBone Green, but now also the BeagleBone AI (BBAI):  Both the KiwiSDR receiver board and the BeagleBone Green (BBG) have been sourced by Seeed Studios.

For a variety of reasons, the supply of KiwiSDRs has been a bit fickle - due both to Seeed's limited capacity in response to demand and to issues which have impacted the supply of some critical parts.  Another possible factor may be a bit of "fatigue" on the part of some of the key people behind the KiwiSDR:  Careful readers of blog entries from several years ago will see that a similar thing happened to me on a project on which I had previously worked - coincidentally, also an open-source SDR device.

Figure 2:
Inside the case showing the "RaspberrySDR" board.  The physical layout is quite similar to that of the KiwiSDR.  The small heat sink is affixed atop the LTC2208 A/D converter and the fan is controlled by a simple transistor circuit on the acquisition board.  Unlike the KiwiSDR, only one row of headers (top) is used to connect to the host computer, and power is supplied via the host (Raspberry Pi) processor.   In noting the logo on the board, I can't help but wonder if its intent was that of parody, along the lines of the fair use doctrine?
Click on the image for a larger version.


The present day:

Because the KiwiSDR is based on open-source design of its hardware and software - ostensibly to encourage participation in enhancement of all aspects of its design - one may freely copy it within the constraints of the open-source license.  It is not surprising, then, that several derivative versions have recently appeared on the scene, more or less following the "open source" philosophy - a topic that will be discussed later.

Using the base code found on GitHub and the openly-published schematics as a starting point, the "RaspberrySDR" has appeared - using, as you may have surmised, the Raspberry Pi - specifically the Raspberry Pi 3B+.  This single-board computer is of similar size to the BeagleBone Green with roughly similar - if somewhat greater - capabilities, and is certainly better-known than the other fruit, so it was a natural choice:  The hardware interface between it and the receiver board is fairly trivial to adapt with a simple "conversion" board.

Also to be expected, a revised board (like the one pictured above) specifically designed to interface with the Raspberry Pi has appeared from Chinese sellers, packaged with a Raspberry Pi in a small aluminum enclosure (with an external fan) for approximately $100 less than the KiwiSDR + enclosure combination.  This revised version uses a faster (125 MHz versus 66.66 MHz) and higher-resolution (16 bits versus 14 bits) A/D converter, extending its receive range to a bit over 60 MHz - including the 6 meter amateur band - with the potential for improved receive performance in terms of dynamic range and distortion.


Having gotten my hands on one of these "RaspberrySDRs" - and already having available some KiwiSDRs for testing, I decided to put them side-by-side to compare the differences - specifically, to measure:

  • Apparent noise floor and sensitivity
  • Appearance of spurious signals
  • Large signal handling capability
  • Image response (Nyquist filtering)


Noise floor comparisons of the KiwiSDR using the BBAI and the RaspberrySDR using the Raspberry Pi3B+:

The KiwiSDR using the BeagleBone AI:

Figure 3 shows the noise floor of a KiwiSDR using the Beaglebone AI with no connected antenna.  As is typical of this device, there is a slight increase in the noise floor starting around 18 MHz:  The reason for this is unknown, but it is surmised that this is intentional - an artificial boost in "software" that is used to compensate somewhat for the Sin(x)/x roll-off intrinsic to any analog-to-digital sampling scheme as one approaches the Nyquist limit - which, given the 66.66 MHz sampling rate of the KiwiSDR, would be 33.333 MHz.

This assumption would appear to be supported by the fact that as one approaches the Nyquist frequency, the S-meter reading does not drop as one might expect, but remains fairly constant and appears to be close to +/- 1 dB from below 1 MHz to 30 MHz as shown in the data below.

Figure 3:
The noise floor of the KiwiSDR running on a BeagleBone AI (BBAI).
Note the slight rise around 18 MHz - the possible result of the data being "cooked" in the pipeline to offset Sin(x)/x losses near the Nyquist limit.
Click on the image for a larger version.

In numerical form, the measured noise floor of the KiwiSDR/BBAI combination using a 10 kHz AM bandwidth is:

  • -117dBm @ 1 MHz
  • -117dBm @ 5 MHz
  • -116dBm @ 10 MHz
  • -116dBm @ 15 MHz
  • -114dBm @ 25 MHz
  • -115dBm @ 30 MHz (29.9 MHz)
A broad 3dB peak is indicated in the noise floor:  We will attempt to determine if this peak is "real" in our later analysis.

The RaspberrySDR using the Raspberry Pi3+:

The RaspberrySDR has a 125 MHz sampling clock and a 16 bit A/D converter, so the landscape looks a bit different:  It can (theoretically) receive to 62.5 MHz, but is limited to 62.0 MHz in firmware.  Comparing the 0-30 MHz noise floor to that of the KiwiSDR in Figure 3, we can see some interesting differences in Figure 4:

Figure 4:
Noise floor of the RaspberrySDR running on a Raspberry Pi3+.  There is a similar rise in frequency - although it looks a bit different.
Click on the image for a larger version.

We can see a similar rise in the noise floor, but we get the impression that limiting our range to just 30 MHz hides its nature, so Figure 5 shows the noise floor over the full frequency range of 0-62 MHz:

Figure 5:
Noise floor of the same receiver as depicted in Figure 4, but showing the full 0-62 MHz frequency range.  Very evident is a rise in the noise floor centered at approximately 36 MHz.  This spectrum is unchanged if the SPI frequency is changed from the default 48 to 24 MHz.
Click on the image for a larger version.

In comparison, the noise floor of the RaspberrySDR+Raspberry Pi3+ as measured using AM with a 10 kHz bandwidth is as follows:

  • -118dBm @ 1 MHz
  • -118dBm @ 5 MHz
  • -118dBm @ 10 MHz
  • -116dBm @ 15 MHz
  • -115dBm @ 25 MHz
  • -112dBm @ 30 MHz
  • -113dBm @ 40 MHz
  • -116dBm @ 50 MHz
  • -116dBm @ 60 MHz

The magnitude of the broad peak is more significant than on the KiwiSDR - on the order of 6 dB rather than 3 dB.  Because the RaspberrySDR is based on the (open source) KiwiSDR, it is expected that the RF processing will be similar - and this would appear to be borne out by the presence of the rise in the noise floor, centered around approximately 36 MHz.  It seems likely that the adaptation to the higher sample rate hardware is not fully realized:  The pre-emphasis in firmware - if it exists - is not properly implemented, as evidenced by the numbers.

Comparison of S-meter calibrations:

To provide an additional data point in our measurements, we'll check the S-meter calibration.  For our purposes we will use a frequency of 10 MHz and a level of -50 dBm as our reference, as that appears to be below the apparent amplitude peaking of either type of receiver.  Using a known-consistent signal source, the results are as follows:

Frequency   KiwiSDR with BeagleBone AI   RaspberrySDR with Raspberry Pi3+
 1 MHz      -51dBm                       -50dBm
 5 MHz      -51dBm                       -50dBm
10 MHz      -50dBm                       -50dBm
15 MHz      -49dBm                       -49dBm
20 MHz      -48dBm                       -48dBm
25 MHz      -48dBm                       -46dBm
30 MHz      -50dBm                       -45dBm
50 MHz      ---                          -50dBm
60 MHz      ---                          -57dBm

The effects of what appears to be pre-emphasis can clearly be seen:  The Sin(x)/x roll-off of the A/D converter seems to have been more-or-less compensated on the KiwiSDR, but the attempt to do this seems to be misapplied on the RaspberrySDR.  Based on the apparent noise floor and the absolute response to the -50 dBm signals, we can estimate the absolute sensitivity of the KiwiSDR and RaspberrySDR on the various bands with simple math.

Frequency   KiwiSDR noise floor (dBm/Hz)   RaspberrySDR noise floor (dBm/Hz)
 1 MHz      -158dBm                        -158dBm
 5 MHz      -158dBm                        -158dBm
10 MHz      -156dBm                        -158dBm
15 MHz      -157dBm                        -157dBm
25 MHz      -156dBm                        -159dBm
30 MHz      -155dBm                        -157dBm
50 MHz      ---                            -156dBm
60 MHz      ---                            -111dBm

The chart above compares the apparent noise floor of both receivers, compensating for the 10 kHz AM detection bandwidth (40dB) and the measured offset of the S-meter at each frequency.
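The arithmetic behind that chart is simple:  Subtract 10*log10(bandwidth) to refer a power reading to a 1 Hz bandwidth, then correct for the measured S-meter offset at that frequency.  (The sign convention on the offset term below is an assumption - check it against a known-level signal.)

```python
import math

def dbm_per_hz(reading_dbm, bw_hz=10e3, smeter_offset_db=0.0):
    """Refer a noise reading taken in bw_hz to a 1 Hz bandwidth, then
    apply the S-meter offset measured at that frequency."""
    return reading_dbm - 10 * math.log10(bw_hz) - smeter_offset_db

print(dbm_per_hz(-117))   # -157 dBm/Hz before any offset correction
```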

It is worth noting that above about 20 MHz - and given zero line and antenna losses - neither receiver has sufficient sensitivity to detect the expected ITU noise floor given a unity gain antenna:  Approximately 10dB of additional low-noise gain is required at 30 MHz to "hear" the noise floor in that case and even more gain would be appropriate at 6 meters.

This very topic has been discussed at this blog in the past - see the blog post "Limited Attenuation High-Pass filter" - LINK and its follow-up article "Revisiting the Limited Attenuation High Pass Filter" - Link - and their related articles for a discussion. 

For information about the expected ITU noise floor under various "idealized" conditions, see the article:  Recommendation ITU-R P.372-8 link.


Overload signal level comparison:

Another test was done:  Determining the RF level at which the "OV" message on the S-meter would show, indicating overload of the A/D converter.  This test was done on both units at 10 MHz - the same frequency at which the S-meter was calibrated - and in the "Admin" tab each unit was configured so that just one "OV" occurrence per 64k cycles would be detected.

KiwiSDR OV indication:

The KiwiSDR's "OV" indication just started to indicate at -14dBm.

RaspberrySDR OV indication:

The RaspberrySDR's "OV" indication just started to indicate at -9dBm.

The apparent difference between these is 5dB.

"Wait - shouldn't there be another 12dB of dynamic range with two more bits of A/D resolution?"

In theory, two additional bits of A/D conversion should yield an additional 12 dB of dynamic range - but this is not readily apparent in the numbers given above (at 10 MHz, 142dB between the noise floor and the "OV" indication for the KiwiSDR and 149dB for the RaspberrySDR) - so what's the deal?
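As a sanity check on the "12 dB per two bits" figure, the theoretical SNR of an ideal N-bit converter (quantization noise only) is 6.02·N + 1.76 dB; a quick sketch:

```python
def ideal_adc_snr_db(bits):
    """Theoretical SNR of an ideal ADC, quantization noise only: 6.02*N + 1.76 dB."""
    return 6.02 * bits + 1.76

# Difference between a 16 bit and a 14 bit converter:
print(round(ideal_adc_snr_db(16) - ideal_adc_snr_db(14), 2))  # 12.04
```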

First off, all things being equal (e.g. the same reference voltage for the A/D converter) one would expect the additional range to occur at the bottom of the signal range rather than the top, but this difference can be a matter of scaling via careful adjustment of the amount of amplification preceding the A/D converter and how the code is written.

Ideally, one would carefully balance the signal path so that the intrinsic noise of the amplification preceding the A/D converter is comparable to the signal level required to "tickle" an LSB (Least Significant Bit) or two with no signal applied:  A higher level than this risks "wasting" dynamic range on internal noise.  Judging by the "even-ness" of the noise across the spectrum, I suspect that the output of the input amplifier is enough to light up at least two LSBs of the A/D converter.  If the signal path were highly "gain starved", low-level spurious signals would likely appear when very weak signals were applied:  This sort of distortion - the result of the A/D converter being too-lightly driven - can be witnessed when using a receiver like the RTL-SDR in "direct" mode, where increasing the signal level to the point where signals just start to appear also brings up many spurious signals.

It's possible that the "extra" 5 dB at the high end of the signal range is real and that the signal dynamics have been juggled a bit, with some of the extra 2 bits' worth of range being present at the bottom end - but this would be difficult to divine without more thorough testing and without the availability of a schematic diagram.  My preference would have been to have the unit slightly "gain starved" so that the LSBs of the A/D converter would be subject to the action of external amplification, providing maximum flexibility when it comes to managing the signal path.


Note:  The schematic of the RaspberrySDR has become available since this was posted - see the analysis at the bottom of this article.

Without the availability of a schematic diagram of the front end of the receiver there are several unknowns:

  • The LTC2208 has a pin that indicates an overload condition.  It is presumed that, in spite of other issues with the firmware (see below), the "OV" indicator is working properly.
  • The LTC2208 A/D converter has a low-level dither generator built into it:  It is unknown if this feature is active.
  • Shorting the RF signal path at the input of the A/D converter - eliminating the contribution of the pre-converter amplification - would be instructive in ascertaining the noise contribution from that device.
  • Probing the input of the A/D converter at/near overload - to divine the actual range of the receiver itself and determine whether the full dynamic range of the 16 bits is properly utilized - would be revealing.
  • The LTC2208 has a programmable gain amplifier and full-scale input voltage may be selected as being either 1.5 or 2.25 volts:  The hardware configuration is unknown.
  • At the time of writing this, I have not found in the code any modification of the FPGA image that takes advantage of the extra two bits of A/D resolution.  This does not mean that no modification has been done, but rather that I have not (yet?) discovered it.

Nyquist Image response:

Any receiver has an image response - and an SDR is no exception. In this case, signals above the Nyquist frequency (half the sampling rate) will appear to "wrap around" and show up in the desired frequency range.  Because it is impractical to build a true "brick wall" low-pass filter, there are always compromises when designing such a filter, including:

  • Complexity:  How "fancy" should such a filter be in terms of component count?  More components can mean improved performance, but this implies a more difficult design, higher expense and more performance-related issues such as loss, ripple, sensitivity to source/load impedance, etc.
  • Trade-off of frequency coverage:  It can be difficult to weigh the pros and cons of a filter in terms of its cut-off frequency.  For example, setting the cut-off near the Nyquist frequency will improve performance at the high end of the available range, but at the risk of poorer image rejection.  Conversely, setting it much lower than Nyquist may sacrifice desired coverage.  A case in point would be that for the KiwiSDR, with a Nyquist frequency of about 33.33 MHz, coverage to 30 MHz (the "top" of HF) is desirable, so a bit of compromise is warranted in terms of absolute image rejection.
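The "wrap around" itself is just a folding of the input frequency about multiples of the sample rate.  A minimal sketch (sample rates of about 66.666 MHz for the KiwiSDR and 125 MHz for the RaspberrySDR, per the Nyquist frequencies discussed here):

```python
def alias_frequency(f_mhz, sample_rate_mhz):
    """Frequency at which a signal above Nyquist appears after 'wrapping
    around' into the first Nyquist zone (0 .. fs/2)."""
    f = f_mhz % sample_rate_mhz                    # fold into 0 .. fs
    return f if f <= sample_rate_mhz / 2 else sample_rate_mhz - f

# KiwiSDR (fs ~= 66.666 MHz): a 37 MHz generator lands near 29.67 MHz
print(round(alias_frequency(37, 66.666), 2))  # 29.67
# RaspberrySDR (fs = 125 MHz): a 64 MHz generator lands at 61 MHz
print(alias_frequency(64, 125))  # 61
```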

How bad/good is it?

The image response of both the KiwiSDR and RaspberrySDR were measured and determined to be as follows: 

The KiwiSDR:

Generator frequency (% of Nyquist)   Nyquist image frequency on RX   KiwiSDR Nyquist image attenuation
37 MHz  (111%)                       29.667 MHz                      10dB
42 MHz  (126%)                       24.66 MHz                       20dB
47 MHz  (141%)                       19.66 MHz                       30dB
52 MHz  (156%)                       14.66 MHz                       39dB
59 MHz  (177%)                       9.66 MHz                        47dB
62 MHz  (186%)                       4.66 MHz                        55dB
66 MHz  (198%)                       0.66 MHz                        60dB
70 MHz  (210%)                       -3.33 MHz                       65dB

 The RaspberrySDR:

Generator frequency (% of Nyquist)   Nyquist image frequency   RaspberrySDR Nyquist image attenuation
64 MHz  (102%)                       61 MHz                    9dB
75 MHz  (120%)                       50 MHz                    18dB
85 MHz  (136%)                       40 MHz                    27dB
95 MHz  (152%)                       30 MHz                    36dB
105 MHz  (168%)                      20 MHz                    46dB
115 MHz  (184%)                      10 MHz                    54dB
120 MHz  (192%)                      5 MHz                     58dB
124 MHz  (198%)                      1 MHz                     60dB
130 MHz  (208%)                      -5 MHz                    65dB

Doing a direct comparison between the two receivers one can see that, based on the percentage of the frequency of the unwanted signal with relation to the Nyquist frequency, the two receivers are pretty much identical in terms of image rejection, implying a very similar filter in each:  I suspect that the RaspberrySDR's Nyquist filter is pretty much that of the KiwiSDR, but with its frequency having been rescaled proportionally.

Because the Nyquist frequency of the RaspberrySDR is approximately twice that of the KiwiSDR, in terms of "dB per MHz" the attenuation of the RaspberrySDR's Nyquist filter will be noticeably worse.  For example, we know from the above information that for the U.S. FM broadcast band the attenuation of the KiwiSDR's Nyquist filter will be at least 65dB - but for the RaspberrySDR this attenuation will likely vary between about 30dB at the bottom end of the band (88 MHz) and 50dB at the top end (108 MHz), meaning that strong, local FM broadcast signals are likely to cause some interference to the RaspberrySDR in the 17-37 MHz range - implying that a simple blocking filter for this frequency range should have been built in.  The work-around for this problem - should it arise - is pretty simple:  Install an FM broadcast band "blocking" filter such as those sold for the RTL-SDRs.  An example of such a filter may be found HERE.

Update:  Schematics for the RaspberrySDR are now available - see below.

Effects of receiver noise floor with a strong, off-frequency signal:

Any receiver is affected by other strong signals within its front-end passband - and with direct-sampling SDRs such as these, any signal appearing at the antenna port can and will have an effect elsewhere within the receiver's passband - primarily due to nonlinearity of the A/D converter and, to a lesser extent, the phase noise of the various oscillators - real and virtual.

For this test, a very strong signal from a 10 MHz OCXO (that of an HP Z-3801 GPS receiver - likely a variant of an HP 10811) was used, as its output has respectably good phase noise performance.  Two tests were done - one at -15dBm and another at -25dBm - each time measuring the change in the noise floor at several frequencies distant from 10 MHz.

With the 10 MHz signal set to -25dBm, NO change was observed in the noise floor at the frequencies listed below on either receiver - but there was a bit of increase in the noise floor with the application of the -15dBm signal - the magnitude of the increase of the noise floor is indicated in square brackets [] in the chart below:

Noise floor frequency   KiwiSDR noise floor [degradation]   RaspberrySDR noise floor [degradation]
11 MHz                  -114dBm  [2dB]                      -116dBm  [0dB]
15 MHz                  -114dBm  [2dB]                      -114dBm  [2dB]
25 MHz                  -113dBm  [1dB]                      -112dBm  [2dB]

Assuming that the 10 MHz signal source is "clean", the above information shows that the two receivers behaved quite similarly.  It also shows that if there are two additional bits of A/D resolution available in the signal pipeline on the RaspberrySDR, their effect is not readily apparent in the measurements above.

All is not well:  A few glaring bugs!

There are several "features" that are readily apparent in this version of RaspberrySDR firmware (Version 1.402) that cause a few operational problems:

  • Inconsistent RF level calibration.  Occasionally, when powered up, the RF signal level calibration (S-meter, waterfall) will be way off, requiring a setting of about -30dBm to yield correct S-meter calibration at 10 MHz rather than -19 - which is within a few dB of the setting of the KiwiSDR:  Simply rebooting the KiwiSDR server will likely correct this.
  • No obvious improvement in dynamic range or sensitivity due to the "extra" 2 bits of A/D resolution.  As discussed, one would expect to see clear evidence of improved performance due to the additional two bits of A/D converter resolution, but either this is masked by the low-level noise of the input amplifier, problems in the processing of the A/D data itself, or issues related to handling of high signal levels (see the next topic, below).  What difference there may be appears to be at the top end of the signal range rather than at the bottom.
  • "Broken" S-meter at higher signal levels.  The S-meter seems to be incapable of reading properly above about -33dBm:  Signals higher than this will yield widely-varying numbers that have little to do with the actual signal level.
  • "Motorboating" on strong, narrowband signals.  It has been observed that at about the same time that the S-meter starts to malfunction (above about -33dBm) one will hear odd noises on a strong signal (unmodulated carrier received in AM mode using a 10 kHz bandwidth) indicative of a malfunctioning bit of code somewhere - likely related to the broken S-meter.  The nature (sound) of this effect appears to change depending on the applied signal level.  It is not (yet) known to what extent this issue has a "global" effect:  That is, does a single, strong signal cause this effect on other/all signals within the receiver's 0-62 MHz passband?
  • The "Firmware Update" function in the "Admin" screen doesn't work at all.  Make of that what you will.


An interesting notion - Direct reception of the 2 meter amateur band:

In theory it should be possible to modify the RaspberrySDR to directly receive the amateur 2 meter band - and any other signals from above 125 MHz to at least 174 MHz.  Because the sample rate of the receiver's A/D converter is 125 MHz, one can undersample the 144-148 MHz 2 meter band, which would appear in the range of 19-23 MHz.  Because this is just above the sampling frequency, the "direction" of the frequency conversion (e.g. increasing frequency at the antenna will show as increasing on the display) will be correct which means that a standard transverter offset (e.g. a local oscillator frequency of 125 MHz) could be used.
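The frequency mapping described above can be sketched in a few lines (a behavioral sketch only; the 125 MHz sample rate is from the text, everything else is arithmetic):

```python
def undersample(f_mhz, fs_mhz=125.0):
    """Apparent frequency of an undersampled signal, and whether the
    spectrum is inverted (it is in even-numbered Nyquist zones)."""
    f = f_mhz % fs_mhz
    inverted = f > fs_mhz / 2
    return (fs_mhz - f if inverted else f), inverted

# The 2 meter band, sampled at 125 MHz, appears at 19-23 MHz, not inverted -
# which is why a standard 125 MHz "transverter offset" works:
print(undersample(144.0))  # (19.0, False)
print(undersample(148.0))  # (23.0, False)
```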

To do this one would need to - at the very least - bypass the Nyquist low-pass filter.  With the noise floor likely to be much worse than the -158dBm/Hz seen at HF, significant low-noise amplification - probably something on the order of 25dB - AND strong band-pass filtering (to quash spurious responses) would be required.

Based on the specifications of the LTC2208, undersampling should work into the hundreds of MHz, possibly covering other VHF and UHF amateur bands - but the need for appropriate amplification and filtering applies!


The "elephant" in the room

It is immediately obvious - particularly from the board layout and screen shots - that the RaspberrySDR has heavily "borrowed" its design from the KiwiSDR - and this is, to a large extent, fair game since the KiwiSDR is a self-declared open-source hardware and software design.  Having said this, a few issues have been raised concerning the RaspberrySDR:

  • Is the RaspberrySDR being produced entirely in accordance with the KiwiSDR open-source license?  Likely not.  For example, elements of the KiwiSDR "branding" appear all over the RaspberrySDR - from the derivative (parody?) logo on the board (see Figure 2) to the name "KiwiSDR" being present within the web interface itself.  The former may be intentional - perhaps perceived as a slight - and the latter is rather hard to eliminate entirely, particularly if one wishes to maintain a branch of the code that echoes the continued development of the KiwiSDR - not to mention effort flowing in the other direction (e.g. improvements by others being incorporated into the KiwiSDR base).
  • Another board with similar capabilities (16 bit A/D, 125 MSPS) has appeared - apparently similar to this RaspberrySDR board - but it interfaces with the BeagleBone:  I have not used one or seen one in person, nor do I know anything about hardware/software support.
  • The RaspberrySDR source code itself seems to be somewhat obscured.  While there is a RaspberrySDR fork on Github (see note below), it is apparently not the same code as that made available - as a Raspberry Pi image only (as far as is known at the time of writing) - from a link provided by the online seller.  In other words, I have not been able to find any sort of equivalent of a Github repo for the RaspberrySDR - a fact not exactly in keeping with the spirit of "open source".
    • Github user "FlyDog" has produced the fork mentioned above and his repo may be found HERE (now found HERE).  As mentioned above, I haven't been able to get this code to work with this board.
    • UPDATE (20200927):  Github user "howard0su" has produced a fork (found HERE) more likely to be relevant to the RaspberrySDR hardware.  Based on posts to the "raspsdr" list, this appears to be a legitimate, open-source fork of the KiwiSDR code.  The owner of this fork has stated on that group that he is not involved with the production of the RaspberrySDR hardware.  As of the date of this update, I have not attempted to build from this source.
  •  Update:  Schematics and source for the RaspberrySDR are now available - see below.  (At the time this was originally written, the schematic did not appear to be available - again, not in the spirit of open source.)
  • The primary author of the KiwiSDR code announced recently on the KiwiSDR message board that certain parts of the KiwiSDR's code - presumably elements not previously released under an open-source license - would be available only as binary "blobs" in the future.  The intent of this action - as it seems to be interpreted by many readers (including me) - is, in addition to protecting certain elements, to increase the difficulty of replicating the software in the future - and some might argue that this goes against the spirit of "open source" embodied in the original Kickstarter definition of the KiwiSDR.  Whether this is true or not, the reader should not overlook the fact that the primary author has spent (and continues to spend) a lot of time, effort and money maintaining the KiwiSDR software, hardware manufacture and certain elements of infrastructure around the KiwiSDR (proxies, DDNS and TDOA, to name three) and there is an understandable desire to "encourage" involvement (including buying "official" KiwiSDR boards, for example) that would go toward maintaining this.  The presumed argument is that follow-on versions based on the open-source hardware and code - whether strictly adherent to the open-source licenses or not - are not compatible with his intent going forward.

Is it worth getting?

Is the RaspberrySDR a good deal?  It all depends on what you want to do with it.  At the moment, the ongoing support for it in terms of software development is a bit ambiguous as the "open source" nature of this fork seems to be a bit opaque, which is unfortunate.

For a general-purpose receiver that does not need the (useful!) facilities unique to the KiwiSDR network (proxy, TDOA, etc.) and for a receiver that includes the 6 meter band, this unit may fill a niche.

Again, the reader is cautioned that the "official" KiwiSDR brings to the amateur community several valuable features - including the TDOA - that require ongoing support which translates directly to people buying the "official" KiwiSDR boards, as I have clearly done.

Final comments:

  • For a general-purpose web-enabled remote receiver with decent performance, both the KiwiSDR and RaspberrySDR seem to be a good deal and the RaspberrySDR works reasonably well despite the bugs mentioned above.  The RaspberrySDR has the advantage that it also covers the 6 meter amateur band and has the potential of improved performance by virtue of its 16 bit (versus 14 bit) A/D converter.
  • The KiwiSDR kit with the Beaglebone Green and case - even though it costs more (approximately US$100 more than the RaspberrySDR) - has the distinct advantage of ongoing support along with the other infrastructure features mentioned above.  Like any open-source project, there will come the day when such support will cease and it will be up to others to try to build on what is in the repository at that time.
  • The current hardware of the KiwiSDR is starting to show its age for the reasons mentioned above and the existence of the RaspberrySDR shows that a relatively minor modification can potentially improve performance without a major rework of either hardware or software.
  • With the understanding that the time and resources of the primary author and frequent contributors to the KiwiSDR are limited in the ability to undertake such a change, I believe that it would be a mistake to overlook the potential (and "inspiration") of parallel work being done by others when it comes to keeping the KiwiSDR project up to date and relevant.

Addendum - 20201002 - RaspberrySDR schematics and sources are now available:

An email from another amateur radio operator informed me that the schematic diagrams of the RaspberrySDR - and a reference to a Github repo of the source - were posted on the "RaspberrySDR" group on "" - the link to that posting is HERE.  (Membership in that group may be required to see it.)

A brief analysis of the diagram has revealed several things:

  • GPS receiver:  The GPS receiver is identical - but that's not too surprising.  While the GPS receiver chip is not specified on the RaspberrySDR schematic, it has the same pin-out as that of the KiwiSDR, although a 66.666 MHz oscillator is shown rather than the 16.384 MHz oscillator on the KiwiSDR.
  • Front end filter:  As I'd surmised, the low-pass (Nyquist) filter is of identical topology to that of the KiwiSDR, with a note on the diagram stating "LPF change to 64M" - but the values shown are the same as those of the KiwiSDR.  Clearly, a change in the components was made, but the schematic was not updated.
  • As with the KiwiSDR, the RF amplifier is shown as being an LTC6401-20.  This device has a fixed gain of 20dB (voltage gain of 10) and has a differential output:  The data sheet depicts it being used to drive an LTC2208 - the same A/D converter as is used on the RaspberrySDR.
  • The A/D converter is shown as being an "LTC2208CUPPBF" - a 130 MHz, 16 bit A/D converter.  The diagram shows all 16 bits of the "A" bus being connected, as well as the "DITH" (used for enabling the internal dither generator), "MODE" (used to set the output data format), "PGA" (used to set the full-scale input of the A/D converter to either 1.5 or 2.25 volts) and "RAND" (used to randomize the output data to minimize possible noise contribution) pins being connected to the FPGA - and like the KiwiSDR, the "OFA" pin is also used to detect over/underrange of the A/D converter.
  • Maybe I missed it, but the circuit used to control the cooling fan does not appear on the schematic, nor did I find an obviously-named pin that might be used to control it:  Because the fan does not spin up unless the RaspberrySDR software service is running, it's clearly under software control - likely via a GPIO pin from the Raspberry Pi itself.


  • When time permits, I will probe about to determine the state of the DITH, MODE, PGA and RAND pins on the A/D converter.
  • Because the "PGA" pin may be controlled by the FPGA, it is possible that the A/D's input voltage range can be increased to 2.25 volts - a theoretical increase of about 3.5dB in signal input.
    • If this pin is set to the "low" state, this would - all things being equal - increase the "OV" (overload) threshold from the -14dBm of the KiwiSDR to about -10dBm - very close to the "-9dBm" that was observed in the test, above.
    • In other words, given the otherwise-identical circuitry, it is entirely possible that the increase in the "OV" threshold is entirely due to changing of the A/D converter's PGA setting.
    • A back-of-the-envelope calculation shows that assuming a 1dB loss in the low-pass filtering, a -14dBm signal - that required to cause an "OV" indicator on the KiwiSDR, amplified by 20dB would yield about 1.12 volts peak-to-peak - a value that correlates well with a presumed 1.25 volt A/D maximum input voltage.
    • Similarly, another calculation shows that - also assuming a 1 dB loss in the low-pass filtering - a -9 dBm signal - that required to cause an "OV" indication on the RaspberrySDR, amplified by 20dB would yield about 2.0 volts peak-to-peak - a value that also correlates pretty well if the "PGA" pin is set to configure the A/D converter for a 2.25 volt range.
    • Because 2 extra A/D bits (theoretically) correspond to about 12 dB more usable range, that would - after accounting for the roughly 3.5dB attributable to the PGA setting - suggest, in theory, about 8 dB more dynamic range for the RaspberrySDR over the KiwiSDR.  How well this hypothetical gain is distributed is certainly a topic for more detailed analysis.
  • Because it is (presumably) under software control, I would like to see the settings of the DITH and PGA pins of the A/D converter being made available to the user in the configuration screen.  Because the amplitude of the dither is only on the order of 0.5dB (according to the data sheet) it is unlikely that its effect would be seen when a real-world antenna - and its noise - is connected to the receiver:  Anyway, it seems likely that the noise floor of the input amplifier may be the limiting factor.
  • It's worth pointing out that, according to the data sheets, the SFDR (Spurious-Free Dynamic Range) and S/(N+D) (Signal to Noise+Distortion) specifications of the 14 bit LTC2248 in the KiwiSDR are typically specified as being 90dB and 74.2dB respectively at 30 MHz, while the same specs for the 16 bit LTC2208 (in the RaspberrySDR) - assuming a PGA setting of 2.25 volts - are 94dB and 77.5dB:  Not quite the "theoretical" 12 dB afforded by two extra bits!  (The SFDR of the LTC2208 actually goes up to 100dB when the PGA is set for 1.5 volts.)
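The back-of-the-envelope dBm-to-voltage conversions above can be checked with a few lines, assuming a sine wave into 50 ohms (the 1 dB filter loss and 20 dB amplifier gain are the same assumptions as in the text):

```python
import math

def dbm_to_vpp(dbm, r_ohms=50.0):
    """Peak-to-peak voltage of a sine wave of the given power into r_ohms."""
    watts = 10 ** (dbm / 10) / 1000          # dBm -> watts
    v_rms = math.sqrt(watts * r_ohms)
    return 2 * math.sqrt(2) * v_rms          # sine: Vpp = 2*sqrt(2)*Vrms

# KiwiSDR: -14 dBm at the antenna, -1 dB filter loss, +20 dB amplifier gain
print(round(dbm_to_vpp(-14 - 1 + 20), 2))  # 1.12
# RaspberrySDR: -9 dBm under the same assumptions
print(round(dbm_to_vpp(-9 - 1 + 20), 2))   # 2.0
```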


This page stolen from



Tuesday, July 7, 2020

An automatic transfer relay for UPS/Critical loads, for the ham shack, generator backup, and home

It is quite common to use a UPS (Uninterruptible Power Supply) to keep critical loads - typically computers or NAS (Network Attached Storage) devices - online when there is a power failure as even a brief power failure can be inconvenient.  Like any device, a UPS occasionally needs to be maintained - especially the occasional replacement of batteries - and doing so often necessitates that everything be shut down.

A simple transfer relay can make such work easier, allowing one to switch from the UPS to another load - typically unprotected mains, or even another UPS - without "dumping" the load or needing to shut down.

This type of device is also useful when one is using a generator to provide power:  Rather than dumping the load when refueling the generator, another generator could be connected to the "other" port, the load transferred to it, and the original generator be shut down and safely refueled - such as during amateur radio Field Day operations.
Figure 1:
Exterior view of the  "simple" transfer relay depicted in Figure 2, below.
The "Main" power source is shown as "A" on the diagram.
Click on the image for a larger version.

But first, a few weasel words:
  • The project(s) described below involve dangerous mains voltages which can be hazardous/fatal if handled improperly:  Please treat them with respect and caution.
  • Do NOT attempt a project like this unless you have the knowledge and experience to do so.
  • While this information is provided in good faith, please do your own research to make sure that it suited to your needs in terms of applicability and safety.
  • Do not presume that this circuit or its implementation is compliant with your local electrical codes/regulations - that is something that  you should do. 
  • There are no warranties expressed or implied regarding these designs:  It is up to YOU to determine the safety and suitability of the information below for your applications:  I cannot/will not take any responsibility for your actions or their results.
  • You have been warned!

The simplest transfer relay:

The simplest version of this is a DPDT relay, the relay's coil being powered from the primary power source - which we will call "A" - as depicted in the drawing below:

Figure 2:
The simplest version(s) of load transfer relays - the load transferred to "A" ("Main") upon its presence, switching to "B" (Aux) in its absence.
The version on the left uses a relay with a mains-voltage coil while that on the right uses a low-voltage transformer and relay coil - otherwise they are functionally identical.
Click on the image for a larger version.

How it works:

Operation is very simple:  When the primary power source "A" is energized, the relay will pull in, connecting the load to source "A".  Conversely, when power source "A" is lost, the relay will de-energize and the load will be transferred to the back-up power source, "B".  In every case that was tried, the relay armature moved fast enough to keep the load "happy" despite the very brief "blink" as the load was transferred from one source to another.

Two versions of this circuit are depicted:  The one on the left uses a relay with a mains-voltage coil while the one on the right uses a low-voltage coil - typically 24 VAC.  These circuits are functionally identical, but because low-voltage coil relays are common - as are 24 volt signal transformers - it may be easier to source the components for the latter.
Figure 3:
The interior of the "simple" transfer relay.  Tucked behind the outlet is the
DPDT relay with the 120 volt coil, the connections made to the relay
using female spade lugs.  The frame of a discarded light switch
is used as a mounting point for a standard "outlet + switch" cover plate
with neon panel lights being mounted in the slot for a light switch.
The entire unit is housed in a plastic dual-gang "old work" box.
Click on the image for a larger version.

The actual transfer takes only a few 10s of milliseconds:  I have not found a power supply that wasn't able to "ride through" such a brief outage but if a UPS is the load, it will probably see the transfer as a "bump" and briefly operate from battery.

Why a DPDT relay?

One may ask why use a DPDT (Double-Pole, Double-Throw)  relay if there is a common neutral:  Could you not simply switch the "hot" side from one voltage source to another?

The reasons for completely isolating the two sources with a double-pole relay are several:
  • This unit is typically constructed with two power cords - one for each power source.  While it is unlikely, it is possible that one or more outlets may be wired incorrectly, putting the "hot" side on the neutral prong.  Having a common "neutral" by skimping on the relay would connect a hot directly to a neutral or, worse, two "hot" sides of different phases together.
  • It may be that you are using different electrical circuits for the "A" and "B" power, in which case bonding the neutrals together may result in circulating currents - particularly if these circuits are fed from disparate locations (e.g. via a long cord).
    • For readers outside North America:  While typical outlets are 120 volts, almost every location with power has 240 volts available which is used to run larger appliances.  This is made available via a split phase arrangement from a center tap on the distribution transformer which yields 120 volts to the neutral.  It is because of this that different circuits will be on different phases meaning that the voltage between two "hot" terminals on outlets in different locations may be 240 (or possibly 208) volts.
  • There is no guarantee that a UPS will "play nice" if its neutral output is connected somewhere else.  In some UPSs or inverters the "neutral" side may not actually be near ground potential - as a neutral is supposed to be - so it's best to let it "do its thing."

How it might be used:

With such a device in place, one simply needs to make sure that source "B" is connected, and when source "A" - typically the UPS, but it could be a generator - is disconnected, everything will get switched over, allowing you to perform the needed maintenance.

UPS maintenance:

When used with a UPS, I have typically plugged "A" (Main) into the UPS and "B" (Aux) into a non-UPS outlet.  If you need to service the UPS, simply unplug "A" and the load will be transferred instantly to "B".  Having "B" as a non-UPS source is usually acceptable as it is unlikely that a power failure will occur while on that input - but if you choose not to take that risk, another UPS could be connected to the "B" port.

I have typically kept input "B" (Aux) plugged into non-protected (non-UPS) power as a failure of a UPS would not likely interrupt the power to the backed-up device(s) - but if you do this you must keep an eye on everything as unless it is monitored, the failure of a UPS may go unnoticed until there is a power failure! 

This same device has also been used in a remote site with two UPSs for redundancy, not to mention ease of maintenance.  One must, of course, weigh the risk of adding yet another device (another possible point of failure, perhaps?) if one does this.

Generator change-over:

During in-the-field events like Amateur Radio Field Day such a switch is handy when a generator is used.  It is generally not advisable to refuel a generator while it is running even though I have seen others do it.  If, while gear is running on a generator, it is necessary to refuel it - another generator can be connected to input "B" and once it is up to speed (and switched out of "Eco" mode if using an inverter generator) input "A" is un-plugged  for refueling, checking the oil, etc.

If you are of the "OCD" type, two generators can be used:  The generator on "A" would be running the gear most of the time, but if it drops out, a generator on "B" - which will have been under no load up to that point - will take over.

Disadvantages of this "simple" version of the transfer relay:

For typical applications, the above arrangement works pretty well - particularly if power outages and maintenance needs are pretty infrequent - and it works very well in the "generator scenario" where one might wish to seamlessly transfer loads from one generator to another.

It does have a major weak point in its design - and that's related to how the relay pulls in or releases.

For example, many UPSs or generators - especially the "inverter" types - do not turn "on" instantly, but may ramp up their output voltage comparatively slowly - yet by its nature the relay coil may pull in at a voltage well below nominal - say, 80 volts.  If the load is transferred at this lower voltage, it may momentarily cause the power source to buckle, dropping the load, making the relay chatter briefly or possibly simply causing the source to shut down owing to too-low battery voltage.  The typical work-around for this is to allow the "A" source to come up fully before plugging back into it - which is fine in many applications.

A "slow" pull-in on a relay can also be hard on its contacts - particularly with a "slow" rise in the voltage from power source "A" - as the contacts may not close quickly enough to prevent excessive wear.  In severe cases, this can even result in one or more of the contacts welding (sticking together), which is not at all a desirable condition.  For this reason it is a good idea to use a relay with a significantly higher current rating than the load you are planning to pull.

A slightly more complicated version:

What helps this situation is the addition of a short delay after power source "A" is applied, but before the load is transferred to it - and better yet, the load should be transferred only if the voltage is above a minimum value.  The circuit in the diagram below does this.

Figure 4:
This version of the transfer relay offers a short delay in transferring to load "A" as well as providing a low-voltage lock-out/detect.
The relay is a Dayton 5X847N - a 40 amp (resistive load) DPDT contactor with a 120 VAC coil.  This relay is likely overkill, but it should handle about anything one can throw at it - including capacitor-input power supplies, which tend to be very hard on relay contacts due to inrush current.
Click on the image for a larger version.
How it works:

This circuit is based on the venerable TL431 - a "programmable" (via resistors) Zener diode/voltage reference - U1 in the above diagram.  A sample of the mains voltage is input via T1 which, in this case, provides 9-12 volts AC; this is half-wave rectified by D1 and smoothed by capacitor C1.  LED D2 was included on the board mostly for testing and initial adjustment, but it also establishes an 8-12 milliamp static load to help discharge C1 when the mains voltage goes low - although the current consumption of the relay does this quite well.
Figure 5:
An exterior view of the version of the transfer relay depicted in Figure 4,
above.  The unit is mounted in a 6x6x4" electrical "J" box.
The 10 amp rating is a bit arbitrary and conservative considering that
the contactor itself is rated for 40 amps and the fact that capacitor-input
supplies are likely to be connected to it.
Click on the image for a larger version.
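As a sanity check on the transformer/rectifier arithmetic:  With a nominal "12 volt" winding, a lightly-loaded half-wave rectifier will charge C1 to roughly the AC peak minus one diode drop - which agrees nicely with the roughly 16 volts measured on C1 as described further below.  A quick sketch in Python (the 0.7 volt diode drop is an assumed typical value, not from the article):

```python
import math

def halfwave_dc_estimate(v_rms, v_diode=0.7):
    """Approximate no-load DC voltage on the filter capacitor after
    half-wave rectification: the AC peak minus one diode drop."""
    return v_rms * math.sqrt(2) - v_diode

# A nominal "12 volt" transformer winding, lightly loaded:
print(round(halfwave_dc_estimate(12.0), 1))  # about 16.3 volts
```

Under load the voltage will sag toward (and below) this figure, which is why the series-resistor calculation later on assumes about 15 volts rather than the peak.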

The DC voltage is divided down via R2 and R3 - with R3 adjustable to provide a variable threshold voltage to U1 - and further filtered by capacitor C2.  The combination of R2 and C2 causes the voltage at their junction to rise comparatively slowly, taking a couple of seconds to stabilize.
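This delay can be estimated with the usual RC charging equation.  The article does not give values for R2, R3 or C2, so the numbers below are purely illustrative - chosen only to land in the "couple of seconds" range:

```python
import math

# Hypothetical values - the article doesn't specify R2/R3/C2, so these
# are chosen only to illustrate the "couple of seconds" delay.
r_thevenin = 100e3   # effective source resistance at the R2/R3/C2 node, ohms
c2 = 22e-6           # farads
v_final = 3.5        # eventual settled voltage at the node, volts
v_threshold = 2.5    # TL431 reference voltage, volts

# Time for an RC charge from 0 V to reach the threshold:
t = -r_thevenin * c2 * math.log(1 - v_threshold / v_final)
print(round(t, 1))   # roughly 2.8 seconds
```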

When power is first applied, C2 is at zero volts and will take a couple of seconds to charge.  When the voltage at the wiper of R3 exceeds 2.5 volts, U1 will suddenly turn on (conduct), pulling the "low" side of the coil of relay RLY2 to ground and energizing it which, in turn, applies current to the coil of RLY1.  When this happens, the base of transistor Q1 is pulled toward ground via R6, turning it on; current then passes through R4 into the junction of R2 and R3, raising the voltage there slightly and providing some hysteresis.  For example, if R3 is adjusted so that RLY2 is activated at 105 volts, once activated the voltage threshold for U1 will effectively be lowered to about 90 volts.
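The hysteresis arithmetic from the example above (105 volt pull-in, roughly 90 volt drop-out) can be sketched as follows.  The 0.36 volt "boost" from Q1/R4 is an assumed figure chosen to match the example, not a value measured from the actual circuit:

```python
# Illustrative hysteresis arithmetic, using the pull-in/drop-out
# figures from the example (105 V and roughly 90 V).
v_ref = 2.5          # TL431 threshold at R3's wiper, volts
v_pull_in = 105.0    # mains voltage at which U1 first conducts

# Divider gain: wiper volts per mains volt at the moment of switching
k = v_ref / v_pull_in

# Suppose Q1/R4 lift the wiper by about 0.36 V once the relay is on;
# the mains voltage must then fall further before U1 turns back off.
v_boost = 0.36
v_drop_out = (v_ref - v_boost) / k
print(round(v_drop_out))   # about 90 volts
```

Increasing R4 reduces the injected current and thus the boost, narrowing the gap between pull-in and drop-out - which is the adjustment suggested later in the article.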

If power source "A" disappears abruptly, RLY1 will, of course, lose power to its coil and open immediately - and a similar thing will happen if the voltage drops below approximately 90 volts, at which point RLY2 will open, disconnecting power to RLY1.  At that point Q1 is turned off and at least 105 volts (as in our example) will again be required for RLY1 to be activated.  Diode D4 may be considered optional:  It more quickly discharges C2 in the event the power on "A" goes away and suddenly comes back, but it is unlikely that its presence will usefully speed the response.

As noted in the caption of Figure 4, the relay used is a Dayton 5X847N, which has a 120 volt coil and 40 amp (resistive load) self-wiping contacts.  While 40 amps may seem overkill for a device with an ostensible 10 amp rating as depicted in Figure 5, it is good to over-size the relay a bit, particularly since many loads these days (computer equipment, in particular) can have very high inrush currents due to their capacitor-input rectifiers, so a large relay is justified.

Note:  The 5X848 is the same device but with a 240 volt AC coil, while the 5X846 has a 24 volt AC coil:  All three of these devices are suitable for both 50 and 60 Hz operation.

Circuit comments:

Figure 6:
Inside the transfer relay unit.  The large, open-frame DPDT relay is in the
foreground while the 12 volt AC transformer is tucked behind it.  Mounted
to the wall behind it (upper-left in the box) is the piece of prototype
board with the smaller relay and delay/voltage sense circuitry.
Click on the image for a larger version.
U1, the TL431, is rated to sink up to 100 milliamps, so it's a good idea to select a relay whose coil draws comfortably less than that - say, 60 milliamps or so.  Because the contacts of RLY2 are simply switching power to the main relay (RLY1), RLY2 need only be a light-duty relay.

When I built this circuit I used a 5 amp relay with a 9 volt coil because I had a bunch of them in my junk box.  Checking one out, I found the coil resistance to be 150 ohms, meaning that at its rated voltage it would draw 60 milliamps.  The voltage across C1 when RLY1 was not active measured about 16 volts, so it was presumed that under the relay's load this would drop by a volt or two - meaning that a series resistor dropping 6 volts (the difference between the 15 volt supply and the 9 volt coil voltage) at 60 milliamps should be used - and Ohm's law tells us that a 100 ohm, 0.5-1 watt resistor would do the job.
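The Ohm's law arithmetic above, spelled out (all figures are from the measurements just described):

```python
v_supply = 15.0        # volts across C1 under load (measured ~16 V, sagging a bit)
v_coil = 9.0           # relay coil's rated voltage
i_coil = 9.0 / 150.0   # coil current at rated voltage: 60 mA

# Series resistor to drop the excess voltage at the coil current,
# and the power it must dissipate:
r_series = (v_supply - v_coil) / i_coil
p_series = i_coil ** 2 * r_series

print(round(r_series))      # 100 ohms
print(round(p_series, 2))   # 0.36 watts - a 0.5-1 watt part has margin
```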


A variable AC supply (e.g. a "Variac") is essential for proper adjustment.  To start, the wiper of R3 is adjusted all the way to the "ground" end and the applied AC voltage is set to 105 volts - a nice, minimum value for U.S. power mains.  Then R3 is adjusted, bringing the voltage on its wiper upwards until RLY2 and RLY1 just close.  At this point one can lower the input voltage to 80-90 volts and, after capacitor C2 discharges, the relays will again open; one can then slowly bring the voltage back up and verify the pull-in voltage.

Figure 7:
The back side of the front panel of the J box:  A large, square hole was cut
in the front and a plastic dual-gang "old work" box with its back
cut away was used to facilitate mounting of the two outlets to the front panel.
Adhesive was used around the periphery to prevent the box from sliding
around on the front panel.
Click on the image for a larger version.
If less hysteresis is desired, the value of R4 can be increased to, say, 22k.  Note that despite the operation of Q1, some of the hysteresis is cancelled out by the voltage across C1 sagging under the load of RLY1's coil current when the circuit is triggered, so a bit of hysteresis is absolutely necessary - or else the relays will chatter!


As can be seen in figures 5 and 6, a 6x6x4 inch gray plastic electrical "J" box was used to house the entire unit - a common item found in U.S. home improvement stores.  A pair of "duplex" outlets were mounted in the front cover by cutting a large square hole in it and using a modified "old work" box with its back removed, giving a proper means of mounting the outlets.

A pair of front-panel neon indicators show the current state:  The "B" indicator simply shows the presence of mains voltage on that input, while the "A" indicator is wired across the relay's mains-voltage coil and thus also reflects the delay in the relay's closure.

The circuitry with the TL431 and RLY2 is constructed on a small piece of prototype board, mounted to the side of the box using stand-offs.  The 9-12 volt AC transformer - the smallest that I could find in my junk box (it's probably rated for 200 milliamps) - is also bolted to the side of the box.  Liberal use of "zip" ties tames the internal wiring, with special care taken to keep any wire from touching the armature of the relay itself, lest it interfere with its operation!

Final comments:

Both versions work well, and the "simple" version depicted in figures 1 and 2 is suitable for most applications.  For more demanding applications - particularly those where a transfer may occur frequently and/or the mains voltage may rise "slowly" - the more complicated version is recommended.

Again, if you choose to construct any of these devices, please take care in doing so, being aware of the hazards of mains voltages.  As mentioned in the "Weasel Words" section, please make sure that this sort of device is appropriate to your situation.

This page stolen from