Tuesday, September 15, 2020

Comparing the "KiwiSDR" and the "RaspberrySDR" software-defined receivers

Any reader who has perused these blog pages will be aware that I have been using the KiwiSDR for some time now (I personally own four of them and I manage two more!) and have been happy with their performance, finding various ways to maximize their usefulness.  I was intrigued when a "similar" device appeared that might prove to be useful - the "RaspberrySDR".

Figure 1:
The exterior of the RaspberrySDR.
The case is well-built and compact, housing both the
SDR board and a Raspberry Pi 3+.
Click on the image for a larger version.


For those not familiar with the KiwiSDR, it is a Linux-based, stand-alone software-defined radio capable of receiving from (nearly) DC to at least 30 MHz using a variety of modes (SSB, AM, FM, Synchronous AM) and has several "extensions" that allow reception of several digital modes - including CW, RTTY, and WSPR - as well as provide a means of viewing FAX transmissions and SSTV.  It also includes a provision for TDOA (Time Difference of Arrival) determination of transmitter location in conjunction with other similarly-equipped receivers.

This receiver does not have a front panel, but rather it is entirely used via a web interface.  What this means is that it may be used remotely, by several people, simultaneously - each person getting their own, virtual receiver that they may independently tune.

Originally introduced as a Kickstarter project around 2016, the hardware has been augmented with continually-improved open-source software, with the lion's share of the work having been done by John Seamons.  Using a 14 bit A/D converter clocked at about 66.66 MHz, an FPGA (Field Programmable Gate Array) and a GPS front-end chip, most of the number-crunching is done before the data is handed off to a single-board computer - originally the BeagleBone Green, but now also the BeagleBone AI (BBAI):  Both the KiwiSDR receiver board and the BeagleBone Green (BBG) have been sourced by Seeed Studios.

For a variety of reasons, the supply of KiwiSDRs has been a bit fickle - both due to the limited capacity of Seeed in response to demand for these devices and issues which have impacted the supply of some critical parts.  Another possible issue may be a bit of "fatigue" on the part of some of the key people behind the KiwiSDR:  Careful readers of blog entries from several years ago can see that a similar thing happened to me on a project on which I had previously worked - coincidentally, also an open-source SDR-based device.

Figure 2:
Inside the case showing the "RaspberrySDR" board.  The physical layout is quite similar to that of the KiwiSDR.  The small heat sink is affixed atop the LTC2208 A/D converter.  The fan itself is controlled by a simple transistor circuit on the acquisition board.  Unlike the KiwiSDR, only one row of headers is used to connect to the host computer and power is supplied via the host.  In noting the logo on the board, I can't help but wonder if its intent was that of parody, along the lines of the fair use doctrine.
Click on the image for a larger version.


The present day:

Because the KiwiSDR is based on open-source design of its hardware and software - ostensibly to encourage participation in enhancement of all aspects of its design - one may freely copy it within the constraints of the open-source license.  It is not surprising, then, that several derivative versions have recently appeared on the scene, more or less following the "open source" philosophy - a topic that will be discussed later.

Using the base code found on GitHub and the openly-published schematics as a starting point, the "RaspberrySDR" has appeared - using, as you may have surmised, the Raspberry Pi - specifically the Raspberry Pi 3+.  This single-board computer is of similar size to the BeagleBone Green and has roughly similar - albeit somewhat greater - capabilities, and it is certainly better-known than the other fruit.  It was a natural choice, as the hardware interface between it and the receiver board is fairly trivial to adapt with a simple "conversion" board that adapts to the Pi's interface.

Also to be expected, a revised board specifically designed to interface with the Raspberry Pi has appeared from Chinese sellers, packaged with a Raspberry Pi in a small aluminum enclosure (with an external fan) for approximately $100 less than the KiwiSDR+enclosure combination.  This revised version uses a higher-speed (125 MHz versus 66.66 MHz) and higher-resolution (16 bits versus 14 bits) A/D converter, extending its receive range to a bit over 60 MHz - including the 6 meter amateur band - with the potential for improved receive performance in terms of dynamic range and distortion.


Having gotten my hands on one of these "RaspberrySDRs" - and already having available some KiwiSDRs for testing, I decided to put them side-by-side to compare the differences - specifically, to measure:

  • Apparent noise floor and sensitivity
  • Appearance of spurious signals
  • Large signal handling capability
  • Image response (Nyquist filtering)


Noise floor comparisons of the KiwiSDR using the BBAI and the RaspberrySDR using the Raspberry Pi3+:

The KiwiSDR using the BeagleBone AI:

Figure 3 shows the noise floor of a KiwiSDR using the Beaglebone AI with no connected antenna.  As is typical of this device, there is a slight increase in the noise floor starting around 18 MHz:  The reason for this is unknown, but it is surmised that this is intentional - an artificial boost in "software" (likely in the FPGA) that is used to compensate somewhat for the Sin(x)/x roll-off intrinsic to any analog-to-digital sampling scheme as one approaches the Nyquist limit - which, given the 66.66 MHz sampling rate of the KiwiSDR, would be 33.333 MHz.

This assumption would appear to be supported by the fact that as one approaches the Nyquist frequency, the S-meter reading does not drop as one might expect, but remains fairly constant and appears to be close to +/- 1 dB from below 1 MHz to 30 MHz as shown in the data below.

Figure 3:
The noise floor of the KiwiSDR running on a BeagleBone AI (BBAI).
Note the slight rise around 18 MHz - the possible result of the data being "cooked" in the pipeline to offset Sin(x)/x losses near the Nyquist limit.
Click on the image for a larger version.
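The magnitude of this Sin(x)/x roll-off is easy to estimate; a quick sketch in Python, assuming the full-aperture (zero-order-hold) sampling model and the KiwiSDR's 66.666 MHz clock:

```python
import math

def sinc_rolloff_db(f_hz, fs_hz):
    """Sin(x)/x amplitude loss (dB) at f_hz for a sampler clocked at fs_hz
    (full-aperture / zero-order-hold model)."""
    x = f_hz / fs_hz
    if x == 0:
        return 0.0
    return 20 * math.log10(math.sin(math.pi * x) / (math.pi * x))

fs = 66.666e6  # KiwiSDR A/D clock
for f_mhz in (1, 15, 30):
    print(f"{f_mhz:>2} MHz: {sinc_rolloff_db(f_mhz * 1e6, fs):6.2f} dB")
```

At 30 MHz this works out to roughly -3.1 dB - consistent with the ~3 dB rise seen in the noise floor if the pipeline applies a compensating boost.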

In numerical form, the measured noise floor of the KiwiSDR/BBAI combination using a 10 kHz AM bandwidth is:

  • -117dBm @ 1 MHz
  • -117dBm @ 5 MHz
  • -116dBm @ 10 MHz
  • -116dBm @ 15 MHz
  • -114dBm @ 25 MHz
  • -115dBm @ 30 MHz (29.9 MHz)
A broad 3dB peak is indicated in the noise floor:  We will attempt to determine if this peak is "real" in our later analysis.

The RaspberrySDR using the Raspberry Pi3+:

The RaspberrySDR has a 125 MHz sampling clock and a 16 bit A/D converter so the landscape looks a bit different as it can (theoretically) receive to 62.5 MHz, but is limited to 62.0 MHz in firmware.  Comparing the 0-30 MHz noise floor we can see some interesting differences in Figure 4:

Figure 4:
Noise floor of the RaspberrySDR running on a Raspberry Pi3+.  There is a similar rise with increasing frequency - although it looks a bit different.
Click on the image for a larger version.

We can see a similar rise in the noise floor, but we get the impression that limiting our range to just 30 MHz hides its nature, so Figure 5 shows the noise floor over the full frequency range of 0-62 MHz:

Figure 5:
Noise floor of the same receiver as depicted in Figure 4, but showing the full 0-62 MHz frequency range.  Very evident is a rise in the noise floor centered at approximately 36 MHz.  This spectrum is unchanged if the SPI frequency is changed from the default 48 to 24 MHz.
Click on the image for a larger version.

In comparison, the noise floor of the RaspberrySDR+Raspberry Pi3+ as measured using AM with a 10 kHz bandwidth is as follows:

  • -118dBm @ 1 MHz
  • -118dBm @ 5 MHz
  • -118dBm @ 10 MHz
  • -116dBm @ 15 MHz
  • -115dBm @ 25 MHz
  • -112dBm @ 30 MHz
  • -113dBm @ 40 MHz
  • -116dBm @ 50 MHz
  • -116dBm @ 60 MHz

The magnitude of the broad peak is more significant than on the KiwiSDR - being on the order of 6 dB rather than 3 dB.  Because the RaspberrySDR is based on the (open source) KiwiSDR, it is expected that the RF processing will be similar - and this would appear to be borne out by the presence of the rise in the noise floor, centered around approximately 36 MHz.  It would seem likely that the adaptation to the higher sample rate hardware is not fully realized, as the pre-emphasis in firmware - if it exists - is not properly implemented, as evidenced by the numbers.

Comparison of S-meter calibrations:

To provide an additional data point in our measurement, we'll check the S-meter calibration.  For our purposes we will use a frequency of 10 MHz and a level of -50dBm as our reference, as that appears to be below the apparent amplitude peaking of either type of receiver.  Using a known-consistent signal source, the results are as follows:


Frequency   KiwiSDR with BeagleBone AI   RaspberrySDR with Raspberry Pi3+
 1 MHz      -51dBm                       -50dBm
 5 MHz      -51dBm                       -50dBm
10 MHz      -50dBm                       -50dBm
15 MHz      -49dBm                       -49dBm
20 MHz      -48dBm                       -48dBm
25 MHz      -48dBm                       -46dBm
30 MHz      -50dBm                       -45dBm
50 MHz      ---                          -50dBm
60 MHz      ---                          -57dBm


The effects of what appears to be pre-emphasis can clearly be seen:  The effect of the Sin(x)/x roll-off of the A/D converter seems to have been more-or-less compensated on the KiwiSDR, but the attempt to do this seems to be misapplied on the RaspberrySDR.  Based on the apparent noise floor and the absolute response to the -50dBm signals, we can make an estimate of the absolute sensitivity of the KiwiSDR and RaspberrySDR on the various bands with simple math.

Frequency   KiwiSDR noise floor (dBm/Hz)   RaspberrySDR noise floor (dBm/Hz)
 1 MHz      -158dBm                        -158dBm
 5 MHz      -158dBm                        -158dBm
10 MHz      -156dBm                        -158dBm
15 MHz      -157dBm                        -157dBm
25 MHz      -156dBm                        -159dBm
30 MHz      -155dBm                        -157dBm
50 MHz      ---                            -156dBm
60 MHz      ---                            -111dBm

The chart above compares the apparent noise floor of both receivers, compensating for the 10 kHz AM detection bandwidth (40dB) and the measured offset of the S-meter at each frequency.
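The normalization itself is simple arithmetic - subtract 10·log10 of the detection bandwidth (40dB for 10 kHz) and apply the measured S-meter offset.  A minimal sketch; the sign convention of the offset correction here is my assumption:

```python
import math

def noise_floor_dbm_per_hz(measured_dbm, bw_hz, smeter_error_db=0.0):
    """Normalize a noise reading taken in bw_hz to a 1 Hz bandwidth.
    smeter_error_db is the S-meter reading error (reading minus true level)."""
    return measured_dbm - 10 * math.log10(bw_hz) - smeter_error_db

# KiwiSDR at 1 MHz: -117 dBm measured in a 10 kHz AM bandwidth
print(noise_floor_dbm_per_hz(-117, 10e3))  # -157.0 before any S-meter correction
```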

It is worth noting that above about 15 MHz, neither receiver has sufficient sensitivity to detect the expected ITU noise floor given a unity gain antenna:  Approximately 10dB of gain is required at 30 MHz to "hear" the noise floor in that case and even more gain would be appropriate at 6 meters.

This very topic has been discussed at this blog in the past - see the blog post "Limited Attenuation High-Pass filter" - LINK and its related articles for a discussion.


Overload signal level comparison:

Another test was done - the determination of the RF level at which the "OVL" message on the S-meter would show, indicating overload of the A/D converter.  This test was done for both units at 10 MHz - the same frequency for which the S-meter was calibrated - and in the "Admin" tab the unit was configured so that just one "OV" occurrence per 64k cycles would be detected.

KiwiSDR OV indication:

The KiwiSDR's "OV" indication just started to indicate at -14dBm.

RaspberrySDR OV indication:

The RaspberrySDR's "OV" indication just starts to indicate at -9dBm.

The apparent difference between these is 5dB.

"Wait - shouldn't there be another 12dB of dynamic range with two more bits of A/D resolution?"

In theory, two additional bits of A/D conversion should yield an additional 12 dB of dynamic range - but this is not readily apparent in the numbers given above (at 10 MHz, 142dB between the noise floor and the "OV" indication for the KiwiSDR and 149dB for the RaspberrySDR) - so what's the deal?
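That 12 dB figure comes from the textbook quantization-noise limit of an ideal converter, SNR = 6.02·N + 1.76 dB - a quick check:

```python
def ideal_adc_snr_db(bits):
    """Ideal full-scale SNR of an N-bit A/D converter, quantization noise only."""
    return 6.02 * bits + 1.76

print(ideal_adc_snr_db(14))                         # ~86 dB (KiwiSDR)
print(ideal_adc_snr_db(16))                         # ~98 dB (RaspberrySDR)
print(ideal_adc_snr_db(16) - ideal_adc_snr_db(14))  # ~12 dB difference
```

The measured 140+ dB spans are larger than these ideal figures because the narrow detection bandwidth adds processing gain - but that gain applies to both receivers alike, so the ideal 12 dB difference between converters should still show up, which is why its absence is notable.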

First off, all things being equal (e.g. the same reference voltage for the A/D converter) one would expect the additional range to occur at the bottom of the signal range rather than the top, but this difference can be a matter of scaling via careful adjustment of the amount of amplification preceding the A/D converter and how the code is written.

Ideally, one would carefully balance the signal path so that the intrinsic noise of the amplification preceding the A/D converter would be comparable to the signal level required to "tickle" the LSB (Least Significant Bit) with no signal applied:  A higher level than this risks "wasting" dynamic range with internal noise.  Judging by the "even-ness" of the noise across the spectrum, I suspect that the output of the input amplifier is enough to light up one or two LSBs of the A/D converter.

It's possible that the "extra" 5 dB at the high end of the signal range is real and that the signal dynamics have been juggled a bit, with some of the extra 2 bits' worth of range being present at the bottom end, but this would be difficult to divine without more thorough testing.


Without the availability of a schematic diagram of the front end of the receiver there are several unknowns:

  • The LTC2208 has a pin that indicates an overload condition.  It is presumed that, in spite of other issues with the firmware (see below), the "OV" indicator is working properly.
  • The LTC2208 A/D converter has a low-level dither generator built into it:  It is unknown if this feature is active.
  • Shorting the RF signal path to eliminate the contribution of the pre-converter amplification would be instructional to ascertain noise contribution from that device.
  • Probing the input of the A/D converter at/near overload to divine the actual range of the receiver itself to determine if the full dynamic range of the 16 bits is properly utilized would be revealing.
  • The LTC2208 has a programmable gain amplifier and full-scale input voltage may be selected as being either 1.5 or 2.25 volts:  The hardware configuration is unknown.
  • At the time of writing this, I have not found in the code any modification of the FPGA image that takes advantage of the extra two bits of A/D resolution.  This does not mean that no modification has been done, but rather that I have not (yet?) discovered it.

Nyquist Image response:

Any receiver has an image response - and an SDR is no exception. In this case, signals above the Nyquist frequency (half the sampling rate) will appear to "wrap around" and show up in the desired frequency range.  Because it is impractical to build a "brick wall" low-pass filter, there are always compromises when designing such a filter, including:

  • Complexity:  How "fancy" should such a filter be in terms of component count?  More components can mean improved performance, but this implies a more difficult design, higher expense and more performance-related issues such as loss, ripple, sensitivity to source/load impedance, etc.
  • Trade-off of frequency coverage:  It can be difficult to weigh the pros and cons of a filter in terms of its cut-off frequency.  For example, setting the cut-off near the Nyquist frequency will improve performance at the high end of the available range, but at the risk of poorer image rejection.  Conversely, setting it lower may sacrifice desired coverage.  A case in point would be that for the KiwiSDR, with a Nyquist frequency of about 33.33 MHz, coverage to 30 MHz (the "top" of HF) is desirable, so a bit of compromise is warranted in terms of absolute image rejection.
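The "wrap around" itself is easy to predict: the input frequency is reduced modulo the sampling rate, and the upper half of the result is reflected back below Nyquist.  A sketch, assuming an ideal sampler:

```python
def alias_frequency(f_in_hz, fs_hz):
    """Apparent frequency of a signal above Nyquist after sampling at fs_hz."""
    f = f_in_hz % fs_hz        # fold into one sampling interval
    return min(f, fs_hz - f)   # reflect the upper half back below Nyquist

fs = 66.666e6  # KiwiSDR A/D clock
print(alias_frequency(37e6, fs) / 1e6)  # a 37 MHz signal appears near 29.67 MHz
```

The measured image frequencies in the tables below follow this arithmetic.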

How bad/good is it?

The image response of both the KiwiSDR and RaspberrySDR were measured and determined to be as follows: 

The KiwiSDR:

Generator frequency (% of Nyquist)   Nyquist image frequency on RX   KiwiSDR Nyquist image attenuation
37 MHz  (111%)                       29.667 MHz                      10dB
42 MHz  (126%)                       24.66 MHz                       20dB
47 MHz  (141%)                       19.66 MHz                       30dB
52 MHz  (156%)                       14.66 MHz                       39dB
59 MHz  (177%)                       9.66 MHz                        47dB
62 MHz  (186%)                       4.66 MHz                        55dB
66 MHz  (198%)                       0.66 MHz                        60dB
70 MHz  (210%)                       -3.33 MHz                       65dB

 The RaspberrySDR:

Generator frequency (% of Nyquist)   Nyquist image frequency   RaspberrySDR Nyquist image attenuation
 64 MHz  (102%)                      61 MHz                    9dB
 75 MHz  (120%)                      50 MHz                    18dB
 85 MHz  (136%)                      40 MHz                    27dB
 95 MHz  (152%)                      30 MHz                    36dB
105 MHz  (168%)                      20 MHz                    46dB
115 MHz  (184%)                      10 MHz                    54dB
120 MHz  (192%)                      5 MHz                     58dB
124 MHz  (198%)                      1 MHz                     60dB
130 MHz  (208%)                      -5 MHz                    65dB

Doing a direct comparison between the two receivers one can see that, based on the percentage of the frequency of the unwanted signal with relation to the Nyquist frequency, the two receivers are pretty much identical in terms of image rejection, implying a very similar filter in each:  I suspect that the RaspberrySDR's Nyquist filter is pretty much that of the KiwiSDR, but with its frequency having been rescaled proportionally.

Because the Nyquist frequency of the RaspberrySDR is approximately twice that of the KiwiSDR, in terms of "dB per MHz" the attenuation of the RaspberrySDR's Nyquist filter will be noticeably worse.  For example, we know from the above information that for the U.S. FM broadcast band the attenuation of the KiwiSDR's Nyquist filter will be at least 65dB - but for the RaspberrySDR this attenuation will likely vary between about 30dB at the bottom end of the band (88 MHz) and 50dB at the top end (108 MHz), meaning that it is likely that strong, local FM broadcast signals will cause some interference to the RaspberrySDR in the 17-37 MHz range - implying that a simple blocking filter should, perhaps, have been built in.  The work-around for this problem - should it arise - is pretty simple:  Install an FM broadcast band "blocking" filter such as those sold for the RTL-SDRs:  An example of such a filter may be found HERE.
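Where the FM band lands follows directly from folding it about the RaspberrySDR's 125 MHz sampling rate - signals in the second Nyquist zone (fs/2 to fs) appear at fs minus the input frequency.  A quick sketch:

```python
def fm_image_mhz(f_mhz, fs_mhz=125.0):
    """Where a second-Nyquist-zone signal (fs/2..fs) lands after sampling at fs."""
    return fs_mhz - f_mhz

for f in (88, 98, 108):
    print(f"{f} MHz FM broadcast aliases to {fm_image_mhz(f):.0f} MHz")
```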

Effects of receiver noise floor with a strong, off-frequency signal:

Any receiver is affected by other strong signals within its front-end passband - and with direct-sampling SDRs such as these, any signal appearing at the antenna port can and will have an effect elsewhere within the receiver's passband - primarily due to nonlinearity of the A/D converter and, to a lesser extent, the phase noise of the various oscillators - real and virtual.

For this test, a very strong signal from a 10 MHz OCXO (that of an HP Z-3801 GPS receiver - likely a variant of an HP 10811) was used as its output has respectably good phase noise performance.  Two tests were done - at -15dBm and another at -25dBm - each time measuring the change in the noise floor at different frequencies distant from 10 MHz.

With the 10 MHz signal set to -25dBm, NO change was observed in the noise floor at the frequencies listed below on either receiver - but there was a bit of increase in the noise floor with the application of the -15dBm signal - the magnitude of the increase of the noise floor is indicated in square brackets [] in the chart below:

Noise floor frequency   KiwiSDR noise floor [degradation]   RaspberrySDR noise floor [degradation]
11 MHz                  -114dBm  [2dB]                      -116dBm  [0dB]
15 MHz                  -114dBm  [2dB]                      -114dBm  [2dB]
25 MHz                  -113dBm  [1dB]                      -112dBm  [2dB]

Assuming that the 10 MHz signal source is "clean", the above information shows that the two receivers behaved quite similarly.  It also shows that if there are two additional bits of A/D resolution available in the signal pipeline on the RaspberrySDR, their effect is not readily apparent in the measurements above.

All is not well:  A few glaring bugs!

There are several "features" that are readily apparent in this version of RaspberrySDR firmware (Version 1.402) that cause a few operational problems:

  • Inconsistent RF level calibration.  Occasionally, when powered up, the RF signal level calibration (S-meter, waterfall) will be way off, requiring a setting of about -30dBm to yield correct S-meter calibration at 10 MHz rather than -19dBm - which is within a few dB of the setting on the KiwiSDR:  Simply rebooting the SDR server will likely correct this.
  • No obvious improvement in dynamic range or sensitivity due to the "extra" 2 bits of A/D resolution.  As discussed, one would expect to see clear evidence of improved performance due to the additional two bits of A/D converter resolution, but either this is masked by the low-level noise of the input amplifier, problems in the processing of the A/D data itself, or issues related to handling of high signal levels (see the next topic, below).  What difference there may be appears to be at the top end of the signal range rather than at the bottom.
  • "Broken" S-meter at higher signal levels.  The S-meter seems to be incapable of reading properly above about -33dBm:  Signals higher than this will yield widely-varying numbers that have little to do with the actual signal level.
  • "Motorboating" on strong, narrowband signals.  It has been observed that at about the same time that the S-meter starts to malfunction (above about -33dBm) one will hear odd noises on a strong signal (unmodulated carrier received in AM mode using a 10 kHz bandwidth) indicative of a malfunctioning bit of code somewhere - likely related to the broken S-meter.  The nature (sound) of this effect appears to change depending on the applied signal level.  It is not (yet) known to what extent this issue has a "global" effect:  That is, does a single, strong signal cause this effect on other/all signals within the receiver's 0-62 MHz passband?
  • The "Firmware Update" function in the "Admin" screen doesn't work at all.  Make of that what you will.


An interesting notion - Direct reception of the 2 meter amateur band:

In theory it should be possible to modify the RaspberrySDR to directly receive the amateur 2 meter band.  Because the sample rate of the receiver's A/D converter is 125 MHz, one can undersample the 144-148 MHz 2 meter band which would appear in the range of 19-23 MHz.  Because this is just above the sampling frequency, the "direction" of the frequency conversion (e.g. increasing frequency at the antenna will show as increasing on the display) will be correct which means that a standard transverter offset (e.g. a local oscillator frequency of 125 MHz) could be used.
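The frequency mapping and the "direction" of the conversion can be checked by finding the Nyquist zone in which the band falls - odd zones preserve spectral orientation while even zones invert it.  A sketch under that standard undersampling model:

```python
def nyquist_zone(f_hz, fs_hz):
    """1-based Nyquist zone: zone 1 is 0..fs/2, zone 2 is fs/2..fs, and so on."""
    return int(f_hz // (fs_hz / 2)) + 1

fs = 125e6                        # RaspberrySDR A/D clock
f = 146e6                         # middle of the 2 meter band
zone = nyquist_zone(f, fs)        # zone 3 (125..187.5 MHz)
inverted = (zone % 2 == 0)        # even zones are spectrally inverted
print(zone, inverted, (f - fs) / 1e6)  # zone 3, not inverted, lands at 21 MHz
```

Since 144-148 MHz sits entirely in an odd zone, the spectrum is not inverted and a simple 125 MHz transverter-style offset applies, just as described above.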

To do this one would need to - at the very least - bypass the Nyquist low-pass filter.  With the noise floor likely to be much worse than the -158dBm/Hz seen at HF, significant low-noise amplification AND strong band-pass filtering (to quash spurious responses) would be required - probably something on the order of 25dB of gain.

Based on the specifications of the LTC2208, undersampling should work into the hundreds of MHz, possibly covering other amateur bands - but the need for appropriate amplification and filtering applies!


The "elephant" in the room

It is immediately obvious - particularly from the board layout and screen shots - that the RaspberrySDR has heavily "borrowed" its design from the KiwiSDR and this is, to a large extent, entirely fair game since the KiwiSDR is a self-declared open-source hardware and software design.  Having said this, a few issues have been raised concerning the RaspberrySDR:

  • Is the RaspberrySDR being produced entirely in accordance with the KiwiSDR Open Source license?  Likely not.  For example, elements of the KiwiSDR "branding" appear all over the RaspberrySDR - from the derivative (parody?) logo on the board (see Figure 2) to the name "KiwiSDR" being present within the web interface itself.  The former may be intentional - perhaps perceived as a slight - and the latter is rather hard to eliminate entirely - particularly if one wishes to maintain a branch of the code that echoes the continued development of the KiwiSDR - not to mention effort flowing in the other direction (e.g. improvements by others being incorporated into the KiwiSDR base).
  • Another board with similar capabilities (16 bit A/D, 125 MSPS) has appeared - apparently similar to this RaspberrySDR board - but it interfaces with the BeagleBone:  I have not used one or seen one in person, nor do I know anything about hardware/software support.
  • The RaspberrySDR source code itself seems to be somewhat obscured.  While there is a RaspberrySDR fork on Github (see note below), it is apparently not the same code as that made available - as a Raspberry Pi image only (as far as is known at the time of writing) - from a link provided by the online seller.  In other words, I have not been able to find any sort of equivalent of a Github repo for the RaspberrySDR - a fact not exactly in keeping with the spirit of "open source".
    • Github user "FlyDog" has produced the fork mentioned above and his repo may be found HERE.  As mentioned above, I haven't been able to get this code to work with this board.
  • The schematic of the RaspberrySDR does not seem to be available at the time of this writing - again, not in the spirit of open source.
  • The primary author of the KiwiSDR code announced recently on the KiwiSDR message board that certain parts of the KiwiSDR's code - presumably elements not previously released under an open-source license - would be available only as binary "blobs" in the future.  The intent of this action - as it seems to be interpreted by many of the readers (including me) - is, in addition to protecting certain elements, to increase the difficulty of replicating the software in the future - and some might argue that this goes against the spirit of "open source" that was embodied in the original Kickstarter definition of the KiwiSDR.  Whether this is true or not, the reader should not overlook the fact that the primary author has spent (and continues to spend) a lot of time, effort and money in maintaining the KiwiSDR software, hardware manufacture and certain elements of infrastructure surrounding the KiwiSDR (proxies, DDNS, TDOA, to name three) and there is an understandable desire to "encourage" involvement (including buying "official" KiwiSDR boards, for example) that would go toward maintaining this.  The presumed argument is that follow-on versions based on the open source hardware and code - whether strictly adherent to the open-source licenses or not - are not compatible with his intent going forward.

Is it worth getting?

Is the RaspberrySDR a good deal?  It all depends on what you want to do with it.  At the moment, the ongoing support for it in terms of software development is a bit ambiguous as the "open source" nature of this fork seems to be a bit opaque, which is unfortunate.

For a general-purpose receiver that does not need the (useful!) facilities unique to the KiwiSDR network (proxy, TDOA, etc.) and for a receiver that includes the 6 meter band, this unit may fill a niche.

Again, the reader is cautioned that the "official" KiwiSDR brings to the amateur community several valuable features - including the TDOA - that require ongoing support which translates directly to people buying the "official" KiwiSDR boards, as I have clearly done.


Final comments:

  • For a general-purpose web-enabled remote receiver with decent performance, both the KiwiSDR and RaspberrySDR seem to be a good deal and the RaspberrySDR works reasonably well despite the bugs mentioned above.  The RaspberrySDR has the advantage that it also covers the 6 meter amateur band and has the potential of improved performance by virtue of its 16 bit (versus 14 bit) A/D converter.
  • The KiwiSDR kit with the Beaglebone Green and case - even though it costs more (approximately US$100 more than the RaspberrySDR) - has the distinct advantage of ongoing support along with the other infrastructure features mentioned above.  Like any open-source project, there will come the day when such support will cease and it will be up to others to try to build on what is in the repository at that time.
  • The current hardware of the KiwiSDR is starting to show its age for the reasons mentioned above and the existence of the RaspberrySDR shows that a relatively minor modification can potentially improve performance without a major rework of either hardware or software.
  • With the understanding that the time and resources of the primary author and frequent contributors to the KiwiSDR are limited in the ability to undertake such a change, I believe that it would be a mistake to overlook the potential (and "inspiration") of parallel work being done by others when it comes to keeping the KiwiSDR project up to date and relevant.

This page stolen from ka7oei.blogspot.com



Tuesday, July 7, 2020

An automatic transfer relay for UPS/Critical loads, for the ham shack, generator backup, and home

It is quite common to use a UPS (Uninterruptible Power Supply) to keep critical loads - typically computers or NAS (Network Attached Storage) devices - online when there is a power failure as even a brief power failure can be inconvenient.  Like any device, a UPS occasionally needs to be maintained - especially the occasional replacement of batteries - and doing so often necessitates that everything be shut down.

A simple transfer relay can make such work easier, allowing one to switch from the UPS to another load - typically unprotected mains, or even another UPS - without "dumping" the load or needing to shut down.

This type of device is also useful when one is using a generator to provide power:  Rather than dumping the load when refueling the generator, another generator could be connected to the "other" port, the load transferred to it, and the original generator be shut down and safely refueled - such as during amateur radio Field Day operations.
Figure 1:
Exterior view of the "simple" transfer relay depicted in Figure 2, below.
The "Main" power source is shown as "A" on the diagram.
Click on the image for a larger version.

But first, a few weasel words:
  • The project(s) described below involve dangerous mains voltages which can be hazardous/fatal if handled improperly:  Please treat them with respect and caution.
  • Do NOT attempt a project like this unless you have the knowledge and experience to do so.
  • While this information is provided in good faith, please do your own research to make sure that it suited to your needs in terms of applicability and safety.
  • Do not presume that this circuit or its implementation is compliant with your local electrical codes/regulations - that is something that you should determine for yourself.
  • There are no warranties expressed or implied regarding these designs:  It is up to YOU to determine the safety and suitability of the information below for your applications:  I cannot/will not take any responsibility for your actions or their results.
  • You have been warned!

The simplest transfer relay:

The simplest version of this is a DPDT relay, the relay's coil being powered from the primary power source - which we will call "A" - as depicted in the drawing below:

Figure 2:
The simplest version(s) of load transfer relays - the load transferred to "A" ("Main") upon its presence, switching to "B" (Aux) in its absence.
The version on the left uses a relay with a mains-voltage coil while that on the right uses a low-voltage transformer and relay coil - otherwise they are functionally identical.
Click on the image for a larger version.

How it works:

Operation is very simple:  When the primary power source "A" is energized, the relay will pull in, connecting the load to source "A".  Conversely, when power source "A" is lost, the relay will de-energize and the load will be transferred to the back-up power source, "B".  In every case that was tried, the relay armature moved fast enough to keep the load "happy" despite the very brief "blink" as the load was transferred from one source to another.

Two versions of this circuit are depicted:  The one on the left uses a relay with a mains-voltage coil while the one on the right uses a low-voltage coil - typically 24 VAC.  These circuits are functionally identical, but because low-voltage coil relays are common - as are 24 volt signal transformers - it may be easier to source the components for the latter.
Figure 3:
The interior of the "simple" transfer relay.  Tucked behind the outlet is the
DPDT relay with the 120 volt coil, the connections to the relay being made
using female spade lugs.  The frame of a discarded light switch
is used as a mounting point for a standard "outlet + switch" cover plate,
with neon panel lights being mounted in the slot for a light switch.
The entire unit is housed in a plastic dual-gang "old work" box.
Click on the image for a larger version.

The actual transfer takes only a few 10s of milliseconds:  I have not found a power supply that wasn't able to "ride through" such a brief outage but if a UPS is the load, it will probably see the transfer as a "bump" and briefly operate from battery.
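To put that "few 10s of milliseconds" figure in perspective, here is a rough back-of-the-envelope calculation - a Python sketch with purely illustrative values, not measurements from any actual supply - of how long the bulk capacitor in a typical capacitor-input supply can carry its load during the transfer gap:

```python
def ride_through_ms(c_farads, v_nominal, v_min, i_load):
    """Approximate hold-up time of a capacitor-input supply:  the bulk
    capacitor discharges from v_nominal down to the supply's minimum
    usable input voltage at a (roughly) constant load current:
    t = C * dV / I, returned here in milliseconds."""
    return (c_farads * (v_nominal - v_min) / i_load) * 1000.0

# Illustrative values: 1000 uF charged to ~160 V (rectified 120 VAC),
# a 0.5 A load, and a supply that tolerates droop down to 100 V:
t = ride_through_ms(1000e-6, 160.0, 100.0, 0.5)
print(round(t), "ms")   # 120 ms - comfortably longer than the relay transfer
```

Even with these conservative numbers, the hold-up time is several times longer than the relay's transfer time, which is why most supplies simply don't notice the "blink".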

Why a DPDT relay?

One may ask why use a DPDT (Double-Pole, Double-Throw) relay if there is a common neutral:  Could you not simply switch the "hot" side from one voltage source to another?

The reasons for completely isolating the two sources with a double-pole relay are multi-fold:
  • This unit is typically constructed with two power cords - one for each power source.  While it is unlikely, it is possible that one or more outlets may be wired incorrectly, putting the "hot" side on the neutral prong.  Having a common "neutral" by skimping on the relay would connect a hot directly to a neutral or, worse, two "hot" sides of different phases together.
  • It may be that you are using different electrical circuits for the "A" and "B" power, in which case bonding the neutrals together may result in circulating currents - particularly if these circuits are fed from disparate locations (e.g. via a long cord).
    • For readers outside North America:  While typical outlets are 120 volts, almost every location with power has 240 volts available, which is used to run larger appliances.  This is made available via a split-phase arrangement from a center tap on the distribution transformer which yields 120 volts to the neutral.  Because of this, different circuits may be on different phases, meaning that the voltage between two "hot" terminals on outlets in different locations may be 240 (or possibly 208) volts.
  • There is no guarantee that a UPS will "play nice" if its neutral output is connected somewhere else.  In some UPSs or inverters the "neutral" side may not actually be near ground potential - as a neutral is supposed to be - so it's best to let it "do its thing."

How it might be used:

With such a device in place, one simply needs to make sure that source "B" is connected, and when source "A" - typically the UPS, but it could be a generator - is disconnected, everything will get switched over, allowing you to perform the needed maintenance.

UPS maintenance:

When used with a UPS, I have typically plugged "A" (Main) into the UPS and "B" (Aux) into a non-UPS outlet.  If you need to service the UPS, simply unplug "A" and the load will be transferred instantly to "B".  Having "B" as a non-UPS source is usually acceptable as it is unlikely that a power failure will occur while on that input - but if you choose not to take that risk, another UPS could be connected to the "B" port.

I have typically kept input "B" (Aux) plugged into non-protected (non-UPS) power as a failure of a UPS would then not interrupt the power to the backed-up device(s) - but if you do this you must keep an eye on everything:  Unless it is monitored, the failure of a UPS may go unnoticed until there is a power failure! 

This same device has also been used in a remote site with two UPSs for redundancy, not to mention ease of maintenance.  One must, of course, weigh the risk of adding yet another device (another possible point of failure, perhaps?) if one does this.

Generator change-over:

During in-the-field events like Amateur Radio Field Day such a switch is handy when a generator is used.  It is generally not advisable to refuel a generator while it is running, even though I have seen others do it.  If, while gear is running on a generator, it becomes necessary to refuel it, another generator can be connected to input "B" and, once it is up to speed (and switched out of "Eco" mode if using an inverter generator), input "A" is unplugged for refueling, checking the oil, etc.

If you are of the "OCD" type, two generators can be used:  The generator on "A" would be running the gear most of the time, but if it drops out, a generator on "B" - which will have been under no load up to that point - will take over.

Disadvantages of this "simple" version of the transfer relay:

For typical applications, the above arrangement works well - particularly if power outages and maintenance needs are infrequent - and it works very well in the "generator scenario" where one might wish to seamlessly transfer loads from one generator to another.

It does have a major weak point in its design - and that's related to how the relay pulls in or releases.

For example, many UPSs or generators - especially the "inverter" types - do not turn on instantly, but rather ramp up their output voltage comparatively slowly - and by its nature, the relay coil may pull in at a voltage much lower than nominal - say, 80 volts.  When a load is transferred at this lower voltage, it may momentarily cause the power source to sag, dropping the load and/or causing the relay to chatter briefly - or the load may simply drop out owing to the too-low voltage.  The typical "work around" for this is to allow the "A" source to come up fully before plugging back into it - which is fine in many applications.

A "slow" pull-in on a relay can also be hard on relay contacts - particularly with a "slow" rise in the voltage from power source "A" - in which case the contacts may not close quickly enough to prevent excessive wear.  In severe conditions, this can even result in one or more of the contacts welding (sticking together), which is not at all a desirable condition.  For this reason it is a good idea to use a relay with a significantly higher current rating than the load you are planning to switch.

A slightly more complicated version:

What can help this situation is the addition of a short delay after power source "A" is applied but before the load is transferred to it - and better yet, we would like the load to be transferred only if its voltage is above a minimum value:  The circuit in the diagram below does this.

Figure 4:
This version of the transfer relay offers a short delay in transferring to load "A" as well as providing a low-voltage lock-out/detect.
The relay is a Dayton 5X847N - a 40 amp (resistive load) DPDT contactor with a 120 VAC coil.  This relay is likely overkill, but it should handle about anything one can throw at it - including capacitor-input power supplies that tend to be very hard on relay contacts due to inrush current.
Click on the image for a larger version.
How it works:

This circuit is based on the venerable TL431 - a "programmable" (via resistors) Zener diode/voltage reference - U1 in the above diagram.  A sample of the mains voltage is input via T1 which, in this case, provides 9-12 volts AC that is half-wave rectified by D1 and smoothed with capacitor C1.  LED D2 was included on the board mostly for testing and initial adjustment - but it also establishes an 8-12 milliamp static load to help discharge C1 when the mains voltage goes low - although the current consumption of the relay does this quite well.
Figure 5:
An exterior view of the version of the transfer relay depicted in Figure 4,
above.  The unit is mounted in a 6x6x4" electrical "J" box.
The 10 amp rating is a bit arbitrary and conservative considering that
the contactor itself is rated for 40 amps and the fact that capacitor-input
supplies are likely to be connected to it.
Click on the image for a larger version.

The DC voltage is divided down via R2 and R3 and this is further filtered with capacitor C2, with R3 being adjustable to provide a variable threshold voltage to U1.  The combination of R2 and C2 causes the voltage at their junction to rise comparatively slowly, taking a couple seconds to stabilize.
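The length of that delay falls out of the standard RC charging equation.  The component values below are purely illustrative - the actual values of R2 and C2 aren't given here - but they show how a couple-second delay arises:

```python
import math

def time_to_threshold(v_final, v_thresh, r_ohms, c_farads):
    """Time for an RC-charged node to rise from 0 V to v_thresh, given
    the final (asymptotic) voltage v_final it is charging toward:
    t = -RC * ln(1 - v_thresh / v_final)."""
    return -r_ohms * c_farads * math.log(1.0 - v_thresh / v_final)

# Hypothetical values: ~4 V asymptotic voltage at the wiper, the TL431's
# 2.5 V threshold, and an effective 100k/22 uF combination for R2 and C2
# yield a delay on the order of a couple of seconds:
t = time_to_threshold(4.0, 2.5, 100e3, 22e-6)
print(round(t, 2), "seconds")
```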

When power is first applied, C2 is at zero volts and will take a couple of seconds to charge.  When the wiper of R3 exceeds 2.5 volts, U1 will suddenly turn on (conduct), pulling the "low" side of the coil of relay RLY2 to ground and energizing it which, in turn, applies current to the coil of RLY1.  When this happens, the base of transistor Q1 is pulled toward ground via R6, turning it on:  Current then passes through R4 into the junction of R2 and R3, raising the voltage there slightly and providing some hysteresis.  For example, if R3 is adjusted so that RLY2 activates at 105 volts, once activated the voltage threshold for U1 will be effectively lowered to about 90 volts.
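The hysteresis can be modeled as a small offset injected at the R3 wiper:  Once the relay engages, less mains voltage is needed to keep the wiper above the TL431's 2.5 volt reference.  The numbers below are illustrative, chosen only to match the 105/90 volt example in the text:

```python
V_REF = 2.5   # TL431 internal reference voltage

def release_threshold(pull_in_mains, wiper_offset):
    """Mains voltage at which U1 turns back off, given the extra voltage
    that the Q1/R4 feedback adds at the R3 wiper once RLY2 is engaged."""
    k = V_REF / pull_in_mains        # wiper volts per mains volt at pull-in
    return (V_REF - wiper_offset) / k

# About 0.36 volts of injected offset moves a 105 volt pull-in threshold
# down to roughly a 90 volt drop-out, as in the example above:
print(round(release_threshold(105.0, 0.357), 1))   # 90.0
```

The larger R4 is, the smaller the injected offset - and the narrower the hysteresis band.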

If power source "A" disappears abruptly, RLY1 will, of course, lose power to its coil and open immediately - and a similar thing will happen if the voltage drops below approximately 90 volts, at which point RLY2 will open, disconnecting power to RLY1.  At this point Q1 is turned off and at least 105 volts (as in our example) will be required for RLY1 to be activated again.  Diode D4 may be considered optional as it will more quickly discharge C2 in the event the power on "A" goes away and suddenly comes back, but it is unlikely that its presence will usefully speed the response.

As noted in the caption of Figure 4, the relay used is a Dayton 5X847N, which has a 120 volt coil and 40 amp (resistive load), self-wiping contacts.  While 40 amps may seem overkill for a device with an ostensible 10 amp rating as depicted in Figure 5, it is good to over-size the relay a bit, particularly since many loads these days (computer equipment, in particular) can have very high inrush currents due to capacitor-input rectifiers, so a large relay is justified.

Note:  The 5X848 is the same device but with a 240 volt AC coil, while the 5X846 has a 24 volt AC coil:  All three of these devices are suitable for both 50 and 60 Hz operation.

Circuit comments:

Figure 5:
Inside the transfer relay unit.  The large, open-frame DPDT relay is in the
foreground while the 12 volt AC transformer is tucked behind it.  Mounted
to the wall behind it (upper-left in the box) is the piece of prototype
board with the smaller relay and delay/voltage sense circuitry.
Click on the image for a larger version.
U1, the TL431, is rated to switch up to 200 milliamps, but it's probably a good idea to select a relay that will draw 125 milliamps or less.  Because the contacts of the relay are simply switching power to the main relay (RLY1), RLY2 need only be a light-duty relay.

When I built this circuit I used a 5 amp relay with a 9 volt coil because I had a bunch of them in my junk box.  In checking it out, I found the coil resistance to be 150 ohms, meaning that at its rated voltage it would draw 60 milliamps.  The voltage across C1 when RLY1 was not active was measured at about 16 volts, so it was presumed that under the load of the relay this would drop by a volt or two - meaning that a series resistor passing 60 milliamps while dropping 6 volts (the difference between the 15 volt supply and the 9 volt coil voltage) should be used - and Ohm's law tells us that a 100 ohm, 0.5-1 watt resistor would do the job.
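That resistor selection is just Ohm's law; as a sanity check, here is the same arithmetic as a small sketch:

```python
def dropping_resistor(v_supply, v_coil, coil_ohms):
    """Series resistor value and its dissipation for running a relay coil
    from a higher supply:  drop the excess voltage at the coil's rated
    current."""
    i = v_coil / coil_ohms          # coil current at rated voltage
    r = (v_supply - v_coil) / i     # resistance needed to drop the excess
    p = i * i * r                   # power dissipated in that resistor
    return r, p

# 9 volt, 150 ohm coil (60 mA) from a ~15 volt loaded supply:
r, p = dropping_resistor(15.0, 9.0, 150.0)
print(round(r), "ohms,", round(p, 2), "watts")   # 100 ohms, 0.36 watts
```

At 0.36 watts of dissipation, the 0.5-1 watt rating mentioned above gives a comfortable margin.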


A variable AC supply (e.g. a "Variac") is essential for proper adjustment.  To start, the wiper of R3 is adjusted all the way to "ground" and then the applied AC voltage is set to 105 volts - a nice, minimum value for U.S. power mains.  Then, R3 is adjusted, bringing the voltage on its wiper upwards until RLY2 and RLY1 just close.  At this point one can lower the input voltage to 80-90 volts and, after capacitor C2 discharges, the relays will again open; one can then bring the voltage back up slowly and verify the pull-in voltage.

Figure 6:
The back side of the front panel of the J box:  A large, square hole was cut
in the front and a plastic dual-gang "old work" box with its back
cut away was used to facilitate mounting of the two outlets to the front panel.
Adhesive was used around the periphery to prevent the box from sliding
around on the front panel.
Click on the image for a larger version.
If less hysteresis is desired, the value of R4 can be increased to, say, 22k.  Note that despite the operation of Q1, some of the hysteresis is cancelled out by the voltage across C1 decreasing - due to the current through RLY1 - when the circuit is triggered, so a bit of hysteresis is absolutely necessary or else the relays will chatter!


As can be seen in figures 5 and 6, a 6x6x4 inch gray plastic electrical "J" box was used to house the entire unit - a common item found in U.S. home improvement stores.  A pair of "duplex" outlets were mounted in the front cover by cutting a large square hole in it and using a modified "old work" box with its back removed, giving a proper means of mounting the outlets.

A pair of front panel neon indicators indicate the current state:  The "B" indicator simply indicates the presence of mains voltage on that input while the "A" indicator is wired across the relay's mains-voltage coil and is thus indicative of the delay in the relay's closure.

The circuitry with the TL431 and RLY2 is constructed on a small piece of prototype board, mounted to the side of the box using stand-offs.  The 9-12 volt AC transformer - the smallest that I could find in my junk box (it's probably rated for 200 milliamps) - is also bolted to the side of the box.  Liberal use of "zip" ties tames the internal wiring, with special care being taken to keep any wire from touching the armature of the relay itself to prevent any interference with its operation!

Final comments:

Both versions work well and the "simple" version depicted in figures 1 and 2 is suitable for most applications.  For more demanding applications - particularly those where a transfer may occur frequently and/or the mains voltage may rise "slowly", the more complicated version is recommended.

Again, if you choose to construct any of these devices, please take care in doing so, being aware of the hazards of mains voltages.  As mentioned in the "Weasel Words" section, please make sure that this sort of device is appropriate to your situation.

This page stolen from ka7oei.blogspot.com


Thursday, July 2, 2020

What the heck happened to this Sense power monitoring module?

Figure 1:
The exterior of this Sense SM3 power sensing module.
The connections are made via barely-visible holes on the left side while a
WiFi antenna permits connectivity onto the user's wireless network.
Click on the image for a larger version.
A friend of mine had a "Sense" (tm) power monitoring system at his house for a couple of years.  This device works with additional software to allow a user to monitor power consumption within their house or business, potentially offering the ability to audit loads and manage their household power consumption.  It also has the ability to monitor the production of a rooftop solar system, allowing another means of monitoring its production and performance.

This system and its software wasn't without its minor quirks, but it did work pretty well.

Until recently.

A couple of months ago he started getting anomalous readings from the unit - and a day or two later, it failed to provide any current readings at all but it still read the mains voltage.  Upon opening his breaker panel he could detect the strong smell of burnt glass-epoxy circuit board so he knew that the unit had catastrophically failed in some way.

Figure 2:
The other end of the Sense unit showing the model number.
While masked for this picture, it appeared to be a
rather early production unit with a very low
serial number.  It would be interesting to know if that
fact was significant to this event.
Click on the image for a larger version.
He sent it in to the manufacturer to check about a repair and after a pandemic-induced delay of a month or two they finally got to looking at it and deemed it "Not economical to repair" with a comment about lightning damage.  They did offer to send him a refurbished unit for about the same price as a new one on sale, so he opted to have it sent back to him in the (unlikely) hope that a more "courageous" repair would be possible.

Thus, it landed on my workbench.

As it was, I could hear parts rattling about - almost never a good sign - and after using the "spudging" tool to get it apart I could see the problem:  Two arrays of incinerated 39 ohm surface mount resistors.

Lightning damage?  I think not!

Based on a cursory overview, this unit appears to directly rectify the 240 volt mains and apply it to a switch-mode converter - and this portion of the circuitry appeared to be relatively undamaged - a fact borne out by the owner who said that it was still reporting mains voltage when he pulled it from service.  What appeared to be "smoked" were the shunt resistors for both sets of CTs (current transducers) - and the question came up:  "How the hell did that happen?"

Figure 3:
The damage - while significant - did not appear to be "total":  Had I an exemplar from which to work I could have probably repaired this thing fairly easily - but one wasn't on hand and the circuit board traces were too badly damaged to, uhmm, trace.
Click on the image for a larger version.

Lightning damage or a power line transient causing damage/failure of the affected components seems unlikely considering the very nature of how CTs are connected and used:
  • First off, CTs are completely isolated (galvanically) from the current-carrying conductor that they are measuring, so some sort of "arc 'n spark" of mains voltage to the sensor input would seem to be out of the question.  I would expect that the stand-off voltage of the CT on the piece of wire being monitored would be in the high kilovolt range - and had there been enough voltage to break down the insulation, there would surely be visible evidence of it.
  • This damage appears to be a result of a longer-term fault than a brief transient, having occurred over enough time to thoroughly heat and char the board as seen in the pictures.  A very brief, high-energy transient would likely have blown components clear off the board and, at the very least, physically damaged other components in the signal path.
  • He has a "whole house" surge suppressor installed - a "good" one:  Certainly that would have suppressed a transient capable of causing direct damage via the CT input - assuming that it was likely at all.  Had a massive transient actually happened, one would expect that the suppressor would have shown signs of "distress".
  • An event capable of this sort of damage - again assuming a transient - would have surely caused other damage to something - anything - in the house:  This was not the case.
  • He has several grid-tie solar inverters at his house.  At the time of damage, these would have surely registered a transient event, had there been one.
  • Considering the time of year, the location, and the weather involved at the time this failed, the probability of lightning falls into the "bloody unlikely" category - particularly since the weather was fine in the day or two that it took for it to go from "sort of working" to "failed" status.

What was interesting was that the circuitry associated with both CTs - the one monitoring the mains, and the one for monitoring the solar - were similarly damaged, although the former appeared to be suffering far worse in terms of board damage.  As can be seen from the pictures, the damage is thermal, confined entirely to the area around the 39 ohm resistors.
Figure 4:
The most badly damaged of the set of sense resistors.
(Yes, pun intended!)
Click on the image for a larger version. 

So, what happened?

At this point, it's really not possible to be completely sure, but it looks as though there may have been a fault in both CTs (but how likely is that?) and/or a deficiency in the design of the monitoring board.

What are CTs?

CTs (current transducers) are nothing more than simple transformers:  One passes the wire to be monitored through the middle of a toroidal core, inducing a current in the many-turn secondary wound around it - a current directly proportional to (but much lower than) that flowing through the wire in the middle.  The way this is typically handled is to terminate the secondary winding with a resistance:  By using Ohm's law and measuring the voltage across that resistance, the current in the wire can be calculated.

It is absolutely imperative that a CT be terminated with a low-ish resistance as leaving it open-circuit can develop a tremendous voltage.  But, there is a potential problem (pun intended!):  Current transducers are very nearly ideal current sources - that is, whether you simply short the output or terminate it through even a fairly high-value resistor, the current will (ideally) be the same - but knowing Ohm's law, the higher the resistance, the greater the voltage drop for a given current - and the more power being dissipated in the shunt resistor(s).  Clearly, if the shunt resistance were to increase, something terrible would be bound to happen.
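This constant-current behavior is easy to quantify:  With the secondary current fixed, both the voltage across and the power in the shunt scale directly with its resistance.  A sketch with illustrative numbers (the actual CT ratio and operating current of this unit are unknown to me):

```python
def shunt_behavior(i_secondary, r_shunt):
    """Voltage across and power dissipated in a CT shunt, treating the
    CT as an ideal current source (fixed secondary current)."""
    v = i_secondary * r_shunt
    p = i_secondary ** 2 * r_shunt
    return v, p

# 50 mA of secondary current into a healthy 3.9 ohm shunt versus the
# same current into a shunt that has degraded to ten times that value:
for r in (3.9, 39.0):
    v, p = shunt_behavior(0.050, r)
    print(f"{r:5.1f} ohms: {v * 1000:7.1f} mV, {p * 1000:6.2f} mW")
```

Ten times the resistance means ten times the heat in the same resistors - exactly the kind of positive feedback that ends in a cascade failure.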
Figure 5:
 The lesser-damaged portion.  Amazingly enough, most of
these resistors still read within 10% of their original values,
likely explaining why the system "sort of" worked - until it failed completely.
Click on the image for a larger version.

What I expect happened was this:
  • The original component constituting the shunt resistance - which appears to consist of ten 39 ohm resistors in parallel (for 3.9 ohms) - may have been of marginal total dissipation rating.  Under a moderate load, it's possible that these resistors had been running quite warm and, over time, degraded, slowly increasing in value.
  • As the value increased, the calibration would have started to drift:  Whether or not that happened here over a long period is unknown - but the owner did report that it took a couple of days for the unit to go from sending alarms about nonsensical readings to the total loss of current readings.
  • As the resistance went up, so would the power dissipation of the sense resistors:  Because CTs are essentially constant-current devices, as the voltage increased, the power being dissipated by those resistors would also increase.  Since the resistance was likely increasing because the resistors were running hot, the added heat would have accelerated the previously slow-moving failure.
  • At some point, a cascade failure would have occurred, with the voltage skyrocketing - and the current remaining constant:  This would certainly explain the evidence on the board.
Interestingly, this unit carries a 200 amp rating for the CT/unit combination - but there was never a time where this rating was ever attained:  The circuit being monitored was on a 125 amp electrical service and the failure occurred during the early spring when no air conditioning was being used.  Additionally, the "solar" circuit - which is external to the 125 amp panel (on the "utility" side, in fact) and could not possibly have anywhere near the same current load as the entire house - was also damaged, but its resistors were not so completely incinerated as those related to the main CT.


I got my hands on a working unit and did a bit of tracing of the circuitry and found that the wiring associated with the burnt resistors was completely different from what I had expected in - at the very least - the following ways:
  • These resistors are connected in series to form a single 390 ohm resistance, one end of the string being connected to each of the larger power devices visible on the board.
  • The power devices are both marked "Z0M" followed by a "G" and "E822" and are made by ST.  Both devices test "open" with a diode test function on both the working and damaged units, but while they look like transistors, they are likely SCRs or Triacs.
  • The other ends of the resistors are connected across the mains - each string being connected to its own side.
  • When checking the CT inputs with an ohmmeter, I found no obvious resistive shunt - and the unit with the damage read identically to the known-good unit.
Further checking of nearby components didn't show any obviously-bad devices, seeming to indicate that the damage - both physical and electrical - was very localized to the resistor strings, so at some point I'll attempt a repair, possibly replacing the string of surface-mount resistors with larger, multi-watt 390 ohm units.

What was the problem, then?
Figure 6:
The main processor board for the unit.  The damage is
actually superficial - the board was covered with smoke
residue when the sense resistors incinerated themselves.
Click on the image for a larger version.

Assuming that there was not any sort of inadequacy in the original circuit design, I'm at a loss to explain the damage to the board.

What seems to have been the issue was, in fact, stressed components on the circuit board and/or a failure of the CTs themselves (or even the wrong CTs being supplied) - but it seems unlikely that both CTs would have failed in exactly the same way.

Barring other information, I'm tending toward believing that a gradual degradation of the shunt resistors - possibly owing to the original components being thermally stressed under normal conditions - was the problem, culminating in a cascade failure at the end.

It would be very interesting to have a peek inside other units of the same model and revision that have been installed for a while to see if they show thermal stress related to the shunt resistors.  A quick perusal on the GoogleWeb did not immediately reveal this to be a common problem, so it is possible that this is some sort of freak incident.

Unless he decides to get another unit of the same model to replace this one and a comparison is done, we'll probably never know.

This page stolen from ka7oei.blogspot.com


Sunday, June 7, 2020

An ESP8266-based Temperature, Humidity and Line Voltage monitor

Figure 1:
The completed Temperature/Humidity/Line Voltage web
server/telemetering device.  The remote temperature/humidity
sensor is the unit to the left.  The two AC-DC wall adapters used for
powering the unit and monitoring mains voltage are not visible.
Click on the image for a larger version.
As anyone who reads this blog probably knows, I have a bit to do with the operation and maintenance of the Northern Utah WebSDR - a remote receiver system that allows anyone with Internet access and a web browser to listen to the LF, MF, HF and some of the VHF bands as heard from a rural site in Northern Utah.  The equipment for this receiver system is located in a small building in the middle of mosquito and deer-fly infested range land near brackish marshes - nowhere that anyone in their right mind would like to be during most of the year.  With the normal weather in the summer and many clear days, this building gets hot at times:  It's been observed to exceed 130F (55C) on the hottest days inside - a temperature that causes the fans on the computers to scream!

Even though electronic equipment is best kept at much lower temperatures, this isn't practical in this building as it would be prohibitively expensive to run the on-site air conditioner full time - but all we really need to do is to keep the building closer to the outside temperature:  Even though that may be uncomfortable for humans, it is enough to keep the electronics happy.  To that end, vents have recently been installed to allow convection to pull away most of the heat and the exterior will soon be painted with white "RV" paint to (hopefully) reduce the heating effects of direct sun.

It would make sense, then, that we had a way to remotely monitor the building's internal temperature as a means of monitoring the situation.  Additionally, temperature information can also be used to make minor adjustments to the frequencies of some of the receivers' local oscillators to help counter thermal drift.

Figure 2:
The "business end" of the small board that contains the
ESP8266 module - the device with the metal shield.
This board also includes a USB plug, a CH340-like
USB to serial converter that allows for programming
and debugging and a voltage regulator that allows direct
operation of this board from a 5 volt supply.
As can be seen here and in Figure 4, the ESP8266 board was,
itself, mounted to a larger prototyping board for construction
of the ancillary circuitry.
Click on the image for a larger version.
On site we do have an Ambient Weather (tm) station, but anyone who has used this (or similar) hardware knows that some vendors of this type of gear make it difficult to obtain your own data without jumping through hoops:  Although this data is visually available on the local display or even on a web site, it is a bit awkward to pull this data from their system and (at least with the newer versions of the hardware) one cannot get this data locally from the weather station itself.

Fortunately, the most-needed data - temperature inside the building - is easily measured using inexpensive sensors, so it made sense to throw together a device that could make these measurements and present them in an easy-to-use web interface.

The ESP8266 "Arduino" board:

As is often the case with projects like this, the Internet has the answer.  The ESP8266 is an inexpensive embedded computer module that has a reasonable amount of program memory and RAM and it also sports hardware such as a WiFi module, several digital I/O pins and a 10 bit A/D converter.  What this means is that for less than U.S.$12 you can get two of these delivered to your doorstep that contain an already-mounted ESP8266 module on a carrier board with a USB port in a format that strongly resembles that of the ubiquitous Arduino development board.  More importantly, the Arduino IDE supports this board, meaning that it is pretty easy to use this hardware in your own projects.

Because the '8266 board has been available for quite a while, there is a large library of software for it - including a small web server and code to interface with many types of devices, including the well-known (and relatively inexpensive) DHT-22 temperature and humidity sensor.

Comment:  The ESP8266 variant used here appears to be the "12E" version which has 32 Mbit (4 Mbytes) of Flash memory and "around 50k" of RAM.

The "DHT Humidity and Temperature web server":

It took only a few minutes to find online several implementations of a web server coupled with the DHT-22 sensor - and I chose what seemed to be a popular version on a web site by Rui Santos - to look at it yourself, go here:


Presented in good detail, it took only about 20 minutes - tacking a few flying leads onto my $6 ESP8266 "Arduino" board to connect the DHT-22 sensor - before I had a wireless web server on my workbench that was happily reading the temperature and humidity.

Of course, getting something working can be miles from a finished project and that was certainly the case here as the project was about to be subject to self-inflicted feature creep and code bloat as I'd already decided that I wanted it to do two other things as well:
  • Monitor the AC line voltage.  The WebSDR receive site - being rural - suffers from very dirty AC mains power.  We have seen the nominal 120 volt mains exceed 140 volts for brief periods in addition to the frequent outages - and it would be nice to have a device that would allow us to record such excursions.
  • Telemeter the gathered information via RF.  Because the ESP8266 is a small computer - and it has data that we want - and we are at a radio receive site - it would be a simple matter to have this unit tap out the information using Morse code on a low-power, unlicensed (part 15) transmitter that was capable of being received by the on-site receivers.
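As a rough illustration of the telemetry idea - this is not the actual firmware, just a minimal Python sketch using standard Morse timing (dit = 1.2/WPM seconds) - a keying schedule for a short report might be generated like this:

```python
# International Morse patterns for the handful of characters used here:
MORSE = {
    "0": "-----", "1": ".----", "2": "..---", "3": "...--", "4": "....-",
    "5": ".....", "6": "-....", "7": "--...", "8": "---..", "9": "----.",
    "T": "-", "H": "....", "V": "...-",
}

def to_elements(text, wpm=20):
    """Convert text into a keying schedule of (key_down, seconds) pairs:
    a dah is 3 dits; gaps are 1 dit within a letter, 3 dits between
    letters and 7 dits between words."""
    dit = 1.2 / wpm
    schedule = []
    for word in text.upper().split():
        for i, ch in enumerate(word):
            pattern = MORSE[ch]
            for j, sym in enumerate(pattern):
                schedule.append((True, dit if sym == "." else 3 * dit))
                if j < len(pattern) - 1:
                    schedule.append((False, dit))        # intra-letter gap
            if i < len(word) - 1:
                schedule.append((False, 3 * dit))        # inter-letter gap
        schedule.append((False, 7 * dit))                # inter-word gap
    return schedule

# A hypothetical temperature report, e.g. "T 75" for 75 degrees:
sched = to_elements("T 75")
```

On the real hardware, a schedule like this would simply toggle a GPIO pin driving the low-power Part 15 transmitter.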
The final result is this, in schematic form:
Figure 3:
The schematic of the support circuitry of the ESP8266 unit described, including a pictorial representation of the processor board itself.
Click on the image for a larger version.
Circuit description:

The ESP8266 is treated as a single component - the support circuit being connected to the pins as noted with the '8266 itself being mounted on a larger board as can be seen in Figure 4, below.  It's worth noting that this is a 3.3 volt device which means that the "high" output voltage is around 3 volts:  If I'd needed a digital input, I would have had to make sure that the logic high input level was appropriately limited in voltage.

Power supply and monitoring:

There are two unregulated AC-DC transformer "wall warts" - each a low-voltage transformer (9-12 volts AC) with full-wave rectification and capacitive filtering - one to power the unit and the other to monitor the line voltage.  The separation of these two functions is necessary for obvious reasons:  We want the unit to keep running from the UPS when the AC mains is out, which means that it can't monitor its own power supply!  Even if it could, the current consumption of the unit varies a bit and, as a consequence, so does the unregulated voltage from its supply.  The source of power for the unit itself could be anything that provides 10-15 volts DC - regulated or not - but these AC-DC transformers were on hand and, being simple transformer-rectifier-filter units, they do not generate RF noise - unlike some switching-type devices - an important factor at a radio receive site.

The power from each of the AC-DC adapters enters via a screw terminal strip and immediately passes through a pair of bifilar-wound inductors, the purpose being to provide RF isolation:  Because this device contains a computer and a low-power transmitter, we don't want any signals from this device being radiated via the power leads.

The first AC-DC adapter is used to power the unit - a red LED indicating that voltage is present.  Following this is a "bog standard" 5 volt regulator using a 7805 to provide a lower voltage to feed to the "VIN" pin of the ESP8266 board and to run other circuitry on board.

The other wall wart has only a light load - most of the current being consumed by D4, an orange LED used to indicate that the mains voltage being monitored is present.  As you would expect, an unregulated AC-DC supply like this isn't a precision instrument when it comes to measuring line voltage:  It is not any sort of RMS-measuring device and, with its built-in filter capacitor, it's also relatively slow to respond - but it is "good enough" for the task at hand.

This voltage is divided down via R3 and variable resistor R4 to suit the 0-3.3 volt input range of the "A0" (analog input) pin on the ESP8266 module.  (Note:  It's reported that the A/D range of the "raw" '8266 module is 0-1 volt - apparently this board includes a voltage divider to scale from 3.3 volts.)  Resistor R4 is a 10-turn unit used for calibration of the line voltage.

Watchdog timer:

Figure 4:
The completed ESP8266-based temperature, humidity and line voltage
monitoring device.  The CW transmitter portion is in the upper-left corner
of the board with the 555-based watchdog timer below it.  In the lower-
right corner is the 7805 regulator with its heat sink with R4, the
calibration for the line voltage being seen just below the lower-left
corner of the ESP8266 board.  The gray wire at the top connects to the
small board containing the DHT-22 temperature/humidity sensor.
The entire unit is mounted via stand-offs into the lid of the plastic case
depicted in Figure 1.  Inside the lid I placed a sheet of self-adhesive
copper foil that is used as a ground plane to which the input filter
capacitors (C1, C3), the LEDs and the ground connections of the
board are soldered.
Click on the image for a larger version.
Because this device is unattended in a remote location I took the precaution of adding a simple hardware watchdog timer.  The software generates a pulse train (nominally a square wave) on pin "D2" which is then applied to transistor Q1.  Capacitive coupling via C7 is used because with a DC-coupled signal, a pin stuck "high" could have falsely satisfied the watchdog, masking a crashed processor.  The timer itself is the ubiquitous NE555, "programmed" via C8 and R7/R8 to have an approximately 45 second period.

The pulse train from pin D2 pulses Q1, keeping timing capacitor C8 discharged - but if the pulse train stops, pin 3 will go high after the timing period, briefly pulsing the "RST" (reset) pin of the ESP8266 via capacitively-coupled Q2.  A 45 second period was chosen as it takes about 8 seconds for the ESP8266 to "boot up" enough for the software to generate the keep-alive pulses - and it also allows just enough time to upload the program.

During initial development one would probably not plug the 555 into its IC socket, as spurious resets would likely be an annoyance while there is not yet code to generate the keep-alive pulses - but with the size of this project's code, the reset period is long enough to allow uploading before a reset occurs and the pulses resume.

Temperature/Humidity sensor:

The readily-available DHT-22 sensor is used, chosen over the slightly cheaper DHT-11 as the '22 offers a wider temperature and humidity measurement range - although the software can be configured to work with either one.  To avoid erroneous temperature or humidity measurements due to the unit's own heat, this sensor is mounted on its own board as depicted in Figure 3.  This small board carries not only a power supply bypass capacitor (C20) but also pull-up resistor R19.

The "sensor module" - visible in Figure 1 - was placed inside a small piece of ABS tubing (gray non-metallic electrical conduit) for protection with small pieces of nylon window screen glued to each end to keep out insects, but allow air flow to permit accurate measurements.

CW transmitter:

Because it is a computer - and there was plenty of code space - I decided to add Morse Code generation to provide telemetry that could be picked up by the HF receivers on site.  Stealing my own Morse-generating C code from a 20+ year old PIC project, I made minor modifications to it, using a hardware-derived timer in the main loop to provide a sending clock.  The Morse generating code toggles D3, setting it high to "key" the transmitter.

The signal from pin D3 goes to Q3 which is wired via current-limiting resistor R13 to Q4, a PNP transistor to provide "high side" keying of the unregulated V+ supply.  This voltage is then passed through resistor R14 which provides both a bit of current limiting and, with C12, some R/C filtering to slow the rise/fall of the voltage:  Without it the RF would have been keyed very "hard" causing objectionable "key clicks" on the rise and fall of the RF waveform.  This voltage is used to key both the buffer and the output amplifiers, described below.

The signal source is a 28.57 MHz crystal "can" oscillator module that I found in my junk box.  While I could, in theory, have done CW keying by turning this oscillator on and off, these oscillators aren't designed to be particularly stable and doing so would have caused the oscillator to "chirp" - that is, the short-term frequency drift that occurred when power was applied would have caused an objectionable shift in the received audio tone during keying.

Instead, the oscillator was powered continuously with its output fed to Q5, an emitter-follower buffer:  R15 "decouples" the oscillator from Q5 somewhat and without it, the RF current into the base of Q5 would increase when its collector voltage was switched off causing the oscillator to heat internally, resulting in a frequency shift.  The output of the buffer circuit is then passed via resistive and capacitive coupling to Q6 which is used as the final RF amplifier.  L1 is used to decouple its collector from the power supply while C16 removes the DC voltage from the RF output.  The remaining components - C17-C19 and L2/L3 comprise a low-pass filter resulting in harmonics that are at least 40dB below the fundamental.

This circuit was originally built without Q5, the buffer amplifier, but I had two issues that could only be resolved by its addition:
  • Backwave.  Because the oscillator runs continuously, there will inevitably be a bit of leakage - and in CW, where it is the very presence and absence of the signal that conveys the information, a rather strong residual signal where there is supposed to be silence made it difficult to "copy" the code.  When Q6 was turned off, there was enough leakage between its base and collector to offer only about 15 dB of attenuation when the transmitter was un-keyed.
  • Oscillator stability.  As noted above, R15 was used on Q5 to limit the current out of the oscillator to prevent frequency drift as buffer transistor Q5 was keyed.  When I'd tried to drive the output (Q6) directly - without a buffer - I had the same problem:  If I coupled enough energy to drive the transistor, the frequency would vary with the CW keying - and the backwave would get worse - but if I increased the resistor enough to reduce the drift, the transistor would no longer be fully driven, which also increased the relative backwave as the transistor's output fell in comparison to the signal leakage.
With the circuit built as shown, the backwave is at least 40 dB below the keyed output - which is more than adequate for the task.

The code:

As noted above, the basis of the project was that published by Rui Santos - and in the spirit of open source, the code was modified:
  • The original code included some small graphics served on the web page - but in line with the KISS principle, this was stripped out in favor of the simplest text, using only standard HTML formatting.
  • Additional code was added to read the AC mains voltage via pin "A0".  In the main "loop" routine, this input is read 100 times a second and then averaged, with a new reading made available every second.  This average removes most of the noise on the pin - some of which is internal to the '8266 itself, but the majority of which is due to a small amount of AC ripple on the voltage monitoring line.
  • Additional code was added to record the minimum and maximum of all of the monitor parameters - that is, temperature, humidity and line voltage.
  • The default of the code was to read the temperature in Celsius, but being in the U.S. I added code to give the readings in Fahrenheit as well. 
  • The web server code was modified to display all of the available data - the temperature in Fahrenheit and Celsius, the humidity and line voltage - and their minimum and maximum values.
  • An additional modification was made to the web server code to allow each of the data points to be read in simple text format, simplifying parsing for remote monitoring and logging of this data.  The nature of the web server actually made it very easy!
  • Yet another modification was made to reset the minimum and maximum readings and to provide information as to how many seconds it had been since a reset had occurred.  The temperature/humidity min/max reset is separate from the line voltage min/max.
  • The code also keeps track of mains voltage that falls below a threshold (an outage) or exceeds a threshold (a "surge") - both of which are extremely common at the remote receive site. 
  • I added my own Morse generation code, ported from some PIC-based "C" code that I wrote about 25 years ago and interfaced it with the main timing loop.
Source Code:

If you are interested in the source code (sketch) you may find it at THIS LINK.

Figure 5:
 A screen shot of the web page.  This same information is available
from individual links (on the bottom half of the page) that will
return just the information requested, making it trivial to obtain
individual data points using something like WGET.
Click on the image for a larger version.

As mentioned, with the WiFi capability of the ESP8266, the information that this device records is available via a web page on the wireless network:  One need only enter the SSID and wireless password at compile time.  For reasons obvious to anyone familiar with the Internet, this device isn't accessible from the web itself, but only to devices on the local network.

With the CW generator operating on 28.57 MHz, this signal lands within the 10 meter amateur band - and with this device being co-sited with receivers, it is a simple matter of tuning to that frequency to hear the telemetry.  Even though the RF output power is on the order of 50 milliwatts, the actual transmit "antenna" is a very small piece of wire - large enough to radiate just enough signal to be heard via the nearby antennas but not nearly enough to exceed FCC part 15 rules, eliminating the need for this device to transmit an FCC-issued callsign.


This device will soon be installed at the Northern Utah WebSDR and when it is, this page will be updated with information about how you can hear the Morse telemetry.

This page stolen from ka7oei.blogspot.com