Tuesday, November 17, 2020

Interesting signals on the 20 meter band: Probable Radio Habana Cuba transmitter malfunction - not jamming

 I happened to be looking at the various receivers at the Northern Utah WebSDR - as I'm wont to do, since I maintain them! - and noticed a few strange-looking signals that hadn't been there before:

Figure 1: 
Obvious QRM (interference) in the 20 meter amateur band.  The signal repeated every 66 kHz or so, allowing its source - below the 20 meter band - to be easily divined. 
Click on the image for a larger version.

The first thing that I did was to check other receivers - both on-site and across the U.S. - to make sure that this wasn't some sort of local problem (overload, image, nearby source) and found it elsewhere - and the selective fading visible in the waterfall display made me quite sure that this was ionospherically propagated and not local.  The errant signal was practically nonexistent in the Eastern U.S. - but given the known skip distance on 20 meters, that could simply mean that those receivers were too close to the source, geographically, to hear it via skip.

When tuned in using AM, there was a very obvious audio tone (approximately 363 Hz) associated with the signal with a vestige of distorted speech underneath and the RF signal itself wasn't stable frequency-wise.  The tell-tale sign that this was more likely a spurious signal of some sort was the fact that this seemed to appear at intervals - roughly 65-70 kHz - so I decided to "follow the money", tuning lower in frequency and finding stronger and stronger instances.

Figure 2:
YouTube clip with audio from the errant spurious signal.  This clip - from one of the instances of the spurious signal "nearby" the original - clearly contains Spanish-language audio:  A clue as to a possible source!

Adjacent to the 20 meter amateur band is the 22 meter Shortwave Broadcast Band, and there I found the culprit:  A Radio Habana Cuba signal with the same sort of tone on it, symmetrically flanked by the same sidebands.  Using the TDOA feature of the KiwiSDR network clinched the diagnosis:  I tuned to one of the lower-frequency components of this signal, ran the analysis and came up with the results, below:

Figure 3:
  Several TDOA runs on the WebSDR network yielded the same results:  The errant signal appeared to be coming from western Cuba.  The main signal was not actually on 13563 kHz:  It was slightly higher up the band (probably 13700 kHz) - I just picked this particular spurious component because it was one of the strongest ones and "in the clear" - not atop another signal. 
Click on the image for a larger version.

Clearly, the program material matched the location!

While writing this, the spurious signal suddenly disappeared at around 1503 UTC:  Perhaps someone noticed the problem and switched the errant transmitter off (or fixed something) - or maybe whatever it was that had been failing finally gave up the ghost?

Interesting!

Update:

The same problem was noted again on 18 November (during the 1500 UTC hour) with spurious signals appearing on the 22 and 19 meter shortwave broadcast bands with interference again appearing in the 20 meter amateur band.  Again, the KiwiSDR TDOA network showed the likely source of the signal to be Cuba.

Either the folks at Radio Habana Cuba are unaware of the problem, or don't care enough to fix it/curtail transmissions to avoid causing issues across the HF spectrum!

The most likely source of the interference is the transmitter on 13700 kHz as it is symmetrically flanked with spurious signals above and below, spaced about 68 kHz (variable).  There is clearly something wrong with the 11760 kHz transmitter as well based on its long-term issues of very poor audio quality.
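As a quick sanity check of the 13700 kHz hypothesis, a few lines of Python (the fixed 68 kHz spacing is an assumption for illustration - the real spacing varied) predict which spurious components would land in the 20 meter amateur band, and show that one lands within a kilohertz of the 13563 kHz component used for the TDOA runs:

```python
# Rough illustration: spurs from a 13700 kHz fundamental at ~68 kHz spacing.
fundamental_khz = 13700.0   # suspected source (22 meter broadcast band)
spacing_khz = 68.0          # observed spur spacing (variable, roughly 65-70 kHz)

spurs = [fundamental_khz + n * spacing_khz for n in range(-5, 11)]

# Components falling inside the 20 meter amateur band (14000-14350 kHz):
in_20m = [f for f in spurs if 14000.0 <= f <= 14350.0]
print(in_20m)   # [14040.0, 14108.0, 14176.0, 14244.0, 14312.0]

# n = -2 predicts 13564 kHz - within 1 kHz of the observed 13563 kHz spur,
# consistent with the slightly variable spacing.
print(13564.0 in spurs)   # True
```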

This page stolen from ka7oei.blogspot.com

[End]

Saturday, November 14, 2020

High-current DC (and AC) noise filters for UPS or RV use

A friend of mine (Glen, WA7X) acquired a 16 kVA UPS (for free!) a year or so ago - a commercial system consisting of four hot-swappable 4 kVA modules:  With his current load, he only uses one of the four modules, the rest being available as spares or providing room to grow.  Using this as a battery back-up system for important devices in his house (computers, etc.) it's active all of the time as it is an "online" UPS - that is, the inverter pulls power from the battery bank, but the battery bank is always being charged.

Figure 1:
Whiteboard diagram of the dual AC mains filter for the
UPS - See text for details
Click on the image for a larger version.

AC-side filtering:

When he first installed the UPS, he discovered that, being a commercial ("Class A" under FCC Part 15) device, it trashed the 20 meter amateur band and caused interference on a few others.  This, however, was easy to remedy:  He asked me for advice and built a larger version of the UPS noise filters that we'd implemented together in the past:  See the article "Containing RF Noise from a Sine Wave UPS" - link.  

Being capable of many kVA, the filtering for this UPS had to be built from scratch rather than using (expensive!) commercially-available filter modules, but this was easily done using readily-available ferrite toroids and bypass capacitors.

Figure 1 shows the general diagram, crudely sketched on a white board in his shop after our consultation.  The inductors are 12-14 turns of 6 AWG on FT240-31 cores, each half (phase) having an equal number of turns for best common-mode suppression as depicted in Figure 2.  Because the UPS outputs 240 volts, the 50+ amp capability of unbundled 6 AWG wire is sufficient for the envisioned load on this UPS.


Figure 2:
The inside of the dual mains filter, built
into a standard NEMA box.  The capacitors
- mostly obscured - are connected to the
blocks with the ground side bonded to the
case.
Click on the image for a larger version.

The filter uses suitably-rated parallel 0.01uF and 4700pF capacitors:  Those across the AC leads (which could have been as large as 0.1uF or so) help force the RF energy to be common-mode across the bifilar choke while the capacitors to ground on the "outside" (non-UPS) side of the filter shunt the remaining RF - which is already at higher impedance due to the choke - to the common-point ground.  Shown in red on the drawing in Figure 1 are large 43 Mix slip-on beads on the "UPS" side of the filtering to better-suppress the high frequency (VHF) components:  Ideally, one would run both conductors through each bead for net zero flux on the core, but larger diameter devices were not available at the time of construction.

The filter pictured in figures 1 and 2 completely solved the RFI problem:  One has to get within a few inches of the UPS cabinet to hear magnetically-coupled RF energy with a portable shortwave radio.

DC-side filtering:

It wasn't a huge surprise, then, when he added more battery capacity external to the UPS - 120 volts DC - and the racket on 20 meters and other bands reappeared.  Because RF is RF, the filtering method for the DC leads is exactly the same as required for the AC leads:  Common-mode choking, bypass capacitance and single-point grounding techniques. 

Considering that the UPS is capable of up to 16 kVA, the DC filter needed to be able to handle more than 100 amps at the 120 volt (nominal - about 138 volts, actual) input.  Looking about, he found a pre-made set of 6 foot long, 2 AWG, very flexible "inverter cables" at Harbor Freight (cost:  $35) that were conveniently available - easily capable of handling about 100 amps - more than enough because he was not ever expecting to load the UPS to its capacity.

Because of the size of the wire, standard FT-240 (2.4 inch/61mm O.D.) cores aren't appropriate, so Glen obtained some "Monster" size toroids (Mix 31) from KF7P.com:  These cores are about 4" (102mm) in diameter and it was possible to wind 7 bifilar turns of the 2 AWG wire onto them, yielding about 170 uH - more than enough inductance to provide adequate choking on the HF bands.

Because they were on-hand, the same capacitors were used - 0.01uF and 4700pF in parallel:  With a DC system, much larger-value capacitors (e.g. 0.1-10uF) of appropriate voltage rating could also have been used if lower-frequency attenuation were required.  Like the AC choke, large slip-on ferrite beads (mix 31 in this case) were slipped over each of the 2 AWG wires on the "UPS" side to help suppress the higher-frequency energy.  Because of the current involved, 200 amp screw terminal strips were procured - both to terminate the wires comprising the inductances and to provide connections to the "outside" world.
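A bit of back-of-the-envelope math shows why these values work.  This is only a sketch - it assumes ideal components and ignores the resistive loss that mix-31 ferrite deliberately exhibits at HF - but it illustrates the basic idea:  The choke presents kilohms of series impedance across the HF range while the bypass capacitors present only an ohm or two of shunt reactance:

```python
import math

# Figures from the text: 7 bifilar turns -> ~170 uH on the "monster" mix-31 core.
turns = 7
L_choke = 170e-6              # henries, as stated

# Implied AL of the big core (L = AL * N^2):
al_nh_per_turn2 = L_choke / turns**2 * 1e9
print(f"Implied AL: {al_nh_per_turn2:.0f} nH/turn^2")   # roughly 3.5 uH/turn^2

def reactance_choke(f_hz):
    return 2 * math.pi * f_hz * L_choke          # ohms, ignoring self-resonance/loss

def reactance_caps(f_hz, c=0.01e-6 + 4700e-12):
    return 1 / (2 * math.pi * f_hz * c)          # parallel 0.01uF + 4700pF

for f in (1.8e6, 7.0e6, 28.0e6):
    print(f"{f/1e6:4.1f} MHz: choke {reactance_choke(f):7.0f} ohms, "
          f"caps {reactance_caps(f):5.1f} ohms")
```

Even at 160 meters the series reactance is nearly 2 kilohms against a couple of ohms of shunt reactance - and in practice the ferrite's loss component helps further by dissipating, rather than merely reflecting, the RF energy.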

Figure 3:
The completed DC noise filter.  The bifilar-wound choke on the "monster" 31-mix core is wound with 2 AWG welding/inverter cable:  The slip-on ferrites on the "UPS" side of the DC are clearly visible.
Mostly obscured are the bypass capacitors, connected to the screw-type terminals.
Click on the image for a larger version.

There are a few caveats to making a filter like this work:

  • The "ground" lead must be as near zero length as possible.  This box was bolted directly to the box containing the AC input/output filter described above  (which, in turn, is bolted to the UPS cabinet) to establish a single point ground where the RFI on the AC in/out leads and the DC leads come together:  Connecting the two boxes with just a few inches of wire caused noticeable degradation in its performance!
  • The cables connected to the UPS must be considered to be "dirty", carrying a lot of RFI, and must be kept as short as possible.  Additionally, one must keep other wires away from these "noisy" leads to prevent interference from being re-coupled into them!
  • The external battery bank itself has its own fuse, at the battery bank:  Do not even think of connecting a high-current power source like this without some sort of short-circuit protection!

As with the AC filter, this one appears to be completely effective with no conducted noise being detected on the leads of the external battery connection.

Where might these techniques be applied?

The filters shown above are simply "scaled up" versions of those described previously on this blog (links below) to handle higher voltage and current.  A few instances where these techniques might be useful include:

  • Adding higher battery capacity to an existing UPS.  You may own a UPS that will power your gear, but simply has too little battery capacity for the desired run time - and adding external battery capacity safely (e.g. fused, insulated) is one way to do this.  As in this case, adding more capacity caused radiation of RFI which had to be suppressed.
  • Suppress noise from an existing UPS.  Many modern UPSs are likely to create RFI - and these pages show how that might be mitigated.
  • Suppress noise from an RV power system.  Many RVs (recreational vehicles) have power converters (AC to DC for charging batteries) and inverters (DC to AC for running mains-voltage devices) that are likely to generate RFI.  The techniques described on these pages show how it is practical to prevent the conduction/radiation of RFI on both AC and DC leads.

Thanks to Glen, WA7X, for supplying the pictures:  I just scribbled down diagrams and notes and gave him a few capacitors - he's the one that actually built the thing!

Related links:

Links to other articles about power supply noise reduction found at ka7oei.blogspot.com:

The large, ferrite toroids and beads used on this project were obtained from KF7P.com - link.

* * *

This page stolen from ka7oei.blogspot.com

[End]



Thursday, October 22, 2020

Using the jt9 executable to receive FST4W signals

Note: 

Since originally posted, WSJT-X v2.3.0-rc2 was released, adding a feature that simplifies this process as described below.  Note that most of this post was written soon after "rc1" had become available.

* * *

As a heavy user of K1JT's WSPR mode, operating on the 2200 and 630 meter bands, I have noted with interest the introduction of the "FST4W" mode in the recent (v2.3.0-rc1) WSJT-X release.  Operating in the same detection bandwidth as WSPR (when FST4W is run in the 120 second mode), it offers a theoretical 1.4dB improvement in detection sensitivity.

Being involved with wsprdaemon (link to that project here ) - an open-source project that automates and optimizes reception of WSPR signals on all bands, particularly if multiple receivers/antennas are used - we have been watching this development with interest, particularly since FST4W is likely to supplant conventional WSPR operation, especially on the lowest amateur bands (2200, 630 and possibly 160 meters) where minimal Doppler shift is expected.

Internally, WSJT-X  uses the subordinate wsprd program as the decoding (and encoding) engine.  As a stand-alone program, the wsprd executable code may be invoked with a command line to decode signals contained within a .wav file that was captured during the standard two minute interval - aligned with even UTC minutes - and produce a text file containing the decoded signals.

Why use the executable rather than the entire WSJT-X suite?  The full suite does not lend itself easily to script-driven, bare-minimum, lightweight implementations in which the decoded data is further processed - e.g. removing duplicate decodes from multiple receivers and antennas, or using this same data for further signal/noise analysis.

The "jt9" executable:

After a bit of digging about, it was "discovered" that FST4W - being an offshoot of the JT9 protocol - was handled not by the wsprd executable, but the jt9 executable.  Simply executing this program with no arguments will yield a list of command-line arguments which, on the face of it, made it appear that updating the wsprdaemon to include the decoding of FST4W signals would be a relatively simple matter.

Except that it didn't work.

Initial testing with strong, off-air FST4W signals that were known to be decodable (because farther-flung stations were able to decode the very same transmissions) yielded no results when the .wav file was applied to the jt9 program - though automatic execution over many hours yielded the occasional off-air decode.  Confused by this, I sought help on the WSJT-X groups.io forum.  Fortunately, Joe Taylor and several of the developers offered a clue:  The "-f" parameter of the jt9 executable, described minimally as "Receive Frequency Offset".

Apparently, the default center frequency of the jt9 executable - at least when in FST4W mode (and maybe others) - is 1500 Hz, a fact implied by the display of command-line arguments.  What is not so clear - and only alluded to in the available documentation - is that the apparent bandwidth of the decoding, at least in the 120 second mode, is on the order of 40 Hz (+/- 20 Hz).  Addendum:  This issue was fixed with the "-F" parameter - see below.

At a quick glance through the source code (file "jt9.f90"), this bandwidth setting appears to be hard-coded into a shared variable (apparently accessible by other programs in the WSJT-X suite) called "ntol" (likely a number referring to the "frequency tolerance" setting in the GUI) that is not available via the jt9 command line - at least, not without modification of the source code.  (The possibility of directly accessing these shared variables exists - but this would be platform-specific, a bit messy and somewhat dangerous!)

Unfortunately, this fixed +/-20Hz bandwidth does not appear to be compatible with the way that the FST4W mode has (already!) found use on 2200 and 630 meters, where it is used alongside the WSPR mode in the 200 Hz subbands.

A hell of a kludge:

Update - kludge no longer needed:

As of version 2.3.0-rc2 it appears that a new parameter "-F" was added to allow something other than a +/-20Hz bandwidth (referred to as "tolerance") to be used, likely eliminating the need for multiple decodes, below.  A possible command-line for this would be:
jt9 -W -p 120 -f 1500 -F 200 <wav file to be processed> 
With the center frequency (-f) being the center of the passband (1500 Hz) and the "-F" parameter, referred to as "tolerance" (e.g. detection bandwidth), being 200 Hz. 
Initial testing indicates that the -F parameter does what it's supposed to do and the kludge below is no longer required.

The +/-20 Hz limitation implies that in order to use something other than the GUI version of the WSJT-X software (with v2.3.0-rc1), a work-around must be invoked.  The following is a bare-minimum example of how one might do this via the command line:

jt9 -W -p 120 -f 1420 <wav file to be processed> 

jt9 -W -p 120 -f 1460 <wav file to be processed>

jt9 -W -p 120 -f 1500 <wav file to be processed>

jt9 -W -p 120 -f 1540 <wav file to be processed>

jt9 -W -p 120 -f 1580 <wav file to be processed>

(One might include the -H, -L and -d parameters in actual practice.)

In other words, in order to cover the entire 200 Hz WSPR subband, the JT9 executable (v2.3.0-rc1) must be executed - processing the same .wav file - at least five times:  The results of the decoding will, in each case, be found in the file "decoded.txt".  If one wishes to implement an equivalent of the -w parameter of the wsprd executable (e.g. +/- 150 Hz "wideband" mode), you will need even more invocations than above.

The result from the above mess will be five different decoding results, each of which must be saved (e.g. renamed) between subsequent executions to prevent each run from overwriting the previous results.  After this, the five results would be concatenated to yield a single file - but there is a catch:  It is likely - particularly if the signal is strong - that the same signal will be decoded more than once.  Apparently, the "+/- 20Hz" limit isn't the result of a "brick-wall" filter:  Signals beyond this frequency range may be decoded, but the reported S/N values will likely be reduced as the distance of the received signal from the specified center frequency increases.  In short, this means that the concatenated version of the "decoded" file(s) must be sorted and all but the single "strongest" decode (e.g. best SNR) for each station must be discarded.
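As a sketch of that sort-and-discard step (the field positions are taken from the "decoded.txt" example shown later in this post; the helper name is mine, not part of WSJT-X or wsprdaemon), one might merge the concatenated results like this:

```python
# Sketch: merge "decoded.txt" output from several jt9 passes over the same
# .wav file, keeping only the best (highest) SNR decode for each callsign.
# Assumed decoded.txt field order: time, drift?, SNR, DT, audio freq, ?,
# callsign, grid, dBm, mode.
def best_decodes(lines):
    best = {}
    for line in lines:
        fields = line.split()
        if len(fields) < 7:
            continue                  # skip "<DecodeFinished>" and the like
        snr = int(fields[2])
        call = fields[6]
        if call not in best or snr > best[call][0]:
            best[call] = (snr, line.strip())
    return [entry[1] for entry in best.values()]

# In practice, the input would be the concatenation of the renamed decoded.txt
# files from the five invocations (-f 1420, 1460, 1500, 1540, 1580):
merged = best_decodes([
    "0416   0  -28   0.7   1515.   0   KA7OEI DN40 17   FST4",
    "0416   0  -24   0.7   1515.   0   KA7OEI DN40 17   FST4",
])
print(merged)   # only the -24 dB copy of KA7OEI survives
```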

Comment:

It would appear that just five iterations to cover the 200 Hz bandwidth is not enough:  I received correspondence from a reader of this blog who observed that a frequency variation of less than 10 Hz from that defined by the "-f" parameter can affect the S/N reading by about 1 dB.  Make of that what you will!

* * * * * * * * *

If one wishes to integrate the FST4W decodes into the existing WSPR captures for processing, yet another step must be undertaken:  "Fixing" the formatting.  Not surprisingly, the output in "decoded.txt" is not formatted the same as the results from the wsprd executable, meaning that one will need to do a few things, after the fact, to "fix" them - particularly if you wish to forward them to wsprnet.org - including:

  • Supply the date.  The "decoded.txt" file includes the time - but not the date.  Because the date of the .wav file may not be the same as the system date (e.g. later processing of the .wav files - or the interval being processed occurred just before the new day), one must use the actual date of the recording.  The obvious place to obtain this is from the name of the .wav file being processed.
  • Frequency offset.  The information that one might send to wsprnet.org must include the carrier frequency of the received signal, but the output in the "decoded" file has only the audio frequency:  One must obtain the LO frequency of the receiver being used from "somewhere else" and calculate this on the fly.
  • Supply missing information.  The "decoded.txt" file does not have all of the same information fields that one might supply when uploading WSPR spots, so this information must be added as necessary.
  • Arrange the fields in the proper order.  Once the needed information is applied, one will probably want to use "awk" or similar to produce the same order as the wsprd data - assuming this wasn't already done in the process.
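A hypothetical sketch of those four steps - the wav-file naming convention, the helper name and the output field order below are all assumptions for illustration, not the actual wsprdaemon or wsprnet.org formats:

```python
import os

def fst4w_to_spot(decoded_line, wav_path, dial_hz):
    """Convert one decoded.txt line to a wsprd-like spot record (assumed layout)."""
    f = decoded_line.split()
    # Assumed decoded.txt fields: time, drift?, SNR, DT, audio freq, ?, call, grid, dBm, mode
    time_utc, snr, dt = f[0], f[2], f[3]
    audio_hz = float(f[4].rstrip('.'))
    call, grid, dbm = f[6], f[7], f[8]
    # 1) The date comes from the recording's filename (assumed YYMMDD_HHMM.wav),
    #    not from the system clock:
    date = os.path.basename(wav_path).split('_')[0]
    # 2) Carrier frequency = receiver dial (LO) frequency + audio offset:
    freq_mhz = (dial_hz + audio_hz) / 1e6
    # 3+4) Supply the missing fields and arrange them in a wsprd-like order:
    return f"{date} {time_utc} {snr} {dt} {freq_mhz:.6f} {call} {grid} {dbm}"

spot = fst4w_to_spot("0416   0  -24   0.7   1515.   0   KA7OEI DN40 17   FST4",
                     "201022_0416.wav", dial_hz=474200)
print(spot)   # 201022 0416 -24 0.7 0.475715 KA7OEI DN40 17
```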

* * *

There are two outputs from the jt9 executable - one printed directly by the program to the console, and the other written to the file "decoded.txt" - and the latter is the most useful. 

Console output:

0416 -24  0.7 1515 `  KA7OEI DN40 17                                 
<DecodeFinished>   0   1

The fields are:  <time UTC> <SNR in dB> <DT?> <Audio frequency in Hz> <always "`"> <Callsign received> <Grid of received station> <Reported TX power in dBm>

From the "decoded.txt" file:

0416   0  -24   0.7   1515.   0   KA7OEI DN40 17                        FST4

 The fields are:  <time UTC> <unknown - possibly drift in Hz> <SNR in dB> <DT?> <Audio frequency in Hz> <unknown> <Callsign received> <Grid of received station> <Reported TX power in dBm> <Always "FST4">

* * *

There you have it:  The germ of what would be needed if one wishes to supplement the existing WSPR decodes with the newer FST4W mode using just the bare executables.  If one wishes to decode other than the 120 second FST4W mode, things get even more complicated!

Sample audio file:

An audio file containing both FST4W-120 and WSPR transmissions may be found HERE - right-click to download.  This file contains an FST4W-120 transmission by KA7OEI from about 116km distant and (at least) two WSPR transmissions.

* * *

P.S.:  While it would be pretty trivial to tweak the code to allow modification of the ntol variable via the command line, this would complicate the ongoing maintenance of the wsprdaemon code.  We can only hope that the current authors see fit to include a means by which the entire WSPR subband can be monitored with a single invocation of the jt9 executable.

 

This page stolen from ka7oei.blogspot.com

[End]


Saturday, September 26, 2020

Revisiting the "Limited Attenuation High Pass" filter - again.

In several previous posts (see "A Limited Attenuation High Pass Filter" and "Revisiting the Limited Attenuation High Pass Filter") I described a "high pass" filter that offered low attenuation at the higher HF frequencies, but a controlled amount of attenuation at lower frequencies - this, to accommodate a fundamental fact about both HF propagation and direct-sampling Software Defined Radios (SDRs):  The two don't play nice with each other!

As noted in the previous post(s), the problem is two-fold when it comes to broad-band SDRs that are intended to cover the entire HF spectrum all at once:
  • HF noise power and signal level is (generally) inversely proportional to frequency.  At lower frequencies - say, 2-8 MHz - the noise power is far higher than it typically is at around 20-30 MHz.
  • A direct-sampling SDR - or any receiver, for that matter - can tolerate only so much RF power on its front end.  Traditionally, this is mitigated with the use of narrow-band RF band-pass filters, but this can't be done if one intends to cover the amateur radio bands 160 through 10 meters (1.8-30 MHz) all at once.
Compounding the aforementioned issues is yet another one:  Because the noise floor at 10 meters when it is "quiet" is so much lower than that at 80 meters (perhaps 40-50 dB during noisy nighttime conditions, 25 dB or so during quiet daytime conditions), there is an intrinsic disparity between the amount of sensitivity needed to "hear everything" at the opposite ends of the HF spectrum - but since a typical direct-sampling SDR is pretty much "flat", we end up with what might seem like a pair of intractable problems:
  •  To accommodate the very strong signals and high noise levels at lower HF frequencies, the RF signal gain in front of the A/D converter must be carefully set to prevent overload.
  • In order to "hear" the noise floor at 10 meters, the system gain must be set fairly high.

What these two factors, together, imply is that if we have enough gain to comfortably detect the noise floor at 10 meters, our receiver will be badly overloaded during strong-signal conditions on the lower bands.  Conversely, if we scale (e.g. attenuate) the input to accommodate the very large signal excursions, the receiver will simply be unable to detect signals at/near the "quiet" 10 meter noise floor.

Comment:
There will (hopefully) be the day that upper-HF propagation conditions improve greatly with the arrival of solar cycle 25 and, at that time, strong signals will appear on the bands >=15 MHz.  When this happens, we will likely be faced with a problem similar to that which we are trying to solve here (e.g. very strong signals overloading the A/D converter).  At that point, the only recourse will likely be an external device to adjust the gain/attenuation in front of the receiver, probably using the existing I/O lines under receiver control.

A revised circuit:

Why talk about this issue a THIRD time?  I decided to make one that provided a better 50 ohm match across all frequencies than the previous versions.  This revised circuit may be seen in the figure below:

Figure 1:
Generic pre-emphasis network set for about 50 ohms.
Click on the image for a slightly larger version.


Some readers will recognize the topology of the circuit in Figure 1 as the classic pre-emphasis network found in the signal path of FM video transmitters.  Whereas those circuits are typically designed for 75 ohms, this one is intended for a 50 ohm system - but careful observers will notice that 47 ohm resistors are used instead:  For receive-only purposes, I have chosen the components in this article to be standard values at the expense of a slight increase in mismatch - but the VSWR of these circuits, when terminated in 50 ohms, is likely to be no more than about 1.1:1.

This circuit - compared with the previous versions - has the advantage that it presents a consistent source and load impedance across the frequency range, making it a bit more "friendly" in systems that may be impedance sensitive (e.g. following a band-pass filter, long coaxial cable runs, following/preceding conditionally-stable RF amplifiers.)  The obvious trade-off is that as compared to the previous version (which was based on a high-pass filter and some resistive bypassing) this circuit has definite limitations on how sharp and deep the "knee" may be at any given frequency as only a single inductor and capacitor are used.

By tweaking the values of R1, R4, C1 and L1 we can adjust both the amount of low-frequency attenuation and the frequency of the "knee" where the attenuation takes place - but for our purposes, we will be placing the center of that "knee" around 10 MHz to provide both the minimal loss at 30 MHz and adequate attenuation at and below 7 MHz.

Here are a few examples of values of R1, R4, C1 and L1 using standard-value components and approximate attenuation values at various frequencies:

Parameter       Set 1    Set 2    Set 3    Set 4
R1 (ohms)       68       120      120      100
R4 (ohms)       39       20       20       27
C1 (pF)         390      330      270      270
L1 (uH)         1.0      0.82     0.68     0.68
DC atten (dB)   7.3      10.8     10.8     9.4
@ 2 MHz (dB)    7.0      9.8      10.1     8.9
@ 4 MHz (dB)    6.0      8.1      8.7      8.0
@ 7 MHz (dB)    4.6      5.6      6.5      6.3
@ 10 MHz (dB)   3.4      3.9      4.8      4.8
@ 14 MHz (dB)   2.3      2.5      3.3      3.6
@ 28 MHz (dB)   0.8      0.8      1.2      1.3

Figure 2:
Table showing some possible values for the circuit of Figure 1 and the example attenuation values.

 

In practice, several of these sections will likely need to be cascaded to achieve the desired amount of attenuation at the lower HF frequencies - which brings up the question:  Could you not choose components to do this with a single section?  Theoretically, yes - but practical inductors - particularly the molded type - are quite lossy, and achieving high amounts of lower-frequency attenuation with a single stage can become problematic, so it's probably better to cascade several of these networks.
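For those wanting to experiment with other component values, the response can be computed with a simple nodal analysis.  The sketch below assumes that Figure 1 is the constant-resistance bridged-T form of the classic pre-emphasis network - two 47 ohm series arms bridged by R1 in parallel with C1, with R4 in series with L1 as the shunt arm from the tee midpoint to ground.  That topology is an inference from the description, but it reproduces the DC attenuation values in the table above to within a tenth of a dB:

```python
import math

def det3(m):
    # Determinant of a 3x3 (complex) matrix, cofactor expansion along row 0.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def atten_db(f_hz, R1, R4, C1, L1, R23=47.0, Z0=50.0):
    """Insertion loss (dB) of the assumed Figure 1 network between Z0 terminations."""
    if f_hz == 0:
        Zb, Zs = complex(R1), complex(R4)    # C1 open, L1 short at DC
    else:
        w = 2 * math.pi * f_hz
        Zc = 1 / (1j * w * C1)
        Zb = R1 * Zc / (R1 + Zc)             # bridge arm: R1 parallel C1
        Zs = R4 + 1j * w * L1                # shunt arm: R4 in series with L1
    y1 = y2 = 1 / R23
    yb, ys, y0 = 1 / Zb, 1 / Zs, 1 / Z0
    # Nodal equations for the input node, tee midpoint and output node,
    # driven by a 1 V source behind Z0 into a Z0 load; Cramer's rule for Vout.
    G = [[y0 + y1 + yb, -y1,          -yb         ],
         [-y1,          y1 + y2 + ys, -y2         ],
         [-yb,          -y2,          yb + y2 + y0]]
    Gv = [row[:2] + [i] for row, i in zip(G, (y0, 0, 0))]
    v_out = det3(Gv) / det3(G)
    return -20 * math.log10(abs(v_out) / 0.5)   # loss relative to a matched thru

for f in (0, 2e6, 7e6, 14e6, 28e6):
    print(f"{f/1e6:4.1f} MHz: {atten_db(f, 68, 39, 390e-12, 1e-6):4.1f} dB")
```

Note that the constant-resistance condition for such a bridged-T is approximately R1 x R4 = 47^2, which is why the resistor pairs in the table (68/39, 120/20, 100/27) all multiply out to roughly 2200-2700.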

A practical example:

Figure 3:
The exterior of the four channel filter network.
Click on the image for a larger version

A practical example of such a network is one that is to be installed in the KFS (Half Moon Bay, CA) KiwiSDR/WSPRDaemon system.  There, four wideband antennas are available to feed the KiwiSDRs on site, so a box was constructed with four, identical pre-emphasis networks, each to feed its own receiver stack.

As is the case at the Northern Utah WebSDR, noise and signals at the lower end of the HF spectrum are often very much stronger than at the high end:  If amplification is added to allow detection of the noise floor at 10 meters, there is a very high probability that the receiver will badly overload on signals from the lower end of the spectrum.

Each "channel" of the device depicted in Figure 3 is identical, consisting of two cascaded sections:  The first set of values listed in the table of Figure 2 (R1 = 68 ohms, C1 = 390 pF) and the second set (R1 = 120 ohms, C1 = 330 pF).  Rather than using molded chokes, the individual inductors were wound using 30 AWG wire on T25-2 toroids:  17 and 15 turns for the 1 uH and 0.82 uH inductors, respectively.
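Those turns counts are consistent with the published AL of the T25-2 core - about 34 uH per 100 turns (an assumed catalog value - verify against current Micrometals data):

```python
import math

# Iron powder cores are usually specified as AL in uH per 100 turns,
# with L = AL * (N/100)^2.
AL_UH_PER_100T = 34.0    # assumed T25-2 value - check the datasheet

def turns_for(l_uh):
    return 100 * math.sqrt(l_uh / AL_UH_PER_100T)

def inductance_uh(n):
    return AL_UH_PER_100T * (n / 100) ** 2

print(f"1.00 uH -> {turns_for(1.0):.1f} turns")    # ~17, matching the text
print(f"0.82 uH -> {turns_for(0.82):.1f} turns")   # ~15.5, i.e. 15 or 16 turns
print(f"17 turns -> {inductance_uh(17):.2f} uH")
```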

Figure 4:
The interior of the four-channel network.
The circuit is simple enough to be wired "Manhattan"
style on glass-epoxy PC board material between the
two center pins of the BNC connectors.
Click on the image for a larger version

As can be seen in Figure 4, the construction is very simple, requiring no circuit board at all when using standard, through-hole components.  The circuit was built into a die-cast aluminum box with the BNC connectors holding the piece of PCB material in place.

To secure the components - particularly the small, toroidal inductors - RTV sealant (white) was used to hold components in place and to prevent adjacent wires of C1/R1 and R2/R3 from coming into contact with each other.

This method of construction is very simple and effective, offering good performance into the VHF range when reasonable care is taken.  With the 20mm high dividers between the sections installed as shown, the channel-to-channel isolation exceeded 85dB (the limit of convenient measurement) at 30 MHz.

Figure 5, below, shows the typical response of the sections:

Figure 5:
The response of one of the sections as measured on a DG8SAQ VNA.
Click on the image for a larger version.

Because it can be a bit difficult to read, the values of attenuation and VSWR in the upper-left corner are reproduced below:

Frequency (MHz)   Insertion Loss (dB)   VSWR
 0.474            21.4                  1.09
 1.812            19.9                  1.09
 3.592            16.6                  1.08
 5.324            13.4                  1.08
 7.038            10.8                  1.08
10.12              7.4                  1.07
14.06              4.7                  1.07
18.16              3.2                  1.07
21.08              2.4                  1.07
24.94              1.8                  1.09
28.18              1.4                  1.10
50.0               0.4                  1.19
Figure 6:
Attenuation and VSWR of the network at amateur band frequencies.
 
Practical usage:
 
For large broadband antennas and small active E-field whips alike, the tendency will be toward a relatively "flat" frequency response - but with a small E-field whip, the typical high-frequency roll-off can exacerbate the aforementioned low-band overload issue, making a filter network such as the above even more indispensable.  While an attenuation value of about 17dB at 80 meters may seem rather extreme, unless your antenna system has severe roll-off at the low end, the noise floor on 80 meters - even during a quiet winter day when the band is dead - should still be at least several dB above the receiver's noise floor.

 For specifics relating to a wideband direct-sampling SDR like the KiwiSDR or Red Pitaya, refer to the earlier article linked above - "A Limited Attenuation High Pass Filter".


Set-up:

As mentioned above, a direct-sampling receiver like the KiwiSDR does not have enough sensitivity to "hear" the 10 meter quiet band noise floor at a very quiet receive site. In terms of overall system gain adjustment, a few comments are warranted:
  • A good test is to see if, on 10 meters when it is "dead", you are hearing your local noise floor.  Note the S-meter with the antenna connected and disconnected - preferably, with the input to the receive system terminated with a 50 ohm load when disconnected.  If you do not see an increase of 3-5 dB in the S-meter reading and on the waterfall, the overall system gain is too low to allow the receiver to see the noise floor at your antenna system.
  • If you do not see an increase in noise when the receiver is connected to an antenna, a bit of extra gain is recommended.  Given an ideal isotropic antenna at a very quiet receive site, it will probably take about 12 dB of gain to comfortably "see" the antenna's noise floor - assuming no other losses (coax, splitter, etc.)
  • The preferred location of an amplifier is after the filter described above as it, too, will be protected against the very strong lower-frequency HF signals - even though a device like the above will increase the loss by about 1.4dB.
  • In cases where there are splitting losses (e.g. feeding multiple receivers) it may be beneficial to split the gain.  A modest-gain amplifier (10-14dB) might precede the splitters - the modest gain being enough to overcome splitting losses and to maintain system noise figure.
  • In the case of a low noise level receive site, the splitting losses may put the 10 meter noise floor below the detection threshold of the receiver and, if necessary, another amplifier may be placed just after the filter described above to make up for it.
  • It's worth noting that if you can detect a 3-5dB increase in noise floor with the antenna connected (versus disconnected) then even more gain will NOT further improve system performance:  On the contrary, more gain than necessary will increase the probability of receiver overload - particularly on a direct-sampled SDR like the KiwiSDR that has no AGC in its signal path.  If one sees more than a 3-5dB noise floor increase with the antenna connected on 10 meters when it is quiet, it's suggested that several dB of attenuation be added.  The preferred place to add this attenuation is in front of the amplifier to maximize its strong-signal handling - but only if one can still detect the noise floor on the antenna after doing so.  If one has a very high gain amplifier (say 20-25dB) and the gain is excessive, judicious addition of attenuation on both the input and output of the amplifier may be required.
  • When an amplifier is to be considered for HF use, it should have clearly-defined ratings - one of the most important of these is the output power capability (often "P1dB" which is the output power at 1dB compression) which should be in excess of +20dBm.  Second to this would be the 3rd order intercept point, which should be stated as being in excess of +30dBm - and the higher the better.  Both of these parameters are indicative of an amplifier that can deal with multiple, strong signals that may be present at the antenna.
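The gain-budget reasoning in the list above can be sketched numerically.  The -156 dBm/Hz receiver floor comes from the measurements later on this page; the antenna noise level, amplifier gain and splitter loss are assumed values chosen purely for illustration.

```python
# Sketch of the "noise floor rise with antenna connected" check.
# Assumed values: antenna noise (-163 dBm/Hz), 12 dB amp, 4-way splitter.
import math

def power_sum_dbm(*levels_dbm):
    # Sum noise powers expressed in dBm (add in linear terms, convert back)
    return 10 * math.log10(sum(10 ** (l / 10) for l in levels_dbm))

rx_floor   = -156.0   # receiver noise floor, dBm/Hz (measured, this page)
ant_noise  = -163.0   # assumed antenna noise on a quiet 10 meter band, dBm/Hz
amp_gain   =   12.0   # dB of low-noise gain ahead of the receive system
split_loss =   -6.0   # dB, e.g. a 4-way splitter feeding several receivers

# Antenna noise as it appears at the receiver input:
ant_at_rx = ant_noise + amp_gain + split_loss

# Expected S-meter rise with the antenna connected vs. a 50 ohm load:
rise = power_sum_dbm(rx_floor, ant_at_rx) - rx_floor
print(f"Noise floor rise with antenna connected: {rise:.1f} dB")
```

With these example numbers the rise works out to about 2.5 dB - just short of the 3-5 dB target described above, suggesting that a few more dB of gain (or less splitting loss) would be warranted.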
  
This page stolen from ka7oei.blogspot.com

[End]

 

Tuesday, September 15, 2020

Comparing the "KiwiSDR" and the "RaspberrySDR" software-defined receivers

Update (20201002): 

The RaspberrySDR schematic and a fork of the source code is now available - see the end of this article for additional analysis.

Any reader who has perused these blog pages will be aware that I have been using the KiwiSDR for some time now (I personally own four of them and I manage two more!) and have been happy with their performance, finding various ways to maximize their usefulness.  I was intrigued when a "similar" device appeared that might prove to be useful - the "RaspberrySDR".

Figure 1:
The exterior of the RaspberrySDR.
The case is well-built and compact, housing both the
SDR board and a Raspberry Pi 3+.
Click on the image for a larger version.

Background:

For those not familiar with the KiwiSDR, it is a Linux-based, stand-alone software-defined radio capable of receiving from (nearly) DC to at least 30 MHz using a variety of modes (SSB, AM, FM, Synchronous AM) and has several "extensions" that allow reception of several digital modes - including CW, RTTY, and WSPR - as well as provide a means of viewing FAX transmissions and SSTV.  It also includes a provision for TDOA (Time Difference of Arrival) determination of transmitter location in conjunction with other similarly-equipped receivers.

This receiver does not have a front panel, but rather it is entirely used via a web interface.  What this means is that it may be used remotely, by several people, simultaneously - each person getting their own, virtual receiver that they may independently tune.

Originally introduced as a Kickstarter project around 2016, the hardware has been augmented with continually-improved open-source software, with the lion's share of the work having been done by John Seamons.  Using a 14 bit A/D converter clocked at about 66.66 MHz, an FPGA (Field Programmable Gate Array) and a GPS front-end chip, most of the number-crunching is done before the data is handed off to a single-board computer - originally the BeagleBone Green, but now also the BeagleBone AI (BBAI):  Both the KiwiSDR receiver board and the BeagleBone Green (BBG) have been sourced by Seeed Studios.

For a variety of reasons, the supply of KiwiSDRs has been a bit fickle - both due to Seeed's limited production capacity in response to demand for these devices and to issues which have impacted the supply of some critical parts.  Another possible issue is a bit of "fatigue" on the part of some of the key people related to the KiwiSDR:  Careful readers of blog entries from several years ago can see that a similar thing happened to me on a project on which I had previously worked - coincidentally, also an open-source SDR device.

Figure 2:
Inside the case showing the "RaspberrySDR" board.  The physical layout is quite similar to that of the KiwiSDR.  The small heat sink is affixed atop the LTC2208 A/D converter and the fan is controlled by a simple transistor circuit on the acquisition board.  Unlike the KiwiSDR, only one row of headers (top) is used to connect to the host computer, and unlike the KiwiSDR, power is supplied via the host (Raspberry Pi) processor.   In noting the logo on the board, I can't help but wonder if its intent was that of parody, along the lines of the fair use doctrine?
Click on the image for a larger version.



 

The present day:

Because the KiwiSDR is based on open-source design of its hardware and software - ostensibly to encourage participation in enhancement of all aspects of its design - one may freely copy it within the constraints of the open-source license.  It is not surprising, then, that several derivative versions have recently appeared on the scene, more or less following the "open source" philosophy - a topic that will be discussed later.

Using the base code found on GitHub and the openly-published schematics as a starting point, the "RaspberrySDR" has appeared - using, as you may have surmised, the Raspberry Pi - specifically the Raspberry Pi B 3+.  This single-board computer is of similar size to the BeagleBone Green with roughly similar - albeit somewhat greater - capabilities, and is certainly better-known than the other fruit, so it was a natural choice:  The hardware interface between it and the receiver board is fairly trivial, requiring only a simple "conversion" board to adapt to the Pi's connector.

Also to be expected, a revised board (like those pictured above) specifically designed to interface with the Raspberry Pi has appeared from Chinese sellers, packaged with a Raspberry Pi in a small aluminum enclosure (with an external fan) for approximately $100 less than the KiwiSDR+Enclosure combination.  This revised version uses a higher-speed (125 MHz versus 66.66 MHz) and higher-resolution (16 bits versus 14 bits) A/D converter, extending its receive range to a bit over 60 MHz - including the 6 meter amateur band - with the potential for improved receive performance in terms of dynamic range and distortion.

Comparison:

Having gotten my hands on one of these "RaspberrySDRs" - and already having available some KiwiSDRs for testing, I decided to put them side-by-side to compare the differences - specifically, to measure:

  • Apparent noise floor and sensitivity
  • Appearance of spurious signals
  • Large signal handling capability
  • Image response (Nyquist filtering)

 

Noise floor comparisons of the KiwiSDR using the BBAI and the RaspberrySDR using the Raspberry Pi3B+:

The KiwiSDR using the BeagleBone AI:

Figure 3 shows the noise floor of a KiwiSDR using the Beaglebone AI with no connected antenna.  As is typical of this device, there is a slight increase in the noise floor starting around 18 MHz:  The reason for this is unknown, but it is surmised that this is intentional - an artificial boost in "software" that is used to compensate somewhat for the Sin(x)/x roll-off intrinsic to any analog-to-digital sampling scheme as one approaches the Nyquist limit - which, given the 66.66 MHz sampling rate of the KiwiSDR, would be 33.333 MHz.

This assumption would appear to be supported by the fact that as one approaches the Nyquist frequency, the S-meter reading does not drop as one might expect, but remains fairly constant and appears to be close to +/- 1 dB from below 1 MHz to 30 MHz as shown in the data below.

Figure 3:
The noise floor of the KiwiSDR running on a BeagleBone AI (BBAI).
Note the slight rise around 18 MHz - the possible result of the data being "cooked" in the pipeline to offset Sin(x)/x losses near the Nyquist limit.
Click on the image for a larger version.

In numerical form, the measured noise floor of the KiwiSDR/BBAI combination using a 10 kHz AM bandwidth is:

  • -117dBm @ 1 MHz
  • -117dBm @ 5 MHz
  • -116dBm @ 10 MHz
  • -116dBm @ 15 MHz
  • -114dBm @ 25 MHz
  • -115dBm @ 30 MHz (29.9 MHz)
A broad 3dB peak is indicated in the noise floor:  We will attempt to determine if this peak is "real" in our later analysis.

The RaspberrySDR using the Raspberry Pi3+:

The RaspberrySDR has a 125 MHz sampling clock and a 16 bit A/D converter, so the landscape looks a bit different:  It can (theoretically) receive to 62.5 MHz, but is limited to 62.0 MHz in firmware.  Comparing the 0-30 MHz noise floor to that in Figure 3 (from the KiwiSDR) we can see some interesting differences in Figure 4:

Figure 4:
Noise floor of the RaspberrySDR running on a Raspberry Pi3+.  There is a similar rise in the noise floor - although it looks a bit different.
Click on the image for a larger version.

We can see a similar rise in the noise floor, but we get the impression that limiting our range to just 30 MHz hides its nature, so Figure 5 shows the noise floor over the full frequency range of 0-62 MHz:

Figure 5:
Noise floor of the same receiver as depicted in Figure 4, but showing the full 0-62 MHz frequency range.  Very evident is a rise in the noise floor centered at approximately 36 MHz.  This spectrum is unchanged if the SPI frequency is changed from the default 48 to 24 MHz.
Click on the image for a larger version.

In comparison, the noise floor of the RaspberrySDR+Raspberry Pi3+ as measured using AM with a 10 kHz bandwidth is as follows:

  • -118dBm @ 1 MHz
  • -118dBm @ 5 MHz
  • -118dBm @ 10 MHz
  • -116dBm @ 15 MHz
  • -115dBm @ 25 MHz
  • -112dBm @ 30 MHz
  • -113dBm @ 40 MHz
  • -116dBm @ 50 MHz
  • -116dBm @ 60 MHz

The magnitude of the broad peak is more significant than on the KiwiSDR - being on the order of 6 dB rather than 3 dB.  Because the RaspberrySDR is based on the (open source) KiwiSDR, it is expected that the RF processing will be similar - and this would appear to be borne out by the presence of the rise in the noise floor, centered around approximately 36 MHz.  It would seem likely that the adaptation to the higher sample rate hardware is not fully realized:  The pre-emphasis in firmware - if it exists - does not appear to be properly implemented, as evidenced by the numbers.

Comparison of S-meter calibrations:

To provide an additional data point in our measurements, we'll check the S-meter calibration.  For our purposes we will use a frequency of 10 MHz and a level of -50dBm as our reference, as that appears to be below the apparent amplitude peaking of either type of receiver.  Using a known-consistent signal source, the results are as follows:

Frequency   KiwiSDR with BeagleBone AI   RaspberrySDR with Raspberry Pi3+
 1 MHz      -51dBm                       -50dBm
 5 MHz      -51dBm                       -50dBm
10 MHz      -50dBm                       -50dBm
15 MHz      -49dBm                       -49dBm
20 MHz      -48dBm                       -48dBm
25 MHz      -48dBm                       -46dBm
30 MHz      -50dBm                       -45dBm
50 MHz      ---                          -50dBm
60 MHz      ---                          -57dBm

The effects of what appears to be pre-emphasis can clearly be seen:  The Sin(x)/x roll-off of the A/D converter seems to have been more-or-less compensated on the KiwiSDR, but the attempt to do this seems to be misapplied on the RaspberrySDR.  Based on the apparent noise floor and the absolute response to the -50dBm signals, we can make an estimate of the absolute sensitivity of the KiwiSDR and RaspberrySDR on the various bands with simple math.

Frequency   KiwiSDR noise floor   RaspberrySDR noise floor
 1 MHz      -158dBm/Hz            -158dBm/Hz
 5 MHz      -158dBm/Hz            -158dBm/Hz
10 MHz      -156dBm/Hz            -158dBm/Hz
15 MHz      -157dBm/Hz            -157dBm/Hz
25 MHz      -156dBm/Hz            -159dBm/Hz
30 MHz      -155dBm/Hz            -157dBm/Hz
50 MHz      ---                   -156dBm/Hz
60 MHz      ---                   -111dBm/Hz

The chart above compares the apparent noise floor of both receivers, compensating for the 10 kHz AM detection bandwidth (40dB) and the measured offset of the S-meter at each frequency.
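The conversion used for the chart above can be sketched in a few lines:  The raw noise reading (taken in a 10 kHz AM bandwidth) is corrected by the measured S-meter offset at that frequency, then reduced to a 1 Hz bandwidth by subtracting 10*log10(10000) = 40 dB.  The function name is mine, but the arithmetic is exactly what the chart describes.

```python
# Convert a noise reading in a given bandwidth to dBm/Hz, correcting
# for the measured S-meter error at that frequency.
import math

def floor_dbm_per_hz(noise_reading_dbm, smeter_reading_dbm,
                     ref_dbm=-50.0, bw_hz=10_000):
    offset = ref_dbm - smeter_reading_dbm   # correction for S-meter error
    return noise_reading_dbm + offset - 10 * math.log10(bw_hz)

# KiwiSDR at 10 MHz: noise read as -116 dBm, S-meter reads -50 for the
# -50 dBm reference (no offset):
print(floor_dbm_per_hz(-116, -50))   # -156.0, matching the chart

# RaspberrySDR at 30 MHz: noise read as -112 dBm, S-meter reads -45:
print(floor_dbm_per_hz(-112, -45))   # -157.0, matching the chart
```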

 
It is worth noting that above about 20 MHz - and given zero line and antenna losses - neither receiver has sufficient sensitivity to detect the expected ITU noise floor given a unity gain antenna:  Approximately 10dB of additional low-noise gain is required at 30 MHz to "hear" the noise floor in that case and even more gain would be appropriate at 6 meters.
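The "how much gain is needed?" estimate above can be roughed out as follows.  The quiet rural man-made noise curve Fam = 53.6 - 28.6*log10(f_MHz) is from ITU-R P.372; the -155 dBm/Hz receiver floor is from the measurements above.  Treat all numbers as illustrative, not definitive - real antennas and sites will differ.

```python
# Estimate the gain needed for a receiver to "hear" the ITU quiet-rural
# noise floor with a lossless, unity-gain antenna.
import math

def itu_quiet_rural_floor_dbm_hz(f_mhz):
    fam = 53.6 - 28.6 * math.log10(f_mhz)   # noise figure above kT0B, dB
    return -174.0 + fam                      # dBm/Hz at the antenna

rx_floor = -155.0   # measured receiver noise floor near 30 MHz, dBm/Hz
for f in (10, 30, 50):
    ant = itu_quiet_rural_floor_dbm_hz(f)
    gain_needed = max(0.0, rx_floor - ant)
    print(f"{f} MHz: antenna floor {ant:.1f} dBm/Hz, gain needed ~{gain_needed:.0f} dB")
```

This yields roughly 8 dB at 30 MHz - in rough agreement with the ~10 dB quoted above - and shows that even more gain is appropriate at 6 meters.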

This very topic has been discussed at this blog in the past - see the blog post "Limited Attenuation High-Pass filter" - LINK and its follow-up article "Revisiting the Limited Attenuation High Pass Filter" - Link - and their related articles for a discussion. 

For information about the expected ITU noise floor under various "idealized" conditions, see the article:  Recommendation ITU-R p.372-8 link.

 *****************************

Overload signal level comparison:

Another test was done - the determination of the RF level at which the "OVL" message on the S-meter would show, indicating overload of the A/D converter.  This test was done for both units at 10 MHz - the same frequency for which the S-meter was calibrated - and in the "Admin" tab the unit was configured so that just one "OV" occurrence per 64k cycles would be detected.

KiwiSDR OV indication:

The KiwiSDR's "OV" indication just started to indicate at -14dBm.

RaspberrySDR OV indication:

The RaspberrySDR's "OV" indication just started to indicate at -9dBm.

The apparent difference between these is 5dB.

"Wait - shouldn't there be another 12dB of dynamic range with two more bits of A/D resolution?"

In theory, two additional bits of A/D conversion should yield an additional 12 dB of dynamic range - but this is not readily apparent in the numbers given above (at 10 MHz, 142dB between the noise floor and the "OV" indication for the KiwiSDR and 149dB for the RaspberrySDR) - so what's the deal?
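The "extra 12 dB" expectation comes from the textbook quantization-noise figure for an ideal N-bit converter, SNR = 6.02*N + 1.76 dB at full scale.  Real converters fall several dB short of this ideal, so treat these as upper bounds:

```python
# Ideal full-scale quantization SNR of an N-bit A/D converter.
def ideal_adc_snr_db(bits):
    return 6.02 * bits + 1.76

print(ideal_adc_snr_db(14))                          # ~86 dB (14 bit)
print(ideal_adc_snr_db(16))                          # ~98 dB (16 bit)
print(ideal_adc_snr_db(16) - ideal_adc_snr_db(14))   # ~12 dB difference
```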

First off, all things being equal (e.g. the same reference voltage for the A/D converter) one would expect the additional range to occur at the bottom of the signal range rather than the top, but this difference can be a matter of scaling via careful adjustment of the amount of amplification preceding the A/D converter and how the code is written.

Ideally, one would carefully balance the signal path so that the intrinsic noise of the amplification preceding the A/D converter is comparable to the signal level required to "tickle" an LSB (Least Significant Bit) or two with no signal applied:  A higher level than this risks "wasting" dynamic range on internal noise.  Judging by the "even-ness" of the noise across the spectrum, I suspect that the output noise of the input amplifier is enough to light up at least two LSBs of the A/D converter.  If the signal path were highly "gain starved", low-level spurious signals would likely appear when very low-level signals were applied:  The sort of distortion resulting from an A/D converter being too-lightly driven can be witnessed when using a receiver like the RTL-SDR in "direct" mode - increasing a signal level to the point where it just starts to appear, at which point many spurious signals show up as well.

It's possible that the "extra" 5 dB at the high end of the signal range is real and that the signal dynamics have been juggled a bit, with some of the extra 2 bits' worth of range being present at the bottom end, but this would be difficult to divine without more thorough testing and without the availability of a schematic diagram.  My preference would have been to have the unit be slightly "gain starved" so that the LSBs of the A/D converter would be subject to the action of external amplification, providing maximum flexibility when it comes to managing the signal path.

Discussion:

Note:  The schematic of the RaspberrySDR has become available since this was posted - see the analysis at the bottom of this article.

Without the availability of a schematic diagram of the front end of the receiver there are several unknowns:

  • The LTC2208 has a pin that indicates an overload condition.  It is presumed that - in spite of other issues with the firmware (see below) - the "OV" indicator is working properly.
  • The LTC2208 A/D converter has a low-level dither generator built into it:  It is unknown if this feature is active.
  • Shorting the RF signal path at the A/D converter to eliminate the contribution of the pre-converter amplification would be instructive in ascertaining the noise contribution from that device.
  • Probing the input of the A/D converter at/near overload to divine the actual range of the receiver itself to determine if the full dynamic range of the 16 bits is properly utilized would be revealing.
  • The LTC2208 has a programmable gain amplifier and full-scale input voltage may be selected as being either 1.5 or 2.25 volts:  The hardware configuration is unknown.
  • At the time of writing this, I have not found in the code any modification of the FPGA image that takes advantage of the extra two bits of A/D resolution.  This does not mean that no modification has been done, but rather that I have not (yet?) discovered it.

Nyquist Image response:

Any receiver has an image response - and an SDR is no exception.  In this case, signals above the Nyquist frequency (half the sampling rate) will appear to "wrap around" and show up in the desired frequency range.  Because it is impractical to build a true "brick wall" low-pass filter, there are always compromises when designing such a filter, including:

  • Complexity:  How "fancy" should such a filter be in terms of component count?  More components can mean improved performance, but this implies a more difficult design, higher expense and more performance-related issues such as loss, ripple, sensitivity to source/load impedance, etc.
  • Trade-off of frequency coverage:  It can be difficult to weigh the pros and cons of a filter in terms of its cut-off frequency.  For example, setting the cut-off near the Nyquist frequency will improve performance at the high end of the available range, but at the risk of poorer image rejection.  Conversely, setting it much lower than Nyquist may sacrifice desired coverage.  A case in point would be that for the KiwiSDR, with a Nyquist frequency of about 33.33 MHz, coverage to 30 MHz (the "top" of HF) is desirable, so a bit of compromise is warranted in terms of absolute image rejection.
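The cut-off trade-off described above can be illustrated with an idealized Butterworth response, attenuation = 10*log10(1 + (f/fc)^(2n)).  The actual anti-alias filter in these receivers is almost certainly a different (likely elliptic) design, and the 32 MHz cut-off is an assumption - this only shows how filter order and cut-off placement interact.

```python
# Idealized Butterworth low-pass attenuation vs. frequency and order.
import math

def butterworth_atten_db(f, fc, order):
    return 10 * math.log10(1 + (f / fc) ** (2 * order))

fc = 32.0   # assumed cut-off, just below the KiwiSDR's 33.33 MHz Nyquist
for order in (5, 7, 9):
    a_image = butterworth_atten_db(37.0, fc, order)  # first image point measured below
    a_top   = butterworth_atten_db(30.0, fc, order)  # loss at the top of HF
    print(f"n={order}: {a_image:.1f} dB at 37 MHz, {a_top:.2f} dB at 30 MHz")
```

Raising the order improves rejection at the first image frequency but also increases passband loss at 30 MHz - precisely the compromise described above.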

How bad/good is it?

The image response of both the KiwiSDR and RaspberrySDR were measured and determined to be as follows: 

The KiwiSDR:

Generator frequency (% of Nyquist)   Image frequency on RX   KiwiSDR image attenuation
37 MHz (111%)                        29.667 MHz              10dB
42 MHz (126%)                        24.66 MHz               20dB
47 MHz (141%)                        19.66 MHz               30dB
52 MHz (156%)                        14.66 MHz               39dB
59 MHz (177%)                        9.66 MHz                47dB
62 MHz (186%)                        4.66 MHz                55dB
66 MHz (198%)                        0.66 MHz                60dB
70 MHz (210%)                        -3.33 MHz               65dB

 The RaspberrySDR:

Generator frequency (% of Nyquist)   Image frequency on RX   RaspberrySDR image attenuation
64 MHz (102%)                        61 MHz                  9dB
75 MHz (120%)                        50 MHz                  18dB
85 MHz (136%)                        40 MHz                  27dB
95 MHz (152%)                        30 MHz                  36dB
105 MHz (168%)                       20 MHz                  46dB
115 MHz (184%)                       10 MHz                  54dB
120 MHz (192%)                       5 MHz                   58dB
124 MHz (198%)                       1 MHz                   60dB
130 MHz (208%)                       -5 MHz                  65dB

Doing a direct comparison between the two receivers, one can see that - based on the frequency of the unwanted signal as a percentage of the Nyquist frequency - the two receivers are pretty much identical in terms of image rejection, implying a very similar filter in each:  I suspect that the RaspberrySDR's Nyquist filter is essentially that of the KiwiSDR, but with its frequency rescaled proportionally.

Because the Nyquist frequency of the RaspberrySDR is approximately twice that of the KiwiSDR, in terms of "dB per MHz" the RaspberrySDR's Nyquist filter performance will be noticeably worse.  For example, we know from the above that for the U.S. FM broadcast band the attenuation of the KiwiSDR's Nyquist filter will be at least 65dB - but for the RaspberrySDR this attenuation will likely vary between about 30dB at the bottom end of the band (88 MHz) and 50dB at the top end (108 MHz), meaning that strong, local FM broadcast signals will likely cause some interference to the RaspberrySDR in the 17-37 MHz range - implying that a simple blocking filter for this frequency range should have been built in.  The work-around for this problem - should it arise - is pretty simple:  Install an FM broadcast band "blocking" filter such as those sold for the RTL-SDRs.  An example of such a filter may be found HERE.
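The "wrap around" arithmetic behind these image frequencies is simple enough to sketch in a few lines - fold any input frequency back into the first Nyquist zone of the sampler:

```python
# Fold an input frequency into the first Nyquist zone of a sampler
# running at sample rate fs.
def alias(f, fs):
    f = f % fs                      # remove whole multiples of fs
    return fs - f if f > fs / 2 else f

fs = 125.0   # RaspberrySDR sample clock, MHz
print(alias(95.0, fs))    # 30.0 MHz, matching the measurement table above
print(alias(88.0, fs))    # 37.0 MHz - bottom of the U.S. FM broadcast band
print(alias(108.0, fs))   # 17.0 MHz - top of the U.S. FM broadcast band
```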

Update:  Schematics for the RaspberrySDR are now available - see below.

Effects of receiver noise floor with a strong, off-frequency signal:

Any receiver is affected by other strong signals within its front-end passband - and with direct-sampling SDRs such as these, any signal appearing at the antenna port can and will have an effect elsewhere within the receiver's passband - primarily due to nonlinearity of the A/D converter and, to a lesser extent, the phase noise of the various oscillators - real and virtual.

For this test, a very strong signal from a 10 MHz OCXO (that of an HP Z-3801 GPS receiver - likely a variant of an HP 10811) was used as its output has respectably good phase noise performance.  Two tests were done - at -15dBm and another at -25dBm - each time measuring the change in the noise floor at different frequencies distant from 10 MHz.

With the 10 MHz signal set to -25dBm, NO change was observed in the noise floor at the frequencies listed below on either receiver - but there was a bit of increase in the noise floor with the application of the -15dBm signal - the magnitude of the increase of the noise floor is indicated in square brackets [] in the chart below:

Noise floor frequency   KiwiSDR noise floor [degradation]   RaspberrySDR noise floor [degradation]
11 MHz                  -114dBm [2dB]                       -116dBm [0dB]
15 MHz                  -114dBm [2dB]                       -114dBm [2dB]
25 MHz                  -113dBm [1dB]                       -112dBm [2dB]

Assuming that the 10 MHz signal source is "clean", the above information shows that the two receivers behaved quite similarly.  It also shows that if there are two additional bits of A/D resolution available in the signal pipeline on the RaspberrySDR, their effect is not readily apparent in the measurements above.

All is not well:  A few glaring bugs!

There are several "features" that are readily apparent in this version of RaspberrySDR firmware (Version 1.402) that cause a few operational problems:

  • Inconsistent RF level calibration.  Occasionally, when powered up, the RF signal level calibration (S-meter, waterfall) will be way off, requiring a setting of about -30dBm to yield correct S-meter calibration at 10 MHz rather than -19 - which is within a few dB of the setting of the KiwiSDR:  Simply rebooting the KiwiSDR server will likely correct this.
  • No obvious improvement in dynamic range or sensitivity due to the "extra" 2 bits of A/D resolution.  As discussed, one would expect to see clear evidence of improved performance due to the additional two bits of A/D converter resolution, but either this is masked by the low-level noise of the input amplifier, problems in the processing of the A/D data itself, or issues related to handling of high signal levels (see the next topic, below).  What difference there may be appears to be at the top end of the signal range rather than at the bottom.
  • "Broken" S-meter at higher signal levels.  The S-meter seems to be incapable of reading properly above about -33dBm:  Signals higher than this will yield widely-varying numbers that have little to do with the actual signal level.
  • "Motorboating" on strong, narrowband signals.  It has been observed that at about the same time that the S-meter starts to malfunction (above about -33dBm) one will hear odd noises on a strong signal (unmodulated carrier received in AM mode using a 10 kHz bandwidth) indicative of a malfunctioning bit of code somewhere - likely related to the broken S-meter.  The nature (sound) of this effect appears to change depending on the applied signal level.  It is not (yet) known to what extent this issue has a "global" effect:  That is, does a single, strong signal cause this effect on other/all signals within the receiver's 0-62 MHz passband?
  • The "Firmware Update" function in the "Admin" screen doesn't work at all.  Make of that what you will.

 *****************

An interesting notion - Direct reception of the 2 meter amateur band:

In theory it should be possible to modify the RaspberrySDR to directly receive the amateur 2 meter band - and any other signals from above 125 MHz to at least 174 MHz.  Because the sample rate of the receiver's A/D converter is 125 MHz, one can undersample the 144-148 MHz 2 meter band, which would appear in the range of 19-23 MHz.  Because this is just above the sampling frequency, the "direction" of the frequency conversion (e.g. increasing frequency at the antenna will show as increasing on the display) will be correct which means that a standard transverter offset (e.g. a local oscillator frequency of 125 MHz) could be used.
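The undersampling arithmetic above can be sketched as follows: determine which Nyquist zone an input falls into, where its alias lands, and whether the aliased spectrum is inverted.  This is generic sampling math, not anything specific to the RaspberrySDR firmware.

```python
# Nyquist-zone and alias calculation for an undersampling receiver.
def undersample(f, fs):
    zone = int(f // (fs / 2)) + 1   # zone 1 = 0..fs/2, zone 2 = fs/2..fs, ...
    f_alias = f % fs
    inverted = f_alias > fs / 2     # even-numbered zones come out inverted
    if inverted:
        f_alias = fs - f_alias
    return zone, f_alias, inverted

fs = 125.0   # RaspberrySDR sample clock, MHz
for f in (144.0, 148.0):
    zone, f_if, inv = undersample(f, fs)
    print(f"{f} MHz: zone {zone}, alias at {f_if} MHz, inverted={inv}")
```

The 2 meter band lands in the third Nyquist zone at 19-23 MHz, non-inverted - which is why the simple 125 MHz "transverter offset" described above works.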

To do this one would need - at the very least - to bypass the Nyquist low-pass filter.  With the noise floor likely to be much worse than the -158dBm/Hz seen at HF, significant low-noise amplification AND strong band-pass filtering (to quash spurious responses) would be required - probably something on the order of 25dB of gain.

Based on the specifications of the LTC2208, undersampling should work into the hundreds of MHz, possibly covering other VHF and UHF amateur bands - but the need for appropriate amplification and filtering applies!

  *****************

The "elephant" in the room

It is immediately obvious - particularly from the board layout and screen shots - that the RaspberrySDR has heavily "borrowed" its design from the KiwiSDR and this is, to a large extent, entirely fair game since the KiwiSDR is a self-declared open-source hardware and software design.  Having said this, a few issues have been raised considering the RaspberrySDR:

  • Is the RaspberrySDR being produced entirely in accordance with the KiwiSDR Open Source license?  Likely not.  For example, elements of the KiwiSDR "branding" appear all over the RaspberrySDR - from the derivative (parody?) logo on the board (see Figure 2) to the name "KiwiSDR" being present within the web interface itself.  The former may be intentional - and perhaps perceived as a slight - while the latter is rather hard to eliminate entirely, particularly if one wishes to maintain a branch of the code that echoes the continued development of the KiwiSDR - not to mention effort flowing in the other direction (e.g. improvements by others being incorporated into the KiwiSDR base).
  • Another board with similar capabilities (16 bit A/D, 125 MSPS) has appeared - apparently similar to this RaspberrySDR board - but it interfaces with the BeagleBone:  I have not used one or seen one in person, nor do I know anything about hardware/software support.
  • The RaspberrySDR source code itself seems to be somewhat obscured.  While there is a RaspberrySDR fork on Github (see note below), it is apparently not the same code as that made available - as a Raspberry Pi image only (as far as is known at the time of writing) - from a link provided by the online seller.  In other words, I have not been able to find any sort of equivalent of a Github repo for the RaspberrySDR - a fact not exactly in keeping with the spirit of "open source". 
    • Github user "FlyDog" has produced the fork mentioned above - his repo is now found HERE.  As mentioned above, I haven't been able to get this code to work with this board.
    • UPDATE (20200927):  Github user "howard0su" had produced a fork (found HERE) more likely to be relevant to the RaspberrySDR hardware.  Based on posts to the "raspsdr" list on groups.io, this appears to be a legitimate, open-source fork of the KiwiSDR code.  The owner of this fork has stated on that group that he is not involved with the production of the RaspberrySDR hardware.  As of the date of this update, I have not attempted to build from this source.
  •  Update:  Schematics and source for the RaspberrySDR are now available - see below.  (At the time of the original writing, the schematic did not seem to be available - again, not in the spirit of open source.)
  • The primary author of the KiwiSDR code announced recently on the KiwiSDR message board that certain parts of the KiwiSDR's code - presumably elements not previously released under an open-source license - would be available only as binary "blobs" in the future.  The intent of this action - as it seems to be interpreted by many of the readers (including me) - is, in addition to protecting certain elements, to increase the difficulty of replicating the software in the future - and some might argue that this goes against the spirit of "open source" that was embodied in the original Kickstarter definition of the KiwiSDR.  Whether this is true or not, the reader should not overlook the fact that the primary author has spent (and continues to spend) a lot of time, effort and money maintaining the KiwiSDR software, hardware manufacture and certain elements of infrastructure around the KiwiSDR (proxies, DDNS and TDOA, to name three) and there is an understandable desire to "encourage" involvement (including buying "official" KiwiSDR boards, for example) that would go toward maintaining this.  The presumed argument is that follow-on versions based on the open source hardware and code - whether strictly adherent to the open-source licenses or not - are not compatible with his intent going forward.

Is it worth getting?

Is the RaspberrySDR a good deal?  It all depends on what you want to do with it.  At the moment, the ongoing support for it in terms of software development is a bit ambiguous as the "open source" nature of this fork seems to be a bit opaque, which is unfortunate.

For a general-purpose receiver that does not need the (useful!) facilities unique to the KiwiSDR network (proxy, TDOA, etc.) and for a receiver that includes the 6 meter band, this unit may fill a niche.

Again, the reader is cautioned that the "official" KiwiSDR brings to the amateur community several valuable features - including the TDOA - that require ongoing support which translates directly to people buying the "official" KiwiSDR boards, as I have clearly done.

Final comments:

  • For a general-purpose web-enabled remote receiver with decent performance, both the KiwiSDR and RaspberrySDR seem to be a good deal and the RaspberrySDR works reasonably well despite the bugs mentioned above.  The RaspberrySDR has the advantage that it also covers the 6 meter amateur band and has the potential of improved performance by virtue of its 16 bit (versus 14 bit) A/D converter.
  • The KiwiSDR kit with the Beaglebone Green and case - even though it costs more (approximately US$100 more than the RaspberrySDR) - has the distinct advantage of ongoing support along with the other infrastructure features mentioned above.  Like any open-source project, there will come the day when such support will cease and it will be up to others to try to build on what is in the repository at that time.
  • The current hardware of the KiwiSDR is starting to show its age for the reasons mentioned above, and the existence of the RaspberrySDR shows that a relatively minor modification can potentially improve performance without a major rework of either hardware or software.
  • With the understanding that the primary author and frequent contributors to the KiwiSDR have limited time and resources to undertake such a change, I believe that it would be a mistake to overlook the potential (and "inspiration") of parallel work being done by others when it comes to keeping the KiwiSDR project up to date and relevant.
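As a rough sanity check on the "16 bit versus 14 bit" comparison above, the theoretical quantization-limited SNR of an ideal N-bit A/D converter with a full-scale sine wave input is about 6.02 × N + 1.76 dB.  A quick, purely illustrative sketch in Python:

```python
def ideal_snr_db(bits):
    """Theoretical SNR of an ideal N-bit A/D converter with a full-scale sine input."""
    return 6.02 * bits + 1.76

print(round(ideal_snr_db(14), 1))                      # LTC2248 (KiwiSDR): 86.0 dB
print(round(ideal_snr_db(16), 1))                      # LTC2208 (RaspberrySDR): 98.1 dB
print(round(ideal_snr_db(16) - ideal_snr_db(14), 1))   # 12.0 dB theoretical improvement
```

As discussed below in the addendum, real converters - and the amplifier in front of them - fall well short of these ideal figures.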

Addendum - 20201002 - RaspberrySDR schematics and sources are now available:

An email from another amateur radio operator informed me that the schematic diagrams of the RaspberrySDR - and a reference to a GitHub repo of the source - were posted on the "RaspberrySDR" group on "groups.io" - the link to that posting is HERE.  (Membership in that group may be required to see it.)

A brief analysis of the diagram has revealed several things:

  • GPS receiver:  The GPS receiver is identical - but that's not too surprising.  While the GPS receiver chip is not specified on the RaspberrySDR schematic, it has the same pin-out as that of the KiwiSDR, although a 66.666 MHz oscillator is shown rather than the 16.384 MHz oscillator on the KiwiSDR.
  • Front end filter:  As I'd surmised, the low-pass (Nyquist) filter is of identical topology to that of the KiwiSDR, with a note on the diagram stating "LPF change to 64M" - but the values shown are the same as those of the KiwiSDR.  Clearly, a change in the components was made, but the schematic was not updated.
  • As with the KiwiSDR, the RF amplifier is shown as being an LTC6401-20.  This device has a fixed gain of 20dB (a voltage gain of 10) and has a differential output:  The data sheet depicts it being used to drive an LTC2208 - the same A/D converter as is used on the RaspberrySDR.
  • The A/D converter is shown as being an "LTC2208CUPPBF" - a 130 Msps, 16 bit A/D converter.  The diagram shows all 16 bits of the "A" bus being connected, as well as the "DITH" (used for enabling the internal dither generator), "MODE" (used to set the output data format), "PGA" (used to select an input range of either 1.5 or 2.25 volts full-scale) and "RAND" (used to randomize the output data to minimize possible noise contribution) pins being connected to the FPGA - and like the KiwiSDR, the "OFA" pin is also used to detect over/underrange of the A/D converter.
  • Maybe I missed it, but the circuit used to control the cooling fan does not appear to be on the schematic, nor did I find an obviously-named pin that might be used to control it:  Because the fan does not spin up unless the RaspberrySDR software service is running, it's clearly under software control - likely via a GPIO pin on the Raspberry Pi itself.

Discussion:

  • When time permits, I will probe about to determine the state of the DITH, MODE, PGA and RAND pins on the A/D converter.
  • Because the "PGA" pin may be controlled by the FPGA, it is possible that the A/D's input voltage range can be increased to 2.25 volts - a theoretical increase of about 3.5dB in signal input.
    • If this pin is set to the "low" state, this would - all things being equal - increase the "OV" (overload) threshold from the -14dBm of the KiwiSDR to about -10dBm - very close to the "-9dBm" that was observed in the test, above.
    • In other words, given the otherwise-identical circuitry, it is entirely possible that the increase in the "OV" threshold is entirely due to changing of the A/D converter's PGA setting.
    • A back-of-the-envelope calculation shows that - assuming a 1dB loss in the low-pass filtering - a -14dBm signal (that required to cause an "OV" indication on the KiwiSDR), amplified by 20dB, would yield about 1.12 volts peak-to-peak - a value that correlates well with a presumed 1.25 volt A/D maximum input voltage.
    • Similarly, another calculation shows that - also assuming a 1dB loss in the low-pass filtering - a -9dBm signal (that required to cause an "OV" indication on the RaspberrySDR), amplified by 20dB, would yield about 2.0 volts peak-to-peak - a value that also correlates pretty well if the "PGA" pin is set to configure the A/D converter for a 2.25 volt range.
    • Because 2 extra A/D bits theoretically correspond to about 12 dB more usable range, that would indicate about 8 dB more dynamic range for the RaspberrySDR over the KiwiSDR.  How well this hypothetical gain is distributed is certainly a topic for more detailed analysis.
  • Because it is (presumably) under software control, I would like to see the settings of the DITH and PGA pins of the A/D converter made available to the user in the configuration screen.  Because the amplitude of the dither is only on the order of 0.5dB (according to the data sheet), it is unlikely that its effect would be seen when a real-world antenna - and its noise - is connected to the receiver:  In any event, it seems likely that the noise floor of the input amplifier may be the limiting factor.
  • It's worth pointing out that, according to the data sheets, the SFDR (Spurious-Free Dynamic Range) and S/(N+D) (signal to noise-plus-distortion) specifications of the 14 bit LTC2248 in the KiwiSDR are typically 90dB and 74.2dB, respectively, at 30 MHz, while the same specifications for the 16 bit LTC2208 (in the RaspberrySDR) - assuming a PGA setting of 2.25 volts - are 94dB and 77.5dB:  Not quite the "theoretical" 12 dB afforded by two extra bits!  (The SFDR of the LTC2208 actually goes up to 100dB when the PGA is set for 1.5 volts.)
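The back-of-the-envelope numbers above are easily checked with a few lines of code.  This sketch - assuming a 50 ohm system, the 20dB amplifier gain and the presumed 1dB filter loss from the discussion above - converts the observed "OV" thresholds to peak-to-peak voltages at the A/D converter's input:

```python
import math

def dbm_to_vpp(dbm, load_ohms=50.0):
    """Convert a sine-wave power level in dBm to peak-to-peak volts into the given load."""
    watts = 10 ** (dbm / 10.0) / 1000.0
    v_rms = math.sqrt(watts * load_ohms)
    return 2.0 * math.sqrt(2.0) * v_rms

GAIN_DB = 20.0         # fixed gain of the LTC6401-20 amplifier
FILTER_LOSS_DB = 1.0   # assumed loss in the low-pass (Nyquist) filter

# KiwiSDR:  -14dBm at the antenna jack trips the "OV" indicator
print(round(dbm_to_vpp(-14.0 - FILTER_LOSS_DB + GAIN_DB), 2))  # ~1.12 Vpp
# RaspberrySDR:  -9dBm trips the "OV" indicator
print(round(dbm_to_vpp(-9.0 - FILTER_LOSS_DB + GAIN_DB), 2))   # ~2.0 Vpp
# Change in A/D input range if the PGA pin selects 2.25V rather than 1.5V
print(round(20.0 * math.log10(2.25 / 1.5), 2))                 # ~3.52 dB
```

The ~1.12 and ~2.0 volt results line up with presumed full-scale ranges of 1.25 and 2.25 volts, respectively - consistent with the hypothesis that the RaspberrySDR's higher "OV" threshold is simply a different PGA setting.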

 

This page stolen from ka7oei.blogspot.com

[End]