Monday, June 18, 2018

A limited attenuation high-pass filter for the KiwiSDR


Figure 1:
Inside the "limited attenuation" high pass filter, housed inside a small, die-
cast aluminum enclosure to which two BNC connectors were mounted.  Some
components were secured using small dabs of clear RTV sealant.
Click on the image for a larger version.
One of the issues common with using a broad-band, direct-sampling SDR (software-defined radio) like the KiwiSDR is that of overload by strong, low-frequency signals, such as those on the AM (mediumwave) broadcast band - but there's another problem that should be considered as well:  The generally-high signal levels at lower HF frequencies.  If one looks at a spectrum analyzer connected to a broad-band receive antenna during the evening, one will immediately note that the lower the frequency, the higher the signals seem - particularly the background noise.

This becomes problematic if one is using an antenna with a relatively flat gain across the entire HF spectrum - and one wishes to make the receiver usable at both the top and bottom ends of this range.  As an example, I have a KiwiSDR connected to an antenna that is rated to cover from 3 to 30 MHz with roughly constant gain, but I noted that at the top end of the frequency range, around the 10 meter amateur band, the overall system gain was not quite sufficient to "hear" the background ionospheric noise.

The obvious solution to this gain deficit is to install an RF amplifier - which I did - but this had the effect of increasing the already-strong signals below 5-10 MHz even more, resulting in occasional "OV" indications on the KiwiSDR's S-meter signalling to me that the RF levels were high enough to "clip" the A/D converter.  While this wasn't too much of a problem during normal conditions, if the lower HF band were particularly noisy - as often occurs in the summer with thunderstorms on the same continent - reception across the entire HF spectrum was compromised when the loud static crashes would occasionally saturate the A/D converter.

It occurred to me that while I had about the right amount of system gain on 10 meters, I had far more than I needed at lower frequencies and could throw some of it away, so I set about designing a filter that would reduce signals at the low end of the HF spectrum, but have minimal effect at the upper end.

A "limited" high-pass filter:

The obvious solution to this would be the addition of a high-pass filter - but there's a problem:  Even a minimal high-pass filter would have increasingly-higher attenuation at lower and lower frequencies - potentially in the many 10s of dB - but we don't really want to get rid of the lowest frequencies.  What we need is a filter that will "knock down" signals by a significant amount - but not so much that they become inaudible.

In analyzing the signal levels, I determined that the goal of the design would be to leave signal levels above about 10 MHz unaffected, but reduce the signals below 8 MHz or so by 10-15dB.  This amount of attenuation (about 2 "S" units) would significantly reduce the amount of RF energy entering the A/D converter at these frequencies (about 2 "bits" worth) but analysis of the noise floor and signal levels at these lower frequencies indicated that I would still be able to hear the noise floor.
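As a quick sanity check on these numbers - using the common conventions that one "S" unit is 6 dB and that each bit of an A/D converter represents about 6.02 dB of dynamic range - the arithmetic can be sketched in a few lines of Python (the function names are mine, for illustration only):

```python
import math

def db_to_s_units(db):
    """Convert attenuation in dB to "S" units, at the conventional 6 dB each."""
    return db / 6.0

def db_to_adc_bits(db):
    """Approximate A/D converter "bits" of headroom freed by an attenuation:
    each bit represents about 20*log10(2) = 6.02 dB."""
    return db / (20 * math.log10(2))

# A 12 dB reduction is about 2 "S" units and about 2 bits of A/D headroom:
print(round(db_to_s_units(12), 2))   # 2.0
print(round(db_to_adc_bits(12), 2))  # 1.99
```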

The diagram of this filter is shown below:
Figure 2:  Diagram of the "limited attenuation" 10 MHz high-pass filter.
This filter attenuates by about 12dB (2 "S" units) below 8-10 MHz, reducing the overall signal power reaching the A/D converter of the KiwiSDR.


The diagram above, as rendered by "LT Spice", depicts the load (the receiver) as R2, a 50 ohm resistor.  No real attempt was made to make this filter's input and output impedances "flat" across the entire HF spectrum - and to be sure, below about 14 MHz its input impedance is a bit high, but this will have little practical effect on its operation - and we really don't need to be too precise, anyway.

As tested on a spectrum analyzer, the insertion loss is 12-13dB from DC to about 4 MHz, at which point it gradually drops to about 2dB at 11 MHz, then to less than 1dB by 30 MHz.  When doing an "A/B" comparison with and without the filter on the KiwiSDR, the waterfall above 10 MHz looked unchanged but the signals below about 7 MHz were much less "bright" - and most importantly, the occasional "OV" indications on the S-meter pretty much stopped appearing altogether.

Comment:
In my opinion, the "raw" A/D input on the KiwiSDR is slightly deaf, requiring a bit of gain (say, 6-10dB) to be able to reliably hear the background ionospheric noise on the higher HF bands - particularly when they are closed.  To this end, the KiwiSDR at this location is preceded by a low-noise, high dynamic range RF amplifier that is flat from a few 10s of kHz to well over 30 MHz.

The components for construction of this filter aren't critical:  The capacitors are high-stability NP0 (a.k.a. C0G) ceramic types while L1-L3 are wound using 30 AWG enameled wire with L1 and L2 having 15 turns and L3 having 12 turns on T37-2 toroidal cores.  L4 is an inexpensive molded inductor, and its value can be anything from 2.2 to 3.3 uH, or one could make it by winding 25 turns on the same type of T37-2 toroidal core as used for L1-L3.
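As a cross-check on these turns counts, the inductance of a powdered-iron toroid can be estimated from the core's AL value using L(uH) = AL x N^2 / 10,000, where AL is in uH per 100 turns - about 40 for the T37-2, per the usual published data.  A quick Python sketch (treat the AL figure as an assumption and verify against your core's datasheet):

```python
import math

AL_T37_2 = 40.0  # uH per 100 turns for a T37-2 core (published value - verify!)

def inductance_uh(turns, al=AL_T37_2):
    """Approximate inductance of N turns on a powdered-iron toroid."""
    return al * (turns / 100.0) ** 2

def turns_for(l_uh, al=AL_T37_2):
    """Turns needed for a target inductance (round up in practice)."""
    return 100.0 * math.sqrt(l_uh / al)

print(round(inductance_uh(15), 2))  # L1/L2 at 15 turns: ~0.9 uH
print(round(inductance_uh(25), 2))  # 25-turn L4 alternative: ~2.5 uH
print(round(turns_for(2.5)))        # ~25 turns for 2.5 uH
```

Note that 25 turns works out to about 2.5 uH - nicely within the 2.2-3.3 uH range suggested for the molded version of L4.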

Conclusion:

This filter seems to be very effective in reducing the total signal power from lower HF frequencies while having minimal effect at higher frequencies.  Because the signal+noise levels from a broadband antenna are much higher at the lower end of the spectrum, it is possible to reduce these signals by 2 "S" units or so without dropping the background noise - or the signals themselves - below the noise floor of the receiver.

For information on a filter system that is specifically designed to attenuate AM (MW) broadcast band signals, see the article "Managing HF signal dynamics on the RTL-SDR (and KiwiSDR) receivers", also on this blog.

[End]

This page stolen from ka7oei.blogspot.com
 




Thursday, May 31, 2018

A "floaty thingie" for keeping NiMH cells topped off

A charge-state maintenance device for NiMH cells


PLEASE NOTE:  Messing about with batteries/cells can be hazardous:  Most cells contain hazardous materials and injury and/or damage can result from mishandling them.

Cells that are shorted, improperly charged or otherwise maltreated can pose an explosion/burn/chemical or other hazard.  It is entirely up to you to do research and provide the appropriate precautions to prevent damage and/or injury.


You have been warned!

The problem:
Table 1:  Comparison of self-discharge rates of various types of cells.
The table shows the approximate amount of time that it takes to lose 10% of the cell's current charge at different temperatures.

Cell Type    0C (32F)    20C (68F)    40C (104F)    60C (140F)
Alkaline     >15 yrs.    4 yrs.       18 mo.        3 mo.
NiCd         3 mo.       1 mo.        14 days       5 days (A)
NiMH         1 mo.       10 days      5 days        1-2 days
Zinc         6 yrs.      2 yrs.       10-12 mo.     2-3 mo. (A)

These are typical values for new cells, published by various manufacturers.  Note that aging/mistreated cells will probably exhibit much higher self-discharge rates.  The NiMH information above is for "standard" cells, not the so-called "low-self-discharge" variety.

NiMH cells are ubiquitous these days - and for good reason:
  • They have usable capacity comparable to that of an Alkaline cell of the same size.  A typical AA alkaline cell has 2.4-2.8 amp-hours of capacity whereas modern NiMH cells range in capacity from 1.8 to 2.8 amp-hours.
  • They are relatively inexpensive.  If you shop around you can easily find AA NiMH cells for $2 each - often much less!  This means that if they are used just a half-dozen times, they may pay for themselves.
  • They have low internal resistance compared to alkaline cells.  When you pull power from a battery, the output voltage sags - something that can make many devices such as digital cameras shut down before the battery is drained:  Alkaline cells typically have higher internal resistance than NiMH (or NiCd) cells which means that many devices cannot fully-utilize the energy of the cells - particularly when partially discharged.
  • NiMH cells are more forgiving than NiCd and LiIon cells.  NiCd battery packs suffer from a problem called "cell reversal" in which, when just one of the cells runs down before the others - an inevitability when several cells are connected together - the weakest cell ends up being charged backwards as the others pull power through it.  This causes an irreversible chemistry change that robs the NiCd cell of its power - making it more likely to run down first next time and become even more damaged than before!  NiMH cells are more tolerant of such abuse.  While NiMH cells can take a bit of abuse, LiIon cells cannot, which is why they should always be connected using "protection" circuitry to guard against overcharge and overdischarge.

About "Ready-to-use" low self-discharge types.


There are some types of NiMH cells that are marketed as being "ready-to-use" that have a significantly lower self-discharge rate than the standard cells.  It would seem that these cells - at least when new - do live up to the claim, but I've yet to see information as to how much the self-discharge rate increases as they age.  I've also noted that these types of NiMH cells tend to have lower rated capacities than some other NiMH cells, ranging between 1500 and 1800mAh for these types versus 2100-2800 mAh for "normal" NiMH AA-size cells.  Such cells shouldn't be damaged if they are put in the "floaty-thingie".

Dealing with self-discharge:

As wonderful as NiMH cells are, the higher-capacity types and older, heavily-used cells do have a drawback:  Self discharge.

Referring to Table 1, you'll notice something:  At ordinary room temperature, a good NiMH cell will lose 10% of its power after just 10 days - which means that after 6-8 weeks it's already half dead - and that's just from sitting there, doing nothing!  At higher temperatures things get far worse.  If you have a device with NiMH cells in it in a car on a hot, summer day you can expect it to be mostly dead in just a week or two.  Remember that the lower-capacity, "low discharge" types lose their charge more slowly than this, but I have yet to find specific information on these cells.

The data in Table 1 also assumes something else:  Typical, new cells.  As they age they tend to self-discharge even faster.

What does this mean, then?

  • Don't leave NiMH cells around for "later use."  If you charge up your NiMH cells and then just leave them around, chances are they'll be mostly dead by the time you get around to using them - unless you have a system of cycling through them very quickly.
  • Don't put NiMH cells away in your emergency box.  You should not rely on NiMH cells for emergency purposes unless you have a system by which you can guarantee that they are kept fully-charged.  For those devices that are put away for months at a time, Alkaline cells are a much better choice as long as they are stored outside the device to prevent possible damage from cell leakage and/or accidental discharge.
The challenge, then, is to have a system by which you can be reasonably assured that any NiMH cell you pick up is likely to have a full charge - but you don't want to do anything that is likely to damage them.

Maintenance charge:

In the case of NiMH cells (where the self-discharge rate is rather high - especially as the cell ages) it may be desirable to leave them on a "maintenance" (or "trickle") charge for very long periods of time.  Recent recommendations by some battery manufacturers suggest a "C/300" current for this while other manufacturers recommend a charging rate as high as C/40.  Following the C/300 example, a hypothetical 1 amp-hour cell would require about 3.33 milliamps - that is, 1/300th of the cell's rating.  I have not seen any specific recommendations for such a maintenance charge for NiCd cells, but I would expect that the same C/300 rate would be suitable.

It should go without saying that charging a "dead" battery at the maintenance charge rate may take weeks to accomplish!
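To put rough numbers on both points, here is a quick Python sketch of the C/300 arithmetic.  The capacities are just examples, and the 70% charge efficiency is my own assumption for illustration - not a measured figure:

```python
def maintenance_ma(capacity_mah, rate_divisor=300):
    """Maintenance ("trickle") current in mA at a C/n rate."""
    return capacity_mah / rate_divisor

def days_to_replenish(capacity_mah, current_ma, efficiency=0.7):
    """Very rough days to charge a fully-dead cell at a trickle current,
    assuming ~70% charge efficiency (an assumption, for illustration)."""
    return capacity_mah / (current_ma * efficiency) / 24.0

print(round(maintenance_ma(1000), 2))   # 1 Ah cell: ~3.33 mA
print(round(maintenance_ma(2500), 2))   # 2.5 Ah AA cell: ~8.33 mA
print(round(days_to_replenish(2500, 8.33), 1))  # weeks, not hours!
```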

Comment:
At this point in the article I would normally provide a link to the sites of several cell manufacturers - but I've observed that these links are constantly changing, so I'll forgo doing this:  I will leave it up to you to find the technical data for larger manufacturers such as Eveready, Ray-O-Vac, Duracell, etc. that give recommendations for long-term float charging.
 
A "Floaty Thingie" - A simple device to maintain NiMH cell charge during periods of non-use.

Because I extensively use NiMH cells - and because I'm aware of their tendency to self-discharge - I have built a simple device that does a maintenance charge for large numbers of cells.  This device, which I have called a "Floaty-Thingie" (a highly technical term, I know...) consists of several multi-cell battery holders with series resistors and LEDs to both limit current and indicate that a maintenance charge is occurring.  The battery holders are simply attached to a sheet of wood or plastic and powered by a 12 volt DC "Wall Wart" from my junk box.  Note that while I use mostly 4-cell holders, there is also one 2-cell and one single-cell holder so that I don't need exact multiples of 4 cells to fill a holder!


Figure 1:
Top:  The "Floaty-Thingie" used to maintain charge on NiMH cells.   (This version only does AA cells in groups of 4).  Even though there can be up to 48 cells being floated, a small 12 volt, 100mA wall-wart is all that is necessary.
Bottom:  The schematic of one section of the "Floaty-Thingie."
Click on either image for a larger version.

The circuitry is extremely simple:  A resistor and cell(s) in series with an LED - the latter being used to indicate current flow which allows you to be sure that the battery is connected.  All of this is powered by a 12 volt (nominal) voltage source.

Using a 12 volt (unregulated) DC "wall wart" supply (which ranges from 12-15 volts, depending on total battery load) a resistance was calculated, taking into account how many cells were used and what size.  My "Floaty-Thingie" handles only AA and AAA sizes as these are the most common, but using the information here and a simple application of Ohm's law, other values can be calculated.

For the maintenance charge I chose to follow the "C/300" float rate as this seemed to be adequately comparable to the self-discharge rate of the cell itself.  For typical AA NiMH cells, this would be about 8 milliamps - assuming a cell capacity of 2.5 amp-hours - and for AAA NiMH cells, this would be around 3 milliamps - assuming a cell capacity of 1.0 amp-hours.  These values are typical and are definitely not critical!   Do not worry if your AA cells have 1800 mAH or 2800 mAH capacity, for example!

At this point, a few assumptions are made:

  • A supply of 13.5 volts.  This is a reasonable voltage to see from a "12 volt" unregulated "Wall Wart" under moderate load, but anything from 11 to 15 volts would be OK.
  • About 1.5 volts per cell.  (We are assuming that our cells are already fully-charged.)
  • Float currents:  The float current is 8 mA for AA cells and 3 mA for AAA cells - values that roughly correlate with C/300 for typical NiMH cells of those sizes.
The series resistance for various cell combination under the above conditions is as follows:



Table 1: Typical values for different types and numbers of cells using the circuit in Figure 1 with a supply voltage of 12-15 VDC

Number and       Resistance (ohms) with          Resistance (ohms) with
type of cells    2 volt LEDs (standard-          3.6 volt LEDs (high-
                 brightness red/yellow/green)    brightness green/blue/white)
4 AA             680                             470
2 AA             1000                            820
1 AA             1200                            1000
4 AAA            1800                            1200
2 AAA            2700                            2200
1 AAA            3300                            2700

  • The above values are not critical and variations of +/-25% should not be of any concern.
  • 1/4 watt resistors or larger are suitable.
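These table values follow directly from Ohm's law:  Subtract the cell-stack and LED voltages from the supply voltage, divide by the desired float current, then round down to a standard (E12) resistor value.  A Python sketch of the calculation, using the 13.5 volt assumption from above (the function names are mine):

```python
E12 = [10, 12, 15, 18, 22, 27, 33, 39, 47, 56, 68, 82]

def nearest_e12_below(r):
    """Largest standard E12 resistor value not exceeding r ohms."""
    candidates = [v * 10 ** d for d in range(6) for v in E12]
    return max(c for c in candidates if c <= r)

def float_resistor(cells, led_v, float_ma, supply_v=13.5, cell_v=1.5):
    """Series resistor (ohms) for one holder of the "Floaty-Thingie"."""
    exact = (supply_v - cells * cell_v - led_v) / (float_ma / 1000.0)
    return nearest_e12_below(exact)

print(float_resistor(4, 2.0, 8))   # 4 AA,  2 volt LED:   680
print(float_resistor(4, 3.6, 8))   # 4 AA,  3.6 volt LED: 470
print(float_resistor(1, 2.0, 3))   # 1 AAA, 2 volt LED:   3300
```

Running this for the other combinations reproduces the rest of the table entries as well.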
The schematic of the "Floaty-Thingie" may be seen in Figure 1.  As you can see it is very simple and there's nothing critical about it - except to say that any exposed wires should be insulated to prevent accidental shorting of any components:  Remember that NiMH cells can put out many amps under such conditions!

On the schematic, "R" is a resistance from the table above, "D" is the LED, and "B" is the holder, containing 1, 2 or 4 cells.  When operating from a "12 volt" supply (which can be anything from 11 to 15 volts) it is not recommended that more than 4 cells be used as you need several volts of drop across resistor "R" in order to limit current effectively and maintain fairly consistent current with minor voltage fluctuations.

Note that Table 1 shows different resistance values for "2 volt" LEDs and "3.6 volt" LEDs.  The older-style "normal brightness" red, yellow and green LEDs (but not blue or white!) are of the 2 volt variety while the newer "ultra bright" LEDs (most notably green, blue and white) are of the "3.6" volt type.  When you buy the LEDs, a quick look at the "forward voltage" specifications will tell you what you wish to know - but don't be worried by slight variations.  For example, the "2-volt" types may vary from 1.7 to 2.2 volts while the "3.6 volt" types may say anything from 3.2 to 4.1 volts.

A note about the use of 3.6 volt LEDs:

  • These types are usually the "ultra bright" (green, blue, white) LEDs.  If you use these - and you have a lot of holders - the total amount of light coming off the "floaty-thingie" may be surprisingly bright - even at just 8 or 3 milliamps.  If you build one of these, expect that they may still be painful to look at and also that at night, the entire assembly may be annoyingly bright!
Remember:  We aren't aiming for ultra-precise results here - just those that are "in the ballpark."

Using the "Floaty Thingie"

I've used this thing for several years now (over a decade!) - as have several friends who have seen it and made their own.  Here are a few observations and comments:
  • Put ONLY fully-charged cells in the Floaty-Thingie.  It will take a very long time to charge a dead cell (several weeks, perhaps!) at the above currents.  Since the whole idea is to have fully-charged cells on hand for immediate use it would be a bad idea to put anything but fully-charged cells in it in the first place!
  • Completely fill up the cell holder.  This should go without saying:  Unless every position in the cell holder is filled, you won't complete the circuit and do charge maintenance.  Because of this, I recommend having one single-cell holder and one two-cell holder - in addition to a larger number of four-cell holders for each cell size (e.g. AA and/or AAA.)  Doing this allows you to "float" any number of cells that you may have on hand.  Some people who have built it have used two-cell holders (and a single one-cell holder) instead of any four-cell holders, which works, too, but remember that since each holder takes the same amount of current, regardless of the number of cells, you'll be able to maintain fewer cells overall if your wall-wart is rather small.
  • Make sure that you adequately size the wall-wart.  When you pick your "wall wart" supply to run this, consider how much current you will pull from it if you load cells into every holder.  To play it safe, assume that each AA holder will pull 10 milliamps and each AAA holder will pull 5 milliamps and simply add the total number of holders of each size - and make sure your supply can handle this.  
  • Note that a one-cell holder pulls the same current as a two or four-cell holder of the same cell size:  The difference in power is "eaten" by the series resistor used to limit current.  Again, this means that if you have a very small wall wart - or if you have a limited power budget (say, from a small solar panel) - you can get better efficiency by using mostly four-cell holders rather than mostly two-cell holders.
  • Yes, you can use a 12 volt solar panel for this.  Since the sun only shines part of the day, don't worry if the voltage goes well above 12 volts (as high as 18-20 volts) during bright sun as the "average" current will be in the general range of what it should be.
  • This "maintenance" charge doesn't seem to have damaged the NiMH cells.  Over the past 10 years or so, neither I nor others who have used a Floaty-Thingie have seen any evidence that its use causes loss of electrolyte due to overcharging, "Lazy Cell" syndrome (see below) or obviously shortened life.  Nevertheless, it would be a good idea to rotate through and use all of the cells as this would reduce the possibility of "Lazy Cell" syndrome (if it is likely to occur in NiMH at this "maintenance" rate anyway) and it gives you another chance to spot those cells that are going bad!  Even when treated well, cells won't last forever!
  • The "Floaty-Thingie" doubles as a night light.  Since my Floaty-Thingie can hold over 30 cells, its LEDs give off a surprising amount of light when all holders are populated and if you happen to use a mixture of different colors you can get some pretty cool effects!  Remember, though:  The modern "ultra bright" LEDs put out a lot of light - enough to make looking at them painful and to keep a room annoyingly bright at night.  If you do use these newer, modern LEDs be aware that many of them (such as the blue, white and green) have higher voltages - between 3 and 4 volts as opposed to around 2 volts for the old-fashioned, dim red, yellow and green "indicator" type LEDs, so be sure to take that into account when selecting the resistor values.
  • I try to group "like" cells together.  If you are like me, you have been acquiring NiMH cells for years so you not only have different brands, but different milliamp-hour capacities of cells - even of the same brand!  Grouping like cells together will also assure that when you use them in a device that takes several cells, you'll get optimal performance.
    • Note:  When I buy rechargeable cells, I always write the month and year of purchase on them with an indelible marker as this also makes it easier to group them together.
  • DO NOT put alkaline cells in the "Floaty-Thingie."  When one attempts to recharge alkaline cells, they can do unpredictable things such as leak, so don't!
  • Come up with a system for "rotating" stock.  It is best if you make sure that all cells get as equal use as possible.  One way to do this would be to leave at least one empty holder at all times, knowing that the next holder contains the cells to be used when previously-charged cells are to be installed in the now-blank one.  In this way one can help assure more even usage of cells over time.
Can you put NiCds in the "floaty thingie"?  Yeah, probably...  It probably won't hurt them to keep them in there for short periods such as days, but I'm not sure that I'd leave them in the device for weeks/months at a time!
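The wall-wart sizing rule mentioned above is easy to sketch:  Budget about 10 milliamps per AA holder and 5 milliamps per AAA holder - regardless of how many cells each holder contains - then add some margin.  (The 25% margin in this Python snippet is my own choice for illustration, not a hard rule):

```python
def supply_budget_ma(aa_holders, aaa_holders, margin=1.25):
    """Minimum wall-wart rating in mA: ~10 mA per AA holder and ~5 mA per
    AAA holder, plus a safety margin (25% assumed here)."""
    return (aa_holders * 10 + aaa_holders * 5) * margin

# Twelve 4-cell AA holders (48 cells) plus four AAA holders:
print(supply_budget_ma(12, 4))  # 175.0 - a 200 mA supply would be comfortable
```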

Using "similar" cells:

As with other types of cells, it is recommended that you avoid, as much as possible, mixing different brands/capacities of cells.  While the chemistry of NiMH cells makes it less likely than with NiCds that they will be damaged by cell reversal, it never hurts to play it safe.

This is fairly easy to do, actually:  Simply group the same brand and same-capacity cells together and use them as such.  Personally, I write the month and year of acquisition on cells when I buy them with an indelible marker, making it even easier to match the cells into groups - plus, it lets me readily identify the oldest of the cells and keep track of how old they are and whether or not they deserve further scrutiny as they age.

Detecting apoptosis (e.g. "cell death"):

The "floaty-thingie" has another use:  To detect cells that are near the end of their useful life.

Inevitably, cells will lose their capacity and die - but how do you detect that fact before discovering that the device you put them in quit working sooner than expected?

In using the "floaty-thingie" there are some signs that an individual cell may be "sick" and might have lower-than-expected capacity.  To do this, you'll need a reasonably accurate digital voltmeter:  It needn't be expensive - I've found that even the $3-on-sale digital multimeters from places like Harbor Freight have more than adequate accuracy.

Here's the procedure:
  • Charge the cell normally using your normal charger.
  • Put it in the "floaty-thingie" and wait a week or so.  This wait time is required to allow the cell to equalize and "do its thing" - that is, if it's really bad, it may take a few days for the symptoms to show up.
  • While in the holder, measure the cell voltage.  I have found that at normal room temperature, typical NiMH cells measure between 1.35 and 1.47 volts.  I've noticed that same-brand and same-vintage cells tend to stay very close to each other and that this voltage seems to slowly decrease over time as the cells age and self-discharge (leakage) currents increase.
If you find one cell that has radically different voltage from the others - especially if it was made at the same time and is of the same brand as the others - then be suspicious of that cell!  If the cell's voltage is unusually high after a week of being in the "floaty-thingie" (a reading above 1.5 volts should certainly set off alarm bells!) then it is very likely that there is something seriously wrong with that cell!

If the cell voltage is lower than it should be - say below 1.3 volts - mark it with a piece of tape (so you can tell it apart from the others) and then try charging it normally, re-install it in the "floaty-thingie" and wait another week or so - just to make sure that it is really sick.  If it tests OK this second time, chalk up the first "bad" results to, perhaps, accidentally putting a battery that was not fully charged into the "floaty-thingie" - but if it tests bad again, get rid of it!
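The voltage-based triage described above can be boiled down to a tiny function - the thresholds are the ones mentioned in the text, and "suspect" means "recharge and re-test", not certain cell death:

```python
def triage_cell(volts):
    """Classify a NiMH cell's voltage after ~1 week in the "floaty-thingie"."""
    if volts > 1.5:
        return "suspect-high"  # alarm bells - likely something seriously wrong
    if volts < 1.3:
        return "suspect-low"   # mark it, recharge, re-test before discarding
    return "ok"                # typical healthy range is about 1.35-1.47 volts

print(triage_cell(1.40))  # ok
print(triage_cell(1.55))  # suspect-high
print(triage_cell(1.25))  # suspect-low
```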

Of course, it should go without saying that all batteries should be disposed of properly!

Disclaimer:


Again, messing about with batteries/cells can be hazardous:  Most cells contain hazardous materials and injury and/or damage can result from mishandling them.

Cells that are shorted, improperly charged or otherwise maltreated can pose an explosion/burn/chemical or other hazard.  It is entirely up to you to do research and provide the appropriate precautions to prevent damage and/or injury.

You have been warned!


This blog posting was adapted from an earlier article on my web site.


[End]



Tuesday, April 24, 2018

Pine needle what? (Keeping your coniferous antenna supports alive!)

The afflicted tree:  The needles are more normal at the top, but looking rather "thin" farther down.  Because of the number of pine cones, the tree looks "browner" in this picture than it really is - but it is definitely under stress!
Click on the image for a larger version.
Late last year I noticed something amiss with a fairly large (50 foot/15 meter tall) Scots Pine tree in my front yard:  The needles looked a bit "thin" and short.  In the past this tree would tend to shed needles and pine cones all over the place on a seasonal basis, so I kind of ignored it over the winter - but this spring, when I started yard work again, I became concerned.

Something was definitely wrong with this tree.  While the needles themselves seemed to be green and flexible, I noticed that at the top of the tree they looked normal-ish, but the bottom 2/3rds of the tree looked "skimpy", so I decided to investigate more closely.

If you've followed this blog you've probably figured out that I'm not much of a yard and garden person - I do what I need to do to keep the yard in reasonable shape, asking my Dad or friends for advice when something isn't quite right.   A couple of weeks ago I finally did what I should have done months ago:  Take a very close look at the tree.

On the lowest branches - where the problem seemed worst - I could tell that the limbs were very green and flexible - but the needles themselves, while green, were thinner than they should have been - and were covered with small spots.  Doing what many people do these days I resorted to Google and it "told" me about all sorts of possible fungal infections and other things - but I wasn't satisfied that it was describing what I was seeing, so I asked a friend of mine who'd had tree work done in the past year or so.  He referred me to the guy that did his tree work - a "semi-professional" who did this as a combination of a hobby (I gathered that he really likes trees!) and a second job.

After a couple weeks of phone tag, I was finally able to talk to him and I described what I was seeing.  Almost the first thing he asked was "Do the spots scrape off?"

"What?" I thought.  "Those are probably small bugs," he continued.  I must admit that it never once occurred to me to try scraping them off - and then I realized that I should have looked at the needles with a magnifier.  He continued to explain that he suspected that the tree was suffering from "Pine Needle Scale" - an infestation of small insects that feed on the sugars in the needles - and that this would probably kill the tree if left unchecked.  He then explained that he could probably drive out to my house and charge me a chunk of change to tell me this same thing in person - and then charge me more to treat it - or he could just save himself some time and me some money and have me treat it myself.

A close-up of a bough on the affected tree.  As you can see, the needles are a bit shorter and thinner ("thin" like paper rather than in number) and paler than they should be - and there are lots of spots!
Click on the image for a larger version.

After a few more minutes detailing the common treatments, we got off the phone and this time, armed with more definitive information, I did a bit of online research and what I saw in the pictures looked very much like what I'd seen.  It wasn't until I got home and plucked a few needles off the tree and looked through a magnifier that I saw that these spots were, in fact, small insects - looking exactly as he described and much like the pictures on the web.

A close-up of one of the needles showing the infestation of what are probably Chionaspis pinifoliae - little, hard-shelled insects that literally suck the life out of the tree!  These will scrape off with a fingernail revealing a more-or-less normal looking needle underneath - except that it's wet with weeping liquid from the tree.
Click on the image for a larger version.
I have two other pine trees in my yard that look healthy - so I inspected them, but the results were inconclusive:  I saw nothing obvious, but I didn't check everywhere within reach. If they were infested, it wasn't bad... yet...  A more thorough inspection has revealed that both of the trees in the back yard are infested - but it appears to be mild as I had to look a bit for them.


Why did the tree in my front yard get infested but the others not as bad?  I have a suspicion:

Two years ago, during the installation of a solar power system, it turned out to be necessary to upgrade the utility power connection to my house, so a narrow, 48" (122cm) deep trench was dug through my front yard - but the path took it very close to this same tree.  When the trench was open I could see that there were several rather large roots that had been cut - and this concerned me a bit.  What I suspect happened was that this weakened the tree a bit, making it more susceptible to an infestation - but then again, it could have just been bad luck!

Between talking to the tree guy and going online, I read about three common ways to control these critters - typically Chionaspis pinifoliae (read about this insect at the "Tree Geek" web site).  These methods include:
  1. Spraying with an insecticide.  This is the "kill them right now!" approach - but it may not get the entire tree (particularly if it is a tall tree that is difficult to spray in its entirety) and this is most detrimental to beneficial insects like bees, ladybugs and other things if they happen to be on the tree, downwind, or very nearby.
  2. Ground soak.  A solution of insecticide and water is poured (usually in a small "moat" to confine it) around the base of the tree so that it is quickly absorbed into the root system.  This systemic treatment is slower to take effect, but is longer lasting and will protect the entire tree - and it is somewhat less harmful to beneficial insects since it is more or less confined to the tree.
  3. By injection.  If the tree is in really bad shape, insecticide is injected directly into the tree where it can more-quickly be taken up.  The "tree guy" with whom I was speaking seemed to think that since I didn't (yet!) have large sections of die-off that this wouldn't be necessary.
Comment:
There are more ways to deal with these things that are less harmful to beneficial insects - but the general opinion seemed to be that these were best for preventing infestations, controlling those infestations that were minor, or in those situations where there was a need to minimize the effect on the "good" bugs (e.g. to protect pollinators, etc.)  For trees that were under significant stress, arborists seem to recommend the "strong" approach where it can be safely done.

Based on what I read, my tree wasn't in "really bad shape" since the needles - while getting a bit pale - weren't dying off in large quantities... yet - but it is under a fair amount of stress.  Apparently, it is about this time of year (April, May) around here that these bugs start to reproduce and become more active - so this is the time to do the treatment.  I decided on a combined approach:  Spraying where I could reach and doing a ground soak around the base.

To this end, I sprayed the tree on a windless day as far up as I could reach with a solution of "Sevin" (a carbaryl insecticide):  I was able to get the bottom 1/4-1/3 of the tree, which encompasses about 1/3-1/2 of the pine needles.  Because of the height of the tree, I couldn't really go much higher than the spray would reach while working from a free-standing ladder - but this would, at least, have an immediate effect on a significant part of the tree.

The second treatment is a ground soak (an imidacloprid-based insecticide) as described by the "tree guy", the guy at the local farm supplies distributor from whom I bought the stuff, and the online descriptions:  This latter application will take a couple of weeks to work its way through the entire tree - but it is, by all accounts, considered to be very effective.  While I'm at it, I'll also proactively treat the other two pine trees in my yard with the ground soak - just in case those critters managed to get around - however they do that.  Based on what I have been told and what I have read, this treatment will become an annual, spring ritual.  Since pine trees are less attractive to bees than other plants, I'm hoping that the effect on them will be minimal - although they may collect some components of propolis from them.

So, the next few months will be telling.  I really do like my pine trees:  I think that they look nice, they offer a bit of cooling shade to the house - and they provide nice anchor points for my ham radio antenna!

* * *

Update - 6 June, 2018:

It's been about six weeks since treating the tree - and it is no worse off, but the bugs are still hanging on, so I went a step further as recommended by an expert:  Treating the tree with "Safari 20SG" - a Dinotefuran-based insecticide.  According to available information, this insecticide is taken up by the tree more quickly and is more effective at eradicating parasitic insects.  Hopefully, once these bugs are knocked down, the tree will recover and will be able to better-resist them in the future.

The instructions indicated that 1.0-2.2 oz were recommended for every 10 feet of tree height, so for a tree of this size (about 55 feet tall) the entire 12 oz container (which cost about US$110) was dissolved into several gallons of water and poured around the base using "soil drench" techniques.
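As a sanity check on the arithmetic above, the dose scales linearly with tree height.  The sketch below uses the rates as quoted above, not taken from the actual product label:

```python
def safari_dose_oz(tree_height_ft, oz_per_10ft):
    # Soil-drench dose for a given tree height, per the
    # "ounces per 10 feet of height" rate quoted from the instructions.
    return tree_height_ft / 10.0 * oz_per_10ft

low = safari_dose_oz(55, 1.0)   # 5.5 oz at the low label rate
high = safari_dose_oz(55, 2.2)  # ~12.1 oz at the high rate - essentially the whole 12 oz container
```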

Update - 27 June, 2018 - Success, I hope:

 About a week ago, I examined the tree again and saw that the bugs were rapidly dying off:  In fact, it was difficult to find a live one.  This die-off must have started happening a week or so before this as the needles are already looking "plumper" and slightly more green - a sign that fewer of these things are still trying to suck the life out of the tree!

There are still a lot of these things covering the needles even though they are dead, but at least some of them should weather off - and all of their carcasses will eventually fall with the needles as the tree replaces them in its normal cycle.

* * *

[End]

This page stolen from ka7oei.blogspot.com




Tuesday, February 20, 2018

Better frequency stability for the QRP Labs "ProgRock" synthesizer

Update:

It turns out that the newer version 4 of the QRP Labs ProgRock board has pads for a TCXO.  See below for a link to the ProgRock web page.

The ProgRock:

The "ProgRock" synthesizer from QRP Labs is an inexpensive device based on the Si5351 "any frequency" synthesizer that may be used to produce up to three frequencies simultaneously - typically from around 8 kHz to around 200 MHz (with some limitations) - but it may be coaxed to go down to about 3.5 kHz and as high as 290 MHz.
Figure 1:
The "synthesizer" portion of an unmodified Version 3 ProgRock.  The
27 MHz crystal, in the upper right quadrant, is a typical "computer grade"
device and is typically stable to only a few 10s of PPM over a wide temperature
range - OK for many applications, but not where you want really
good frequency stability.
Version 4 of the ProgRock has pads for an SMD TCXO on the
top of the board, although it's not the same device that I used - see text.
Click on the image for a larger version.


Typically programmable using a pushbutton and a DIP switch, newer versions of firmware may be programmed via a serial port as well.  These devices also have an input from a 1PPS (1 pulse per second) source, such as a GPS receiver, to allow precise setting/control of the frequency.

Unlike a VFO, the ProgRock produces only a set of fixed, pre-programmed frequencies:  Up to 8 "banks" of frequencies may be selected via three digital select lines.

What sort of things might this be used for?
  • Arbitrary frequency sources for the workbench.
  • Providing clocks for digital circuits.
  • The local oscillator of a fixed-frequency receiver or transmitter.
  • An internal local oscillator for a radio - such as a frequency converter or BFO.
For casual use, the supplied crystal - a typical computer-grade unit - is adequate, but if you need the frequency to be held to fairly tight tolerance - say, a couple of parts-per-million - over a wide temperature range you will probably want something else.  QRP Labs does sell an "OCXO" version of the synthesizer which works well, but it is more complicated to build and adjust, it consumes several watts of power and produces extra heat.

You might ask:  "Why not just use the 1PPS input for frequency control?"

Figure 2:
The bottom side of the board after modification.  The tiny TCXO module
is affixed to the board and then connected to the circuit using flying leads.
In the picture above, pin "1" is in the lower right corner of the device - the
only one without a solder connection.  On the "label" side of the chip
pin 1 is identified by a very tiny dot.  Pin 2 ground (lower left, blue flying
lead) can be identified with an ohmmeter as it is also connected to the case.
Click on the image for a larger version.
While these devices can be "nailed down" to a precise frequency with the application of a 1pps input from a GPS receiver or other high-stability source, there is a problem with this option:  It tends to cause a "step" change in frequency on the order of 1-2 Hz.

Such step changes would probably go unnoticed on SSB or CW, but with certain narrow-band digital modes there might be a problem.  While modes like WSPR or JT-65 can deal with frequency drift, that drift normally occurs gradually over the period of several symbols, giving the decoder enough time to track - but if the frequency shift were sudden, a few symbols would probably be lost.  While the occasional loss of data is normal, any loss intrinsic to the receive system - such as that caused by frequency steps of the local oscillator - would degrade the remaining error-correcting capability overall.

In other words:  If phase or very fine frequency changes will affect your communications, you might not want to use the 1PPS input.

Using a TCXO:


Another option is to replace the crystal with a TCXO (Temperature Compensated Crystal Oscillator).  These small, self-contained oscillators have on-board circuitry that counteracts temperature-related drift, holding the frequency relatively constant over their design range.

A suitable device, made by Taitien, is readily available as DigiKey part number 1664-1269-1-ND (Mfg. P/N TXETBLSANF-27.000000).  This device is tiny - only 3.2 x 2.5 mm - so soldering to it is a bit of a challenge, but still manageable with a fine-tipped iron and some magnification.

Note:
As pointed out in the QRP Labs documentation, some TCXOs may have "stepped" frequency adjustments as part of their temperature compensation due to a built-in temperature sensor and D/A converter referencing a look-up table.  If sufficiently large (e.g. resulting in more than a few 10ths of Hz of "step") these frequency discontinuities can disrupt/degrade modes such as WSPR that operate over very narrow bandwidths.  If a TCXO does this, the synthesizer being controlled by it will also exhibit the same frequency steps, proportional to the output frequency.

The Taitien TCXO units noted above were observed at 432 MHz (the 16th harmonic of the 27 MHz TCXO)  using signal analysis software to magnify possible frequency steps:  If such "step" behavior was happening, it was smaller than 0.34 Hz at 432 MHz (e.g. 0.02Hz at 27 MHz.)

The power requirements of this device are very low - only 1-2mA, far less than the 100-200mA of a warming crystal oven - and it may be powered directly from the existing 3.3 volt supply of the synthesizer board.  This TCXO produces about 1 volt peak-to-peak of output, which is in line with what the data sheet for the Si5351A suggests for a capacitively-coupled external signal fed into the crystal input.  It is possible that the Si5351A would work just fine if this TCXO were directly-coupled, but I included the capacitor just to be safe.

Wiring the TCXO:

Comment: 
As noted above, later versions of the ProgRock have pads for a TCXO, albeit one with a different footprint than above.  The device suggested by QRP Labs is the FOX924B-27.000 (Digi-Key P/N:  631-1075-1-ND) which is quite a bit larger than the Taitien device and has a rated stability of 1.5ppm.  This device has a higher output voltage swing, which allows the omission of the coupling capacitor used with the Taitien device noted above.

I first removed the crystal and cleaned the holes of solder.  The TCXO module was then glued using cyanoacrylate adhesive (a.k.a. "Super Glue") "belly up" to the circuit board (after it was cleaned with denatured alcohol) at a location on the bottom side of the board between the synthesizer chip and the crystal position as shown with pin "1" in the lower right corner.  With the oscillator firmly in place, a small piece of 30 AWG wire was used to solder pin "4" (V+ - upper-right) to the nearby connection of C3, one of the V+ lines for the synthesizer chip.  Connected to the opposite corner (pin 2, lower left) another short piece of 30 AWG wire is connected to the other side of capacitor C3 to provide the ground.

A small, 1000pF disc ceramic capacitor was inserted into the bottom side of the board to connect the crystal terminal closest to C3 and the "CLK 0" terminal, with the other lead carefully formed and bent to be soldered to the upper-left pin, #3 - the output terminal of the TCXO.  Once the capacitor is soldered into place it is a good idea to re-heat the capacitor's other lead (the one soldered into the board) to relieve any mechanical stress that might have occurred from bending the lead to fit the connection.

Before soldering to the TCXO - but after it has been glued to the board - it is recommended that a small amount of liquid flux be applied to the connections and that they be tinned using a hot iron with a very fine tip:  The ceramic package tends to draw away heat quickly, making it a bit difficult to solder and tinning it before-hand assures that a solid connection has been made.  Don't tin the unused pin as it's an easy way to identify the pins of the device while it is inverted.

Assuming that the connections are good and that the pins were properly identified, the synthesizer may be plugged into a ProgRock as normal.  If all went well, the output frequency will be pretty close to what it was before - but slightly low in frequency.  While the nominal frequency of the original crystal is 27.000 MHz, in this circuit it usually runs 2-5 kHz high, so the "default" clock frequency of the ProgRock is set to about 27.003 MHz to compensate.  With the TCXO being within 1 PPM of its intended frequency, register 02 of the ProgRock will have to be set to the new frequency, hopefully within a few 10s of Hz of exactly 27.0 MHz:  If you have a means of precisely measuring the frequency, use that number for register 02, otherwise use 27.000000 MHz.  Once this is done the programmed output frequencies will be quite close.

Once everything was checked out I put a few more dabs of adhesive on the capacitor and flying leads to make sure everything was held into place.

How well does it work?

I put together two of these TCXO-based ProgRocks and when compared to a GPS-referenced source, I found one to be 1 Hz high (e.g. 27.000001 MHz) and the other to be about 13 Hz low (26.999987 MHz) - both well within the 1PPM specification.  These frequencies were programmed into register 02 and CLK0 was set to precisely 10 MHz and I found the output to be within 1 Hz of the intended frequency.  I then heated and cooled the units and observed that the frequency stayed well within the 1PPM spec, indicating that all was as it should be.
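The parts-per-million arithmetic above is easy to verify - a minimal sketch in Python:

```python
def ppm_error(measured_hz, nominal_hz=27_000_000):
    # Fractional frequency error, in parts per million,
    # relative to the 27 MHz nominal reference.
    return (measured_hz - nominal_hz) / nominal_hz * 1e6

unit_a = ppm_error(27_000_001)  # about +0.04 ppm (1 Hz high)
unit_b = ppm_error(26_999_987)  # about -0.48 ppm (13 Hz low)
# Both values are comfortably inside the 1 ppm specification.
```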

Example applications:

Stable receiver local oscillator:

Figure 3:
An application of the ProgRock where two of the outputs are being used as
the local oscillators of two "SoftRock Lite II" receivers configured to cover
different portions of the 40 meter band.  Fitted with a TCXO, these
frequencies will be held to within 1ppm over any reasonable temperature
excursion.
Click on the image for a larger version.
An immediate need for a stable frequency source came about recently while I was putting together a module that is designed to cover the entire 40 meter amateur band in two segments using two "SoftRock Lite II" SDR receiver modules.  Normally these ship with crystal oscillators, but the use of a single ProgRock module allowed a pair of these receivers to collectively cover the entire 40 meter band with very good frequency stability - important if digital modes such as WSPR are to be considered.

Figure 3 shows the result.  Both receiver modules and the ProgRock were mounted in the lid of a Hammond 1590D die-cast enclosure and a simple 2-way splitter using a BN-43-2404 binocular core was constructed.  The end result - when coupled with good-quality 192 kHz sound cards - is a high-performance, stable receive system capable of covering the entire U.S. 40 meter amateur band - with a bit of overlap in the middle and extra coverage on the edges.

Replacement of a crystal in a phase-modulated VHF/UHF transceiver:

These days it is increasingly difficult to source custom quartz crystals for older "rockbound" commercial radio gear.  An example of this is the GE MastrII line of VHF (and UHF) transceivers that require a crystal for each transmit or receive frequency.  Even though this equipment is now quite old, it is still useful:  It is quite rugged, has excellent filtering and, when properly prepared, has proven to be very reliable.

These radios use crystals in the 12-13 MHz area for transmit and 16-17 MHz area for receive so a ProgRock can be easily programmed to be used in lieu of a crystal with a slight modification of a GE "ICOM" channel element.  Because the MastrII transmitters use phase modulation, the signal source is never modulated - and this is an advantage if you happen to need to set several overlapping transmitters to the same frequency and you need their modulation to "track" precisely.  By using a TCXO (or QRP Labs' OCXO) rather than the 1PPS to maintain frequency stability, the possibility of occasional "clicks" in the audio due to frequency correction steps is eliminated.

With any synthesizer the concern is that it will produce spurious signals and/or additional phase noise that will degrade the transmit/receive performance, but preliminary testing has shown that even when multiplied to 70cm, the resulting spectra is quite clean - probably good enough to be used on a repeater.  If one does do this, there are a few things that should probably be kept in mind:
  • Even though the ProgRock can output two frequencies at once, there is a small amount of crosstalk between them and when multiplied to the ultimate VHF/UHF frequency, these low-level spurs could end up on the output.  For this reason it would probably be a good idea to use two separate ProgRocks, located apart from each other, in a full-duplex radio.  For half-duplex, a single ProgRock could be used with the RX and TX frequencies being toggled by selecting a pre-programmed "bank".
  • It's worth noting that in many of these radios the receiver's local oscillator frequency is immediately multiplied by the next stage.  Testing was done on a receiver showing that the ProgRock could be set to the output frequency of this multiplier stage.  This cannot be done for the transmitter as the oscillator's output is immediately phase-modulated at its operating frequency.
  • It would probably be a good idea to place some high-Q band-pass filtering tuned to the synthesizer's output frequency to minimize any low-level spurs at frequencies removed from the main output frequency that might be present on the synthesizer.  Initial testing didn't show any obvious problems, but using such a filter would be a sensible precaution.
  • A very small amount of added "hiss" - probably from low-level phase modulation - was observed at UHF.  In normal use, this would probably have not been noticeable unless one did an "A/B" test.

[End]



Thursday, February 15, 2018

Managing HF signal dynamics on the RTL-SDR (and KiwiSDR) receivers

Note:  This article was inadvertently posted for a few days before it had been finished.  This is the "completed" version of that article.

The Kiwi SDR:

The Kiwi SDR is a stand-alone network-connected multi-user "SDR in a box" device with its own web interface that allows one to tune from (nearly) DC to at least 30 MHz.  Using a 14 bit A/D converter, it is more robust than the RTL-SDR mentioned below, but it can still be overloaded by strong AM broadcast stations - so what follows can be applied.

The "RTL-SDR":

The so-called "RTL-SDR" dongles are devices that have become quite popular owing to their low cost and their ability to cover a wide frequency range - typically from a few hundred kHz to nearly 1GHz, depending on the device.  There are two separate signal paths on these devices:
  • Via the Rafael Micro R820T chip.  This has an onboard synthesizer, mixer and band-pass filters and it converts signals in the (approximately) 24-1300 MHz range to a lower frequency.  This is the "normal" signal path used in these RTL-SDR dongles when used as originally designed, for reception in the VHF and UHF bands.
  • Right into the RTL2832 chip.  This (typically unused) input may be made available via another connector, or via a frequency-splitting filter network as is done on the "RTL-SDR Blog" dongles.  This input can work from a few 100 kHz to 10s of MHz, more or less.
It is the RTL2832 that has the A/D converter - which is just 8 bits - and that is the main limit of these devices when used in environments with both strong and weak signals.
Receiving HF with an RTL-SDR dongle:

If we want to receive the HF spectrum - which we'll call 500kHz-30 MHz (we'll include MF here...) we have to work around some issues.  The most obvious is that the R820 chip can typically tune down to something in the 20 MHz range (it can sometimes be coaxed to go even lower) but it certainly cannot be relied on to work well down in the 1-5 MHz range.

The "direct" input uses the 8-bit A/D converter directly - sampling at 28.8 MHz, followed by some DSP logic that allows signals in that stream to be internally converted to "baseband" samples - but the nature of this chip poses a few problems.  To receive the entire HF spectrum, we have two methods that may be used, but each of these has its own set of quirks, advantages and disadvantages:

The "direct" method:
  • It is very simple to implement in that it uses what would normally be an unused input.  Many dongles already include this modification - but if not, it may be easily added (instructions may be found on the web.)
  • The frequency stability can be better than with the "upconversion" method since the operating frequency is always lower - and there is only one oscillator that must be kept stable.
  • This has the disadvantage that the sample rate is about 28.8 MHz, meaning that signals above 14.4 MHz will be aliased.  For example, a signal at 21.25 MHz will also appear at (28.8 - 21.25 = ) 7.55 MHz.  This can usually be mitigated by the addition of band-pass filtering around the frequencies of interest if "fixed frequency" operation is expected.
  • The "direct" input is typically quite "deaf", often requiring a bit of amplification if it is to be used for microvolt-level signals.  In testing, it took about 7 microvolts (-90dBm) for a CW signal to become audible, about 15 microvolts (-84dBm) for an SSB signal to be readable and around 25 microvolts (-79 dBm) for an AM signal to become listenable.  In other words, the sensitivity of this unit in direct mode is 20-30dB worse than a modern receiver.  The RTL-SDR dongles used happen to have a built-in amplifier so their sensitivity is "reasonable".
The "up-converter" method: 
  • In this method there are two oscillators that can contribute to drift:  The (typically) 100 or 125  MHz oscillator used for the up-conversion and the clock reference in the receiver itself.  Because both of these oscillators are operating at a rather high frequency - and because there are two of them - drift can be exacerbated.
  • The "image" problem associated with the "direct" method is largely avoided.
  • The mixers used for frequency conversion can, in some cases, be overloaded by strong signals meaning that signals may be degraded before they get to the receiver.
No matter the configuration, there is one limitation intrinsic to these "RTL" devices:  The 8 bits of A/D conversion.

The "dynamic range" problem:

With 8 bits one can only attain an overall dynamic range of about 48dB (the actual figure is harder to calculate owing to oversampling, thermal and circuit noise, external noise, etc.)  The problem arises from the fact that a "weak" signal at, say, 160 meters may be on the order of 1 microvolt (-107dBm) but a nearby AM broadcast transmitter may be presenting a signal that could be 500 microvolts (-53dBm) or even much more!  In our example, we can see that this could pose a signal difference of 54dB - above the range that can be represented using an 8 bit converter.  In other words, assuming a 48dB dynamic range of our A/D converter, if we adjusted our levels so that a 1 microvolt signal (-107dBm) just barely registered on the A/D, any signal(s) that were 48dB above this (-59dBm) would "max out" our converter - again, ignoring oversampling, etc.

In other words, if we were to carefully adjust our signal level to our RTL-SDR (using an attenuator) such that we were just below the signal level that "maxed out" our A/D converter, our weak signal would be below the signal level represented by the lowest bit and it would (probably) be lost in the noise.  Conversely, if we tried to bring the weak signal up to the point where it was out of the quantization noise of the A/D converter, we'd be overloading our A/D, causing distortion and making it work very badly.
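These figures can be checked with a short calculation.  The sketch below uses the standard 6.02-dB-per-bit rule for an ideal converter and assumes a 50 ohm system for the microvolts-to-dBm conversion:

```python
import math

def adc_dynamic_range_db(bits):
    # Ideal SNR of an N-bit converter for a full-scale sine wave:
    # 6.02 * N + 1.76 dB (ignores oversampling gain and circuit noise).
    return 6.02 * bits + 1.76

def uv_to_dbm(microvolts, r_ohms=50.0):
    # Convert an RMS voltage in microvolts to dBm into r_ohms.
    v = microvolts * 1e-6
    return 10 * math.log10((v * v / r_ohms) / 1e-3)

eight_bit_dr = adc_dynamic_range_db(8)  # ~49.9 dB - close to the ~48 dB used above
spread = uv_to_dbm(500) - uv_to_dbm(1)  # ~54 dB between -53 dBm and -107 dBm
```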

The importance of band-pass filters:

Whether you are using the RTL-SDR dongle in the "direct" mode or with a converter, it would be a very good idea to limit the signals arriving at its input to only those of interest as much as possible.  This need contradicts the desire of many users of this device to cover from "DC to Daylight" - but if one attempts to put such a large frequency range into the antenna input, performance will suffer:  If the unit doesn't just overload, there will likely be issues with trying to receive weak signals in the presence of strong ones - and with just 8 bits of A/D conversion, if that difference is greater than 40dB, you may see significant degradation.

When used in the "normal" mode where the R820 chip is acting as frequency converter, the A/D converter in the RTL2832 chip sees a somewhat limited spectrum owing to filtering in the R820 chip - but despite this filtering there is still the problem of "weak versus strong" signals that are within this passband - not to mention the fact that it just doesn't take a lot of signal to overload the R820 outright.

If you are using "direct sampling" mode on HF, the problem is more severe:  The entire HF spectrum being applied at the RF input is being digitized by a measly 8 bits, which means that even if you are running the RTL2832 in the 2048ksps mode where you can "see" 2 MHz of spectrum, signals from the rest of the spectrum are still being digitized by that 8 bit converter.  If you consider the signal-handling capability of the 8 bit A/D to be a limit on the total RF power being applied, you may be "wasting" much of this A/D capability at frequencies that are of no interest to you at all!

Images in the "direct" mode:

A worst-case example:  20 meters

If you are running "direct" mode, the problem is worse still:  As noted above, the sampling rate of the A/D converter is 28.8 MHz, which means that signals above the Nyquist limit at half this frequency - 14.4 MHz - will "reappear" elsewhere.  For example, if you were listening at 14.300 MHz on the 20 meter band - which is 100 kHz below the 14.4 MHz Nyquist limit - you would also hear signals at 14.500 MHz, which is 100 kHz above the Nyquist limit.  Similarly, if you were tuned to an AM broadcast station at 1.0 MHz - 13.4 MHz below the Nyquist limit - you would hear signals that were at 27.8 MHz - 13.4 MHz above the Nyquist limit.
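The folding described above can be computed for any input frequency by reflecting it into the first Nyquist zone - a minimal sketch:

```python
def alias_freq(f_hz, fs_hz=28.8e6):
    # Fold an input frequency into the first Nyquist zone (0..fs/2)
    # to find where it appears after sampling at fs.
    f = f_hz % fs_hz
    return fs_hz - f if f > fs_hz / 2 else f

alias_freq(14.5e6)   # 14.3 MHz - 100 kHz above Nyquist folds to 100 kHz below
alias_freq(27.8e6)   # 1.0 MHz - lands atop the AM broadcast band
alias_freq(21.25e6)  # 7.55 MHz - 15 meters folds onto 40 meters
```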

In the second case, it's pretty easy to filter out the 27.8 MHz signals:  A simple low-pass filter will do.  In the first case - using the RTL-SDR dongle at 20 meters - things are quite different, as it is difficult to build a filter that will pass signals at 14.35 MHz with little attenuation but sufficiently block signals at 14.45 MHz - the "image" of 14.35 MHz.  What this means is that if you are several MHz away from this 14.4 MHz limit, you can probably get away with filtering in the "direct" mode - but the closer you get (say, 13-16 MHz) the more difficult it will be to remove the image.

The typical work-around for this is to convert the HF range to a higher frequency - typically by mixing it with a local 125 MHz oscillator, so instead of 20 meters being tuned in at 14.0-14.35 MHz in direct mode, it would be at 139.0-139.35 MHz.  This works pretty well, although oscillator stability - both in the RTL-SDR dongle's synthesizer and in that added 125 MHz oscillator - is much more of a concern as the frequency at this 139 MHz frequency could drift hundreds of Hz between the two oscillators whereas in direct mode - with only one oscillator - the absolute frequency is much lower (about 1/10th) along with the amount of drift.
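The "about 1/10th" figure follows from the fact that a given fractional (ppm) drift produces an absolute drift proportional to the operating frequency:

```python
def drift_hz(drift_ppm, f_hz):
    # Absolute frequency drift, in Hz, for a fractional drift in ppm.
    return drift_ppm * 1e-6 * f_hz

drift_hz(1, 139.0e6)  # ~139 Hz when up-converted to ~139 MHz
drift_hz(1, 14.0e6)   # ~14 Hz in direct mode - roughly a tenth as much
```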

Even with the "upconverting" technique, you are not excused from needing to have front-end filtering:  The limits of the 8 bits of A/D conversion and those of the circuitry still apply!

Another approach for 20 meter coverage:

Taking our 20 meter example, while we could upconvert - which would be the easiest approach if you happened to buy an RTL-SDR with an upconverter - another approach would be to convert the frequency band from 14.0-14.35 MHz down to, say, 4.0-4.35 MHz by using a mixer and a 10 MHz oscillator:  This lower frequency would imply higher stability and the filtering requirements would be greatly relaxed as compared to up-converting by 100 or 125 MHz.  An example of such a circuit may be seen below in Figure 1.
Figure 1:
20 meter converter for an RTL-SDR Dongle used in "direct" mode.  This mixes the incoming (filtered) 20 meter signals with 10 MHz to produce a 4.0-4.35 MHz output.  Originally, I'd planned to use an inexpensive 10 MHz TCXO, but it was unavailable at the time of construction.  In the upper-right corner is an representation of the circuit layout.
Click on the image for a larger version.
In the diagram above we see the signals coming in and being applied to a 2-pole bandpass filter for 20 meters.  I "borrowed" this design from QRP Labs' bandpass filter (link) - so if you don't wish to build your own, you may order a kit of parts for the band of your choice from them.

This filter is not nearly "sharp" enough to pass the top end of the 20 meter band (14.35 MHz) and sufficiently block its nearest image frequency (14.45 MHz) if we were to use "direct" mode with the 14.4 MHz Nyquist limit, so we down-convert the 20 meter band from 14.0-14.35 MHz to 4.0-4.35 MHz instead - well away from the image response.  The bandpass filter limits the number of signals that get converted and provides enough filtering to minimize an image response that could occur due to the converter itself - that of the sum frequency (e.g. 24.0-24.35 MHz).  Transistor Q1 boosts the HF signal somewhat to overcome the loss of the filter and of the mixer and low-pass filter following it.
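The mixing arithmetic can be sketched in a few lines:  An ideal mixer produces both the difference and sum of the RF and LO frequencies, which is why a low-pass filter is needed after the mixer:

```python
def mixer_products(rf_hz, lo_hz=10.0e6):
    # An ideal mixer produces the difference and sum of RF and LO.
    return abs(rf_hz - lo_hz), rf_hz + lo_hz

mixer_products(14.00e6)  # (4.0 MHz, 24.0 MHz) - bottom of 20 meters
mixer_products(14.35e6)  # (4.35 MHz, 24.35 MHz) - top of 20 meters
```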

The local oscillator that I used was a 10 MHz OCXO (Oven-Controlled Crystal Oscillator) that I had kicking around and its output was at TTL levels - a 5 volt square wave - so R5 was placed in series to knock this down to about 1 volt peak-peak at the "LO" input of U1, a diode-ring mixer.  I had originally intended to use an inexpensive 10 MHz TCXO (temperature-compensated crystal oscillator - Digi-Key PN:  1664-1262-1-ND) but this turned out to be unavailable at the time I constructed it:  Had this part been on-hand I would have included a 3.3 volt regulator for the TCXO and would have omitted R5.  In a pinch, a "crystal can" 10 MHz computer-grade oscillator could have been used, but these - unlike the TCXO that I would have used - are not particularly accurate in frequency or stable with temperature (e.g. they would be within "only" 100ppm or so - which could be about 1 kHz at 10 MHz.)

The output of the mixer is passed to a low-pass filter to remove the "other" image resulting from the mixing product (24.0-24.35 MHz) as well as "bleedthrough" of the 10 MHz local oscillator - the presence of which could degrade the performance of the RTL-SDR.

This entire device was constructed "Manhattan Style" on a piece of copper-clad circuit board using "Me Squares" (from QRP-ME - link) which was mounted in the lid of a die-cast aluminum enclosure.

Is "Direct mode" worth the trouble?

Why use the "direct" mode at all?  When the frequency is converted, drift can be a concern, and this is significantly reduced in "direct" mode - and it is quite simple to implement in hardware:  Many dongles - like the "RTL-SDR Blog" dongle - have a diplexing filter (and amplification on the "direct" branch) that makes it easy to use - provided that one is aware of the limitations!

In cases where the frequency is quite low - say, below 12 MHz - it is pretty easy to apply band-pass filters that will remove the image response above 14.4 MHz.  Above the Nyquist frequency we can actually use this effect to our advantage and directly tune in the 17 and 15 meter bands, using band-pass filters to limit the input to only those frequency ranges.  We again hit another Nyquist response at 28.8 MHz, but by then the low-pass filter built into the dongle is starting to take effect.
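The alias frequencies mentioned above are easy to compute:  With the RTL chip sampling at 28.8 MHz in "direct" mode, a signal between the Nyquist frequency (14.4 MHz) and the sample rate appears at the sample rate minus its true frequency.  A minimal sketch:

```python
# Where signals above the Nyquist frequency land when the RTL chip samples
# at 28.8 MHz in "direct" mode: a signal in the second Nyquist zone
# (fs/2 to fs) aliases down to fs minus its true frequency.

FS_MHZ = 28.8

def alias_mhz(f_mhz, fs_mhz=FS_MHZ):
    """First Nyquist zone alias of a signal between fs/2 and fs."""
    assert fs_mhz / 2 < f_mhz < fs_mhz
    return fs_mhz - f_mhz

# Mid-band spots in the 17 meter (18.068-18.168 MHz) and
# 15 meter (21.0-21.45 MHz) bands:
print(f"18.1 MHz appears at {alias_mhz(18.1):.1f} MHz")
print(f"21.2 MHz appears at {alias_mhz(21.2):.1f} MHz")
```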

Whether or not you use direct mode, you will want to apply a bandpass filter to the input to limit the signals to those of interest - that is, if you really want to minimize the possibility of overload.

In short:  Is the "direct" mode worth the trouble?  In those instances specified above, it can be, owing to its relative simplicity and improved frequency stability as compared to the "upconverter" method.

Extreme case:  Receiving both the AM broadcast band and 160 meters


To illustrate this problem, let us look at the signals present on an HF antenna located in the Salt Lake City area, below:
Figure 2:
Off-air signals of the AM broadcast band into a Carolina Windom designed for 80 meters and higher bands.  This spectrum analyzer plot's vertical axis is 10dB/division with the top bar being 10dBm.
The on-screen reading is for marker #4, which is a weak "local" station at -62dBm, but the strongest signal is marker #1, which is a bit stronger than 0dBm - meaning that this signal has more than a million times the power at the antenna input than the weaker one!
Click on the image for a larger version.
In Figure 2, above, we see the plethora of signals that are intercepted by a typical HF antenna in a metropolitan area.  The strongest signal (marker #1) is at 1160 kHz (KSL), a local 50kW "clear channel" station which is producing a power level of about +3dBm - nearly 1/3 volt of RF!  In contrast we can see another signal indicated by marker #4, another local, low-power station with a signal level that is about 60dB (a factor of 1,000,000!) weaker.

The "top" end of this plot (far right) includes the entirety of the 160 meter band and at this (rather noisy) site location we can see that the background noise is presenting us with a signal level of about -80dBm (about 22 microvolts) of noise.  In a truly "quiet" location, away from power lines and other urban QRN this noise floor would be 10-15dB lower during daylight hours, or in the area of -95dBm (about 4 microvolts.)
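The dBm-to-microvolt figures quoted above follow directly from the definition of dBm (power relative to 1 milliwatt) across a 50 ohm input - a quick check:

```python
import math

# Converting the power levels quoted above into RF voltages across a
# 50 ohm input, verifying the "about 22 microvolts" and "about 4
# microvolts" figures - and the "nearly 1/3 volt" local station.

def dbm_to_uv(dbm, z_ohms=50.0):
    """RMS microvolts corresponding to a power level in dBm across z_ohms."""
    watts = 10 ** (dbm / 10.0) / 1000.0
    return math.sqrt(watts * z_ohms) * 1e6

print(f"-80 dBm = {dbm_to_uv(-80):.1f} uV")    # about 22 uV
print(f"-95 dBm = {dbm_to_uv(-95):.1f} uV")    # about 4 uV
print(f"+3 dBm  = {dbm_to_uv(3)/1e6:.2f} V")   # about 1/3 volt
```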

From this plot we can see several problems that arise if we want to use an RTL-SDR:
  • The strongest signal (marker #1 at 1160 kHz) is about enough to cause the front-end static-protection diodes built into good-quality dongles to conduct and cause intermodulation distortion on their own.
  • The difference between the strongest local signal (marker #1) and a weaker one (marker #3) is nearly 40dB - which is almost all of the range of our A/D converter.  There are "nearby" stations located farther away that are even worse off - such as the station at marker #4, which is about 60dB down (one-millionth the power) from our strongest.
  • If we wanted to listen to local AM broadcast signals and be able to receive signals on 160 meters we would need to manage the fact that there is about 80 dB difference (a factor of 100 million) between the strongest signal and the noise floor!  What's worse is that this -80dBm (ish) noise floor at 160 meters isn't all that much higher than the noise floor of the RTL dongle itself.
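The standard rule of thumb for an ideal N-bit A/D converter - roughly 6.02 dB per bit plus 1.76 dB - shows why an 80 dB signal spread is hopeless for the RTL dongle's 8 bit converter, while even 16 bits is only just adequate:

```python
# Ideal dynamic range of an N-bit A/D converter, using the usual
# 6.02*N + 1.76 dB rule of thumb for a full-scale sine wave.

def adc_dynamic_range_db(bits):
    """Ideal SNR (in dB) of a full-scale sine wave in an N-bit converter."""
    return 6.02 * bits + 1.76

print(f"8 bits:  {adc_dynamic_range_db(8):.1f} dB")   # about 50 dB
print(f"16 bits: {adc_dynamic_range_db(16):.1f} dB")  # about 98 dB
```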
To be sure, the magnitude of these disparate signals can be a challenge even for a modern communications receiver, which, when connected to this same antenna, may well experience overload unless a significant amount of attenuation is added.  Coupled with the limited dynamic range of the RTL dongle, being able to receive both sets of signals poses a real challenge.

"Squashing" the signal levels

Clearly, if we want this system to work in both environments we need to reduce the levels of the strong broadcast band signals while boosting the weak signals on the 160 meter band and the way to do this is with some filtering.  If we design a "band stop" filter that will attenuate only the broadcast band signals we can prevent the dongle from being overloaded as badly.

Let's design a hypothetical band-stop filter that will reduce signals in the broadcast band by 30dB (1000-fold) but leave those outside the band alone:  Will this help?

Taking the strongest signal (1160 kHz) and reducing it by 30dB means that instead of +3dBm it will now be -27dBm - better, but this is still about 53dB above our 160 meter noise floor.  What about the other signals on the band?  That signal at marker #4 (1230 kHz) will be reduced from -36dBm to about -66dBm - quite weak, but still audible, albeit a bit noisy.  What about those other stations that are weaker still?  Those will get reduced as well, getting down near the -79dBm "minimum signal level" for the RTL dongle.  And even with this reduction, we still haven't done anything to bring up the 160 meter signals at all.
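The level budget in the paragraph above is plain dB arithmetic - a flat 30 dB band-stop shifts every broadcast-band signal down by the same amount while the out-of-band 160 meter noise floor is untouched:

```python
# The level arithmetic from the paragraph above: apply a flat 30 dB
# band-stop to the quoted broadcast band signals and see where they land.

BAND_STOP_DB = 30
NOISE_FLOOR_160M_DBM = -80  # the (unfiltered) 160 meter noise floor

signals_dbm = {
    "KSL 1160 kHz (strongest)": 3,
    "1230 kHz (marker #4)": -36,
}

for name, level in signals_dbm.items():
    print(f"{name}: {level} dBm -> {level - BAND_STOP_DB} dBm")

# The strongest station is still well above the 160 meter noise floor:
margin = (3 - BAND_STOP_DB) - NOISE_FLOOR_160M_DBM
print(f"{margin} dB above the 160 meter noise floor")  # 53 dB
```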

To make it work we will need to do more.  One way to do this is to apply selective (notch) filters to the strongest signals to reduce just those signals.  Looking at Figure 2, again, we can see that if we were to considerably reduce the strongest signals by 20-30dB, we'd "compress" the range between the strongest and weakest signals, allowing us to deal with them with our range-limited RTL dongle.  Figure 3, below, shows what the AM broadcast band looks like once we have done this:
Figure 3:  
The AM broadcast band - and the 160 meter band, with the YELLOW trace showing the signals before filtering and the CYAN trace after we have applied broadband attenuation to the AM broadcast band, selective attenuation to the strongest signals and some amplification overall.  As can be seen the range between the weakest and strongest signals is significantly reduced with the signal levels in the 160 meter band being increased enough to be above the dongle's noise floor.
Click on the image for a larger version.
In Figure 3 we can see the result of our work:
  • The strongest signals are reduced by about 20dB
  • The weaker signals are reduced a bit overall, but not as much as the strong ones owing to our attempts to selectively reduce only those that are strong.
  • The noise floor at 160 meters has been increased by 20dB.
  • The difference between the strongest broadcast band signals and the 160 meter noise floor is now around 40dB - within the (theoretical) usable range of the 8 bit converter in our RTL dongle.
In other words we have reduced the strongest signals by over 40dB, the weaker signals by 20dB and then brought everything back up by about 20dB.  Because our filter had little effect on the signals above and below the broadcast band, they came up by about 20dB as well.

How this was done:

After using the Elsie program (there's a "free", somewhat crippled student version that's adequate for this task) and perusing my "Filter Design Handbook" by A.B. Williams, I designed a "band stop" filter to cover the AM broadcast band - that is, provide at least 30dB of attenuation from about 540 to 1725 kHz.  Within this range, the attenuation can be much higher - greater than 60dB - but I was (theoretically) guaranteed that the minimum would be 30dB - and I ended up with the circuit, below:
Figure 4:
Diagram that includes a splitter to provide an unfiltered signal path along with the BCB reject filter and post-filter amplifiers.  Adjustable notch filters to attenuate strong, local stations are also included.
Click on the image for a larger version.

Circuit description:

Because one may want an "unadulterated" signal path for other purposes, a two-way signal splitter (L1/L2) is included:  The added 3dB loss is irrelevant with a decent HF antenna and modern receivers in terms of ultimate system sensitivity.  One branch of the filter is passed through a resistive attenuator that helps set the source impedance to the band-reject filter and again, this added loss isn't much of a concern on a decent HF antenna.

The AM BCB filter was designed for a nominal center frequency of 950 kHz and each series and parallel L/C circuit is tuned to that frequency.  The capacitor and inductor values were juggled to attain the closest standard values (or permit parallel combinations of standard-value capacitors) - this variation from the "ideal" having negligible effect on performance.  The capacitors used are all 5% NP0 (a.k.a. C0G) types for stability and the inductors are wound on T50-1 toroids using 30 AWG wire:  The values of the inductors were checked and adjusted with a known-accurate L/C meter after winding and it was determined that the calculated number of turns typically yielded inductance that was 5-10% high - a direction preferable to the other as adjustment simply required the removal of a few turns. (Yes, I wound toroids:  Lots and lots of toroids!)

Across the output of the band-stop filter is a set of simple series L/C notch filters.  I happened to have on-hand some inductors that were adjustable from about 8uH to 15uH and number crunching indicated that with just three capacitor values, the entire AM broadcast band could (more or less) be covered with a bit of overlap.  The first three notch elements (those at the right side of the string) used just a single capacitor as it was anticipated that there would be at least one station in the low, middle and upper portion of the AM broadcast band that would need to be reduced in strength.  The remaining four notches use two capacitors with computer-type push-on jumpers that allow the smaller capacitor to be selected for the upper portion of the broadcast band, the larger for the middle and the two together for the lower portion - or complete removal of the jumpers would disable the notch altogether.

The specific value of the inductor used for the notch filters is not important - it could be anything from around 4uH to 33uH, values calculated using an equation like:  L*C = 25330/((Freq in MHz)^2), where "L*C" is the product of the inductance times capacitance:  One simply divides this number by the amount of capacitance in pF (or inductance in uH) to get the "other" value.
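The relation above is just the series-resonance formula f = 1/(2π√(LC)) with the units (µH, pF, MHz) folded into the constant.  A small sketch, using a hypothetical 10 µH inductor and the strong 1160 kHz local station as the example notch frequency:

```python
import math

# The notch-frequency relation quoted above: L*C = 25330/f^2 with L in uH,
# C in pF and f in MHz.  This is f = 1/(2*pi*sqrt(L*C)) with the unit
# conversions folded into the constant.

def notch_cap_pf(freq_mhz, l_uh):
    """Capacitance (pF) that resonates the given inductance at freq_mhz."""
    return 25330.0 / (freq_mhz ** 2) / l_uh

def notch_freq_mhz(l_uh, c_pf):
    """Resonant frequency (MHz) of a series L/C pair, from first principles."""
    return 1.0 / (2 * math.pi * math.sqrt(l_uh * 1e-6 * c_pf * 1e-12)) / 1e6

# Example (hypothetical values): a 10 uH inductor notched at 1.160 MHz:
c = notch_cap_pf(1.16, 10.0)
print(f"{c:.0f} pF")                          # about 1880 pF
print(f"{notch_freq_mhz(10.0, c):.3f} MHz")   # round-trips to about 1.160
```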

When choosing an inductor it is very helpful if its tuning has a 2:1 adjustment range:  If this is the case, three selectable capacitor values ("A", "B", "A+B") as described on the diagram will allow coverage of the entire AM broadcast band.  Unfortunately, it is getting more difficult to find small, adjustable inductors, so one must keep an eye out for them at surplus outlets or be prepared to modify 455 kHz IF transformers/AM tuning inductors such as those available from Kits and Parts (link).

Ignored up to this point are R5 and R6 - the "bypass" adjustment that reinjects signals back into the filter's output.  It may seem strange to build a filter that removes signals in the AM broadcast band - only to put them back again - but it does make sense if you do want to be able to receive such signals, but be able to strictly control their levels at the receiver input terminals - more on this later.
Figure 5:
 The "as built" AM/BCB filter module depicted schematically in
Figure 4, above.  Along the left edge is the splitter and to the right
of it is the AM band-stop filter with the "bypass" control along the top.
In the lower-right is the 7-element tunable notch filter assembly.  The
remaining circuits are the amplifiers/splitters that are downstream from
the band-stop filter.
Click on the image for a larger version.

Following the filter is a broadband RF amplifier constructed using the venerable 2N5109 transistor - a rugged, low-distortion device that is readily available from many suppliers.  This amplifier provides a reasonably low noise figure for HF (in the 5dB range) along with about 12-14dB of gain, overcoming the losses of the front-end splitter, attenuator and filter with a bit of room to spare.  Theoretically, the system noise figure at this point will be on the order of 9 dB, but considering the nature of the HF spectrum and the fact that even at 28 MHz an "acceptable" system noise figure is around 15dB, we aren't really suffering due to the losses in this signal branch.

Following this amplifier is another 2-way splitter - and then another amplifier with splitter:  This second amplifier with splitter - which is probably a bit of overkill - is useful for some types of wideband SDR devices, such as the RTL-SDR dongles:  When these are coupled with filtering and an adjustable attenuator, a bit of "excess" gain is handy to be able to throw away when setting the RF levels into them.

A balancing act:

As mentioned above, there is a "bypass" adjustment that is used to allow the reinjection of AM broadcast signals back into the signal path, and with the components shown the usable adjustment range is from about -35 to -15 dB.  As it turns out, only a few of the signals that one is likely to intercept will be strong enough to cause problems while the rest are quite low.  At night the cumulative signal power of the myriad stations arriving by "skip" can cause a receiver like an RTL-SDR to become overloaded, but by having 10-20dB of added attenuation we can keep this total signal power down to a more reasonable level.

As can be seen on the YELLOW trace of Figure 3, above, there are a few stations that stick way above the general level of the other stations and the trick is to knock these down so that they are still "strong enough", but not so strong that the receive system is overloaded.  By using the notch filters, the strongest of the stations can be reduced by 20-30dB, putting their signal levels on par with the rest as is demonstrated by the CYAN trace of Figure 3.

Comment:
There is a side-effect of such a simple notch circuit - that being that the signal a few 10s of kHz above the notch frequency can be boosted by 10dB or so.  Unless that signal is quite strong, this is of little importance - but if it is a problem one can carefully park the notch between the two signals, attenuating the strong, lower signal adequately while moving the "peak" above the upper signal.

At this point the real balancing act occurs.  In the case of this filter assembly, the receiver itself was an RTL-SDR dongle that was intended to cover from about 460 kHz to 2500 kHz, which encompasses the entire AM broadcast band plus the 630 and 160 meter amateur bands.  This proved to be a bit of a challenge as the signals in the two amateur bands could be as low as a few microvolts while those in the AM broadcast band were noted to be in the hundreds of millivolts - a span of around 90dB, which would be a challenge even for a 16 bit A/D converter, not to mention the measly 8 bits of an RTL-SDR.

By knocking the strongest AM broadcast band signals down by 25dB with a notch and another 20dB with the band-stop filter we can get reasonably close to our goal.  Even with the overall AM broadcast band attenuated by 20dB, a reasonably sensitive and properly adjusted receive system can still receive some of the weaker stations during the daytime (which really aren't all that weak) and still yield a band full of signals at night!

The other part of the balancing act is to set the signal level at the input of our receiver - an RTL-SDR dongle:  We have "excess" signal coming from our filter/amplifier network which means that we can knock it down again with an attenuator - which for an RTL-SDR could simply be a 100-200 ohm potentiometer paralleled with a fixed resistor (to achieve something in the 50-75 ohm area - the actual impedance isn't really important) with the "wiper" side feeding into the receiver.  At this point one would increase the attenuation just to the point below where the RTL-SDR's A/D converter started to clip significantly - and this sort of adjustment would have to be done under various conditions.  In my case, it was checked during the daylight hours when a number of the local stations were running their full 50kW and again at night when these stations were running lower power - but there were many other stations of low-moderate strength being propagated via skip.

Also included in this balancing act is the adjustment of the "bypass" control:  With too little bypass, the weaker AM signals cannot be heard, but with too much, the receiver will overload.  It goes without saying that this set of adjustments is trial-and-error, but it is quite workable.

Real-world result:

Contained in Figure 6, below, is a trace captured from exactly the system described above as used on the Northern Utah WebSDR - link - where you may see this filter in action, with the visible signals varying wildly depending on whether it is local day or night.  This receive system is connected to a large antenna (which is designed to work only down to 3 MHz, but still intercepts signals below this range quite well) and it passes through the splitter/filter/amplifier system depicted in Figure 4.
Figure 6:
An actual, off-air "waterfall" capture (during the local nighttime) from an RTL-SDR system that covers from about 460 to 2500 kHz - a range that includes the AM broadcast band and the 630 and 160 meter amateur bands.  As can be seen,
the band is packed with signals - most of them via ionospheric "skip".  Outside the range of 530-1750 kHz the
background noise can be seen with several signals being visible on 160 meters.
Click on the image for a larger version.

If you look at the larger version of Figure 6 you can see that there are quite a few signals present and their relative strength is indicated by the "brightness" of the traces.  A closer look will reveal something more:  You may notice that between 550 and 1750 kHz, the deep background is rather "smooth", but outside this range it has a bit of granularity to it.  This is due to the attenuation of the BCB reject filter causing the background noise level within the 550-1700 kHz range to be below the sensitivity of the RTL-SDR dongle, but outside this range, the sensitivity is just high enough that the background noise on these frequencies is visible, allowing even relatively weak signals on the 160 meter band to be received.  Because of the limited dynamic range of the RTL-SDR, some of the weakest "local" AM stations aren't readily audible during the daytime - an inevitable consequence of the trade-off between strong-signal handling and weak-signal performance - but as can be seen in Figure 6 the spectrum is "chock full" of radio stations at night.

To achieve this result the RTL-SDR dongle is being run in "direct" mode and there is a 2.5 MHz low-pass filter and adjustable attenuator placed between it and the output of the BCB filter unit - the low-pass filter being used to remove the energy from those signals above the frequency range of interest.

Testing with the KiwiSDR:

The described filter was tested with a KiwiSDR as well and owing to its superior dynamic range and usable sensitivity, it could actually "hear" the ionospheric background noise through the filter - with the possible exception being frequencies immediately adjacent to the notches.

Without the filter, the KiwiSDR was on the "hairy edge" of overload during the daytime when several local AM stations are running 50 kW - and even if the A/D converter isn't being driven into clipping/overload, it could be argued that performance across the board could be subtly impacted.  The effects of the filter on the signal levels in the AM broadcast band and beyond can be seen in Figure 7, below:

Figure 7:
A screen capture of the waterfall and spectrum analyzer from a KiwiSDR showing signals/noise over the range of 400-2000 kHz.
As can be seen, the background noise is attenuated over the range of approximately 530-1750 kHz while the overall signal range of the strongest AM broadcast band signals is pretty well-controlled.
Click on the image for a larger version.

Figure 7 shows the effects of the filter during the daytime.  The background noise can be seen "through" the filter which means that there are no stations that are being made inaudible because of it - but at the same time the signal levels of the strongest stations are "tamed" significantly, being reduced by 20-30dB via the tunable notches.


Conclusion:

As can be seen, it is possible to deal with wide signal ranges when using a receiving device that has intrinsically poor dynamic range - but it depends on several things:
  • The signal levels need to be fairly predictable.  In this case, the signal levels on the AM broadcast band are very consistent - at least during the day when the majority of the local "powerhouse" daytime-only 50kW stations are on the air.
  • Outside the filter's range the signals are typically much weaker owing to the fact that they are being propagated via shortwave from relatively low-power transmitters.
  • The ability to manage the signal levels overall:  The "compressing" of the range of the signal strength of the "local" AM stations with band-reject and notch filters helps out a lot!
Admittedly, this method is quite complicated, but it is do-able - and in spite of its complexity, it allows very inexpensive hardware to be used to cover a fairly wide frequency range.  In fact, if the goal were to use an RTL-SDR dongle to cover just the AM broadcast band and there were any nearby AM stations, you'd have to apply at least some of the above techniques to make it work well - particularly if your goal was to allow the reception of both local and distant stations.

[End]

This page stolen from ka7oei.blogspot.com