Sunday, June 12, 2016

A 1:1 balun was the best choice for feeding the horizontal loop...

Years ago I bought a Heathkit SA-2060 (non "A" version) 2kW-rated antenna tuner at a local swap meet for a good price.  While not as heavy-duty as some of the venerable Collins or Viking tuners, it had a nice-sized roller inductor and a pair of large, wide-spaced variable capacitors, connected in a typical "T" (high-pass) configuration.
Figure 1:
The Heathkit SA-2060 tuner and (now) 1:1 balun feeding the 450 ohm
window line.  It no longer sits, on edge, in the window, as it had previously -
a much more convenient arrangement!

I have used this antenna tuner for years, taking it to Field Day, and other than having to tighten some screws and add a bit of lubrication when I got it, it has served me well, (seemingly) capably matching the 200-something foot circumference horizontal "lazy loop" antenna at my home QTH, which is fed with 450 ohm window line.

A month or so ago I was doing some rewiring after having my main electrical panel replaced in conjunction with the installation of a PV (solar) inverter system.  To do this work I had to "open up" some wall and ceiling spaces in the room containing my ham shack - which also meant that I had to disassemble and relocate much of what was in the shack to accommodate that which had to be moved out of the way.  While the "radio area" itself wasn't particularly disassembled for this task, I ended up piling a lot of stuff in that part of the room, essentially making it practically inaccessible.

One of the things that I did was to pull a brand-new 240 volt, 20 amp circuit for my Heathkit HL-2020 linear (really an SB-221 with a brown color scheme) and once I had the room more-or-less back together I reconfigured the amplifier for 240 volts (there were minor complications to this - perhaps another story) and I was ready to get back on the air.  From what I'd read, the combination of the higher mains voltage and the Peter Dahl transformer would provide a higher plate voltage under load along with higher output with slightly less drive - and in testing with the dummy load, this appeared to be true.

For years my tuner had been sitting on edge in the window with the 450 ohm window line coming through an insulated gap, past the vinyl frame, and connecting directly to it.  In the rearrangement I'd needed to take the tuner out of the window and in the process one of the wires of the window line popped off - something that I noticed as I was preparing to test the amplifier under load.  Happening to have the receiver on at the time, I reconnected the leg of the balanced line and...

There was no difference in the signal strength of the received signals.

Something was definitely wrong here!  I would have expected that with one leg of the balanced line disconnected I'd see at least an "S" unit or two of difference in signal strength, but there was no obvious difference at all.  Grabbing a screwdriver I shorted the balanced line and, again, could hear no difference, so I connected my antenna analyzer and noted that while there was a good match through the tuner, it did not change much whether one or both wires were connected or the "balanced" terminals were shorted together.


Now, I was curious.  It would appear that I'd been actually running the "loop" in a "T" type configuration with the downlead being (more or less) end-fed and the remainder of the antenna being a sort of distributed top hat.  I've never really had trouble working other stations, nor had I really experienced any "RF in the shack" issues as I had a pretty decent, short ground with heavy decoupling of the HF coax feeding to the tuner via a large chunk of ferrite scavenged from an old computer monitor.  In other words, I'd had no reason to question the operation of the balun itself or how it functioned.

The tuner's cover was immediately off and I was comparing the balun connection with that of an SA-2060A manual that I'd found online, but the result was inconclusive:  If the wires had been properly identified and taped at the time of initial construction it looked correct, but if not, the only way to verify this was to remove the balun and check it with an ohmmeter.

I regret that I didn't make a note of how the balun core was wired, but I do know that it wasn't at all right, so I made the necessary changes and then tested the balun on the bench with the antenna analyzer, the other end of the balun terminated with a 200 ohm resistor.  Unlike in its original configuration, according to the analyzer the balun was now working as it should, showing a reasonable match to 50 ohms and going to infinity when the resistor was shorted or removed.

Putting the balun back in the tuner and reassembling it, I had to move away from my previously-noted tuner settings to find a proper match (the fact that the settings weren't the same was actually a good sign!) and I then checked it out with 100 watts on 40 meters.  Everything appeared to be fine, although the tuner struck me as a bit "touchy" in its adjustments as compared to before.

Firing up the amplifier I soon discovered that I couldn't tune it up without its "Plate" variable capacitor arcing over noisily.  Grabbing a "Cantenna" dummy load I verified that the amplifier itself was fine, but something else was wrong.  Turning the power all of the way down and then slowly up again I discovered that at around 200 watts of RF output the reflected power went up, suddenly equaling the forward power.  Popping the cover off the tuner again confirmed my suspicion:  The "output" capacitor in the tuner was arcing over.

What this meant was that the tuner was being asked to match something really awkward.  Given the length of my loop I thought it unlikely that the feedpoint impedance would be really high; more likely it was "low-ish" - probably below 100 ohms.

The problem with this was that I now had a properly-working balun that provided a 4:1 upward impedance transformation toward the antenna.  This meant that if I had, for example, a 50 ohm feedpoint resistance on my loop, the tuner would be "seeing" around 12.5 ohms.  This is bad news, as making a transformation from 50 ohms down to 12.5 ohms implies a high-Q configuration within the tuner itself which, in turn, implies high voltage and high current - which further implies high losses!
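To put numbers on this, here is a small sketch assuming an ideal, lossless 4:1 balun and the textbook minimum-Q formula for a simple L-network.  (A real tuner's T-network typically operates at a Q well above this minimum, so the situation in practice is worse than these numbers suggest.)

```python
import math

def through_balun(z_feed, ratio=4.0):
    """Impedance the tuner sees through an ideal 4:1 step-up balun.

    The balun steps impedance *up* toward the antenna, so the
    tuner-side impedance is the feedpoint impedance divided by 4.
    """
    return z_feed / ratio

def min_network_q(r_high, r_low):
    """Minimum loaded Q of a simple L-network matching r_high to r_low."""
    return math.sqrt(r_high / r_low - 1)

for r_feed in (50, 35, 200, 800):
    r_tuner = through_balun(r_feed)
    q = (min_network_q(50, r_tuner) if r_tuner < 50
         else min_network_q(r_tuner, 50))
    print(f"feedpoint {r_feed:4.0f} ohm -> tuner sees {r_tuner:6.2f} ohm, "
          f"minimum matching Q ~ {q:.2f}")
```

Note how a 200 ohm feedpoint is the only "free lunch" here: through the 4:1 balun the tuner sees exactly 50 ohms and essentially doesn't have to work at all.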

Wielding my antenna analyzer I connected it directly to the balanced line:  Since the analyzer was hand-held and I was checking at "only" 40 meters I didn't think that it would really matter whether it was properly "balanced" or not.  The readings indicated a resistive component of around 10 ohms with a reactance of around 180 ohms inductive, but in tuning around to other amateur bands I couldn't make much sense out of the readings and was particularly suspicious when none of the resistance values seemed to go much above 50-80 ohms.

Suspecting that without the "bandpass filter" effects of the tuner I was the victim of an AM broadcast station a few miles away being detected by the reflectometer bridge in the analyzer and causing false readings, I dusted off my RX noise bridge and connected it to my FT-817 running on battery - a combination comparatively immune to stray, off-frequency RF and, like the analyzer configuration that I'd used, more-or-less "balanced" without any obvious ground reference.  With that configuration I got a more sensible resistance reading of around 35 ohms and a reactance measurement in the area of 130 ohms, inductive.  If I took the 35 ohm reading seriously, that would mean that the antenna tuner was trying to match something under 10 ohms through the balun!

Figure 2:
The exterior of the Balun Designs Model 1171t 1:1 "ATU" balun.
This model is equipped with studs on the top of its weatherproof case
for connection to a balanced feedline.
This brought to mind a discussion that I'd had with another amateur some time earlier.  He pointed out that it seemed silly that most baluns supplied with tuners offered only a 4-fold impedance up-conversion, since a typical antenna was more likely to present a lower impedance on most bands unless the configuration was particularly prone to high impedance, like a 1/2 wave end-fed wire (e.g. a "Zepp" antenna), a rhombic (don't most of us wish we could have one of those!) or a full-wave dipole.  What this meant was that in most cases the average amateur would encounter, the tuner was going to be matching something substantially lower than 50 ohms resistive through the balun - likely to cause problems such as loss - which is invariably accompanied by heating - and high internal voltages.  What had been a reasonable hypothetical scenario was manifesting itself as reality!

The clear solution in this case was to use a 1:1 balun instead.  I had the choice of reconfiguring the existing 4:1 balun - which was now working properly - perhaps by rewinding it with some PTFE-insulated 50 ohm coax - but I decided, instead, to get another balun and keep the internal balun intact in the event that it would be needed (it's nice to have options!) as it could be easily inserted into or removed from the circuit using a jumper on the rear panel.  Because I intended to use the tuner/balun combination with my amplifier, which was capable of the full 1500 watts output, I also knew that it needed to be both low loss and capable of handling very high voltages and currents.

In perusing the various web sites and forums I looked at the possibility of making my own balun, but ultimately decided on the "1:1 ATU Balun" by a company called "Balun Designs" - link.  The products of this company not only had good reviews, but their web site was also impressive, explaining in good, sensible detail why one balun is better than another for a particular application and also outlining situations where certain baluns that they sell should not be used - and why.

Figure 3:
Inside the 1:1 balun.  It is wound with parallel, highly-insulated
enameled copper wire in the "Guanella" fashion - that is, the second
"half" of the windings cross over to minimize coupling
between the input and output to provide best isolation and to
minimize the "one turn" effect inherent with "normal" toroid
winding techniques.
The balun that I chose (Model #1171t) is a current balun which effectively operates in series with the signal path (unlike a more common "voltage" balun which typically resembles an autotransformer as in the case of a typical 4:1 balun) and is essentially a common-mode choke that isolates one side of the balun from the other by virtue of the bulk inductance of the core over which a transmission line is wound.  By suppressing the "common mode" aspects of the RF signals with a significant amount of inductance, the windings on the toroids effectively "choke" anything other than differential (balanced) currents and thus isolate one section of the feedline from the other - except for the equal-and-opposite RF that is supposed to be there!
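How well such a choke "chokes" comes down to how much series impedance its bulk inductance presents to common-mode current - to a first approximation, just the inductive reactance 2*pi*f*L.  A quick sketch (the 50 uH figure here is an illustrative made-up value, not a measurement of this balun):

```python
import math

def choking_reactance(inductance_uh, freq_mhz):
    """Magnitude of the inductive reactance (2*pi*f*L, in ohms) that a
    choke balun's common-mode inductance presents to common-mode current."""
    return 2 * math.pi * (freq_mhz * 1e6) * (inductance_uh * 1e-6)

# Hypothetical 50 uH of common-mode inductance, checked across HF bands:
for f in (1.8, 3.5, 7.0, 14.0, 28.0):
    print(f"{f:5.1f} MHz: {choking_reactance(50, f):7.0f} ohms")
```

The reactance climbs with frequency, which is why a single choke sized for 160 meters will be even more effective on the higher bands - at least until stray capacitance and core losses (ignored in this simple model) take over.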

While many of these current baluns are wound with PTFE coaxial cable to preserve the 50 ohm impedance, this particular balun was wound with what amounts to parallel-conductor transmission line consisting of enameled copper wire covered with PTFE spaghetti tubing.  What this means is that the "parallel transmission line" winding inside this particular balun isn't particularly close to 50 ohms in its characteristic impedance (it's likely in the 70-100 ohm range), but that is of relatively little importance since it sits on the output of an antenna tuner:  As long as it is low loss and can withstand the expected voltages and currents, it will have minimal effect on the overall system efficiency.
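For the curious, the characteristic impedance of a two-wire line follows the standard formula Z0 = (120 / sqrt(eps)) * arccosh(D/d).  A sketch with entirely hypothetical dimensions (not measurements of this balun) shows how a tightly-spaced, sleeved wire pair plausibly lands in that 70-100 ohm range:

```python
import math

def parallel_wire_z0(spacing_mm, wire_dia_mm, eps_eff=1.0):
    """Characteristic impedance of a two-wire transmission line.

    Z0 = (120 / sqrt(eps_eff)) * arccosh(D / d), with D the
    center-to-center spacing and d the conductor diameter.
    eps_eff is the effective dielectric constant: 1.0 for bare
    wires in air, somewhat higher with PTFE sleeving in the gap.
    """
    return 120.0 / math.sqrt(eps_eff) * math.acosh(spacing_mm / wire_dia_mm)

# Hypothetical dimensions for a tightly-wound sleeved pair:
print(round(parallel_wire_z0(2.4, 1.6, eps_eff=1.6), 1))  # ~91 ohms
```

Pushing the conductors closer together or raising the effective dielectric constant lowers Z0, but getting all the way down to 50 ohms with round wires is difficult - hence the popularity of PTFE coax for baluns that must preserve a 50 ohm environment.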

When this balun arrived I connected it to the output of the SA-2060 tuner with a short (approx. 18") RG-8 style jumper and was easily able to tune the antenna with settings radically different than before - another good sign!  Finding that everything looked good on the analyzer, I hit it with 100 watts - then 1500 watts and had no problems at all.  I did notice that the window line became warm to the touch and the balun core and windings also became perceptibly warm, but by no means "hot" as the thermal image in Figure 4 depicts.

Figure 4:
A thermal image showing the heating of the balun and transmission
line.  As can be seen, the closer to the "output" of the balun, the
warmer the windings got, but after approx 20 seconds at 1100 watts of
RF on 40 meters their temperature stabilized.  The image
above depicts a maximum temperature inside the balun of less
than 120 degrees F (49C) with the feedline at approximately
105 F (41C) both being warm, but not "hot."   (The "warm" UHF
connector at the bottom appears thus as it is reflecting heat from
elsewhere in the shack.)  Considering that over 1kW
of RF is flowing, this amount of heating represents negligible loss - likely
less than that occurring within the tuner itself.
Of course, the amount of heating will depend both on the power level
and the amount of current flowing through the balun, and this depends
on the matching/impedance conditions encountered.
I also observed that if I disconnected just one side of the balanced line the signals on the band dropped by several S-units and the noise floor relative to the signals came up, while shorting the two caused signals to all but disappear - exactly what I would expect in a circuit that properly rejects common-mode signals.  Checking across the band at different times of day I also observed that the noise floor was 1-2 S-units lower than before, and that the previously S-5 noise from the switching supply on the nearby DSL modem was now barely detectable at the S-2 noise floor on 40 meters.  Such are the benefits of common-mode rejection:  It prevents electrical noise from the shack and the house's electrical system from being conducted onto the feedline/antenna and into the receiver.

As far as the warming of the window line goes, I did some calculation and determined that the feedline was likely seeing a VSWR somewhere in the range of 8:1 to 20:1 or so, which meant that it was losing as much as 0.5 dB along its 30 foot run - a loss of up to 11%, or roughly 120 watts at the test power of 1100 watts.  This is a small fraction (approximately 1/12th) of an "S" unit, but it would certainly explain the warmth!
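The arithmetic behind that estimate can be sketched as follows (the 0.5 dB figure is the estimated worst-case total line loss at this VSWR, not a measurement):

```python
def db_to_power_lost(db_loss, power_in_watts):
    """Watts dissipated in a line with the given one-way loss in dB."""
    fraction_lost = 1.0 - 10.0 ** (-db_loss / 10.0)
    return fraction_lost * power_in_watts

loss_db = 0.5                             # estimated line loss at high VSWR
print(db_to_power_lost(loss_db, 1100))    # ~120 W dissipated in the line
print(loss_db / 6.0)                      # fraction of an S-unit (6 dB each)
```

That much power spread along 30 feet of line is easily enough to make it warm to the touch, yet - at a twelfth of an S-unit - essentially invisible at the far end.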

A few days later I had the opportunity to check into a 40 meter round table with a group of friends across the western U.S.  Conditions were abysmal, but not only could I hear all of the stations pretty well, one of the stations with the weaker signals reported that he could hear me just fine, my signals being comparable to those of another station across town from me running about the same power - a reasonable indication that I wasn't burning up too much power in losses!

Final comment:
One of the first things that I do when I get gear at a swap meet - commercial, commercial kit or homebrew - is to check it out, making sure that all hardware is tight and electrical connections are solid, but I will admit that it never occurred to me to check to see that the balun was wired properly!

Tuesday, May 31, 2016

Audio breakup on the JBL 4315B speaker

Over the past several weeks I noticed a problem with one of my JBL 4315B 4-way speakers:  It sounded as though the woofer was "breaking up" - that is, there was some sort of distortion related to the travel of the cone in response to "heavier" bass content.

Figure 1:
The "business end" of the 4315B

Some background:

A year or so ago I had a similar problem with this same woofer:  The audio was breaking up very badly, so I put it on the workbench, connected it to the amplifier, and observed the same problem.  Having nothing to lose, I carefully removed the dust cover - a job made easier with the careful application of a heat gun set to "low" - and quickly saw that an aluminum stiffening ring (constructed with an open gap in its perimeter) just "outboard" of the voice coil (e.g. toward the "front" of the woofer) had broken loose from its original adhesive.

With this ring - or at least a portion of it - loose, the cone's excursions via the force of the voice coil allowed a bit of physical distortion, causing the voice coil to rotate slightly off its axis and hit the magnet assembly.

To fix this, I laid the woofer on its face on the edge of a workbench, clamping it down to prevent it from falling off (and on to me!) and working from below to prevent debris from entering the magnet gap I removed the aluminum stiffening ring and scraped away the old, brittle adhesive from the inside of the voice coil assembly.  After this I righted the speaker and stuffed pieces of paper towel into the voice coil opening to prevent additional debris and uncured epoxy from getting into the coil-magnet gap.  To reinstall the metal ring, which I'd also cleaned of old adhesive, I used metal-filled epoxy ("J.B. Weld") making sure that the ring was straight and the adhesive uniformly distributed.  To assure that the ring would bond properly I jammed a small screwdriver in the gap in the ring to widen and wedge it tightly into place while the epoxy cured.

Figure 2:
More or less what the resistors looked like
when the cover was removed.
Click on the image for a larger version.
After allowing 48 hours for the epoxy to fully cure I again inverted the speaker, removing the piece of paper towel that I'd put in place to prevent debris and epoxy from fouling the voice coil.  Subsequent testing indicated that the woofer was, again, working normally so I replaced the dust cover with a new one, identical in size to the old, that I'd ordered previously, placing the speaker back into service.

Unfortunately it did not occur to me to take pictures of this repair until I was well into it, hence the "thousand words".

Back to present day:

Everything was working fine until recently, and this recent bout of distortion initially led me to believe that the stiffening ring had broken loose, so I removed the woofer from the enclosure - and the distortion suddenly went away.  This immediately indicated to me that it was unlikely that the stiffening ring had come loose, as the speaker - now in free air rather than in the tuned cabinet - moved more freely than before.

Suspecting that gravity may have caused the cone assembly to sag over time - a problem not too uncommon with larger drivers with foam surrounds - I rotated it 180 degrees, placed the woofer back into the speaker and the distortion reappeared immediately.  While the distortion was occurring I pressed gently at different places around the edge of the cone to see if I could cause it to get worse but I didn't find anything obvious:  Doing this - and finding one spot on which when pressed causes sudden, severe distortion - can be a helpful diagnostic to determine if the cone is off-center, either due to gravity-related sagging or some sort of damage to or alignment of the "spider", voice coil assembly and/or the surround.

Figure 3: 
This picture shows where the resistors' leads broke,
right at the body of the resistors.
At the top of the picture are some of the plastic
capacitors used in the crossover which seemed
not to be affected by the vibration.
Click on the image for a larger version.
Puzzling over this problem for a moment, with the woofer laying on the floor next to the speaker, I happened to notice that one of the plastic grommets emerging from the metal crossover box inside the speaker had popped out of place so I snapped it back into the hole - an action that was coincident with a sudden bout of distortion in the woofer.  On a hunch I started whacking the grille cover of the crossover with the handle of a screwdriver and observed that every time I did so I could hear a sort of "clicking" in the woofer.

Inside the '4315, just behind the woofer, a grille covers a series of power resistors and plastic capacitors comprising the crossover (additional components like the inductors are located in a separate compartment behind) and upon removing the cover I noticed that at least one of the 10 watt resistors behind was at an odd angle.  When I touched this resistor, the problem was obvious:  One of its leads had broken away from its body due to fatigue and the vibration of the woofer and was causing it to make intermittent contact.  Clearly these rather heavy 10 watt power resistors had been vibrating for years, eventually causing the connecting wire to break.

Inspecting the other resistors, the leads of two more broke away at the slightest touch, meaning that I simply had to replace them all.

Figure 4: 
The new resistors in place.  The two 10
ohm resistors (brown, tubular) did not
have leads so heavy-gauge wire was
used to connect and support them.
Click on the image for a larger version.
Fortunately, these resistors were nothing special - just plain, ordinary 10 watt, 10% tolerance wirewound resistors of the values 2.4 ohms (1), 10 ohms (2), 20 ohms (2) and 50 ohms (1).  A quick rummage through my collection of power resistors indicated that I could have "kludged" a repair that minute, but I decided that I wanted to replace all similar resistors in both speakers, so I made a shopping list for the next time I happened to visit the local electronics place.

A few days later I found myself at the local electronics store, rummaging through their resistor collection.  I didn't find exactly what I wanted as they didn't have any of the rectangular "sand" 10 ohm, 10 watt units, but they had some of the same value in ceramic tubular form which would work with attached leads.  They also didn't seem to have any 47 or 50 ohm 10 watt units, but they had plenty of 100 ohm, 6 watt resistors, so I obtained twice as many of those so that I could parallel pairs of them to make 50 ohms at 12 watts.  For the 2.4 ohm resistor I had my choice between 2 ohm and 2.7 ohm and chose the former since the lead length was better for mounting:  Since this resistor is simply in series with the upper midrange ("HF") driver, and there is a "T" pad to adjust its level, I figured that the departure from the original 2.4 ohm 10% part to a 2 ohm 5% part was of no consequence.  (Various diagrams show slightly different factory values - none of which matched the factory-installed 2.4 ohm resistor, anyway!)
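The parallel-combination trick is simple enough to verify:  Identical resistors in parallel divide the resistance by the count while their power ratings add.  A quick sketch:

```python
def parallel(*resistors):
    """Equivalent resistance of resistors connected in parallel."""
    return 1.0 / sum(1.0 / r for r in resistors)

# Two 100 ohm, 6 watt resistors in parallel:
print(parallel(100, 100))   # 50.0 ohms, with 12 W combined dissipation
```

As a bonus, paralleling also halves the inductance of wirewound parts - not that it matters much at crossover frequencies.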

The repair job was pretty straightforward.  In order to get the slightly larger, ceramic-tubular 10 ohm resistors to fit I soldered some #12 AWG connecting leads to them (to provide rigid support and prevent future vibration-related breakage) and folded over the tabs; the two 100 ohm units were paralleled in a way that effectively increased their lead length (see picture).

Figure 5:
The cover for the crossover in place.
Some "blobs" of black RTV were added
previously to suppress a buzzing caused by
a mechanical resonance of this cover at
certain musical note frequencies.
Click on the image for a larger version.
A quick test revealed that the repair was successful, with all of the drivers on the speaker working as they should.

When I replaced the grille cover I was reminded of another "fix" that I'd done many years ago:  Some blobs of RTV (silicone) sealant along the edges of the grille cover, forming damping pads to eliminate an annoying buzzing noise that it would make when it resonated with particular bass notes.

With the repair of one speaker done I quickly moved on to its mate.  In it, I found no broken resistor leads, but one or two felt very "weak" indicating that metal fatigue had already been at work.

Finishing the resistor replacements I now had both speakers working properly, using "matched" components.

Final notes:

After a quick search I was able to find several different diagrams for the crossovers themselves, no doubt due to changes made during the production.  It was interesting to note that the precise values of some of the resistors in my speakers weren't exactly those on any of the versions of the diagrams which indicates that, perhaps, JBL didn't consider them to be extremely critical and used what was available from their suppliers.  (If one looks at the diagram one can tell that +/-10-15% isn't really likely to make much of a difference in the properties, anyway - at least not one that couldn't be compensated for by the adjustable pads...)

Monday, April 18, 2016

Combatting scintillation effects on optical voice links

One interesting aspect of the amateur radio hobby that is rarely discussed is the use of the "Above 275 GHz" bands.  While one might, at first, think that this might require some exotic "componentry" to use these wavelengths, to assume such would ignore the fact that this includes "optical" frequencies - which is to say, visible light.

Working with visible light has a tremendous advantage over other "frequencies" in that we have some built-in test equipment:  Our eyes.  While generally "uncalibrated" in terms of "frequency" and power, they are of great help in building, setting up and troubleshooting such equipment.

For years now lasers have been considered to be the primary source of optical transmitters - which makes sense for some of the following reasons:
  • Lasers are cool!
  • They may be easily modulated.
  • Lasers are cool!
  • "Out of the box" they produce nicely collimated beams.
  • Lasers are cool!
  • Low-power diode-based lasers are inexpensive and easy to use.
  • Lasers are cool!
While lasers are (almost) exclusively used for all types of fiber-optic based communications, one might ask oneself whether they are equally useful and effective when the medium is the atmosphere rather than a stable, glass conduit.

The answer is:  It depends.

If one is going very short distances - perhaps up to a few hundred meters - the atmosphere can be largely ignored unless something is causing severe attenuation of the signals (e.g. rain, snow or fog).  As the distances increase, however, even in the absence of such adverse conditions there are typically nonuniformities in the atmosphere - caused by thermal discontinuities, wind, atmospheric particulates, etc. - that cause additional disruption.

The fact that lasers produce (generally) coherent beams in terms of frequency and phase - gas lasers usually more so than most semiconductor types - actually works against efforts to make a viable long-distance communications link, because the atmosphere causes phase disruptions along the path, resulting in rapid changes in amplitude due to both constructive and destructive interference of the wavefront.

In the past decade or so, high-power LEDs have become available with significant optical flux.  Unlike lasers, LEDs do not produce a coherent wavefront and are generally less affected by such atmospheric phenomena, as the video below demonstrates:

Figure 1:
Visual example of laser versus LED "lightbeam"

Admittedly, the example depicted in Figure 1 is somewhat unfair:  The transmit aperture of the laser used for this test was very small - a cross-sectional area of, perhaps, 3-10 square millimeters - while the aperture of the LED optical transmitter was on the order of 500 square centimeters.  Even if both light sources were of equal quality and type (e.g. both laser or both LED), the one using the smaller aperture would be at a disadvantage due to the lack of "aperture averaging" - that is, it would be more subject to scintillation due to the small angular size of the beam, causing what is sometimes referred to as "local coherence", where even white light can, for brief, random intervals, take on the interference properties of coherent light:  It is this phenomenon that causes stars to twinkle - even briefly change color - while astronomical objects of larger apparent size, such as planets, usually do not twinkle.

Figure 2:
Adapter used for emission of laser light via the telescope.
Contained within is a laser diode modified to produce
a broad, fan pattern to illuminate the mirror of the telescope.

For an interesting article on the subject of scintillation, see "The Sizes of Stars" by Calvert - LINK.

Based on this, one might conclude that a larger emitting aperture will reduce the likelihood that the overall beam will be disrupted by atmospheric effects - and one would be correct.  The use of a large-area aperture tends to reduce the degree of "local coherence" described in the Calvert article (linked above) while also providing a degree of "aperture averaging".  As an aside, this effect is also useful for receiving, as can be empirically demonstrated by comparing the amount of star twinkle between the naked and aided eye:  Binoculars are usually large enough to observe this effect.

For a fairer comparison with more equal aperture sizes the above test was re-done using an 8 inch (approx. 20cm) reflector telescope that would be used to emit both laser and LED light.  To accomplish this I constructed two light emitters to be compatible with a standard 1-1/4 inch eyepiece mount - one using a 3-watt red LED and another device (depicted in Figure 2) using a laser diode module that was modified to produce a "fan" beam to illuminate the bulk of the mirror.

Both light sources were modulated using the same PWM optical modulator described in the article "A Pulse Width Modulator for High Power LEDs" - link - a device that has built-in tone generation capabilities.  Since the same PWM circuit was used for both emitters the modulation depth (nearly 100%) was guaranteed to be the same.

To "set up" this link, a full-duplex optical communications link was first established using Fresnel lens-based optical transceivers using LEDs and the optical receiver described in the article "A Highly Sensitive Optical Receiver Optimized for Speech Bandwidth" - link.  With the optical transmitters and receivers at both ends in alignment, the telescope was used as an optical telescope to train it on the far end, using the bright LED of the distant transmitter as a reference.  With the telescope approximately aligned, the LED emitter was then substituted for the eyepiece and approximately refocused to the effective optical plane of the LED.  Modulating the LED with a 1 kHz tone, this was used with an "audible signal level meter" that transmitted a tone back to me, the pitch of this tone being logarithmically proportional to the signal level permitting careful and precise adjustment of both focus and pointing.

For an article that describes, in detail, the pointing and setting-up of an optical audio link, refer to "Using Laser Pointers For Free-Space Optical Communications" - LINK.

Now substituting the laser diode module for the LED emitter, the same steps were repeated, the results indicating that the two produced "approximately" equal signal levels (e.g. optical flux at the "receive" end).  Already we could tell, by ear, that the audio conveyed by the laser sounded much "rougher", as the audio clip in Figure 3, below, depicts.

Figure 3:
Audio example of laser versus LED "lightbeam"
communications over a 15 mile (24km) free-
space optical path.
Music:  "Children" by Robert Miles, used in
accordance with U.S. Fair Use laws.

Figures 4 and 5, below, depict the rapid amplitude variations using a transmitted 4 kHz tone as an amplitude reference over a "Free Space Optical" path of approximately 15 miles (24km).  The horizontal axis is time and the vertical axis is linear amplitude.

Note the difference in horizontal time scales between the depictions, below:

Figure 4:
Scintillation of the laser-transmitted audio (4 kHz tone).
The time span of this particular graph is just over 250 milliseconds (1/4 second)
Click on the image for a larger version.

Figure 5:
Scintillation on the LED-transmitted audio (4 kHz tone).
In contrast to the image in Figure 4, the time span of this amplitude representation is nearly 10 times
greater - that is, approximately 2 seconds.  The rate and amplitude of the scintillation-caused
fading are dramatically reduced.
Click on the image for a larger version.

Laser scintillation:

As can be seen from Figure 4 there is significant scintillation that occurs at a very rapid rate.  The reference of this image is, like the others, based on a full-scale 16 bit sample.  Analysis of the original audio file reveals several things:
  • While the "primary" period of scintillation is approximately 10 milliseconds (100 Hz), there is evidence of harmonics of this rate to at least 2.5 milliseconds (400 Hz) - but the limited temporal resolution of the test tone makes it difficult to resolve these faster rates.
  • Other strong scintillatory periods evident in the audio sample occur at approximate subharmonics of the "primary" scintillatory rate, such as 75 and 150 milliseconds.
  • The rate-of-change of amplitude during the scintillation is quite rapid:  Amplitude changes of over 30 dB can occur in just 20 milliseconds.
  • The overall depth of scintillation was noted to be over 40dB, with frequent excursions to this lower amplitude.  It was noted that this depth measurement was noise-limited owing to the finite signal-to-noise ratio of the received signal.
LED scintillation:

Figure 5 shows a typical example of scintillation from the LED using the same size emitter aperture as the laser.  Analysis of the original audio file shows several things:
  • The 10 millisecond "primary" scintillatory period observed in the Laser signal is pretty much nonexistent while the 20 millisecond subharmonic is just noticeable.
  • 150 and 300 millisecond periods seem to be dominant, with strong evidence of other periods in the 500 and 1000 millisecond range.
  • The rate-of-change of amplitude is far slower:  Changes of more than 10 dB did not usually occur over a shorter period than about 60 milliseconds.
  • The overall depth of scintillation was noted to be about 25 dB peak, but was more typically in the 15-18dB area.
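The depth-of-scintillation figures in the two lists above can be estimated from a recording of the test tone by taking a short-window RMS envelope and comparing the strongest and weakest windows.  The sketch below is a minimal, assumed reconstruction of that kind of measurement (the 5 ms window and the simple max/min ratio are my choices, not necessarily the author's exact analysis), and it ignores the noise floor that limited the real measurement:

```python
import math

def scintillation_depth_db(samples, rate, win_ms=5.0):
    """Estimate scintillation depth of a recorded test tone:
    compute a short-window RMS envelope, then return the ratio of the
    strongest to the weakest window, in dB."""
    win = max(1, int(rate * win_ms / 1000.0))
    rms = []
    for i in range(0, len(samples) - win + 1, win):
        block = samples[i:i + win]
        rms.append(math.sqrt(sum(s * s for s in block) / win))
    rms = [r for r in rms if r > 0.0]          # skip silent windows
    return 20.0 * math.log10(max(rms) / min(rms))
```

Feeding it one second of a synthetic 4 kHz tone whose amplitude fades between 0.1 and 1.9 yields a depth of roughly 25 dB, as expected from 20·log10(1.9/0.1).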
One of the more interesting results of this experiment was how minimally the severe amplitude distortion experienced with the laser actually degraded the overall intelligibility of human speech.  While the tones and brief music clips were clearly badly distorted, it could be argued that with the segment including speech, the degree of that distortion was not as apparent.  Clearly the voice content was being badly "chopped up" by the severe amplitude fluctuations, but with the redundant nature of speech and the fact that the drop-outs were quite brief in comparison to the duration of speech elements (sounds, syllables) it is quite reasonable to be able to expect the brain to fill in the gaps and make sense of it all.

A "Scintillation Compensator":

Despite the redundant nature of the speech maintaining reasonable intelligibility, it became quite "fatiguing" to listen to audio distorted in this manner, so another device was wielded as part of an experiment:  The "Scintillation Compensator", the block diagram being depicted in Figure 6, below.

Figure 6:
Block diagram of the "Scintillation Compensator" system.
Click on the image for a larger version.
This system is essentially a "Keyed AGC" system using a low-level 4 kHz tone from the transmitter as an amplitude reference for a tracking gain cell at the receiver:  If the amplitude of the 4 kHz tone goes down, the gain of the audio is increased by the same amount and vice-versa.  The effect of this device is quite dramatic as the clip in Figure 7, below, demonstrates:
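A digital sketch of this "Keyed AGC" idea measures the pilot tone's amplitude block-by-block and applies the inverse gain, holding the pilot - and everything that faded along with it - at a constant level.  The actual unit was analog hardware; the block size, quadrature pilot detection, and reference level below are illustrative assumptions:

```python
import math

def compensate(samples, rate, pilot_hz=4000.0, block_ms=5.0, ref=0.1):
    """Block-based "Scintillation Compensator" sketch:  measure the
    amplitude of the pilot tone in each short block via quadrature
    correlation, then scale the block so the pilot sits at 'ref'."""
    blk = int(rate * block_ms / 1000.0)
    out = list(samples)
    for i in range(0, len(out) - blk + 1, blk):
        s = c = 0.0
        for k in range(blk):
            ph = 2.0 * math.pi * pilot_hz * k / rate
            s += out[i + k] * math.sin(ph)
            c += out[i + k] * math.cos(ph)
        amp = 2.0 * math.hypot(s, c) / blk   # pilot amplitude in this block
        if amp > 1e-6:                       # avoid dividing by (near) zero
            gain = ref / amp
            for k in range(blk):
                out[i + k] *= gain
    return out
```

Run against a synthetic fading pilot tone, the output envelope comes out essentially flat, which is exactly the "after" behavior heard in the clip.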

Figure 7:
Audio clip with a "Before" and "After" demonstration
of the "Scintillation Compensator" 
Music:  "Children" by Robert Miles, used
in accordance with U.S. Fair Use laws.

One of the more striking differences is that in the "before" portion, the background hum from city lights remained constant while in the "after" portion it varied tremendously, more clearly demonstrating the degree of the amplitude variation being experienced.  What is also interesting is that the latter portion of the clip is much "easier" (e.g. less fatiguing) to listen to:  Even though syllables are still lost in the noise - now obliterated by hum rather than by the silence heard in the first part of the clip - the fact that there is something present during those brief interruptions, even though it is hum, seems to appease the brain slightly and maintain "auditory continuity".

It should be pointed out that the "Scintillation Compensator" cannot possibly recover the portions of the signals that are too weak (e.g. lost in the thermal noise and/or interference from urban lighting) - it can only maintain the recovered signal at a constant amplitude.  In the first portion of the clip in Figure 7 it was the desired signal level that changed while in the second portion it was the background noise that changed.  In other words, the signal-to-noise ratio was the same in both examples given in Figure 7.

Practical uses for all of this stuff:

The most important point of this exercise was to demonstrate that a larger aperture reduces scintillation - although that point might be a bit obscured in the above discussion.  What was arguably more dramatic - and also important - was that the noncoherent light source seemed to be less susceptible to the vagaries of atmospheric disturbance.  This observation bears out similar testing done over the past several decades by many others, including Bell Labs and the works of Dr. Olga Korotkova.

For a brief bibliography and a more in-depth explanation of these effects visit the page "Modulated Light DX" - LINK - particularly the portion near the end of that page.

The reduction of scintillation has interesting implications when it comes to the ability to convey high-speed digital information across large distances using free-space optical means under typical atmospheric conditions.  Clearly, one of the more important requirements is that the signal level be maintained such that it is possible to recover information:  Too low, it will literally be "lost in the noise" and be unrecoverable.

As the demonstrations above indicate the "average" level may be adequate to maintain some degree of communications, but the rapid and brief decreases in absolute amplitude would punch "holes" in data being conveyed, regardless of the means of generating the light.  Combatting this would imply the liberal use of error-correction and recovery techniques such as Forward Error Correction (FEC) and interleaving of data over time - not to mention some interactive means by which "fills" for the re-sending of missing data could be requested.  The "'analog' analog" to these techniques is the aforementioned ability of the human brain to "fill in" and infer the missing bits of information.
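The interleaving mentioned above can be illustrated with a simple block interleaver - a generic sketch, not any particular protocol's scheme.  Symbols are written row-wise and read column-wise, so a burst of consecutive channel errors (a scintillation "hole") lands on symbols that were far apart in the original stream, where FEC can repair them individually:

```python
def interleave(symbols, rows, cols):
    """Block interleaver: write row-wise, read column-wise."""
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows, cols):
    """Inverse of interleave(): restores the original symbol order."""
    assert len(symbols) == rows * cols
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]
```

With a 4x6 block, a burst of four consecutive channel symbols corresponds to four widely separated positions in the original data, which is the whole point of the technique.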

While lasers are well-known to be "modulatable" at high rates, doing so with LEDs is a bit more problematic due to the much larger device sizes and commensurate increase in device capacitance.  To rapidly modulate an LED at an ever-higher frequency would also imply an increase of "dV/dt" (e.g. rate of voltage change over time) which, given the capacitance of a particular device, would also imply higher instantaneous currents within it, effectively reducing the average current that could be safely applied to it.  What this means is that specialized configurations would likely be required (e.g. drivers with fast rise-times at high current; structurally-small, high current/optical density LEDs, etc.) to permit direct modulation at very high (10's of megabits) data rates.
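The "dV/dt" arithmetic is simple enough to put numbers on.  The component values below are assumptions for the sake of example - a large-die LED's effective capacitance can be on the order of a nanofarad, but check the actual device in question:

```python
def led_slew_current(capacitance_f, swing_v, rise_s):
    """Peak charging current needed to slew the junction: I = C * dV/dt."""
    return capacitance_f * swing_v / rise_s

# Assumed example values:  1 nF effective capacitance, 2 V modulation
# swing, 10 ns edges (roughly what 10's of Mbit/s would demand):
i_peak = led_slew_current(1e-9, 2.0, 10e-9)   # 0.2 A just to charge the junction
```

That fifth of an amp does no useful work lighting the LED - it merely charges and discharges the junction on every edge, and it must be subtracted from the device's safe current budget.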

Using the aforementioned techniques has rather limited utility when the free-space optical links extend out to many 10's of miles/kilometers owing largely to the vagaries of the atmosphere and the practical limits of optical flux with respect to "link margin" (e.g. the need to use safe and sane amounts of optical power to achieve adequate signal to recover information - particularly as the rate of transmission is increased) but it may be useful for experimentation.

Additional information on (more or less) related topics:

Wednesday, March 2, 2016

The solar saga - part 1: Avoiding interference (Why I did not choose microinverters!)

Back in November I decided to get some solar (photovoltaic) "grid tie" power generation installed at my house. I decided that the best place to install this was on the roof of my detached garage because:
  • The roof area of the garage was comparable to that of the house.
  • Much less tree shading than on the house.
  • Because it was an unoccupied structure with no finished attic space, it was exempt from certain requirements (e.g. walkways around the panel areas, etc.) that would have reduced the available area for the installation of the panels.
  • It already had an existing, high-current circuit that was capable of being used for both source and sink of electrical current.
The only thing that I really had to do in the garage was to replace the 70's vintage Zinsco breaker panel with a more modern "load center" as a sub panel:  Doing so was a straightforward job that took only a few hours and cost less than $125 for all of the parts.

Unfortunately there was a significant snag to the "electrical" side of getting it connected to the utility grid via "Net Metering" (it's not "online" yet...) but that will have to wait for a later installment.

What kind of solar system?

In residential, grid-tie installations, two types of solar systems are most commonly found:
  • Series string.  This is where the panels are tied together and go to one, large power converter.  Many of these inverters have inputs for at least two, separate strings for redundancy, to accommodate different illumination profiles (e.g. "east versus west") and also to (statistically) increase efficiency.
  • Microinverter.  In this approach each, individual panel has its own, "private" power converter.
The series string approach is a bit older technology and its popularity is being overtaken by the microinverter approach, the latter being touted as able to extract more energy from the entire solar plant since the output from each, individual panel is optimized rather than relying on the "weakest link" of the bank of panels comprising the series string. With modern panels that are intrinsically well-matched, the "weakest link" issue is not as significant as it once was, but that's a topic for a later discussion.

I will say right now that I chose the series string approach for a very practical reason:

Radio Frequency Interference (RFI).

Interference from microinverters:

Let me spin time back to mid 2013 when I saw on an email group a plea from a local amateur (Ham) radio operator for help to analyze a problem that he was having.

He'd had a sizable solar plant installed (approx. 3 dozen panels), each panel with an Enphase M190 microinverter, and suddenly found that he faced a tremendously increased noise floor on both HF and VHF.  By the time that he and I "connected" he had come to some arrangement with the manufacturer and/or installer to install "ferrite beads" (at their expense) on the microinverters' leads in an attempt to mitigate the problem.

He asked me to come over to verify the nature of the interference and its approximate magnitude, prior to the installation of the ferrite devices, and I arranged to do so.

When I arrived, he demonstrated the problem:  When receiving on his HF dipole, which spanned over a portion of his roof and solar panel farm, he experienced 4-6 S-units (20-40dB) of additional noise from the microinverters, depending on frequency.  The noise was that of typical AC mains-coupled switching supplies, being grouped in spectral "bunches" every 10's or hundreds of kHz or so (I don't recall the spacing) on the lower bands (75, 40 meters) and by the time one got to 15 meters it was pretty much just an even "smear" of noise across the spectrum.  By switching to AM, it was apparent that the noise itself had an amplitude-modulated component related to the mains frequency that was not readily apparent when listening on SSB.

The problem was also apparent on 2 meters where low-level spurious signals emanated by these devices were intercepted by his rooftop antenna and would open the squelch and/or mask weaker signals - including those of some of the more distant repeaters.

Analyzing the problem:

For this visit I'd brought along my FT-817 portable, all-band, all-mode transceiver with a small 2 meter Yagi antenna, a small shielded "H" field loop for localizing signal sources, and a specialized 2-meter DF antenna/receiver to be used with the Yagi.  Switching to 2 meter SSB mode using the rubber duck antenna on the FT-817, I could hear a myriad of low-level carriers as I tuned up and down the band.

Stepping out onto the roof we approached the solar system and I wielded my other gear:  The DF receiver/antenna combination showed the source of the signals - on any random 2 meter frequency - to be that of the solar array. Switching to the combination of the FT-817 and the small, shielded H-loop I was able to localize the conductors from which the energy was being radiated:  Not only did it seem to be coming from the AC power mains cables connecting everything together, but also the frames and the front surfaces of the solar panels themselves, indicating likely egress on both the AC and DC sides of the microinverters.

Part 15 compliance?

At this point one might ask how such a product appeared on the market if it caused interference:  Doesn't FCC Part 15 "protect" against that?


First of all, it is worth re-reading a portion of the text from Part 15 that I'm sure you have noted somewhere on a device or in a manual that you have lying around.  Quoting from FCC Part 15, section 105, subpart (b):

This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation.
This equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications.
However, there is no guarantee that interference will not occur in a particular installation.
(The emphasis is mine.)

The above speaks for itself!

It should be observed that while Part 15 limits the amount of incidental RF energy that can be emitted/radiated/conducted from electronic devices to a certain level, that level is NOT zero!  The fact is that a device may be perfectly legal in its amount of emission, but still be detectable, under the right circumstances, from a significant distance.  In this particular situation, there were at least three things going against our solar system owner:
  • He was in very close proximity to the microinverters and solar panels.  As noted previously, his antennas for HF and VHF were either on the roof, or crossed part of it.
  • HF operation, by its nature, involves rather weak, narrowband signals.  This makes it even more likely that low-level signals emanating from such devices would be noticeable and obvious and that broadband noise could be quite apparent.
  • His solar system comprised approximately three dozen panels.  What this means is that each of those microinverters is, by itself, radiating its own, set amount of interference.  If you take the number as 36, this means that as a system, the total amount of energy being radiated by all of those microinverters put together will be increased by nearly 16 dB - that's nearly 3 S-units!  Practically speaking those inverters nearest the antenna(s) will cause the most problem due to proximity, but you can certainly see that many devices in one location are likely to exacerbate the issue overall.
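The "nearly 16 dB" figure above comes straight from summing the power of equal, incoherent sources - a quick sanity check:

```python
import math

def combined_increase_db(n_sources):
    """Increase in total radiated power from n equal, incoherent
    sources, relative to one source alone: 10*log10(n)."""
    return 10.0 * math.log10(n_sources)

increase = combined_increase_db(36)   # ~15.6 dB for three dozen inverters
s_units = increase / 6.0              # ~2.6 S-units, at 6 dB per S-unit
```

As the text notes, the nearest inverters dominate in practice, but the aggregate figure shows why a large array is so much worse than a single noisy device.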
I had no way to accurately measure the emitted signals from the microinverters to determine if they were compliant with part 15 or not, but I'm willing to believe that a widely-sold product such as an Enphase M190 microinverter had been tested and found to be in compliance by reputable people.

Figure 1:
A look inside the Enphase M250, a model newer than the M190s
described as causing interference problems.  At the moment the jury is still
out as to whether the M250 (or M215) is much "cleaner" than the older M190 in terms
of radiated energy.  While some decoupling - possibly filtering - is visible
on the AC mains connection at the bottom, no inline chokes are
apparent from the top-of-board view on the DC (solar panel) side - only
some capacitors that appear to bypass it to ground (e.g. the case.)
This M250 was given to me by an installer after it had failed in the field.
Click on the image for a larger version.
We discussed what it would take to make these microinverters completely quiet and I knew a way:  Completely enclosing each microinverter in a metal box with L/C Pi filters on both the DC input and AC output leads.  Proper L/C filtering of the input and output, along with appropriate capacitive bypassing, ensures not only that RF energy does not escape from the unit, but also that there is little/no potential for RF currents generated within to appear differentially between the DC input and AC output leads.

I have discussed similar interference-elimination measures related to switching power supplies in my August 18, 2014 post, "Completely Containing Switching Power Supply RFI" - link.  This method can be completely effective in reducing the interference level of such devices to undetectable levels.

It would have been nice if a weathertight box had been available into which each microinverter could be mounted, along with a separate set of filtered input and output power connections.  The design of such a device would be slightly complicated by the fact that the Enphase units communicate via their powerline connections, but it was likely that this could be accommodated in the filter design.

I was quite sure that such an aftermarket product did not exist at the time - and even if it did, it would be prohibitively expensive, particularly when multiplied several dozen times!

My host asked me if I thought that the installation of ferrites on the input and output leads would help:  I thought that it might help a little bit on VHF and UHF, but that I couldn't see it having any useful effect on HF - but I hoped that I was wrong!

As I left this ham's house I had my FT-817 connected to my vehicle's antenna, listening in SSB mode on 2 meters and I could hear the low-level signals from his solar array from a distance of nearly two blocks, line-of-sight.

Post ferrite installation:

A few weeks later I got an email from this same ham stating that the ferrites had been installed on the microinverters.  To do this, it was necessary to (practically!) un-install and re-install the entire system as very few could be reached from the roof, requiring a lift to access.

Did it help?

Not that he could tell.

Is his situation unique?

Apparently not.

There are many anecdotes of amateur radio operators facing terrible interference issues after they - or their neighbors - install a microinverter-type solar system.  One such instance is documented in the following thread on Reddit:
Neighbors just got solar - They gifted me with S-9 RFI  - link

Another case was documented several years ago on the "Ham Nation" Web TV show (Episode #65) where the only way to reduce the problem to a tolerable level was to relocate the antenna some distance away from the house-mounted microinverter system, at the far end of the lot.

A link to the webcast of Ham Nation episode #65 may be found here:  Link  (The relevant portion starts at 16:40.)

Since the original posting of this article a write-up appeared in the April, 2016 QST magazine that details another ham's battles with RFI from a solar electric system.  While this system was not microinverter-based, it used devices called "optimizers" that work on similar principles to the microinverters in that high-frequency switching supplies are used to maximize the amount of power available from the array.

Why the ferrites didn't/won't work:

There is a misconception amongst some that loading wires with ferrites will stop the ingress/egress of RF signals.

This does not happen.

By putting a piece of ferrite on a conductor one increases the effective impedance at a given frequency, but that impedance is not infinite, and the effectiveness of the ferrite depends on several things:
  • The characteristic impedance (real, complex) of the conductor on which it is placed at specific frequencies (it varies all over the map!) 
  • The size of the ferrite (length, diameter, etc.)
  • The material type (permeability)
  • The frequency
  • How many "turns" of the conductor may be passed through the ferrite.
For retrofits, the answer to the last one is generally easy:  One turn, as that is all that may be accommodated with a typical "split core" ferrite that is installed simply by placing it over a wire.  As was certainly the case with the Enphase units, the connecting wires were simply too short to allow additional turns of wire even if the ferrite device were sized to allow it.

In general, ferrites have greater efficacy with increasing frequency - not surprising, since their mechanism is generally that of adding a bit of inductive reactance to the conductor on which they are placed - but this also explains why "snap on" or split ferrites are a futile approach when one attempts to solve HF-related noise issues:
They simply cannot provide enough reactance to attenuate by the 10-30dB needed to solve most severe interference situations at HF!
Figure 2:
The outside of the same Enphase M250 as shown in Figure 1,
above, showing connecting cables:  Not much room
to place large ferrites on these - much less multiple turns!
(The cables on the M190 are of similar length.)
Click on the image for a larger version.

The reason for this is immediately apparent if one studies the specifications of a typical snap-on ferrite such as the Amidon 2x31-4181p2 link.  Here are some typical specifications for this rather large piece of ferrite:
  • I.D:  0.514" (13mm);  O.D.:  1.22" (31mm) ;  Length:  1.55" (39mm)
  • Material type:  31 (1-300 MHz, typical)
  • Reactance of device, typical:  25 ohms at 1 MHz, 100 ohms at 10 MHz, 156 ohms at 25 MHz, 260 ohms at 100 and 250 MHz
As you can see, the impedance is stated as 100 ohms at 10 MHz.  Being generous, let us apply that figure to the 40 meter band, where we can see that if this were applied to a line with a 50 ohm characteristic impedance, we might (theoretically, simplistically) expect somewhere in the area of 8-16dB of additional attenuation caused by the loss induced by this device - but that is only 1-3 "S" units, and that represents only a "good case" scenario.  In the aforementioned situation it would have taken several more "S" units of reduction to bring the noise to the point where it was not highly disruptive.
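One common, simplistic lumped model (my assumption for illustration, not the author's exact calculation) treats the ferrite as a series impedance inserted into a line whose source and load are both at some characteristic impedance.  With the "material 31" reactance figures quoted above it lands in the same "single S-units" territory:

```python
import math

def series_impedance_loss_db(z_ferrite, z_line=50.0):
    """Simplistic lumped estimate of the attenuation from a series
    impedance z_ferrite in a line whose source and load are both
    z_line:  20*log10(1 + z_ferrite/(2*z_line)).  Real feedline and
    mains-lead impedances vary wildly with frequency, so treat this
    as an order-of-magnitude sketch only."""
    return 20.0 * math.log10(1.0 + z_ferrite / (2.0 * z_line))

# Using the "material 31" reactance figures quoted above:
loss = {mhz: round(series_impedance_loss_db(z), 1)
        for mhz, z in [(1, 25), (10, 100), (25, 156), (100, 260)]}
# Even at VHF, a single bead buys only on the order of 10 dB under
# this (generous, constant-50-ohm) model.
```

Whatever the exact assumed line impedance, the conclusion is the same as in the text:  a single snap-on bead cannot deliver the 20-30 dB of reduction a severe HF interference case demands.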

What is more likely to happen is that the interconnecting wires will have wildly varying impedances at different frequencies - some higher, some lower - and this will have a dramatic effect on the efficacy of this reduction.  In the case of the ham that I had visited, I would not have been surprised if a plot of the noise versus frequency had shown its "shape" to be dramatically altered by the addition of the ferrite devices and, overall, the amount of radiated energy (interference) would have been measurably reduced.  The problem was that the level was so high to begin with that knocking it down by, say, 90% (10dB, or just under 2 S-units) still represented a terrible situation!

The Amidon device noted above is a rather large device and at least three of them would be required for each microinverter (one for each DC lead, one for the AC connection) and the expense of these devices - not to mention the installation (36 microinverters would require 108 ferrite devices!) - could really add up!

It should go without saying that a smaller ferrite - although less expensive - will have even less effect than a larger one!

Ferrite devices such as those described are often more useful for preventing RF from getting into devices:  Increasing the impedance on the connecting leads and wires may not only improve the efficacy of already-existing RFI protection devices such as bypass capacitors, but it can also break up loops through which high RF currents induced by a local transmitter might be passing "through" a device.  In these cases the moderate effect of their added impedance may well be enough to adequately mitigate RF ingress issues.

Remember:  With RF ingress it is often the case that knocking down the RF energy by 6-12 dB will be enough to mitigate the issue whereas the amount of "hash" emitted by the microinverters would likely need to be reduced by more than 20 dB to make it undetectable.

"Grounding" won't help either:

Reading some of the correspondence in the Reddit posting (above) there is mention of "grounding" to eliminate/reduce RFI from these units:  To assume that "grounding" would likely solve or mitigate this problem would be to assume incorrectly!

The problem, again, is that RF energy appears to be conducted from the input and output (DC and AC, respectively) coupling wires which, themselves, can act as antennae:  "Grounding" the case - which would also "ground" the safety ground on the AC output - is not really going to help.

If the unit is installed according to code, there should already be a "ground" attached at the panels, anyway - but this wire connection, which is likely to be 10's of feet (several meters) between the roof and the Earth or grounding point is going to look like a "ground" only at DC and low frequencies - such as those found on the AC mains!

Any wire that is several feet long - grounded or not - is going to act as an antenna.

What this means is that it is entirely possible that at least some of the RF interference being radiated by the inverter is going to be conducted along the grounded metal structures (such as the solar panels and the frames) and wires in addition to the AC mains wiring.

Again, the proper way to contain such RF energy within the confines of the circuitry was discussed above:  Proper L/C filtering of the input and output, along with appropriate capacitive bypassing, ensures not only that RF energy does not escape from the unit, but also that there is little/no potential for RF currents generated within to appear differentially between the DC input and AC output leads.

The upshot:

If you are getting interference from a microinverter system - either your own or your neighbor's - is there anything you can do?

Since the installation of ferrites will have minimal effect at HF, the answer would seem to be "No, not really" - aside from converting to a series-string system instead.

In the case of the ham operator that I visited, he mitigated the situation slightly by moving his HF antenna as far away from his house as he could - which wasn't very far, considering his limited space on a city lot.  Nighttime was the only time during which he could completely quell the interference, by turning off the breaker feeding the solar array, but during the day there was nothing he could do:  If either solar illumination or AC mains power was available to the microinverters they seemingly caused the same amount of interference, whether they were under load or not!

Are newer microinverters better/quieter?

It has been reported that the Enphase M190 microinverter is obsolete, having been replaced with newer models that are more reliable and more "RF quiet".  On this second point, the jury seems to be out:  Anecdotally, there seem to be about as many reports of the newer models (from various manufacturers) causing interference as not, so the picture is rather confused.

I know of at least two amateur operators with newer-model Enphase inverters (M215, M250), but both report extenuating circumstances (e.g. their microinverter PV system is located some distance from their antennas and/or they already had notable interference from other sources before installing the solar power system) such that they cannot say for certain whether or not there is a problem caused by their system.  I hope to personally visit at least one of those installations in the coming months.

Series String inverters and interference:

I decided to use a series-string inverter system - somewhat less efficient overall, but less expensive up front.  From direct observation and reports by people that I know and trust, I knew that units made by Sunnyboy and Fronius could be reasonably expected to cause little or no interference on their own.  Additionally, were an interference issue to arise, there would be a single point at which to filter (e.g. one large box with a relatively small number of input and output leads), so I was quite confident that it would be possible to add additional filtering if necessary.

To be sure, one might (theoretically) lose up to 10-20% or so peak efficiency with a series-string system as opposed to a Microinverter that optimizes for each, individual panel, but considering the comparatively low cost of panels these days and the lower "up front" cost for a series-string inverter system, one can usually afford to "up size" the system slightly to compensate.

(Comment:  As noted previously, series string "optimizers" have been observed to cause significant RFI, since their basic principle of operation lends them a tendency to produce unwanted "hash" unless well-designed.)

Maintaining the various systems:

Anecdotally, from both owners and maintainers of microinverter-based systems, it is not uncommon to experience the failure of several of the microinverters after only a few years, the rate-of-failure (apparently) following somewhat of a "bathtub" curve:  Several die early on, there is often a period of relative stability, and then they start to fail in greater numbers after several more years.

While these devices (microinverters) seem to have a good warranty, the issue comes when replacing a microinverter that is in the "middle of everything" on the roof.  On a roof with a moderate-to-steep pitch it may be necessary to use equipment such as a lift to be able to safely access the failed inverter - and it may be necessary to "de-install" several of the surrounding panels to gain access.  In other words, it will likely cost many times the price of the microinverter itself ($125-$300) in equipment rental, time and labor just to replace it.  For this reason it seems that many people simply allow several of them to fail before "calling out the troops":  Having several panels (effectively) offline at a time is something that detracts from the proclaimed efficiency benefit of the microinverter scheme!

The large, series-string inverters appear to be extremely reliable, having excellent track records (at least for Fronius and Sunnyboy - the two brands with which I have any familiarity).  The obvious down side is that a failure of the inverter would likely take a large portion - or all - of the production off line, but the replacement of the device is comparatively easy and would likely cost not much more than a couple times the total (parts plus equipment rental plus labor) of replacing a small handful of microinverters!

What about failures of solar panels?  Modern panels contain diodes that "wire around" sections that have failed or are shaded, so unless a catastrophic failure occurs that completely removes it from the circuit, one will lose, at most, the capacity of the entire panel:  This is true with both microinverter and series-string configurations.

Fortunately solar panels have been around for decades and have proven to be quite reliable and durable.  If a failure in a solar electric system is going to occur, the solar panel itself is less likely to be the problem unless there is actual, physical damage.

(Note:  There may be warranty coverages or service plans that mitigate the costs related to such maintenance, but since they vary wildly with installers and manufacturers, they are not covered here.)

Final comments:

Each system has its advantages and trade-offs:  In my case a primary concern was the avoidance of interference.  Since the advent of digital TV - and because fewer people listen to the radio or even have off-air TV these days - most neighbors likely wouldn't notice (or care!) about the interference issues that appear to be common with the microinverter approach.

One can always hope that newer microinverters will become increasingly quiet, but for now that seems not to be the case - if not in reality, certainly in perception.

In the next installment I'll talk a bit more about the installation of my system - trials and tribulations...

Saturday, February 6, 2016

Using "Ultracapacitors" as a power conditioner and ballast for transient high-power loads (or "How to run your HF rig from D-cells" - sort of...)

The problem of "high impedance" power sources:

The title serves to illustrate a problem frequently encountered when trying to power a device that operates with a high peak current:  Your energy storage - or your power source - may have plenty of energy capacity, but not enough current capability!

One such example of a power source that has plenty of capacity, but rather limited power capability, is the "cigarette lighter" outlet in a typical vehicle:  As long as the engine is running, you can pull power - but not too much:  More than 10-15 amps is likely to blow a fuse, and even if you were to replace the original fuse with, say, a 20-30 amp fuse (not smart!) the rather light gauge wiring would likely result in a voltage drop that would render a typical 100 watt HF rig unusable on voice peaks.

For another, more extreme example let us consider a set of alkaline "D" cells, referring first to some online data:
  • The "Energizer" D-cell data sheet - link
  • "Duracell" D-cell data sheet - link.  Does not include direct amp-hour rating, but such may be inferred from the graphs presented.
  • "'D' Battery" Wikipedia page - link.
Please note:  Manufacturer's links sometimes change - you may have to resort to an internet search if one or more of the above links do not work.

One thing that jumps out is that a "D" cell has somewhere between 5 and 30 amp-hours of capacity, but if you study the graphs, you'll also note that this apparent capacity drops like a rock with increasing current.  Why is this?

At least part of this is due to internal resistance.  If we examine the data for a typical alkaline "D" cell we see that, on a per-cell basis, the internal resistance is 0.15-0.3 ohms when the cell is "fresh", but this increases by 2 or 3-fold near the end of life of the cell (e.g. <=0.9 volts/cell) and increases dramatically - and very quickly - at still-lower voltages.  Interestingly, the manufacturers' data used to include graphs of internal cell resistance, but these seem to have disappeared in recent years.

If we take a general number of 0.2 ohms/cell and expand that to a 10-cell series battery pack we get a resistance of 2 ohms which means that if we attempt to pull even one amp we will lose 2 volts - and this doesn't take into account the contact and wiring losses related to these batteries!
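To put numbers on this, the sag is just Ohm's law applied to the pack's total internal resistance.  A quick Python sketch, using the rough 0.2 ohm/cell figure from above (contact and wiring losses would make things worse still):

```python
# Voltage sag of a 10-cell alkaline "D" pack from internal resistance alone.
# Assumes ~0.2 ohms/cell (a rough fresh-cell figure).
CELLS = 10
R_CELL = 0.2   # ohms per cell (approximate)

def loaded_voltage(current_a, cells=CELLS, r_cell=R_CELL):
    """Terminal voltage of the pack under the given load current."""
    return 1.5 * cells - current_a * (r_cell * cells)

for amps in (1, 2, 5):
    print(f"{amps} A load -> {loaded_voltage(amps):.1f} V")
```

Even a 5 amp load drops the pack from 15 volts to around 5 volts on paper - long before we get anywhere near the 18+ amp peaks of a 100 watt transceiver.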

If you look at the graphs that relate battery capacity to discharge current you will notice something else:  If you draw twice the current, your apparent capacity - in "run time" - decreases by more than half, and if you convert this to amp-hours, the more current drawn, the fewer available amp-hours.

These two facts together tell us two things:
  • We cannot draw too much current or else resistive losses will cause excess voltage drop.
  • Higher current consumption will cause a marked drop in available amp-hour capacity.
This second point is often referred to as "Peukert's Law" (Wikipedia article here - link).  While Peukert's law was derived to describe this effect with lead-acid cells, a similar phenomenon happens with cells such as alkalines.
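For the curious, Peukert's law can be sketched numerically.  The usual form is t = H·(C/(I·H))^k, where H is the rated discharge time, C the rated capacity, I the actual current and k the Peukert exponent (1.0 for an ideal cell).  The cell figures below are purely hypothetical, for illustration:

```python
# Peukert's law:  t = H * (C / (I * H)) ** k
#   H: rated discharge time (hours), C: rated capacity (amp-hours),
#   I: actual discharge current (amps), k: Peukert exponent (1.0 = ideal).
def runtime_hours(capacity_ah, rated_hours, current_a, k):
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

# A hypothetical 10 Ah cell rated at the 20-hour rate, with k = 1.3:
for amps in (0.5, 1.0, 2.0):
    t = runtime_hours(10, 20, amps, 1.3)
    print(f"{amps} A -> {t:.1f} hours ({amps * t:.1f} Ah actually delivered)")
```

Doubling the current more than halves the run time, so the delivered amp-hours shrink as the load grows - exactly the behavior seen in the manufacturers' discharge graphs.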

As you may have inferred from the title of the article, our particular application implies a usage where the typical, resting (or even average) current consumption is quite low but the peak current consumption could be very high.  Clearly, with a string of "D" cells, alone, while we may have enough theoretical capacity to provide power for operation, we cannot tolerate the peak currents!

A power source with good longevity:

What we need is a low-impedance power source, and as it turns out almost any type of rechargeable cell - whether lead-acid, NiCd, NiMH or lithium - has a lower impedance than alkaline cells, so the obvious question is:  Why not use one of those other types?

The obvious problem with lead-acid is that of longevity:  If you own a lead-acid battery that is more than three years old (and certainly more than five!) it has likely lost a significant percentage (at least 30%) of its rated capacity - probably more unless it has been treated really well (e.g. controlled temperature at or below 70F, 21C) and always kept at a proper floating voltage when not in use (e.g. 13.5-13.7 volts for a "12 volt" battery at nominal room temperature.)

On the other hand, modern alkaline cells will retain the vast majority of the capacity for at least 5 years, just sitting on a shelf - a period of time that is beyond the likely useful lifetime of either lead-acid or most rechargeable lithium-ion cells!  What's more is that alkaline cells are readily available practically anywhere in the world and they come "fully charged".  To be sure, there are other types of "primary" (non-rechargeable) cells that have excellent shelf life such as certain types of lithium, but these are much less-available and would likely "break the bank" if you were to buy a set with comparable capacity!

A low-impedance (voltage) source:

An advantage of lead-acid, NiCd, lithium-ion and some NiMH cell types is that they have quite low internal resistance compared to alkaline cells:  Even an aging lead-acid battery that is near the end of its useful life may seem to be "OK" based on a load test, as its internal resistance can remain comparatively low even though it may have lost most of its storage capacity.

One could ask, then, why not simply parallel alkaline cells - with their ready availability, long shelf life and high storage capacity - with one of these other cell types and get the best of both worlds?  In theory you could, if you had some sort of charge-control circuitry capable of efficiently metering out the energy from the alkaline pack to supplement the "other" storage medium (e.g. lead-acid, lithium-ion, etc.) - but you cannot simply connect the two types in parallel and expect to efficiently utilize the available capacity of both, due to their wildly different voltage and charge requirements.

Even if you do use a fairly small-ish (e.g. 7-10 amp-hour) lead-acid or lithium-ion battery pack - its internal resistance being low compared to that of an alkaline pack - it likely cannot source the 15-20 amp current peaks of, say, a 100 watt SSB transceiver without excess voltage drop, particularly if it isn't brand new.

This is where the use of "Ultracapacitors" comes in.

In recent years these devices have become available on the market at reasonable prices.  These capacitors, typically with maximum ratings in the range of 2.5-2.7 volts per unit, may have capacitance values as high as several thousand Farads in a reasonably small package while at the same time offering very low internal resistance - often just units of milliohms.  What this means is that one may pull many 10's of amps from such a capacitor while losing only a small percentage of the energy as heat:  Indeed, many of these capacitors have current ratings in the hundreds of amps!

What this means is that we can use these capacitors to deliver the high, peak currents while our power source delivers a much lower average current.

The difference between peak and average current - and power:

With a simple "thought experiment" we can easily see that with our transmitter, one second at 100 watts represents (about) the same total energy as ten seconds at 10 watts.  If, in each case, we averaged over ten seconds, we would realize something else:  In both cases, the average power is 10 watts.

Clearly, the instantaneous power requirements for a radio operating at 10 watts output are different than at 100 watts:  For the former you'll likely need 4-5 amps of current, but for the latter you'll need 18-25 amps (at 12 volts or so, in each case.)

Here, we have a problem:  If we have a given resistance somewhere in our DC supply, we lose much more power at, say, 18 amps than at 5 amps according to the equation:

P = I²R

In other words, the ratio of the power losses is equal to the square of the ratio of the currents:

18² / 5² = 12.96 (or about 13)

That means that power losses at 18 amps are 13-fold worse than those at 5 amps.
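This ratio is easy to verify numerically - the series resistance value below is an arbitrary example, and it cancels out of the ratio anyway:

```python
# I^2 * R losses:  the loss ratio between two load currents depends only
# on the square of the current ratio - the resistance cancels out.
def i2r_loss_watts(current_a, resistance_ohms):
    return current_a ** 2 * resistance_ohms

R = 0.05   # e.g. 50 milliohms of cable and contact resistance (arbitrary)
ratio = i2r_loss_watts(18, R) / i2r_loss_watts(5, R)
print(ratio)   # ~12.96, the ~13-fold figure above
```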

Clearly, the best way to mitigate this is with heavy cables to minimize resistance, but what if your power source is, by its very nature, fairly high in resistance in its own right?

This is where the "ultracapacitors" can be useful:  They act as a reservoir to supply the peak currents while the battery - whatever form it may take - makes up the average.

With SSB (Single SideBand) transmission we have an (almost) ideal situation:
  • There is no RF power when we aren't saying anything,
  • The amount of RF power is proportional to voice peaks, and
  • Speech has comparatively rare peaks with a lot of empty spaces and lower-energy voice components interspersed.
  • SSB is also 6-12 times more efficient than FM in conveying voice, while occupying only 1/3 to 1/4 of the bandwidth of an FM signal.
In other words, when we are transmitting with SSB, the average power of a hypothetical 100 watt transmitter will be much less than 100 watts.

Compare this with a 100 watt FM transmitter:
  • When you are "keying down" it is always putting out 100 watts, no matter what your voice is doing.
  • 100 watts of FM is less effective in conveying voice than even 10 watts of SSB and it takes at least 3 to 4 times as much bandwidth as an SSB signal.
One obvious take-away is that if you are in an emergency, battery-powered communications situation where you need to communicate on simplex and find that it takes 20-50 watts of FM power, you are probably making a big mistake sticking with FM in the first place:  You could do better with 5 watts of SSB - or less - if the capability exists.  But I digress...

For the purposes of this discussion, the point that I was really trying to make is that the use of these "ballast" capacitors is appropriate only for relatively low duty-cycle modes such as SSB or, possibly, CW:  If you tried this with FM or digital modes such as PSK31, the long "key-down" duty cycle would quickly drain the energy stored in the capacitors and they would (effectively) "disappear", putting the load back on the battery bank.
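A rough feel for how quickly a key-down mode "uses up" the capacitor bank comes from the basic relation ΔV = I·t/C.  Assuming a bank of six 350 Farad capacitors in series (as described later on this page) and an 18 amp continuous load:

```python
# If the capacitor bank alone had to carry a continuous ("key-down") load,
# its voltage would fall linearly:  delta_V = I * t / C, so the time to
# sag by a given amount is t = C * delta_V / I.
def sag_time_seconds(cap_farads, delta_v, current_a):
    return cap_farads * delta_v / current_a

BANK_F = 350 / 6          # six 350 F capacitors in series, ~58.3 F
print(sag_time_seconds(BANK_F, 1.0, 18))   # ~3.2 s per volt of sag at 18 A
```

In other words, a continuous 18 amp load pulls the bank down by about a volt every three seconds - fine for brief SSB voice peaks with the battery recharging it in between, but quickly exhausted by FM or PSK31 transmissions.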

This technique is not new:  For many years now one has been able to buy banks of capacitors intended for high-power car audio amplifier installations that provide that instantaneous burst of current required for the "thud" of bass without causing a similar, instantaneous voltage drop on the power supply.  Originally using physically large banks of "computer grade" electrolytic capacitors, these systems are now much smaller and lighter, using the aforementioned "Ultracapacitors".

There are also devices on the amateur radio market that do this:  Take as an example the MFJ-4403 power conditioner (link).  This device uses ultracapacitors in series to achieve approximately 5 Farads of capacitance across the output leads, allowing high peak currents to be supplied while "smoothing out" the instantaneous voltage drops that can cause a fuse to blow and/or the radio to malfunction due to too-low voltage.

Now, a few weasel words:
  • The device(s) described on this page can/do involve high currents and voltages that could cause burns, injury, fire and even death if reasonable/proper safety precautions are not taken and good building techniques are not followed.
  • The device(s) described are prototypes.  While they do work, they may (likely!) have some design peculiarities (bugs!) that are unknown, in addition to those documented.
  • There are no warranties expressed or implied and the author cannot be held responsible for any injury/damage that might result.  Your mileage may vary.
  • You have been warned!

How this may be done:
Figure 1:
The capacitor bank/power conditioner with 58-1/3 Farads,
in a package slightly larger than a 7 amp-hour, 12 volt
lead-acid battery.  How much energy is actually
contained in 58.33F at 13 volts?  Theoretically, about
the same as just one alkaline AA cell.
Click on the image for a larger version.
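The caption's claim about the AA cell can be checked with E = ½CV².  The AA figures below are rough assumptions (about 2.5 amp-hours at an average of roughly 1.2 volts under light load):

```python
# Energy comparison:  E = 0.5 * C * V^2 for the capacitor bank, versus a
# single alkaline AA cell (assumed ~2.5 Ah at an average of ~1.2 V).
def cap_energy_joules(cap_farads, volts):
    return 0.5 * cap_farads * volts ** 2

bank_j = cap_energy_joules(350 / 6, 13.0)   # six 350 F caps in series
aa_j = 2.5 * 1.2 * 3600                     # Ah * volts * 3600 s/h
print(f"bank: {bank_j:.0f} J   AA cell: {aa_j:.0f} J")
```

That works out to about 4.9 kJ in the bank versus roughly 10.8 kJ for the AA under these assumptions - the same ballpark, bearing in mind that only part of the bank's energy is usable before its voltage sags too low.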

For a variety of reasons (physics, practicality) one cannot buy a "16 volt" ultracapacitor:  Any device that you will find with a voltage rating above 2.7 (or, in some cases, 3.something) volts is really a module consisting of several lower-voltage capacitors in series.  What is more, you cannot simply throw capacitors in series as there is no guarantee that the voltage will always divide equally amongst them unless circuitry is included to make this so.

Another consideration is that if you have such a device - a large bank of capacitance - that is discharged, you cannot simply connect it across an existing power source:  If its voltage is lower, a (theoretically) infinite amount of current will flow from the power source into the capacitors to force equilibrium.  Practically speaking, if you were to connect the capacitor bank "suddenly" to the power source, and if the resistance of the wires themselves didn't serve to limit the current, you'd likely blow the fuse, trip the breaker and/or cause the power supply to go into some sort of overcurrent (or shutdown) mode - none of which is at all helpful.

Finally, this capacitor bank will be (theoretically) capable of sinking or sourcing hundreds of amps if connected to a high-current source or shorted out - perfectly capable of burning open even heavy gauge wire - so some sort of protection is obviously needed.

The diagram in Figure 2, below, provides these functions.
Figure 2:
Schematic diagram of the capacitor bank/power ballast/conditioner and charging
circuit - see the text for a circuit description.
Click on the image for a larger version.

Circuit description:

Capacitors C1-C6 are "Ultracapacitors":  Their exact values are not important, but they should all be identical values and models/part numbers.  In the example of the MFJ-4403, six 25 Farad capacitors are used, yielding a total of 4-1/6 Farads, while the drawing in Figure 2 depicts six 350 Farad capacitors being used to yield a total of 58-1/3 Farads.  The greater the capacitance, the more energy storage, but also the longer the "charge" time for a given amount of current and, of course, the larger the size and the higher the initial cost of the capacitors themselves.
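The series arithmetic here is simply C_total = C/n for n identical capacitors - a quick check of both figures mentioned above:

```python
# Identical capacitors in series:  C_total = C / n.
def series_capacitance(cap_farads, n):
    return cap_farads / n

print(series_capacitance(25, 6))    # MFJ-4403 style: 25/6 = ~4.17 F
print(series_capacitance(350, 6))   # this project:  350/6 = ~58.3 F
```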

Zener diodes D1-D6, each being 2.7 volt, 1.3 watt units, are placed across each capacitor to help equalize the voltage.  As is the nature of Zener diodes they will be at least partially conducting at lower voltage than nominal, the current increasing dramatically as their rated voltage is approached - which, itself, can vary significantly.

Originally, I experimented with the use of a series-connected resistor, diode and green LED across each capacitor to equalize the voltage, as depicted by components Ra, Da and LEDa.   In this circuit the LED - an ordinary, old-fashioned "non-high-brightness" indicator type - was used, taking advantage of its 2.1 volt threshold voltage along with the 0.6 volt drop of an ordinary diode and a 5.1 ohm resistor in series to provide a "knee".  While this circuit did work, providing a handy, visual indication of the "full charge" state of each of the six capacitors, it did not/could not safely conduct enough current to strongly force equalization of the capacitors' voltages.

The Zener diodes, with their maximum current of more than 400mA (as compared to 15-25mA for the LED-based circuit), seemed to be more effective.  I left the LED-based circuit in place after constructing the prototype since there was no reason to remove it, and the illumination of the LEDs served, during testing, to indicate that the capacitors were charging, with equal illumination being generally indicative of equal charge distribution.

It will be noted that the "top" of the capacitor bank is connected to the positive side of the power source, "straight through", at all times via TH1 and TH2, current-inrush limiters.  Because of this, when this unit is "off" it is, for all practical purposes, transparent, consuming no current.  These devices (TH1, TH2) act as self-resetting fuses, limiting the current to a sane amount, somewhere in the 30-50 amp region, if the output (or input!) is shorted:  Ordinary "slow blow" fuses could be used here, but if so, the advice is to keep spares on hand!

It is only the "bottom" of the capacitor string that is connected/disconnected to enable or disable the "ballast" (and filtering) capability of this circuit:  When "off" the bottom of the capacitor bank is allowed to float upwards.

If the capacitors are discharged when switch SW1 is turned ON while power is applied, the first thing that happens is that the gate of Q1 is turned on via resistor R3.  When this happens current flows through R1, but when its drop exceeds approximately 0.6 volts - a voltage commensurate with approximately 2.5 amps of "charge" current - transistor Q3 begins to turn on, drawing down Q1's gate voltage.  When this circuit reaches equilibrium only 2-3 amps will flow through Q1, R1 and TH3, charging the capacitor bank comparatively slowly and preventing the "dead short" that would likely blow fuses and/or cause a power supply to "fold back".
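The current limit set by Q3 and R1 can be sanity-checked:  Q3 begins conducting when the drop across R1 reaches its base-emitter threshold (about 0.6 volts), so the limit is roughly V_BE/R1:

```python
# Charge-current limit:  Q3 steals Q1's gate drive once the drop across R1
# reaches its base-emitter threshold, so  I_limit ~= V_BE / R1.
V_BE = 0.6    # volts, approximate silicon base-emitter threshold
R1 = 0.25     # ohms (the prototype used two 0.5 ohm resistors in parallel)
print(V_BE / R1)   # 2.4 A - squarely in the "2-3 amps" described above
```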

Figure 3:
The "control board" with the charge control/regulator circuit.  In this prototype R1 was implemented using a pair of 0.5 ohm 15 watt resistors since I didn't have a single 0.22-0.25 ohm, 1-5 watt resistor on-hand.
As can be seen, Q1 and Q2 are bolted to the lid of the die-cast box for heat-sinking.  In the lower-left corner can be seen the heavy wires that connect to the input/output and to the capacitor bank, along with the 30-amp current limiting devices, all connected on an "island" of copper cut out of the piece of circuit board material.
The rest of the circuit is constructed on a combination of the "Me Squares" and islands cut into the piece of circuit board.
Click on the image for a larger version.

While in this equilibrium state the gate voltage on Q1 is necessarily reduced - keeping it partially "off" to maintain the current at approximately 2.5 amps - and there will be a voltage drop across R3.  This voltage drop is detected, via R4, by Q5, a PNP transistor, turning it on.  Q5, in turn, turns on LED2, the "Charging" LED, and also Q4, which pinches off the gate drive to the main capacitor bank switch, Q2, forcing it off during charging.

(In the event of a circuit malfunction, self-resetting fuse TH3 limits the maximum current through Q1 to 5 amps before "blowing", at which point the current will be reduced to a few 10's to 100's of milliamps.)

Once the capacitor bank has become (mostly) charged and the voltage across it is nearly the same as the applied voltage, the charging current will begin to drop and, as it does, transistor Q3 will start to turn off, causing less voltage drop across R3 in an attempt to make Q1 conduct more.  At some point, when the capacitors have reached full charge and current flow has decreased, Q1 will be "fully" on (e.g. its gate voltage approaching "V+_SW") and the voltage drop across R1 will have decreased to practically nothing.  When this drop is less than approximately 0.6 volts, Q3 will turn off completely and the voltage at the "bottom" of R3 (the side connected to the gate of Q1) will be equal to that of "V+_SW", causing transistor Q5 to turn off.

Once Q5 turns off, the "Charging" LED will also turn off, as will Q4 and the voltage on the gate of Q2, being pulled up by R5 and "slowed" by capacitor C8, will start to rise.  As the gate voltage on Q2 crosses the "on" threshold it will conduct strongly, connecting the bottom of the capacitor bank to ground with only a few milliohms of resistance.

If switch SW1 is turned off, the voltage at "V+_SW" drops and, via R5 and C8, the voltage at the gate of Q2 drops, turning it off and disconnecting the capacitor bank.

Construction details:

The ultracapacitors were wired together outside the enclosure (a Hammond 1590 series die-cast box) using multiple, folded pieces of #12 AWG bare wire, both for low ohmic resistance and mechanical support.  Additional pieces of wire were used on the capacitors' electrically-isolated mounting pins for spacing and support when the two banks of three were arranged to be parallel to each other, terminals facing - but separated by a safe distance as seen in the pictures.  The balancing circuits were also installed across the capacitors at this time.
Figure 4:
The bank of six, 350 Farad, 2.7 volt capacitors mounted in the bottom of the Hammond die-cast box.  The capacitors are connected together using multiple, folded pieces of #12 AWG copper wire that also provide mechanical support.
Not obvious from the picture, the heavy "bridge" between the left and right bank at the bottom of the picture in the center is insulated from the metal box, to prevent shorting, by a piece of clear plastic from a discarded "blister pack" product package that was heat-formed around the screw-boss and then secured in place with RTV.
On the left may be seen the "on/off" switch and the two indicator LEDs, while the "power in/out" leads, using #12 flexible speaker wire, are visible on the lower-right.  Across each capacitor may be seen "Ra, LEDa and Da" depicted in the
schematic and discussed in the text, originally used for capacitor balancing.
Click on the image for a larger version.

Prior to mounting the capacitors in the bottom of the box, the holes for the LEDs and switch were drilled/filed - this to eliminate the possibility of the tools damaging the capacitors during the work.

The two parallel banks of three series capacitors were prepared and then placed in the bottom of the box, held in place with RTV (Silicone (tm) seal).

There is no etched circuit board.  The actual control circuitry is mounted on the lid, with Q1 and Q2 being bolted to the lid itself for heat-sinking using electrical insulating hardware (washers, grey insulating pads) that was scavenged from a dead PC power supply.  The circuit itself was constructed on a piece of glass-epoxy circuit board material.

Without making a circuit board, there are several ways that the circuit could have been constructed, but I chose to use a variation of the "Manhattan" style, which involves islands of copper.  In some instances - such as for the large resistor(s) comprising the 0.25 ohm unit on Q1, and for the connections of the power in/out leads and TH1 and TH2 - islands of copper were isolated on the board by first drawing them out with a pen, slitting both sides of a narrow (1/16th-1/8th of an inch) trace with a sharp utility knife and straight edge, and then using the heat from a soldering iron to aid in lifting the narrow strip away to isolate that area of the board.

For other portions of the circuit I used "Me Squares" available from (link):   These are small pieces of glass-epoxy circuit board material with nice squares etched on them that one glues down using cyanoacrylate (e.g. "super") glue and then uses as points for soldering and connection.

The nice thing about these "Me Squares" is that they are very thin, look nice and are very flat - which makes them easy to solder and glue down, but one could also cut out squares of ordinary circuit board material and solder those down, instead, provided that they were also made flat on the back side and de-burred for maximal surface contact and adhesion.  Finally, one could use the utility knife and isolate islands - or even use an "island cutter" tool - to produce isolated lands on the piece of circuit board material itself.

The main reasons for using this technique were that it was quick, and also that it was surface-mountable:  Had I wired it on perforated board or even made a conventional circuit board with through-hole parts I would have had to stand it off from the lid to insulate the circuit from it:  Using this technique the board itself was simply bolted flat to the lid and was quick to wire up.

For interconnects to the circuitry on the lid, short lengths of #12 AWG stranded wire were used for the high-current leads connecting the capacitors and input/output leads, and much smaller (#24-#28 AWG or so) wire for the connections to the switch and LEDs.  For the "outside world" connections, 30 amp Anderson Power Pole (tm) connectors were used, mounted in the standard manner.

A few caveats with this circuit:
  • This circuit consumes some current (at least a few 10's of milliamps - maybe more) whenever it is set to "on", even after the capacitors have equalized.  What this means is that it will slowly drain your battery if the switch is left in the "on" position and because of this it is recommended that one switch it to the "on" position ONLY when intermittent, high current is going to be needed.  In other words, if you are going to receive (only), leave it "off", turning it "on" only if you plan to transmit, knowing that it may take 10's of seconds for the capacitors to charge.
  • If the power source cannot deliver the amount of current required during the "charge" cycle - or if it briefly "blinks" - the source voltage will sag and it will likely switch from "charge" mode to "operate" mode and connect the ultracapacitors directly across the power source.  If the power source has limited current in the first place, this will simply mean a "dip" in voltage while the capacitor bank charges, but if this occurs with a high-current source that has had a momentary glitch, a premature switch to "operate" mode could, in theory, blow a fuse.  This premature switching would likely happen if you had this connected via a cigarette lighter and started a vehicle while it was in a charge phase.
  • This capacitor bank should be treated like a battery:  If it is shorted out you will get lots of current - more than enough to burn open small wires and blow fuses - or even burn open large wires or small tools if you do an "oops" on the unfused/unprotected side of the circuit, or neglect to include any fusing/current protection!
  • Neither this capacitor bank nor any battery should ever be placed across the output of a power supply that has a "crowbar" overvoltage protection circuit (such as is present on many power supplies, including the Astron "RS" and "RM" lines) as a spike or transient on the output could cause the crowbar to trigger and short the output of the power supply - including the capacitor bank or whatever battery you might have on the output.  If this happens, expect significant damage to the power supply!
  • This sort of device must be housed within a rugged enclosure/container.  The picture shows the prototype being built into a Hammond 1590 series die-cast aluminum box that is both very rugged and also provides heat sinking for Q1 and Q2.  In the unlikely event of a catastrophic failure due to a wiring problem or a capacitor going bad this enclosure will not melt and is likely to contain the "mess" - even if it were to get very hot!
  • If used in a vehicle, this circuit should be disconnected/disabled when the vehicle is started since these capacitors can supply "car starting" current on their own and it is possible to blow fuses, pop breakers, damage switches, circuits, relays and other mischief if the current from the capacitor bank were to "back-feed" through vehicle wiring that was not designed for that sort of current!
  • It is the nature of power FETs such as Q1 and Q2 to have an intrinsic "reverse diode" - even when turned off.  Be aware that if it so happens that the bottom of the capacitor bank is more negative than the "ground" (Battery -) side, these diodes - particularly the one in Q2 - will conduct!
I have no doubt that this circuit could be improved a bit:  It was designed and put together over just two evenings in preparation for the 2015 "Homebrew Night" meeting of the Utah Amateur Radio Club - including the time it took for the RTV to set up and paint on the enclosure to dry!

Possible uses:

While it is possible to use this to allow an HF rig to be powered from a set of D cells, it is more practical to use smaller lead-acid or lithium-ion packs as the primary power source and use the capacitor bank as the "ballast" to supply the peak currents.

Figure 5:
One end of the enclosure showing the on/off switch and
the two indicator LEDs.  The red "charge" LED indicates that the capacitor
bank is charging (at 2-3 amps) and the "power conditioning" capability
is not available until it has completed.  With completely discharged
capacitors this process takes a "minute or two", depending on the
voltage and the capacitance of the bank.
Click on the image for a larger version.
It may also be used in a vehicle to allow an HF transceiver to be powered from a cigarette lighter connection by supplying the current peaks locally, keeping the peak current drawn through that wiring - and thus the voltage drop - low enough to prevent the radio from shutting down/misbehaving on voice peaks.  When used in the comparatively "dirty" electrical environment of a vehicle it will also go a long way toward removing spikes from solenoids and motors, not to mention alternator whine.

I'm certain that this device could be improved, but it seems to function as it is.


How usable is it, in the real world?

It depends a lot on how - and with which radio - you plan to use it.  For example, many older-vintage Kenwood HF transceivers will fail to function properly (e.g. operate with distorted audio, "FMing" of the signal, etc.) much below 12-12.5 volts, while more modern, compact HF radios like the Yaesu FT-100 and FT-857 will happily run at 10-11 volts - perhaps at reduced output power, but fine otherwise.  The upshot is that if you are considering a radio to be operated from "marginal" power sources, be certain that you have done your research:  Consider how your candidate radios operate at low supply voltage and whether they degrade "gracefully" or not!

How about running an HF rig from alkaline "D" cells?  As it turns out I can happily transmit 100 watt (peak) SSB using my old Yaesu FT-100 with the device described on this page using 10 "D" cells in series.

To do this effectively, one must minimize the contact resistance of the battery contacts which pretty much rules out using cheap, spring-loaded plastic battery holders which, by themselves, can have almost as much resistance as the cells themselves. Aside from spot-welding tabs onto the alkaline cells (the heat of soldering would likely cause some damage and slight loss of capacity) the best holders are aluminum with heavy bus bars and the fewest number of springs and contacts (e.g. multiple cells directly in series) such as the Keystone four-cell holders, models 158 (two of them) along with one two-cell holder, Keystone model 186.

One cannot "key down" with a carrier without significant voltage sag, but the FT-100 seems to work OK on typical SSB with voice peaks - but under such heavy loads don't expect to get much longevity before the alkaline cells' internal resistance increases:  As noted previously not all radios (such as older Kenwood mobiles) behave so well at lower voltages, so do your homework!

As detailed in the article, a more practical use of this sort of device is as a "power conditioner" to help compensate for voltage sags due to resistance in the interconnecting cables, somewhat underrated power sources, aging battery packs and/or "small"-ish batteries.

* * *

How about using this with a solar power source?

Assuming that voltage from the panel is regulated to a safe value (15 volts or below) the capacitor bank could, in theory, maintain voltage if the solar array provided at least the average current, but considering that solar illumination can vary wildly due to time of day, sun angle, clouds and shadows it would be recommended that additional storage capacity (remember that 53-1/2 Farads has only the theoretical storage capacity of a single "AA" alkaline cell!) be used as well, such as a 7-20 amp-hour lead-acid or lithium-ion based pack - but this sort of system could well be the basis of another article.