Monday, April 18, 2016

Combatting scintillation effects on optical voice links

One interesting aspect of the amateur radio hobby that is rarely discussed is the use of the "Above 275 GHz" bands.  While one might, at first, assume that these wavelengths require some exotic "componentry", doing so would ignore the fact that they include "optical" frequencies - which is to say, visible light.

Working with visible light has a tremendous advantage over other "frequencies" in that we have some built-in test equipment:  Our eyes.  While generally "uncalibrated" in terms of "frequency" and power, they are of great help in building, setting up and troubleshooting such equipment.

For years now, lasers have been considered the natural light source for optical transmitters - which makes sense for some of the following reasons:
  • Lasers are cool!
  • They may be easily modulated.
  • Lasers are cool!
  • "Out of the box" they produce nicely collimated beams.
  • Lasers are cool!
  • Low-power diode-based lasers are inexpensive and easy to use.
  • Lasers are cool!
While lasers are (almost) exclusively used for all types of fiber-optic based communications, one might ask oneself whether they are equally useful/effective when the medium is the atmosphere rather than a stable, glass conduit.

The answer is:  It depends.

If one is going very short distances - perhaps up to a few hundred meters - the atmosphere can be largely ignored unless something is causing severe attenuation of the signals (e.g. rain, snow or fog).  As the distances increase, however, even in the absence of such adverse conditions there are typically nonuniformities in the atmosphere - caused by thermal discontinuities, wind, atmospheric particulates, etc. - that cause additional disruption.

The fact that lasers produce (generally) coherent beams in terms of frequency and phase - gas lasers usually more so than most semiconductor types - actually works against efforts to make a viable long-distance communications link:  The atmosphere causes phase disruptions along the path length, resulting in rapid changes in amplitude due to both constructive and destructive interference of the wavefront.

In the past decade or so, high-power LEDs with significant optical flux have become available.  Unlike lasers, LEDs do not produce a coherent wavefront and are generally less affected by such atmospheric phenomena, as the video below demonstrates:

Figure 1:
Visual example of laser versus LED "lightbeam"
communications.

Admittedly, the example depicted in Figure 1 is somewhat unfair:  The transmit aperture of the laser used for this test was very small - a cross-sectional area of, perhaps, 3-10 square millimeters - while the aperture of the LED optical transmitter was on the order of 500 square centimeters.  Even if both light sources were of equal quality and type (e.g. both laser or both LED) the one using the smaller aperture would be at a disadvantage due to the lack of "aperture averaging" - that is, more subject to scintillation due to the small angular size of the beam.  This causes what is sometimes referred to as "local coherence", where even white light can, for brief, random intervals, take on the interference properties of coherent light:  It is this phenomenon that causes stars to twinkle - even briefly change color - while astronomical objects of larger apparent size, such as planets, usually do not twinkle.

Figure 2:
Adapter used for emission of laser light via the telescope.
Contained within is a laser diode modified to produce
a broad, fan pattern to illuminate the mirror of the
telescope.

For an interesting article on the subject of scintillation, see "The Sizes of Stars" by Calvert - LINK.

Based on this one might conclude that a larger emitting aperture will reduce the likelihood that the overall beam will be disrupted by atmospheric effects - and one would be correct.  The use of a large-area aperture tends to reduce the degree of "local coherence" described in the Calvert article (linked above) while also providing a degree of "aperture averaging".  As an aside, this effect is also useful for receiving, as can be empirically demonstrated by comparing the amount of star twinkle between the naked and aided eye:  Binoculars are usually large enough to observe this effect.

For a fairer comparison with more equal aperture sizes the above test was re-done using an 8 inch (approx. 20cm) reflector telescope to emit both laser and LED light.  To accomplish this I constructed two light emitters compatible with a standard 1-1/4 inch eyepiece mount - one using a 3-watt red LED and another (depicted in Figure 2) using a laser diode module that was modified to produce a "fan" beam to illuminate the bulk of the mirror.

Both light sources were modulated using the same PWM optical modulator described in the article "A Pulse Width Modulator for High Power LEDs" - link - a device that has built-in tone generation capabilities.  Since the same PWM circuit was used for both emitters the modulation depth (nearly 100%) was guaranteed to be the same.

To "set up" this link, a full-duplex optical communications link was first established using Fresnel lens-based optical transceivers using LEDs and the optical receiver described in the article "A Highly Sensitive Optical Receiver Optimized for Speech Bandwidth" - link.  With the optical transmitters and receivers at both ends in alignment, the telescope was used as an optical telescope to train it on the far end, using the bright LED of the distant transmitter as a reference.  With the telescope approximately aligned, the LED emitter was substituted for the eyepiece and refocused to the effective optical plane of the LED.  The LED was then modulated with a 1 kHz tone and used with an "audible signal level meter" that transmitted a tone back to me, the pitch of this tone being logarithmically proportional to the signal level, permitting careful and precise adjustment of both focus and pointing.
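The "audible signal level meter" behavior is easy to model:  The pitch varies linearly with the signal level in dB - that is, logarithmically with amplitude - so every dB of change produces the same audible pitch step.  A minimal sketch in Python (the mapping constants here are illustrative assumptions, not those of the actual device):

```python
import math

def level_to_pitch(amplitude, ref_amplitude=1.0,
                   base_hz=300.0, hz_per_db=25.0):
    """Map a received signal amplitude to an audible tone pitch.

    The pitch rises linearly with the level in dB - i.e.
    logarithmically with amplitude - so each dB of signal change
    produces the same pitch step.  Constants are illustrative only.
    """
    level_db = 20.0 * math.log10(amplitude / ref_amplitude)
    return base_hz + hz_per_db * level_db

# A 10x (20 dB) amplitude increase raises the pitch by 500 Hz:
p_weak = level_to_pitch(0.1)
p_strong = level_to_pitch(1.0)
```

With this mapping, a fraction of a dB of signal change - far less than the eye or a meter needle easily resolves - is immediately audible as a pitch shift, which is what makes peaking the focus and pointing by ear so effective.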

For an article that describes, in detail, the pointing and setting-up of an optical audio link, refer to "Using Laser Pointers For Free-Space Optical Communications" - LINK.

Substituting the laser diode module for the LED emitter, the same steps were repeated, the results indicating that the two produced "approximately" equal signal levels (e.g. optical flux at the "receive" end.)  Already we could tell, by ear, that the audio conveyed by the laser sounded much "rougher", as the audio clip in Figure 3, below, demonstrates.

Figure 3:
Audio example of laser versus LED "lightbeam"
communications over a 15 mile (24km) free-
space optical path.
Music:  "Children" by Robert Miles, used in
accordance with U.S. Fair Use laws.

Figures 4 and 5, below, depict the rapid amplitude variations using a transmitted 4 kHz tone as an amplitude reference over a "Free Space Optical" path of approximately 15 miles (24km).  The horizontal axis is time and the vertical axis is linear amplitude.

Note the difference in horizontal time scales between the depictions, below:

Figure 4:
Scintillation of the laser-transmitted audio (4 kHz tone).
The time span of this particular graph is just over 250 milliseconds (1/4 second)
Click on the image for a larger version.


Figure 5:
Scintillation on the LED-transmitted audio (4 kHz tone).
In contrast to the image in Figure 4, the time span of this amplitude representation is nearly 10 times
greater - that is, approximately 2 seconds.  The rate and amplitude of the scintillation-caused
fading are dramatically reduced.
Click on the image for a larger version.

Laser scintillation:

As can be seen from Figure 4 there is significant scintillation that occurs at a very rapid rate.  The reference of this image is, like the others, based on a full-scale 16 bit sample.  Analysis of the original audio file reveals several things:
  • The "primary" period of scintillation is approximately 10 milliseconds (100 Hz), but there is evidence of harmonics of this rate to at least 2.5 milliseconds (400 Hz) - although the limited temporal resolution of the test tone makes it difficult to resolve these faster rates.
  • Other strong scintillatory periods evident in the audio sample occur at approximate subharmonics of the "primary" scintillatory rate, such as 75 and 150 milliseconds.
  • The rate-of-change of amplitude during the scintillation is quite rapid:  Amplitude changes of over 30 dB can occur in just 20 milliseconds.
  • The overall depth of scintillation was noted to be over 40dB, with frequent excursions to this lower amplitude.  It was noted that this depth measurement was noise-limited owing to the finite signal-noise ratio of the received signal.
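Numbers like these can be pulled from a recording by tracking the envelope of the 4 kHz reference tone.  A rough sketch of one way to do it with NumPy - a simple I/Q product detector - though this is not necessarily the analysis method actually used here:

```python
import numpy as np

def tone_envelope_db(samples, rate, tone_hz=4000.0, win_ms=2.0):
    """Estimate the envelope (in dB) of a reference tone in a recording.

    Mixes the signal against quadrature local oscillators at the tone
    frequency and low-pass filters with a short moving average - a
    simple I/Q product detector.  Returns the envelope in dB relative
    to its own peak.
    """
    t = np.arange(len(samples)) / rate
    i = samples * np.cos(2 * np.pi * tone_hz * t)
    q = samples * np.sin(2 * np.pi * tone_hz * t)
    n = max(1, int(rate * win_ms / 1000.0))
    kernel = np.ones(n) / n
    i_f = np.convolve(i, kernel, mode="same")
    q_f = np.convolve(q, kernel, mode="same")
    env = np.maximum(np.hypot(i_f, q_f), 1e-12)
    return 20.0 * np.log10(env / env.max())

# Synthetic check: a 4 kHz tone whose amplitude drops 30 dB mid-file
rate = 48000
t = np.arange(rate) / rate
amp = np.where(t < 0.5, 1.0, 10 ** (-30 / 20))
sig = amp * np.sin(2 * np.pi * 4000 * t)
env_db = tone_envelope_db(sig, rate)
fade_depth = env_db[int(0.25 * rate)] - env_db[int(0.75 * rate)]
```

The 2 millisecond averaging window is an assumption chosen to resolve the ~10 millisecond fades described above; measuring fade depth this way is noise-limited in exactly the manner noted for the real recording.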
LED scintillation:

Figure 5 shows a typical example of scintillation from the LED using the same size emitter aperture as the laser.  Analysis of the original audio file shows several things:
  • The 10 millisecond "primary" scintillatory period observed in the laser signal is pretty much nonexistent, while the 20 millisecond subharmonic is just noticeable.
  • 150 and 300 millisecond periods seem to be dominant, with strong evidence of other periods in the 500-1000 millisecond range.
  • The rate-of-change of amplitude is far slower:  Changes of more than 10 dB did not usually occur over a shorter period than about 60 milliseconds.
  • The overall depth of scintillation was noted to be about 25 dB peak, but was more typically in the 15-18dB area.
One of the more interesting results of this experiment was how minimally the severe amplitude distortion experienced with the laser actually degraded the overall intelligibility of human speech.  While the tones and brief music clips were clearly badly distorted, the distortion was arguably less apparent on the segment containing speech.  Clearly the voice content was being badly "chopped up" by the severe amplitude fluctuations, but given the redundant nature of speech - and the fact that the drop-outs were quite brief in comparison to the duration of speech elements (sounds, syllables) - it is quite reasonable to expect the brain to fill in the gaps and make sense of it all.

A "Scintillation Compensator":

Despite the redundant nature of the speech maintaining reasonable intelligibility, it became quite "fatiguing" to listen to audio distorted in this manner, so another device was wielded as part of an experiment:  The "Scintillation Compensator", the block diagram being depicted in Figure 6, below.

Figure 6:
Block diagram of the "Scintillation Compensator" system.
Click on the image for a larger version.
This is essentially a "Keyed AGC" system using a low-level 4 kHz tone from the transmitter as an amplitude reference for a tracking gain cell at the receiver:  If the amplitude of the 4 kHz tone goes down, the gain of the audio is increased by the same amount, and vice-versa.  The effect of this device is quite dramatic, as the clip in Figure 7, below, demonstrates:

Figure 7:
Audio clip with a "Before" and "After" demonstration
of the "Scintillation Compensator" 
Music:  "Children" by Robert Miles, used
in accordance with U.S. Fair Use laws.

One of the more striking differences is that in the "before" portion, the background hum from city lights remained constant while in the "after" portion it varied tremendously, more clearly demonstrating the degree of the amplitude variation being experienced.  What is also interesting is that the latter portion of the clip is much "easier" (e.g. less fatiguing) to listen to:  Even though syllables are lost in the noise - being obliterated by hum rather than silence in the first part of the above clip - the fact that there is something present during those brief interruptions, even though it is hum, seems to appease the brain slightly and maintain "auditory continuity".

It should be pointed out that the "Scintillation Compensator" cannot possibly recover the portions of the signals that are too weak (e.g. lost in the thermal noise and/or interference from urban lighting):  It can only maintain the recovered signal at a constant amplitude.  In the first portion of the clip in Figure 7 it was the desired signal level that changed, while in the second portion it was the background noise that changed.  In other words, in both examples given in Figure 7 the signal-to-noise ratio was the same.
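The keyed-AGC principle can be sketched digitally in a few lines (the actual "Scintillation Compensator" is an analog gain cell; the pilot-envelope tracking and the gain ceiling here are assumptions for illustration):

```python
import numpy as np

def scintillation_compensate(audio, pilot_env, max_gain_db=40.0):
    """Keyed AGC: apply the reciprocal of the pilot-tone fade to the audio.

    audio       -- received audio samples (pilot tone already filtered out)
    pilot_env   -- tracked amplitude of the 4 kHz pilot, same length as audio
    max_gain_db -- gain ceiling so that deep fades don't amplify pure noise
    """
    max_gain = 10 ** (max_gain_db / 20.0)
    ref = pilot_env.max()
    # If the pilot drops by some factor, boost the audio by the same factor:
    gain = np.minimum(ref / np.maximum(pilot_env, 1e-12), max_gain)
    return audio * gain

# A 20 dB fade that hits both program audio and pilot is leveled back out:
env = np.array([1.0, 1.0, 0.1, 0.1])       # pilot fades 20 dB
audio = np.array([0.5, 0.5, 0.05, 0.05])   # program audio fades identically
out = scintillation_compensate(audio, env)
```

Note that the gain ceiling matters:  As described above, during the deepest fades the signal is simply gone, and an unlimited AGC would do nothing but bring the background hum and noise up to full amplitude.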

Practical uses for all of this stuff:

The most important point of this exercise was to demonstrate that a larger aperture reduces scintillation - although that point might be a bit obscured in the above discussion.  What was arguably more dramatic - and also important - was that the noncoherent light source seemed to be less susceptible to the vagaries of atmospheric disturbance.  This observation bears out similar testing done over the past several decades by many others, including Bell Labs and the works of Dr. Olga Korotkova.

For a brief bibliography and a more in-depth explanation of these effects visit the page "Modulated Light DX" - LINK - particularly the portion near the end of that page.

The reduction of scintillation has interesting implications when it comes to the ability to convey high-speed digital information across large distances using free-space optical means under typical atmospheric conditions.  Clearly, one of the more important requirements is that the signal level be maintained such that it is possible to recover information:  Too low, it will literally be "lost in the noise" and be unrecoverable.

As the demonstrations above indicate the "average" level may be adequate to maintain some degree of communications, but the rapid and brief decreases in absolute amplitude would punch "holes" in data being conveyed, regardless of the means of generating the light.  Combatting this would imply the liberal use of error-correction and recovery techniques such as Forward Error Correction (FEC) and interleaving of data over time - not to mention some interactive means by which "fills" for the re-sending of missing data could be requested.  The "'analog' analog" to these techniques is the aforementioned ability of the human brain to "fill in" and infer the missing bits of information.
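As an illustration of the interleaving mentioned above, here is a minimal block-interleaver sketch:  Symbols are written into a matrix row-by-row and transmitted column-by-column, so a burst of consecutive losses is spread out into isolated single-symbol errors that FEC can correct.

```python
def interleave(data, rows, cols):
    """Block interleaver: write row-by-row, read column-by-column.

    A fade that wipes out up to `rows` consecutive transmitted symbols
    lands on at most one symbol per row after de-interleaving.
    """
    assert len(data) == rows * cols
    return [data[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(data, rows, cols):
    """Inverse of interleave() with the same geometry."""
    assert len(data) == rows * cols
    out = [None] * (rows * cols)
    for idx, sym in enumerate(data):
        c, r = divmod(idx, rows)
        out[r * cols + c] = sym
    return out

msg = list(range(12))
tx = interleave(msg, rows=3, cols=4)
# Simulate a brief fade destroying 3 consecutive transmitted symbols:
tx[0:3] = ["?"] * 3
rx = deinterleave(tx, rows=3, cols=4)
# The burst is now scattered - at most one lost symbol per 4-symbol row.
```

The cost, of course, is latency:  The deeper the fades to be bridged, the more rows the matrix needs, and nothing can be decoded until a whole block has arrived.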

While lasers are well-known to be "modulatable" at high rates, doing so with LEDs is a bit more problematic due to the much larger device sizes and commensurate increase in device capacitance.  To rapidly modulate an LED at an ever-higher frequency would also imply an increase of "dV/dt" (e.g. rate of voltage change over time) which, given the capacitance of a particular device, would also imply higher instantaneous currents within it, effectively reducing the average current that could be safely applied to it.  What this means is that specialized configurations would likely be required (e.g. drivers with fast rise-times at high current; structurally-small, high current/optical density LEDs, etc.) to permit direct modulation at very high (10's of megabits) data rates.
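The capacitive charging current is simply i = C * dV/dt, so one can get a feel for the problem with a quick calculation - here assuming (purely for illustration, not measured values for any real device) a large LED die with 10 nF of capacitance swung 2 volts in 10 nanoseconds:

```python
def charging_current(capacitance_f, delta_v, delta_t):
    """Instantaneous current needed to slew a capacitance: i = C * dV/dt."""
    return capacitance_f * delta_v / delta_t

# Hypothetical large LED die: 10 nF of capacitance, 2 V swing in 10 ns
i_peak = charging_current(10e-9, 2.0, 10e-9)   # amperes
```

That works out to 2 amperes of peak current just to charge the junction - current that does not produce light, but does count against the device's safe current limits.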

Using the aforementioned techniques has rather limited utility when the free-space optical links extend out to many 10's of miles/kilometers, owing largely to the vagaries of the atmosphere and the practical limits of optical flux with respect to "link margin" (e.g. the need to use safe and sane amounts of optical power to achieve adequate signal to recover information - particularly as the rate of transmission is increased) - but it may be useful for experimentation.

Additional information on (more or less) related topics:



Wednesday, March 2, 2016

The solar saga - part 1: Avoiding interference (Why I did not choose microinverters!)

Back in November I decided to get some solar (photovoltaic) "grid tie" power generation installed at my house. I decided that the best place to install this was on the roof of my detached garage because:
  • The roof area of the garage was comparable to that of the house.
  • Much less tree shading than on the house.
  • Because it was not an occupied structure and had no finished attic space, it was exempt from certain requirements (e.g. walkways around the panel areas, etc.) that would have reduced the available area for the installation of the panels.
  • It already had an existing, high-current circuit that was capable of being used for both source and sink of electrical current.
The only thing that I really had to do in the garage was to replace the 70's vintage Zinsco breaker panel with a more modern "load center" as a sub panel:  Doing so was a straightforward job that took only a few hours and cost less than $125 for all of the parts.

Unfortunately there was a significant snag to the "electrical" side of getting it connected to the utility grid via "Net Metering" (it's not "online" yet...) but that will have to wait for a later installment.

What kind of solar system?

In residential, grid-tie installations, two types of solar systems are most commonly found:
  • Series string.  This is where the panels are tied together and go to one, large power converter.  Many of these inverters have inputs for at least two, separate strings for redundancy, to accommodate different illumination profiles (e.g. "east versus west") and also to (statistically) increase efficiency.
  • Microinverter.  In this approach each, individual panel has its own, "private" power converter.
The series string approach is a bit older technology and its popularity is being overtaken by the microinverter approach, since the latter is touted as being able to extract more energy from the entire solar plant:  The output of each individual panel is optimized rather than being limited by the "weakest link" of the bank of panels comprising the series string.  With modern panels that are intrinsically well-matched, the "weakest link" issue is not as significant as it once was, but that's a topic for a later discussion.

I will say right now that I chose the series string approach for a very practical reason:

Radio Frequency Interference (RFI).

Interference from microinverters:

Let me spin time back to mid 2013, when I saw on an email group a plea from a local amateur (Ham) radio operator for help analyzing a problem that he was having.

He'd had a sizable solar plant (approx. 3 dozen panels) installed, each panel with an Enphase M190 microinverter, and suddenly found that he faced a tremendously increased noise floor on both HF and VHF.  By the time that he and I "connected" he had come to some arrangement with the manufacturer and/or installer to install "ferrite beads" (at their expense) on the microinverters' leads in an attempt to mitigate the problem.

He asked me to come over to verify the nature of the interference and its approximate magnitude, prior to the installation of the ferrite devices, and I arranged to do so.

When I arrived, he demonstrated the problem:  When receiving on his HF dipole, which spanned a portion of his roof and solar panel farm, he experienced 4-6 S-units (20-40dB) of additional noise from the microinverters, depending on frequency.  The noise was that of typical AC mains-coupled switching supplies, grouped in spectral "bunches" every tens or hundreds of kHz (I don't recall the spacing) on the lower bands (75, 40 meters); by the time one got to 15 meters it was pretty much just an even "smear" of noise across the spectrum.  By switching to AM, it was apparent that the noise itself had an amplitude-modulated component related to the mains frequency that was not readily apparent when listening on SSB.

The problem was also apparent on 2 meters where low-level spurious signals emanated by these devices were intercepted by his rooftop antenna and would open the squelch and/or mask weaker signals - including those of some of the more distant repeaters.

Analyzing the problem:

For this visit I'd brought along my FT-817 portable, all-band, all-mode transceiver with a small 2 meter Yagi antenna, a small shielded "H" field loop for localizing signal sources, and a specialized 2-meter DF antenna/receiver to be used with the Yagi.  Switching to 2 meter SSB mode using the rubber duck antenna on the FT-817, I could hear a myriad of low-level carriers as I tuned up and down the band.

Stepping out onto the roof, we approached the solar system and I wielded my other gear:  The DF receiver/antenna combination showed the source of the signals - on any random 2 meter frequency - to be the solar array.  Switching to the combination of the FT-817 and the small, shielded H-loop, I was able to localize the conductors from which the energy was being radiated:  Not only did it seem to be coming from the AC power mains cables connecting everything together, but also from the frames and front surfaces of the solar panels themselves, indicating likely egress on both the AC and DC sides of the microinverters.

Part 15 compliance?

At this point one might ask how such a product appeared on the market if it caused interference:  Doesn't FCC Part 15 "protect" against that?

No!

First of all, it is worth re-reading a portion of the text from Part 15 that I'm sure you have noted somewhere on a device or in a manual that you have lying around.  Quoting from FCC Part 15, section 105, subpart (b):

This equipment has been tested and found to comply with the limits for a Class B digital device, pursuant to part 15 of the FCC Rules. These limits are designed to provide reasonable protection against harmful interference in a residential installation.
This equipment generates, uses and can radiate radio frequency energy and, if not installed and used in accordance with the instructions, may cause harmful interference to radio communications.
However, there is no guarantee that interference will not occur in a particular installation.
(The emphasis is mine.)

The above speaks for itself!

It should be observed that while Part 15 limits the amount of incidental RF energy that can be emitted/radiated/conducted from electronic devices to a certain level, that level is NOT zero!  The fact is that a device may be perfectly legal in its amount of emission, but still be detectable, under the right circumstances, from a significant distance.  In this particular situation, there were at least three things going against our solar system owner:
  • He was in very close proximity to the microinverters and solar panels.  As noted previously, his antennas for HF and VHF were either on the roof, or crossed part of it.
  • HF operation, by its nature, involves rather weak, narrowband signals.  This makes it even more likely that low-level signals emanating from such devices would be noticeable and obvious, and that broadband noise could be quite apparent.
AND
  • His solar system comprised approximately three dozen panels.  What this means is that each of those microinverters is, by itself, radiating its own, set amount of interference.  If you take the number as 36, this means that as a system, the total amount of energy being radiated by all of those microinverters put together will be increased by nearly 16 dB - that's nearly 3 S-units!  Practically speaking those inverters nearest the antenna(s) will cause the most problem due to proximity, but you can certainly see that many devices in one location are likely to exacerbate the issue overall.
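The arithmetic behind that figure:  Uncorrelated sources add in power, so N identical emitters total 10*log10(N) dB more than a single one - and with the common convention of 6 dB per S-unit:

```python
import math

def combined_increase_db(n_sources):
    """Total radiated power of n equal, uncorrelated sources,
    in dB relative to a single source: 10*log10(n)."""
    return 10.0 * math.log10(n_sources)

increase_db = combined_increase_db(36)   # ~15.6 dB for 36 microinverters
s_units = increase_db / 6.0              # ~2.6 S-units (6 dB per S-unit)
```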
I had no way to accurately measure the emitted signals from the microinverters to determine if they were compliant with part 15 or not, but I'm willing to believe that a widely-sold product such as an Enphase M190 microinverter had been tested and found to be in compliance by reputable people.

Figure 1:
A look inside the Enphase M250, a model newer than the M190s
described as causing interference problems.  At the moment the jury is still
out as to whether the M250 (or M215) is much "cleaner" than the older M190 in terms
of radiated energy.  While some decoupling - possibly filtering - is visible
on the AC mains connection at the bottom, no inline chokes are
apparent from the top-of-board view on the DC (solar panel) side - only
some capacitors that appear to bypass it to ground (e.g. the case.)
This M250 was given to me by an installer after it had failed in the field.
Click on the image for a larger version.
We discussed what it would take to make these microinverters completely quiet, and I knew a way:  Completely enclosing each microinverter in a metal box with L/C Pi filters on both the DC input and AC output leads.  Proper L/C filtering of the input and output, along with appropriate capacitive bypassing, ensures not only that RF energy does not escape from the unit, but also that there is little/no potential for RF currents generated within to appear differentially between the DC input and AC output leads.

I have discussed similar interference-elimination measures related to switching power supplies in my August 18, 2014 post, "Completely Containing Switching Power Supply RFI" - link.  This method can be completely effective in reducing the interference level of such devices to undetectable levels.

It would have been nice if there were available a weathertight box into which each microinverter could be mounted, along with a separate set of filtered input and output power connections.  The design of such a device would be slightly complicated by the fact that the Enphase units communicate via their powerline connections, but it is likely that this could be accommodated in the filter design.

I was quite sure that such an aftermarket product did not exist at the time - and even if it did, it would be prohibitively expensive, particularly when multiplied several dozen times!

My host asked me if I thought that the installation of ferrites on the input and output leads would help:  I thought that it might help a little bit on VHF and UHF, but that I couldn't see it having any useful effect on HF - but I hoped that I was wrong!

As I left this ham's house I had my FT-817 connected to my vehicle's antenna, listening in SSB mode on 2 meters and I could hear the low-level signals from his solar array from a distance of nearly two blocks, line-of-sight.

Post ferrite installation:

A few weeks later I got an email from this same ham stating that the ferrites had been installed on the microinverters.  To do this, it was necessary to (practically!) un-install and re-install the entire system, as very few of the units could be reached from the roof, requiring a lift for access.

Did it help?

Not that he could tell.

Is his situation unique?

Apparently not.

There are many anecdotes of amateur radio operators facing terrible interference issues after they - or their neighbors - install a microinverter-type solar system.  One such instance is documented in the following thread on Reddit:
Neighbors just got solar - They gifted me with S-9 RFI  - link

Another case was documented several years ago on the "Ham Nation" Web TV show (Episode #65) where the only way to reduce the problem to a tolerable level was to relocate the antenna some distance away from the house-mounted microinverter system, at the far end of the lot.

A link to the webcast of Ham Nation episode #65 may be found here:  Link  (The relevant portion starts at 16:40.)


Since the original posting of this article a write-up appeared in the April, 2016 QST magazine that details another ham's battles with RFI from a solar electric system.  While this system was not microinverter-based, it used devices called "optimizers" that work on similar principles to the microinverters in that high-frequency switching supplies are used to maximize the amount of power available from the array.

Why the ferrites didn't/won't work:


There is a misconception amongst some that loading wires with ferrites will stop the ingress/egress of RF signals.

This does not happen.

By putting a piece of ferrite on a conductor one increases the effective impedance at a given frequency, but that impedance is not infinite, and the effectiveness of the ferrite depends on several things:
  • The characteristic impedance (real, complex) of the conductor on which it is placed at specific frequencies (it varies all over the map!) 
  • The size of the ferrite (length, diameter, etc.)
  • The material type (permeability)
  • The frequency
  • How many "turns" of the conductor may be passed through the ferrite.
For retrofits, the answer to the last one is generally easy:  One turn, as that is all that may be accommodated with a typical "split core" ferrite that is installed simply by placing it over a wire.  As was certainly the case with the Enphase units, the connecting wires were simply too short to allow additional turns even if the ferrite device were sized to allow it.

In general, ferrites have greater efficacy with increasing frequency - not surprising, since their mechanism is generally that of adding a bit of inductive reactance to the conductor on which they are placed.  This also explains why "snap on" or split ferrites are usually a futile remedy when one attempts to solve HF-related noise issues:
They simply cannot provide enough reactance to attenuate by the needed 10-30dB to solve most severe interference situations at HF!
Figure 2:
The outside of the same Enphase M250 as shown in Figure 1,
above, showing connecting cables:  Not much room
to place large ferrites on these - much less multiple turns!
(The cables on the M190 are of similar length.)
Click on the image for a larger version.

The reason for this is immediately apparent if one studies the specifications of a typical snap-on ferrite such as the Amidon 2x31-4181p2 link.  Here are some typical specifications for this rather large piece of ferrite:
  • I.D:  0.514" (13mm);  O.D.:  1.22" (31mm) ;  Length:  1.55" (39mm)
  • Material type:  31 (1-300 MHz, typical)
  • Reactance of device, typical:  25 ohms at 1 MHz, 100 ohms at 10 MHz, 156 ohms at 25 MHz, 260 ohms at 100 and 250 MHz
As you can see, the impedance is stated as 100 ohms at 10 MHz.  Being generous, let us apply that figure to the 40 meter band:  If this device were placed on a line with a 50 ohm characteristic impedance, we might (theoretically, simplistically) expect somewhere in the area of 8-16dB of additional attenuation caused by the loss it induces - but that is only 1-3 "S" units, and it represents only a "good case" scenario.  In the aforementioned situation it would have taken several more "S" units of reduction to bring the noise to the point where it was not highly disruptive.
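The strong dependence on the wiring's impedance can be made concrete with one simplistic model:  Treat the ferrite as a pure series impedance Zf in a matched line of characteristic impedance Z0, giving an insertion loss of 20*log10(1 + Zf/(2*Z0)).  A quick calculation - bearing in mind that this model ignores the complex, frequency-dependent nature of both the ferrite and the real-world wiring:

```python
import math

def ferrite_insertion_loss_db(z_ferrite, z_line):
    """Insertion loss of a series impedance in a matched line
    (simplistic resistive model): 20*log10(1 + Zf / (2*Z0))."""
    return 20.0 * math.log10(1.0 + z_ferrite / (2.0 * z_line))

# The 100-ohm (10 MHz) ferrite on lines of various characteristic impedance:
losses = {z0: round(ferrite_insertion_loss_db(100.0, z0), 1)
          for z0 in (10.0, 50.0, 200.0)}
# Roughly 15.6 dB at 10 ohms, 6.0 dB at 50 ohms, 1.9 dB at 200 ohms
```

Even in this idealized model the result swings by an order of magnitude depending on the line impedance - and since the impedance of random power wiring varies all over the map with frequency, so does the ferrite's effect.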

What is more likely to happen is that the interconnecting wires will have wildly varying impedances at different frequencies - some higher, some lower - and this will have a dramatic effect on the efficacy of this reduction.  In the case of the ham that I had visited, I would not have been surprised if a plot of noise versus frequency had shown its "shape" dramatically altered by the addition of the ferrite devices - and, overall, the amount of radiated energy (interference) measurably reduced.  The problem was that the level was so high to begin with that knocking it down by, say, 90% (10dB, or just under 2 S-units) still represented a terrible situation!

The Amidon device noted above is a rather large device and at least three of them would be required for each microinverter (one for each DC lead, one for the AC connection) and the expense of these devices - not to mention the installation (36 microinverters would require 108 ferrite devices!) - could really add up!

It should go without saying that a smaller ferrite - although less expensive - will have even less effect than a larger one!

Comment:
Ferrite devices such as those described are often more useful for preventing RF from getting into devices:  Increasing the impedance on the connecting leads and wires may not only improve the efficacy of already-existing RFI protection devices such as bypass capacitors, but it can also break up loops through which high RF currents induced by a local transmitter might be passing "through" a device.  In these cases the moderate effect of their added impedance may well be enough to adequately mitigate RF ingress issues.

Remember:  With RF ingress it is often the case that knocking down the RF energy by 6-12 dB will be enough to mitigate the issue whereas the amount of "hash" emitted by the microinverters would likely need to be reduced by more than 20 dB to make it undetectable.
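As a quick sanity-check of the numbers above, here is a short Python sketch using the common "6 dB per S-unit" convention:

```python
import math

def fraction_to_db(fraction_remaining):
    """Convert a remaining power fraction (e.g. 0.10 for a 90% reduction) to dB."""
    return -10.0 * math.log10(fraction_remaining)

def db_to_s_units(db):
    """Convert a change in signal strength (dB) to approximate S-units, at 6 dB each."""
    return db / 6.0

# A 90% power reduction is 10 dB - just under 2 S-units:
print(round(fraction_to_db(0.10), 1))
print(round(db_to_s_units(10.0), 2))
```

This makes the scale of the problem obvious: even a seemingly impressive 90% reduction of the offending "hash" barely moves the S-meter.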

"Grounding" won't help either:

Reading some of the correspondence in the Reddit posting (above) there is mention of "grounding" to eliminate/reduce RFI from these units:  To assume that "grounding" would likely solve or mitigate this problem would be to assume incorrectly!

The problem, again, is that RF energy appears to be conducted from the input and output (DC and AC, respectively) coupling wires which, themselves, can act as antennae:  "Grounding" the case - which would also "ground" the safety ground on the AC output - is not really going to help.

If the unit is installed according to code, there should already be a "ground" attached at the panels, anyway - but this wire connection, which is likely to be 10's of feet (several meters) between the roof and the Earth or grounding point is going to look like a "ground" only at DC and low frequencies - such as those found on the AC mains!

Any wire that is several feet long - grounded or not - is going to act as an antenna.
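A quick way to see this is to compare the length of the "ground" lead to a wavelength at HF.  A sketch (the 5 meter lead length is an assumed, typical roof-to-earth run):

```python
def wavelength_m(f_mhz):
    """Free-space wavelength in meters: lambda = 300 / f(MHz)."""
    return 300.0 / f_mhz

lead = 5.0  # meters of "ground" wire - an assumed, typical figure

for f in (7.0, 14.0, 28.0):
    wl = wavelength_m(f)
    print(f"{f} MHz: lambda = {wl:.1f} m, lead = {lead / wl:.2f} wavelengths")
```

At 28 MHz that 5 meter lead is approaching a half wavelength long - which is to say, it is a reasonably effective antenna, not a "ground"!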

What this means is that it is entirely possible that at least some of the RF interference being radiated by the inverter is going to be conducted along the grounded metal structures (such as the solar panels and the frames) and wires in addition to the AC mains wiring.

Again, the proper way to contain such RF energy within the confines of the circuitry was discussed above:  Proper L/C filtering of the input and output along with appropriate capacitive bypassing so that not only does RF energy not escape from the unit, but it also offers little/no potential for RF currents generated within to appear differentially between the DC input and AC output leads.

The upshot:

If you are getting interference from a microinverter system - either your own or your neighbor's - is there anything you can do?

Since the installation of ferrites will have minimal effect at HF, the answer would seem to be "No, not really" - aside from converting to a series-string system instead.

In the case of the ham operator that I visited, he mitigated the situation somewhat by moving his HF antenna as far away from his house as he could (which wasn't very far considering that he had limited space on his city lot) which helped slightly.  Nighttime was the only time during which he could completely quell the interference by turning off the breaker feeding the solar array, but during the day there was nothing he could do:  If either solar illumination or AC mains power was available to the microinverters they seemingly caused the same amount of interference, whether they were under load or not!

Are newer microinverters better/quieter?

It has been reported that the Enphase M190 microinverter has been obsoleted and has been replaced with newer models that are more reliable and more "RF Quiet".  On this second point, the jury seems to be out:  Anecdotally, there seem to be about as many reports of the newer models (from various manufacturers) causing interference as not, so the reports are rather confused.

I know of at least two amateur operators with newer-model Enphase inverters (M215, M250), but they report other extenuating circumstances (e.g. their microinverter PV system is located some distance from their antennas and/or they already had notable interference from other sources before installing the solar power system) such that they cannot say for certain whether there is a problem caused by their system.  I hope to personally visit at least one of those installations in the coming months.


Series String inverters and interference:

While somewhat less efficient overall, a series-string inverter system is also somewhat less expensive up front, and that is what I decided to use.  From direct observation and reports by people that I know and trust, I knew that units made by Sunnyboy and Fronius could reasonably be expected to cause little or no interference on their own.  Additionally, were an interference issue to arise, there would be a single point at which to filter (e.g. one large box with a relatively small number of input and output leads), so I was quite confident that it would be possible to add additional filtering if necessary.

To be sure, one might (theoretically) lose up to 10-20% or so peak efficiency with a series-string system as opposed to a Microinverter that optimizes for each, individual panel, but considering the comparatively low cost of panels these days and the lower "up front" cost for a series-string inverter system, one can usually afford to "up size" the system slightly to compensate.

(Comment:  As noted previously, series-string "optimizers" have been observed to cause significant RFI, since their basic principle of operation lends them a tendency to produce unwanted "hash" unless well-designed.)

Maintaining the various systems:

Anecdotally, from both owners and maintainers of microinverter-based systems, it is not uncommon to experience the failure of several of the microinverters after only a few years, the rate-of-failure (apparently) following somewhat of a "bathtub" curve:  Several die early on, there is often a period of relative stability, and then they start to fail in greater numbers after several more years.

While these devices (microinverters) seem to have a good warranty, the issue comes when replacing a microinverter that is in the "middle of everything" on the roof.  On a roof with a moderate-to-steep pitch it may be necessary to use equipment such as a lift to be able to safely access the failed inverter - and it may be necessary to "de-install" several of the surrounding panels to gain access.  In other words, it will likely cost many times the price of the microinverter itself ($125-$300) in equipment rental, time and labor just to replace it.  For this reason it seems that many people simply allow several of them to fail before "calling out the troops":  Having several panels (effectively) offline at a time is something that detracts from the proclaimed efficiency benefit of the microinverter scheme!

The large, series-string inverters appear to be extremely reliable, having excellent track records (at least for Fronius and Sunnyboy - the two brands with which I have any familiarity).  The obvious down side is that a failure of the inverter would likely take a large portion - or all - of the production off line, but the replacement of the device is comparatively easy and would likely cost no more than a couple times the total (parts plus equipment rental plus labor) of replacing a small handful of microinverters!

What about failures of solar panels?  Modern panels contain diodes that "wire around" sections that have failed or are shaded, so unless a catastrophic failure occurs that completely removes it from the circuit, one will lose, at most, the capacity of the entire panel:  This is true with both microinverter and series-string configurations.

Fortunately solar panels have been around for decades and have been proven to be quite reliable and rugged in terms of durability.  If a failure in a solar electric system is going to occur, the solar panel itself is less likely to be the problem unless the problem is actual, physical damage.

(Note:  There may be warranty coverages or service plans that mitigate the costs related to such maintenance, but since they vary wildly with installers and manufacturers, they are not covered here.)

Final comments:

Each system has its advantages and trade-offs:  In my case a primary concern was the avoidance of interference.  Since the advent of digital TV - and because fewer people listen to the radio or even have off-air TV these days - most people likely wouldn't notice (or care!) about the interference issues that appear to be common with the microinverter approach.

One can always hope that newer microinverters will become increasingly quiet, but for now that seems not the case - if not in reality, certainly in perception.


In the next installment I'll talk a bit more about the installation of my system - trials and tribulations...

Saturday, February 6, 2016

Using "Ultracapacitors" as a power conditioner and ballast for transient high-power loads (or "How to run your HF rig from D-cells" - sort of...)

The problem of "high impedance" power sources:

The title serves to illustrate a problem frequently encountered when trying to power a device that operates with a high peak current:  Your energy storage - or your power source - may have plenty of capacity, but not enough current capability!

One such example of a power source that has plenty of capacity, but rather limited power capability, is the "cigarette lighter" in a typical vehicle:  As long as the engine is running, you can pull power - but not too much:  More than 10-15 amps is likely to blow a fuse, and even if you were to replace the original fuse with, say, a 20-30 amp fuse (not smart!) the rather light gauge wiring would likely result in a voltage drop that would render a typical 100 watt HF rig unusable on voice peaks.


For another, more extreme example let us consider a set of alkaline "D" cells, referring first to some online data:
  • The "Energizer" D-cell data sheet - link
  • "Duracell" D-cell data sheet - link.  Does not include direct amp-hour rating, but such may be inferred from the graphs presented.
  • "'D' Battery" Wikipedia page - link.
Please note:  Manufacturer's links sometimes change - you may have to resort to an internet search if one or more of the above links do not work.

One thing that jumps out is that a "D" cell has somewhere between 5 and 30 amp-hours per cell, but if you study the graphs, you'll also note that this apparent capacity drops like a rock with increasing current.  Why is this?

At least part of this is due to internal resistance.  If we examine the data for a typical alkaline "D" cell we see that on a per-cell basis that the internal resistance is 0.15-0.3 ohms per cell when it is "fresh", but this increases by 2 or 3-fold near the end of life of the cell (e.g. <=0.9 volts/cell) and increases dramatically - and very quickly - at still-lower voltages.  Interestingly, the manufacturer's data used to include graphs of internal cell resistance, but these seem to have disappeared in recent years.

If we take a general number of 0.2 ohms/cell and expand that to a 10-cell series battery pack we get a resistance of 2 ohms which means that if we attempt to pull even one amp we will lose 2 volts - and this doesn't take into account the contact and wiring losses related to these batteries!
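The arithmetic above can be sketched as follows - the resistance figures are the per-cell values quoted earlier:

```python
def pack_drop(cells, r_per_cell, current):
    """Voltage lost inside a series battery pack at a given load current (Ohm's law)."""
    return cells * r_per_cell * current

# Ten fresh alkaline D cells (~0.2 ohms each) at a 1 amp load:
print(pack_drop(10, 0.2, 1.0))   # 2 volts lost inside the pack

# The same pack near end-of-life (~0.6 ohms/cell) at a 5 amp peak:
print(pack_drop(10, 0.6, 5.0))   # the pack voltage collapses entirely
```

Note that this is before any contact or wiring losses are counted - and those are rarely negligible with consumer battery holders!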

If you look at the graphs that relate battery capacity to discharge current you will notice something else:  If you draw twice the current, your apparent capacity - in "run time" decreases by more than half and if you convert this to amp-hours, the more current drawn, the fewer available amp-hours.

These two facts together tell us two things:
  • We cannot draw too much current or else resistive losses will cause excess voltage drop.
  • Higher current consumption will cause a marked drop in available amp-hour capacity.
This second point is often referred to as "Peukert's Law" (Wikipedia article here - link).  While Peukert's law was derived to describe this effect with lead-acid cells, a similar phenomenon happens with cells such as alkalines.
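As a rough illustration of Peukert's law - the 10 amp-hour/20 hour rating and the exponent k=1.2 below are assumed, illustrative values, not manufacturer's data:

```python
def peukert_runtime(capacity_ah, rated_hours, current_a, k):
    """Run time in hours per Peukert's law: t = H * (C / (I*H))**k."""
    return rated_hours * (capacity_ah / (current_a * rated_hours)) ** k

# A hypothetical 10 amp-hour pack rated over a 20 hour discharge, k = 1.2:
print(round(peukert_runtime(10, 20, 0.5, 1.2), 1))  # the rated 20 hours at 0.5 amps
print(round(peukert_runtime(10, 20, 5.0, 1.2), 2))  # well under the "ideal" 2 hours at 5 amps
```

In other words, at ten times the current you get far less than one-tenth the run time - the apparent amp-hour capacity shrinks as the current rises.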

As you may have inferred from the title of the article, our particular application implies a usage where the typical, resting (or even average) current consumption is quite low but the peak current consumption could be very high.  Clearly, with a string of "D" cells, alone, while we may have enough theoretical capacity to provide power, we cannot tolerate the peak currents!

A power source with good longevity:

What we need is a low-impedance power source, and as it turns out almost any type of rechargeable cell - whether it is lead-acid, NiCd, NiMH or lithium - has a lower impedance than alkaline cells so the obvious question that one might ask is why not use one of those other types?

The obvious problem with lead-acid is that of longevity:  If you own a lead-acid battery that is more than three years old (and certainly more than five!) it has likely lost a significant percentage (at least 30%) of its rated capacity - probably more, unless it has been treated really well (e.g. controlled temperature at or below 70F/21C) and always kept at a proper floating voltage when not in use (e.g. 13.5-13.7 volts for a "12 volt" battery at nominal room temperature).

On the other hand, modern alkaline cells will retain the vast majority of the capacity for at least 5 years, just sitting on a shelf - a period of time that is beyond the likely useful lifetime of either lead-acid or most rechargeable lithium-ion cells!  What's more is that alkaline cells are readily available practically anywhere in the world and they come "fully charged".  To be sure, there are other types of "primary" (non-rechargeable) cells that have excellent shelf life such as certain types of lithium, but these are much less-available and would likely "break the bank" if you were to buy a set with comparable capacity!

A low-impedance (voltage) source:

An advantage of lead-acid, NiCd, lithium-ion and some NiMH cell types is that they have quite low internal resistance compared to alkaline cells:  Even an aging lead-acid battery that is near the end of its useful life may seem to be "OK" based on a load test, as its internal resistance can remain comparatively low even though it may have lost most of its storage capacity!

One could ask, then, why not simply parallel Alkaline cells, with their ready availability, long shelf life and high storage capacity with one of these other cell types and get the best of both worlds?  In theory you could - if you had some sort of charge control circuitry that was capable of efficiently meting out the energy from the alkaline pack and using it to supplement the "other" storage medium (e.g. lead-acid, lithium-ion, etc.) but you cannot simply connect the two types in parallel and expect to efficiently utilize the available power capacity of both types of storage cell - this, due to the wildly different voltage and charge requirements.

Even if you do use a fairly small-ish (e.g. 7-10 amp-hour) lead-acid or lithium-ion battery pack, even though its internal resistance may be low compared to that of alkaline packs, it likely cannot source the 15-20 amp current peaks of, say, a 100 watt SSB transceiver without excess voltage drop, particularly if it isn't brand new.

This is where the use of "Ultracapacitors" come in.

In recent years these devices have become available on the market at reasonable prices.  These capacitors, typically with maximum voltages in the range of 2.5-2.7 volts per unit, may have capacitance values as high as several thousand Farads in a reasonably small package while at the same time offering very low internal resistance - often on the order of milliohms.  What this means is that one may pull many 10's of amps of current from such a capacitor while losing only a small percentage of the energy as heat:  Indeed, many of these capacitors have current ratings in the hundreds of amps!

What this means is that we can use these capacitors to deliver the high, peak currents while our power source delivers a much lower average current.
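To put rough numbers on this, compare the I²R heating in a high-resistance alkaline pack with that in a milliohm-class capacitor bank at a 20 amp peak - both resistance figures are illustrative, taken from the discussion above:

```python
def i2r_loss(current_a, resistance_ohms):
    """Power dissipated internally as heat: P = I**2 * R."""
    return current_a ** 2 * resistance_ohms

peak = 20.0  # amps - roughly the peak draw of a 100 watt SSB transceiver

print(i2r_loss(peak, 2.0))    # a 2-ohm alkaline pack: hundreds of watts - hopeless
print(i2r_loss(peak, 0.01))   # a 10-milliohm capacitor bank: only a few watts
```

The alkaline pack couldn't even come close to delivering such a peak, of course - the point is that the capacitor bank barely notices it.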

The difference between peak and average current - and power:

With a simple "thought experiment" we can easily see that for our transmitter, one second at 100 watts represents (about) the same total energy as ten seconds at 10 watts.  If, in each case, we averaged the power over ten seconds, we would realize something else:  In both cases, the average power is 10 watts.

Clearly, the instantaneous power requirements for a radio operating at 10 watts output are different than at 100 watts:  For the former you'll likely need 4-5 amps of current, but for the latter you'll need 18-25 amps (at 12 volts or so, in each case.)


Here, we have a problem:  If we have a given resistance somewhere in our DC supply, we lose much more power at, say, 18 amps than at 5 amps according to the equation:

P = I²R

In other words, the ratio of the power losses is equal to the square of the ratio of the currents:

18² / 5² = 12.96 (≈13)

That means that power losses at 18 amps are 13-fold worse than those at 5 amps.
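The same arithmetic, as a one-line sketch:

```python
def loss_ratio(i_high, i_low):
    """Ratio of I**2 * R losses through the same resistance at two currents."""
    return (i_high / i_low) ** 2

print(round(loss_ratio(18, 5), 2))  # ~13-fold worse at 18 amps than at 5 amps
```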

Clearly, the best way to mitigate this is with heavy cables to minimize resistance, but what if your power source is, by its very nature, fairly high in resistance in its own right?

This is where the "ultracapacitors" can be useful:  They act as a reservoir to handle the peak currents while the battery - whatever form it may take - makes up the average.

With SSB (Single SideBand) transmission we have an (almost) ideal situation:
  • There is no RF power when we aren't saying anything
  • The amount of RF power is proportional with voice peaks, and
  • Speech has comparatively rare peaks with a lot of empty spaces and lower-energy voice components interspersed.
  • SSB is 6-12 times more power-efficient in conveying voice than FM while occupying only 1/3-1/4 of the space (bandwidth) of one FM signal.
In other words, when we are transmitting with SSB, the average power of a hypothetical 100 watt transmitter will be much less than 100 watts.

Compare this with 100 watt FM transmitter:
  • When you are "keying down" it is always putting out 100 watts, no matter what your voice is doing.
  • 100 watts of FM is less effective in conveying voice than even 10 watts of SSB and it takes at least 3 to 4 times as much bandwidth as an SSB signal.
One obvious take-away is that if you are in an emergency, battery-powered communications situation where you need to communicate on simplex and find that it takes 20-50 watts of FM power, you are probably making a big mistake sticking with FM in the first place, as you could do better with 5 watts of SSB or less - but I digress...

For the purposes of this discussion the point that I was really trying to make was the fact that the use of these "ballast" capacitors is appropriate only for relatively low duty-cycle modes such as SSB or, possibly CW:  If you tried this with FM or digital modes such as PSK31 the long duty cycle (e.g. key-down) would quickly drain the energy stored in the capacitors and they would "disappear", putting the load back on the battery bank.

This technique is not new:  For many years now one has been able to buy banks of capacitors intended for high-power car audio amplifier installations that provide that instantaneous burst of current required for the "thud" of bass without causing a similar, instantaneous voltage drop on the power supply.  Originally using physically large banks of "computer grade" electrolytic capacitors, these systems are now much smaller and lighter, using the aforementioned "Ultracapacitors".

There are also devices on the amateur radio market that do this:  Take as an example the MFJ-4403 power conditioner (link).  This device uses ultracapacitors in series to achieve approximately 5 Farads of capacitance across the output leads of the device, allowing high peak currents to be pulled and "smoothing out" the instantaneous voltage drop that can cause a fuse to blow and/or the radio to malfunction due to too-low voltage.

Now, a few weasel words:
  • The device(s) described on this page can/do involve high currents and voltages that could cause burns, injury, fire and even death if reasonable/proper safety precautions are not taken and good building techniques are not followed.
  • The device(s) described are prototypes.  While they do work, they may (likely!) have some design peculiarities (bugs!) that are unknown, in addition to those documented.
  • There are no warranties expressed or implied and the author cannot be held responsible for any injury/damage that might result.  Your mileage may vary.
  • You have been warned!

How this may be done:
Figure 1:
The capacitor bank/power conditioner with 58-1/3 Farads,
in a package slightly larger than a 7 amp-hour, 12 volt
lead-acid battery.  How much energy is actually
contained in 58.33F at 13 volts?  Theoretically, about
the same as just one alkaline AA cell.
Click on the image for a larger version.

For a variety of reasons (physics, practicality) you cannot buy a "16 volt" ultracapacitor:  Any device that you will find with a voltage rating above 2.7 (or, in some cases, 3.something) volts is really a module consisting of several lower-voltage capacitors in series.  What is more, you cannot simply throw capacitors in series, as there is no guarantee that the voltage will always divide equally amongst them unless circuitry is included to make this so.

Another consideration is that if you have such a device - a large bank of capacitance - that is discharged, you cannot simply connect it across an existing power source, because a (theoretically) infinite amount of current will flow from the power source into the capacitors, if their voltage is lower, to force equilibrium.  Practically speaking, if you were to connect the capacitor bank "suddenly" to the power source, and the resistance of the wires themselves didn't serve to limit the current, you'd likely blow the fuse, trip the breaker and/or cause the power supply to go into some sort of overcurrent (or shutdown) mode - none of which are at all helpful.

Finally, this capacitor bank will be (theoretically) capable of sinking or sourcing hundreds of amps if applied to a high-current source or shorted out - perfectly capable of burning even heavy-gauge wire - so some sort of protection is obviously needed.

The diagram in Figure 2, below, provides these functions.
Figure 2:
Schematic diagram of the capacitor bank/power ballast/conditioner and charging
circuit - see the text for a circuit description.
Click on the image for a larger version.

Circuit description:

Capacitors C1-C6 are "Ultracapacitors":  Their exact values are not important, but they should all be identical values and models/part numbers.  In the example of the MFJ-4403, six 25 Farad capacitors are used, yielding a total of 4-1/6 Farads, while the drawing in Figure 2 depicts six 350 Farad capacitors being used to yield a total of 58-1/3 Farads.  The greater the capacitance, the more energy storage, but also the longer the "charge" time for a given amount of current and, of course, the larger the size and the higher the initial cost of the capacitors themselves.
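These figures are easy to verify: n identical capacitors in series give C/n, and the stored energy is E = ½CV².

```python
def series_capacitance(c_each, n):
    """Total capacitance of n identical capacitors in series."""
    return c_each / n

def stored_energy_j(c_total, volts):
    """Stored energy in joules: E = 1/2 * C * V**2."""
    return 0.5 * c_total * volts ** 2

print(round(series_capacitance(350, 6), 2))   # the 58-1/3 Farads cited
print(round(series_capacitance(25, 6), 2))    # the MFJ-4403's 4-1/6 Farads
print(round(stored_energy_j(350 / 6, 13.0)))  # joules stored at 13 volts
```

The stored energy works out to roughly 1.4 watt-hours - which is, as the Figure 1 caption suggests, in the same general ballpark as a single alkaline AA cell.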

Zener diodes D1-D6, each being a 2.7 volt, 1.3 watt unit, are placed across each capacitor to help equalize the voltages.  As is the nature of Zener diodes, they will be at least partially conducting at somewhat below their nominal voltage, the current increasing dramatically as the rated voltage - which, itself, can vary significantly - is approached.

Originally, I experimented with the use of a series-connected resistor, diode and green LED across each capacitor to equalize the voltage as depicted by components Ra, Da and LEDa.   In this circuit the LED, a normal, old-fashioned indicator-type "non-high brightness" LED was used, taking advantage of its 2.1 volt threshold voltage along with the 0.6 volt drop of an ordinary diode with a 5.1 ohm resistor in series to provide a "knee".  While this circuit did work, providing a handy, visual indication of a "full charge" state of each of the six capacitors, it did not/could not safely conduct enough current to strongly force equalization of the capacitors' voltages.

The Zener diodes, with their maximum current of more than 400mA, as compared to 15-25mA for the LED-based circuit, seemed to be more effective.  I left the LED-based circuit in place after constructing the prototype since there was no reason to remove them and the illumination of the LEDs served to indicate during testing that the capacitors are charging up with the equal illumination being generally indicative of equal charge distribution.
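The "more than 400 mA" figure follows directly from the Zener's ratings - the maximum current is simply the power rating divided by the Zener voltage:

```python
def max_zener_current_ma(power_w, volts):
    """Maximum sustained Zener current in milliamps: I = P / V."""
    return 1000.0 * power_w / volts

print(round(max_zener_current_ma(1.3, 2.7)))  # a 1.3 W, 2.7 V Zener: over 400 mA
```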

It will be noted that the "top" of the capacitor bank is connected to the positive side of the power source, "straight through", at all times via TH1 and TH2, current-inrush limiters.  Because of this, when this unit is "off" it is, for all practical purposes, transparent, consuming no current.  These devices (TH1, TH2) act as self-resetting fuses, limiting the current to a sane amount, somewhere in the 30-50 amp region, if the output (or input!) is shorted:  Ordinary "slow blow" fuses could be used here, but if so, the advice is to keep spares on hand!

It is only the "bottom" of the capacitor string that is connected/disconnected to enable or disable the "ballast" (and filtering) capability of this circuit:  When "off", the bottom of the capacitor bank is allowed to float upwards.

If the capacitors are discharged when switch SW1 is turned ON while power is applied, the first thing that happens is that the gate of Q1 is turned on via resistor R3.  When this happens current flows through R1, but when the drop across it exceeds approximately 0.6 volts - a voltage commensurate with approximately 2.5 amps of "charge" current - transistor Q3 begins to turn on, drawing down Q1's gate voltage.  When this circuit reaches equilibrium only 2-3 amps will flow through Q1, R1 and TH3, charging the capacitor bank comparatively slowly and preventing a "dead short".
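The ~2.5 amp limit follows from Ohm's law across the sense resistor, assuming (per the Figure 3 note) that R1 is approximately 0.25 ohms - i.e. two 0.5 ohm resistors in parallel:

```python
def limited_current(vbe, r_sense):
    """Current at which the sense transistor begins to conduct: I = Vbe / R."""
    return vbe / r_sense

# ~0.6 V base-emitter threshold across an (assumed) 0.25 ohm sense resistor:
print(limited_current(0.6, 0.25))  # amps - in line with the "2-3 amps" stated
```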

Figure 3:
The "control board" with the charge control/regulator circuit.  In this prototype R1 was implemented using a pair of 0.5 ohm 15 watt resistors since I didn't have a single 0.22-0.25 ohm, 1-5 watt resistor on-hand.
As can be seen, Q1 and Q2 are bolted to the lid of the die-cast box for heat-sinking.  In the lower-left corner can be seen the heavy wires that connect to the input/output and to the capacitor bank, along with the 30-amp current limiting devices, all connected on an "island" of copper cut out of the piece of circuit board material.
The rest of the circuit is constructed on a combination of the "Me Squares" and islands cut into the piece of circuit board.
Click on the image for a larger version.

While in this equilibrium state the gate voltage on Q1 is necessarily reduced to keep it partially "off" and maintain the current at approximately 2.5 amps, and there will be a voltage drop across R3:  This voltage drop is detected, via R4, by Q5, a PNP transistor, which turns on.  This, in turn, turns on LED2, the "Charging" LED, and also Q4 which, in turn, pinches off the drive to the main capacitor bank switch, Q2, forcing it off.

(In the event of a circuit malfunction, self-resetting fuse TH3 limits the maximum current through Q1 to 5 amps before "blowing", at which point the current will be reduced to a few 10's to 100's of milliamps.)

Once the capacitor bank has become (mostly) charged and the voltage across it is nearly the same as the applied voltage, the charging current will begin to drop and, as it does, transistor Q3 will start to turn off, causing less voltage drop across R3 in an attempt to make Q1 conduct more.  At some point, when the capacitors have reached full charge and the current flow has decreased, Q1 will be "fully" on (e.g. its gate voltage approaching "V+_SW") and the voltage drop across R1 will have decreased to practically nothing.  When this drop is less than approximately 0.6 volts, Q3 will turn off completely and the voltage at the "bottom" of R3 (the side connected to the gate of Q1) will be equal to that of "V+_SW", causing transistor Q5 to turn off.

Once Q5 turns off, the "Charging" LED will also turn off, as will Q4 and the voltage on the gate of Q2, being pulled up by R5 and "slowed" by capacitor C8, will start to rise.  As the gate voltage on Q2 crosses the "on" threshold it will conduct strongly, connecting the bottom of the capacitor bank to ground with only a few milliohms of resistance.

If switch SW1 is turned off, the voltage at "V+_SW" drops and, via R5 and C8, the voltage at the gate of Q2 drops, turning it off and disconnecting the capacitor bank.

Construction details:

The ultracapacitors were wired together outside the enclosure (a Hammond 1590 series die-cast box) using multiple, folded pieces of #12 AWG bare wire, both for low ohmic resistance and mechanical support.  Additional pieces of wire were used on the capacitors' support pins for spacing and support when the two banks of three were arranged to be parallel to each other, terminals facing - but separated by a safe distance as seen in the pictures.  The balancing circuits were also installed across the capacitors at this time.
Figure 4:
The bank of six, 350 Farad, 2.7 volt capacitors mounted in the bottom of the Hammond die-cast box.  The capacitors are connected together using multiple, folded pieces of #12 AWG copper wire that also provide mechanical support.
Not obvious from the picture, the heavy "bridge" between the left and right bank at the bottom of the picture in the center is insulated from the metal box, to prevent shorting, by a piece of clear plastic from a discarded "blister pack" product package that was heat-formed around the screw-boss and then secured in place with RTV.
On the left may be seen the "on/off" switch and the two indicator LEDs, while the "power in/out" leads, using #12 flexible speaker wire, are visible on the lower-right.  Across each capacitor may be seen "Ra, LEDa and Da" depicted in the
schematic and discussed in the text, originally used for capacitor balancing.
Click on the image for a larger version.

Prior to mounting the capacitors in the bottom of the box the holes for the LEDs and switches were drilled/filed - this to eliminate the possibility of the tools damaging them during work.

The two parallel banks of three series capacitors were prepared and then placed in the bottom of the box, held in place with RTV (Silicone (tm) seal).

There is no etched circuit board.  The actual control circuitry is mounted on the lid, with Q1 and Q2 being bolted to the lid itself for heat-sinking using electrical insulating hardware (washer, grey insulating pads) that was scavenged from a dead PC power supply.  The circuit itself was constructed on a piece of glass-epoxy circuit board material.

Without making a circuit board, there are several ways that the circuit could have been constructed, but I chose to use a variation of the "Manhattan" style which involved islands of copper.  In some instances - such as for the large resistor(s) comprising the 0.25 ohm unit on Q1 and for the connections of the power in/out leads and TH1 and TH2, islands of copper were isolated on the board by first drawing them out with a pen and then slitting both sides of a narrow (1/16th-1/8th of an inch) trace with a sharp utility knife and straight edge and then using the heat from a soldering iron to aid in the lifting of the narrow strip to isolate the area of board.

For other portions of the circuit I used "Me Squares" available from QRPMe.com (link):   These are small pieces of glass-epoxy circuit board material with nice squares etched on them that one glues down using cyanoacrylate (e.g. "super") glue and then uses as points for soldering and connection.

The nice thing about these "Me Squares" is that they are very thin, look nice and are very flat - which makes them easy to solder and glue down, but one could also cut out squares of ordinary circuit board material and solder those down, instead, provided that they were also made flat on the back side and de-burred for maximal surface contact.  Finally, one could use the utility knife and isolate islands - or even use an "island cutter" tool - to produce isolated lands on the piece of circuit board material itself.

The main reasons for using this technique were that it was quick, and also that it was surface-mountable:  Had I wired it on perforated board or even made a conventional circuit board with through-hole parts I would have had to stand it off from the lid to insulate the circuit from it:  Using this technique the board itself was simply bolted flat to the lid and was quick to wire up.

For interconnects between the circuitry on the lid, short lengths of #12 AWG stranded wire were used for the high-current leads connecting the capacitors and input/output leads, and much smaller wire (#24-#28 AWG or so) for the connections to the switch and LEDs.  For the "outside world" connections, 30 amp Anderson Power Pole (tm) connectors were mounted in the standard manner.

A few caveats with this circuit:
  • This circuit consumes some current (at least a few 10's of milliamps - maybe more) whenever it is set to "on", even after the capacitors have equalized.  What this means is that it will slowly drain your battery if the switch is left in the "on" position and because of this it is recommended that one switch it to the "on" position ONLY when intermittent, high current is going to be needed.  In other words, if you are going to receive (only), leave it "off", turning it "on" only if you plan to transmit, knowing that it may take 10's of seconds for the capacitors to charge.
  • If the power source cannot deliver the amount of current required during the "charge" cycle - or if it briefly "blinks" - the source voltage will sag and it will likely switch from "charge" mode to "operate" mode and connect the ultracapacitors directly across the power source.  If the power source has limited current in the first place, this will simply mean a "dip" in voltage while the capacitor bank charges, but if this occurs with a high-current source that has had a momentary glitch, a premature switch to "operate" mode could, in theory, blow a fuse.  This premature switching would likely happen if you had this connected via a cigarette lighter and started a vehicle while it was in a charge phase.
  • This capacitor bank should be treated like a battery:  If it is shorted out you will get lots of current - more than enough to burn open small wires and blow fuses - or even burn open large wires or small tools if you do an "oops" on the unfused/unprotected side of the circuit, or neglect to include any fusing/current protection!
  • Neither this capacitor bank (nor any battery) should ever be placed on the output of any power supply that has a "crowbar" voltage protection circuit (such as is present on many supplies, including the Astron "RS" and "RM" lines) as a transient on the output could cause the crowbar to trigger and short the output of the power supply - including the capacitor bank or whatever battery you might have on the output.  If this happens, expect significant damage to the power supply!
  • This sort of device must be housed within a rugged enclosure/container.  The picture shows the prototype being built into a Hammond 1590 series die-cast aluminum box that is both very rugged and also provides heat sinking for Q1 and Q2.  In the unlikely event of a catastrophic failure due to a wiring fault or a capacitor going bad, this enclosure will not melt and is likely to contain the "mess" - even if it were to get very hot!
  • If used in a vehicle, this circuit should be disconnected/disabled when the vehicle is started since these capacitors can supply "car starting" current on their own and it is possible to blow fuses, pop breakers, damage switches, circuits, relays and other mischief if the current from the capacitor bank were to "back-feed" through wiring that was not designed for that sort of current!
  • It is the nature of power FETs such as Q1 and Q2 to have an intrinsic "reverse diode" - even when turned off.  Be aware that if it so-happens that the bottom of capacitor bank is more negative than the "ground" (Battery -) side, these diodes - particularly the one in Q2 - will conduct!
I have no doubt that this circuit could be improved a bit:  It was designed and put together over just two evenings in preparation for the 2015 "Homebrew Night" meeting of the Utah Amateur Radio Club - including the time it took for the RTV to set up and paint on the enclosure to dry!

Possible uses:

While it is possible to use this to allow an HF rig to be powered from a set of D cells, it is more practical to use smaller lead-acid or lithium-ion packs as the primary power source and use the capacitor bank as the "ballast" to supply the peak currents.

Figure 5:
One end of the enclosure showing the on/off switch and
the two indicator LEDs.  The red "charge" LED indicates that the capacitor
bank is charging (at 2-3 amps) and the "power conditioning" capability
is not available until it has completed.  With completely discharged
capacitors this process takes a "minute or two", depending on the
voltage and the capacitance of the bank.
Click on the image for a larger version.
It may also be used in a vehicle to allow an HF transceiver to be powered from a cigarette lighter connection by reducing the average current drawn by it and thus keeping the average voltage higher to prevent the radio from shutting down/misbehaving on voice peaks.  When used in the comparatively "dirty" electrical environment of a vehicle it will go a long way to remove spikes from solenoids and motors, not to mention alternator whine.

I'm certain that this device could be improved, but it seems to function as it is.

Conclusions:

How usable is it, in the real world?

It depends a lot on how - and with which radio - you plan to use it.  For example, many older-vintage Kenwood HF transceivers will fail to function properly (e.g. operate with distorted audio, "FMing" of the signal, etc.) much below 12-12.5 volts, while more modern, compact HF radios like the Yaesu FT-100 and FT-857 will happily run at 10-11 volts - perhaps at reduced output power, but fine otherwise.  The upshot is that if you are considering a radio to be operated from "marginal" power sources, be certain that you have done your research:  Consider how your candidate radios operate at low supply voltage and whether or not they degrade "gracefully"!

How about running an HF rig from alkaline "D" cells?  As it turns out I can happily transmit 100 watt (peak) SSB using my old Yaesu FT-100 with the device described on this page using 10 "D" cells in series.

To do this effectively, one must minimize the contact resistance of the battery contacts, which pretty much rules out cheap, spring-loaded plastic battery holders that, by themselves, can have almost as much resistance as the cells themselves.  Aside from spot-welding tabs onto the alkaline cells (the heat of soldering would likely cause some damage and slight loss of capacity), the best holders are aluminum with heavy bus bars and the fewest number of springs and contacts (e.g. multiple cells directly in series), such as two of the Keystone four-cell holders, model 158, along with one two-cell holder, Keystone model 186.

One cannot "key down" with a carrier without significant voltage sag, but the FT-100 seems to work OK on typical SSB with voice peaks.  Under such heavy loads, don't expect much longevity before the cells' internal resistance increases.  As noted previously, not all radios (such as older Kenwood mobiles) behave well at lower voltages, so do your homework!
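A rough sketch of why this works at all, using my own illustrative numbers (assumed cell resistances and currents, not measured values):  The capacitor bank supplies the brief voice peaks while the series "D" cells see something much closer to the *average* current.

```python
# Illustrative sketch (assumed values, not measurements) of ten series
# alkaline "D" cells feeding an HF rig, with and without a capacitor
# bank absorbing the transmit peaks.

N_CELLS = 10
V_CELL = 1.5        # volts per fresh alkaline "D" cell (nominal)
R_CELL = 0.08       # ohms per cell - assumed; rises as cells deplete
R_CONTACTS = 0.05   # ohms total - assumes good holders, NOT cheap spring clips

def supply_voltage(i_amps):
    """Terminal voltage of the series string at a given load current."""
    r_total = N_CELLS * R_CELL + R_CONTACTS
    return N_CELLS * V_CELL - i_amps * r_total

# A 100 watt PEP rig might draw ~17 A on a voice peak, but only a few
# amps averaged over typical SSB speech (assumed duty-cycle figures).
v_peak_direct = supply_voltage(17.0)   # cells alone on a voice peak
v_averaged = supply_voltage(4.0)       # cells supplying only the average

print(f"Cells alone on a 17 A peak: {v_peak_direct:.1f} V")
print(f"Cells at 4 A average (bank handles peaks): {v_averaged:.1f} V")
```

With these assumed numbers, the string collapses almost completely if asked to supply a peak directly, but holds a usable voltage when the capacitor bank averages the load.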

As detailed in the article, a more practical use of this sort of device is as a "power conditioner" to help compensate for voltage sags due to resistance in the interconnecting cables, somewhat underrated power sources, aging battery packs and/or "small"-ish batteries.

* * *

How about using this with a solar power source?

Assuming that the voltage from the panel is regulated to a safe value (15 volts or below), the capacitor bank could, in theory, maintain voltage if the solar array provided at least the average current.  Considering that solar illumination can vary wildly with time of day, sun angle, clouds and shadows, however, it would be recommended that additional storage capacity be used as well - remember that 53-1/2 Farads has only the theoretical storage capacity of a single "AA" alkaline cell! - such as a 7-20 amp-hour lead-acid or lithium-ion based pack.  This sort of system could well be the basis of another article!
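The "single AA cell" comparison can be sanity-checked with E = ½CV².  The voltage limits and AA figure below are my own assumptions for illustration:

```python
# Back-of-the-envelope energy check on the ~53.5 farad bank.
# Voltage limits and the AA-cell figure are assumed, not from the article.

C = 53.5        # farads - the bank described above
V_FULL = 13.8   # volts - assumed fully-charged voltage
V_MIN = 11.0    # volts - assumed lowest useful radio supply voltage

e_total_J = 0.5 * C * V_FULL**2                 # E = (1/2) C V^2
e_usable_J = 0.5 * C * (V_FULL**2 - V_MIN**2)   # energy between the two limits

# A typical alkaline AA: roughly 1.5 V x 2 Ah = 3 Wh (assumed figure)
e_aa_J = 1.5 * 2.0 * 3600

print(f"Bank, fully charged:       {e_total_J / 3600:.2f} Wh")
print(f"Bank, usable 13.8->11 V:   {e_usable_J / 3600:.2f} Wh")
print(f"One alkaline AA (approx.): {e_aa_J / 3600:.1f} Wh")
```

The bank's total energy works out to under 1.5 watt-hours - the same order of magnitude as (and actually somewhat less than) a single AA cell, which supports the point that real storage capacity still requires a battery.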

Tuesday, January 19, 2016

Homebrew Drake TR-3 power supply

Years ago, as a teenager with a Novice license, I received as a combined Christmas/birthday present a "brand new" (to me) Drake TR-3 transceiver - but it had no power supply.

What to do?

Fortunately, I had the original manual and in it, there was the schematic of the original Drake AC-3 power supply, reproduced below for convenience.

Figure 1:
A schematic diagram of the original Drake power supply.
Click on the image for a larger version.
It looked simple enough.  At first I puzzled over the plate and low voltage (250 volt) supply portions of the diagram, noting the seemingly-odd wiring of the diodes and capacitors, but the circuit description noted that these were voltage doublers and a quick check with my ARRL Radio Amateur's Handbook explained how this circuit worked.
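The doubler principle is easy to check numerically:  Each section charges two capacitors on alternate half-cycles so that, unloaded, the DC output approaches twice the *peak* of the AC input.  The winding voltage below is illustrative, not taken from the Drake schematic:

```python
import math

# Unloaded output of a half-wave voltage doubler: two capacitors, each
# charged to the peak of the AC waveform on alternate half-cycles, in series.
def doubler_unloaded(v_rms):
    """Approximate unloaded DC output of a voltage doubler, in volts."""
    return 2 * math.sqrt(2) * v_rms

# Illustrative example - a 120 VAC winding:
print(f"120 V winding -> about {doubler_unloaded(120):.0f} V unloaded")
```

Under load the output sags well below this, which is why doubler supplies for transmitters need generously-sized capacitors.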

Where to get the parts, then?

The chassis was no problem:  Several years prior I'd picked up a Dynaco 40 audio amplifier (or similar) with a blown power transformer (and melted tubes) for just a few dollars at a thrift store.  Into this case I'd built my "Novice" transmitter, a crystal-controlled (no VFO!) unit using a 12BY7A driving a 12DQ6 - a design slightly modified from one that had appeared in the (by then old) 1970 ARRL Radio Amateur's Handbook that I'd borrowed from my Junior High School library.  With this I used a Heathkit SB-303 receiver that I'd bought from my Elmer with lawn-mowing/misc. money, and I'd used that combination, along with a few random crystals that I'd managed to scrounge/warp (another story!), on 40 and 15 meters for 6-7 months.

Now that I had the TR-3, my plan was to scrap the homebrew transmitter and use its case for the power supply, taking me off the air.  Both itching to use my "new" TR-3 and knowing that I'd suffer "withdrawal" when I disassembled my only working transmitter, I wanted to complete this power supply project quickly.

Lining up parts:

Having collected parts for several years by then I rummaged around and found several power transformers, some of which I'd bought for $1-$2 each at a local thrift store, apparently having been removed from tube-type TVs or audio amplifiers (Who would bother doing that?  The store just had them on a shelf with stickers on them!)  Some of these transformers had clearly come from large, console tube-type color TVs as they were quite large and had quite a few windings on them - but these windings were unmarked aside from generally inscrutable wire colors.

In looking at the voltage requirements for the Drake TR-3 I saw that I would need around 650 volts at up to (approx.) 600 milliamps, peak, for the plate supply on the final amplifier, 250 volts for the "other" circuitry within the radio and a nominal -60 volt supply, adjustable down to -45 volts for the grid bias on the finals and, of course, 12-13 volts for the filament string which, for the TR-3 - designed to be a mobile radio - consisted of a series-parallel arrangement of both 6 and 12 volt tubes.

In rummaging around my transformer collection I expected, at first, that the plate supply would be difficult, but that turned out not to be so - the difficulty would be the 250 volt supply.

Not knowing how the thrift-store transformers were wired and not having faith that their color codes would follow any of the "standards" mentioned in the ARRL handbook, I first took an ohmmeter and mapped out all of the wires, noting which seemed to be connected to which and, as well as I could, the resistances of these windings.  This, alone, was useful as I now had a pretty good idea as to which ones were the likely high voltage windings (highest resistance), where the center-taps likely were, which ones might be the filament windings (lowest resistance) and finally, which might be the primary windings.

You probably noticed a lot of "likely" and "might be" statements in the above paragraph:  From resistance measurements alone, I could not be sure that I had properly identified everything.  There was only one thing to do:  Connect it to mains power.

Even as a teenager I knew enough to not connect it directly to a wall outlet, so I decided on a two-step process.  Having a 12 volt AC transformer kicking around, I used that as the proxy for 120 volts, knowing that worst case, I'd throw only a few amps into the wrong place, the current limited by this power-limited transformer - and that is exactly what I did with a large, black transformer that had probably seen previous service in a console color TV.  Some of my guesses weren't exactly right, but after just a few minutes I found a medium-resistance winding that I'd suspected to be the primary and upon applying 12 volts to it found a winding that was suspiciously 1.2 volts and another that was in the 15 volt range - plus a few other miscellaneous taps and windings.
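The "12 volt proxy" trick can be summed up in one ratio:  Drive the suspected 120 volt primary with 12 VAC and every secondary reads at 1/10 of its nominal voltage, at energies too small to do damage.  The readings below are illustrative:

```python
# Scale winding voltages measured under reduced primary drive back up to
# what they would be at full mains.  Readings here are illustrative.

TEST_V = 12.0             # volts AC applied to the suspected primary
NOMINAL_PRIMARY_V = 120.0 # volts - what that winding is assumed to be rated for

def full_scale(measured_v):
    """Scale a secondary reading taken at reduced primary voltage."""
    return measured_v * NOMINAL_PRIMARY_V / TEST_V

for reading in (1.26, 15.5):
    print(f"{reading:5.2f} V measured -> ~{full_scale(reading):.0f} V at full mains")
```

A reading of 1.26 volts thus points at a 12.6 volt filament winding, and one in the 15 volt range at a winding around 150-155 volts - consistent with what turned up when the transformer was later run at full mains voltage.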

Now I was confident that I could "light it up" from the 120 volt mains, so I rummaged around in my dad's electrical box and found a porcelain light socket and wired it in series with the (believed) primary of the transformer, pulled a 60 watt bulb from a reading lamp and plugged it in.

Sure enough, the 60 watt bulb did not light, indicating that there was no short or gross misidentification of a wire.  There was a 12.6 volt winding suitable for the filament and another winding with 155 volts, and shorting either of these out caused the bulb to illuminate to what appeared to be full brilliance.  Shunting the bulb, I briefly shorted the 12.6 volt winding and noted that the transformer hummed noisily while the lights in the room dimmed slightly, indicating, in a rough fashion, that it was capable of supplying quite a bit of current - and I surmised that it would be able to supply the filament string for the TR-3.

The 250 volt supply:

I was now wondering what I could do with the 155 volt winding - clearly intended for some "low voltage" stuff within the TV such as IF, audio and tuner circuits.  With bridge rectification and a capacitive input filter, the most I could hope to get from it would be 200-220 volts - and that was without a load.  Using the voltage doubler method was out of the question since not only would the voltage be too high, I didn't have any suitable capacitors on hand - and being a poor teenager, I didn't have any money left over at that moment to get any!  All I had were the original "can" capacitors from the Dynaco 40 (and possibly something else) and since the cans had to be grounded, they weren't suitable for use in a doubler, where one of the capacitors (both of which had to be identical) has to be "floating".

What I really needed was a 200-ish volt winding on which I could use a bridge rectifier that, when filtered with a capacitor, would get me within the target range of 250 volts, under load.

Rummaging through my pile of junk I found just the thing... sort of.  What I did find was a fairly large isolation transformer with a 120 volt primary/secondary capable of at least 100 watts.  In series with the other transformer's winding I could obtain about 275 volts AC - but this would be too much, at least for a capacitor-input power supply.  As it happened, the same piece of equipment (I have no idea what it was supposed to be) that contained the isolation transformer also had several large chokes that I managed to "ring out" to around 6.2 Henries (at no current - the inductance likely "swings" lower when DC passes through them) and by placing one of these in series with the full-wave rectified output I would not get the 1.4x (or so) peak voltage from the series combination, but rather something lower.
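The difference the choke makes can be shown with two textbook rule-of-thumb figures - a capacitor-input supply heads toward 1.414 times the RMS voltage, while a well-loaded full-wave choke-input supply settles near 0.9 times RMS.  (Real supplies land somewhere between, depending on load and choke size.)

```python
import math

# Rule-of-thumb output predictions for the two series ("boost") windings.
V_RMS = 275.0  # volts AC from the two windings in series, per the article

v_cap_input = math.sqrt(2) * V_RMS  # capacitor-input, unloaded: ~1.414 x RMS
v_choke_input = 0.9 * V_RMS         # full-wave choke-input, loaded: ~0.9 x RMS

print(f"Capacitor-input (unloaded): ~{v_cap_input:.0f} V - far too high")
print(f"Choke-input (loaded):       ~{v_choke_input:.0f} V - near the 250 V target")
```

The choke-input figure lands conveniently close to the 250 volt target, which is exactly why the scheme was workable at all.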

Very carefully, I constructed the "250 volt" power supply, wiring the two transformers' "high voltage" windings in "boost".  Upon turning it on I (somewhat expectedly) got a bit over 300 volts, so I did more rummaging and found, on the same piece of equipment that had yielded the isolation transformer and choke, a 20 watt, 750 ohm "slider" type of adjustable resistor and I placed that in series with the choke.  In doing a bit of quick math and knowing approximately how much current the TR-3 pulled from the 250 volt line, I figured that I could afford to burn a few watts:  I just hoped that its voltage would be adequately stable.

Comment:  Had I the correct capacitors, I could have voltage-doubled the output of the isolation transformer, but I would have still needed a pretty hefty transformer to supply the 12.6 volt filament rail, so my "transformer count" would have remained the same based on the parts on hand at the time.

The Plate supply:

The plate supply was an easy one.  Somewhere - I don't know where - I'd managed to scrounge a Triad P-3A plate transformer with a 5 volt rectifier tube winding (no use to me) and a high voltage secondary of 300-0-300 volts.  I wasn't quite sure of the current rating of this transformer as it just said "300-0-300" and "0.3 amps" on the nameplate, so I made the assumption (probably wrong!) that I could wire it as 600 volts, use a bridge rectifier, and safely get 300 mA.  I did know that a 600 VAC source would yield 800-900 volts DC, unloaded - a bit much for the three 12JB6 tubes in the final amplifier, which "want" closer to 650 volts under load.

The solution?  Another choke.  The same piece of equipment that had already yielded the isolation transformer and the 750 ohm adjustable resistor also had another, identical 6.2 Henry choke on board, so I threw that in series with the bridge rectifier's output.

In reading about choke-input power supplies I knew that while I'd get a lower operating voltage, the unloaded voltage would likely be fairly close to that of a capacitor-input power supply.  From what I could gather from my ARRL handbook it appeared that the 12JB6 tubes would be able to handle 800-ish volts on the plates at idle without difficulty, but I was wondering how much the voltage would drop under load - and how quickly?
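The same two rules of thumb bound what the plate supply could do - the unloaded output approaches the peak of the AC, while an ideal heavily-loaded choke-input supply settles near 0.9 times RMS:

```python
import math

# Rough upper and lower bounds on the plate supply's DC output with the
# 300-0-300 secondary wired as 600 VAC into a bridge rectifier.
V_RMS = 600.0

v_unloaded = math.sqrt(2) * V_RMS  # unloaded: output heads toward the AC peak
v_full_choke = 0.9 * V_RMS         # ideal choke-input under heavy load

print(f"Unloaded (approaches peak): ~{v_unloaded:.0f} V")
print(f"Ideal choke-input, loaded:  ~{v_full_choke:.0f} V")
```

The as-built supply - about 850 volts at idle, sagging to the 625-650 volt range under load - sits between these two figures, as one would expect with a modest choke and a real-world transformer.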

A bonus of the choke input was that it offered a bit of relief to the plate transformer that I knew I would be overtaxing:  At maximum power output, the three tubes in the final amplifier would be pulling around 600 milliamps at 650 volts or so (according to the Drake manual).  By using a choke input, the "peak current" at the top of the AC cycle where the filter capacitor charging would occur was significantly alleviated, effectively improving the power factor from "awful" to "not so bad" - but the degree by which the effective capacity of the transformer would be increased was a bit vague.

It also occurred to me at the time that the 1N540 diodes that I used for the supply, rated for only 250 mA at 400 volts, were somewhat "marginal" for the plate supply when 600 mA was being pulled, despite the fact that this current was being split between two legs:  The choke input, with its reduction of the very high peak repetitive currents, has likely helped preserve these diodes over time.

Another point was that it somewhat "decoupled" the output of the power supply from the transformer:  Brief, high-current voice peaks would be immediately sourced from the capacitors first while being "stretched out" (averaged) as current and magnetic field built in the choke.  I figured that this should, in theory, work for SSB where the peaks are typically brief, but less so for CW.  Time would tell.

The grid bias supply:

The final supply voltage needed was a low-current negative supply that was adjustable from approximately -45 to -65 volts.  In staring at the schematic I didn't see any obvious place from which I could tap a stable, isolated source of 70-120 volts of AC:  In theory, I could have coupled from one of the other transformers - most likely on the 250 volt supply - but I figured that I wanted the bias supply to be reasonably stable.

Going back to the junk box I found a small transformer - probably from an old, tube-type UHF TV converter - that had a 6 volt filament winding and a 120 volt secondary.  Interested only in the high voltage secondary, I knew that I could probably take the 120-130 volts DC that I'd get from the 120 volt winding, half-wave rectify it, and resistively drop it to the desired -65 to -45 volt range.

This was pretty straightforward:  Just a resistive divider with some capacitive filtering.  When wired up, it seemed to perform as expected.
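A minimal sketch of the arithmetic involved - the resistor values here are assumed for illustration, not the as-built ones:

```python
import math

# Half-wave rectified 120 VAC approaches the peak when lightly loaded,
# then a two-resistor divider brings it down to the bias range.
V_RMS = 120.0
v_raw = math.sqrt(2) * V_RMS  # ~170 V unloaded; sags toward 120-130 V under load

def divider_out(v_in, r_top, r_bottom):
    """Output of a simple two-resistor divider with an unloaded tap."""
    return v_in * r_bottom / (r_top + r_bottom)

# Illustrative values: ~130 V of loaded raw DC divided roughly in half
print(f"Raw half-wave DC (unloaded): ~{v_raw:.0f} V")
print(f"Example divider tap:         ~{divider_out(130.0, 10e3, 10e3):.0f} V")
```

With the rectifier polarity reversed relative to chassis ground, the same arithmetic yields the negative bias voltage, and making part of the divider adjustable gives the -65 to -45 volt range.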
Figure 2:
The (recently reverse-engineered) diagram of the as-built power supply, built in 1983.
The 1N540 are old (1966 date codes) 0.25 amp, 400 volt diodes, of which I had many at the time - likely
marginal when 600 mA of plate current is being pulled!  Each leg of diodes
on the 600 volt supply could likely be replaced with a single 1N4007 - although I'd probably
use two in series.  D17 is a tiny, 200 volt glass diode of (presently) unknown designation.
Click on the image for a larger version.

About the mechanical construction:


As mentioned previously, the enclosure for this power supply had previously been used for my homebrew Novice transmitter - and before that, a Dynaco 40 audio amplifier (I think - the small PCB was marked as "Dynaco".)

On the deck of the chassis there really wasn't quite enough room for all of the components:  On the top deck the transformers were cheek-by-jowl, abutting each other with no room for anything else, but I managed to squeeze everything in.  Underneath, I wired the rest of the components - diodes, filter capacitors, resistors - using parts that I scrounged from my somewhat limited junk box.

With the circuits seemingly operating under no load, I obtained the correct "Jones" connector for the TR-3, wired a cable harness to plug into the octal socket that happened to already be present on the amplifier chassis, plugged it in and turned it on.

Testing:

Figure 3:
The underneath of the power supply.  The inside of the bottom cover of the
supply (not visible in any of the pictures) is covered with two layers of high-
quality cloth "gaffers" tape for insulation.  The two 100uF, 450 volt plate
filter capacitors are visible along the bottom edge.  When I was building
the supply I accidentally drilled a hole in one of them, apparently without
damaging the foil inside and patched it with RTV (silicone) seal since
I couldn't afford to get another capacitor at the time!
That was back in 1983 and both capacitors are still "good".
Click in the image for a larger version.
Amazingly, nothing blew up!  The filaments lit up and the voltage strings came up to their proper voltages - within reasonable tolerances, anyway, at least once the adjustable resistors and rheostats were tweaked.

Connecting the output of the TR-3 to a 150 watt light bulb (I didn't own a large dummy load at the time) I keyed up in CW, dipped-and-loaded and observed the beautiful brilliance of RF-driven tungsten while noting that at full load, the plate voltage sagged from 850 volts to the 625-650 volt range - approximately what the Drake manual specified that it should be.

I was on the air!

A curious problem - and a fix:

I was still a novice at this point so I used it only on CW - which was slightly awkward since the TR-3 has no sidetone.  At first I would tune the transmit frequency with my Heathkit SB-303 and use that to monitor myself, but this was quite awkward since it required another, separate speaker - very inconvenient if you were typically using headphones to avoid bothering others in the house.

I then noticed something about the power supply:  It hummed loudly.  With all of those transformers crammed into a steel box, this was to be expected, but what was particularly strange was that it hummed loudest when I was receiving, becoming nearly silent when I was keyed down.

What I also noticed was that if I left the power supply on for more than an hour or hour and a half, I started to smell hot enamel - the tell-tale indication that something transformer-related was getting very warm.

For several months I would operate CW by resting my feet on the power supply, on the floor, and use the absence of hum in lieu of the sidetone - and I got pretty good at it!  After an hour or so, when the power supply got too hot, I would then shut it off for a while and allow it to cool down before turning it on again.

After a few months of this I finally decided to find out what was going on.  Without the power supply connected to the radio I discovered that if I disconnected either one of two transformers that were next to each other, the hum would disappear:  Could their magnetic fields be bucking each other?

On a whim I reversed the primary and secondary leads of one of the large transformers (the one with the fewest leads, of course) and the hum that was present when receiving was gone - it only hummed when keyed down, as it should.  I may have also done some rearrangement of the transformers on the top deck to better separate their magnetic fields, but memory fails me on this point.

When I was done I could leave the rig and power supply on all day and it would barely heat up.  It took only a session or two for me to get used to the "non-inverted" power supply hum CW sidetone!

Figure 4:
The top deck of the power supply showing the four AC
transformers, two chokes and two capacitors.  This picture
was taken after vacuuming out an inch or so of dust.
Starting from the bottom-let and working clockwise:  The plate
transformer, two filter capacitors for the 250 volt supply, the input
choke for the 250 volt supply, the filament/175 volt transformer,
the isolation transformer, the input choke for the plate supply, and
the transformer for the bias supply (the small transformer, wedged in.)
Click in the image for a larger version.
The next year I upgraded from Novice to Advanced class and used the Drake on SSB, where it and the power supply worked very well.  Later, I even managed to save enough money to buy a brand new, replacement AC-4 power transformer - the same as the one in the original Drake AC-3 power supply - but I never did get around to rebuilding that old power supply with its plethora of magnetics:  I still have the new-in-a-box AC-4 power transformer somewhere!

Final comments:

This power supply still exists, as evidenced by the fact that the pictures on this web page were taken very recently.  Both it and the Drake TR-3 - now over a half-century old - still work fine despite my having owned them for decades, some minor repairs (replacement of dried-out capacitors in the power supply), and my obvious abuse of the plate transformer.

As can be seen, some (many?) aspects of the construction techniques used are a bit "iffy" and were I to build it today - after the benefit of decades of experience and with the availability of other materials - I would certainly do it differently.

Is it dangerous?

Figure 5:
The power supply, assembled, showing the side with the grid bias
adjust control (far left) and the pilot light.
Click on the image for a larger version.
Not very - at least with the covers in place.  The "flakiness" of the construction methods involved (e.g. excessive use of unsecured "flying leads", components that could be better secured, bits of electrical tape used to insulate wire instead of heat-shrink tubing in a few places - heat-shrink tubing was a bit rarer when this was built) would affect reliability more than safety, and I would most certainly not build it (exactly) this way again!

Despite the fact that this power supply has been bounced around the country several times, been subjected to very high humidity for several years (hence some of the rust!), and is of somewhat "interesting" design, it continues to function reliably.  If it ever does blow up, I do have a spare AC-4 power transformer kicking around.

But then again, there's EvilBay...